
Category Archives: Robotics

Apple’s Swift Playgrounds coding app now supports robots, drones, and toys – The Verge

Posted: June 3, 2017 at 12:31 pm

Apple today announced that its educational iPad programming app, Swift Playgrounds, will soon support robots and drones. That means young kids and students will be able to write their own Swift code to control any number of real-world toys and machines. The company is launching the feature next Monday, partnering with a number of top toy and robotics companies including LEGO, automated BB-8 toy maker Sphero, and drone company Parrot. Other companies on board for the launch are toy robot makers UBTECH and Wonder Workshop, as well as Skoog, the maker of a music cube that relies on Swift code to teach children how to compose songs.

Swift Playgrounds, launched last year during Apple’s 2016 Worldwide Developers Conference, is effectively a video game that teaches kids how to code using Apple’s Swift programming language. It breaks down how code functions at the most fundamental level and uses colorful environments and visual guides (product manager Tim Triemstra even uses the game industry term “cutscenes”) to explain the effects of code and the power of programming. The code appears on the left side of the iPad screen, either automated by the app to teach a lesson or typed in directly by the user, while an animation the code can manipulate plays out on the right.

Kids can now use Swift to send commands to toy robots and drones

Since Playgrounds’ launch, Apple has partnered with a number of educational institutions around the country to get Swift built into introductory computer science curriculums and to make its Playgrounds app a fixture in classrooms. The company says Playgrounds has amassed 1 million unique users since launch. “When we were designing Swift, from the very first days we wanted it to be everyone’s first programming language,” Triemstra says. “We wanted it to be approachable.”

Now, with a significant number of Playgrounds users and Swift picking up steam as a lightweight and more elegant way to build iOS apps, Apple is trying to expand its educational focus from software to hardware. Because Playgrounds will support all manner of robotics, including flying drones, the company hopes it will give young kids a whole new reason to engage with programming and learn the secrets of code. It could also do wonders for the popularity of Swift among the next generation of coders, and help cement the language as a fixture for young and eager roboticists.

Given the success of the Mindstorms robotics series, it’s a no-brainer that Apple got LEGO on board for Playgrounds. In a demo at the iPhone maker’s Cupertino office, a LEGO representative broke down exactly how a Mindstorms EV3 kit can work with Playgrounds, connecting any number of robot-controlling modules to an iPad via Bluetooth. From there, you can see real-time data provided by the robot’s actuators, motors, and sensors, as well as program commands for fleshed-out LEGO bots to receive and carry out. In preparation for the partnership, LEGO says it has also designed 10 hours of lessons specifically for the Playgrounds app for kids to run through with a Mindstorms kit.

Similarly, Sphero’s transparent SPRK+ orb, which is already used to teach kids elements of robotics and programming, will work with the app. Kids will be able to program movements and games for the orb, with exercises breaking down the step-by-step process of development, including a real-world Pong game they can play with their feet as the paddles and the SPRK+ as the ball.

French drone maker Parrot is also buying into Apple’s new Playgrounds initiative, adding support for its Mambo, Rolling Spider, and Airborne drones. In another demo at Apple’s offices, a Parrot representative showed off how the Playgrounds app can be used to input commands for the drones to turn, flip in midair, and land in the palm of a user’s hand.

Apple stresses that none of these new Playgrounds integrations depend on brand-new hardware. With LEGO, all you need is a Mindstorms EV3 kit, released as far back as 2013. The same goes for Sphero’s SPRK+ and Parrot’s trio of drones: so long as you have the supported product, it will sync with Playgrounds and allow you to start controlling it with your own code.

Original post:

Apple's Swift Playgrounds coding app now supports robots, drones, and toys - The Verge

Posted in Robotics | Comments Off on Apple’s Swift Playgrounds coding app now supports robots, drones, and toys – The Verge

Video Friday: Robot Dance Teacher, Transformer Drone, and Pneumatic Reel Actuator – IEEE Spectrum

Posted: at 12:31 pm

The week is almost over, and so is the 2017 IEEE International Conference on Robotics and Automation (ICRA) in Singapore. We hope you’ve been enjoying our coverage, which has featured aquatic drones, stone-stacking manipulators, and self-folding soft robots. We’ll have lots more from the conference over the next few weeks, but for you impatient types, we’re cramming Video Friday this week with a special selection of ICRA videos.

We tried to include videos from many different subareas of robotics: control, vision, locomotion, machine learning, aerial vehicles, humanoids, actuators, manipulation, and human-robot interaction. We’re posting the abstracts along with the videos, but if you have any questions about these projects, let us know and we’ll get more details from the authors.

We’ll return to normal Video Friday next week. Have a great weekend everyone!

This letter presents a physical human-robot interaction scenario in which a robot guides and performs the role of a teacher within a defined dance training framework. A combined cognitive and physical feedback of performance is proposed for assisting the skill learning process. Direct contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner’s performance in the task. In measuring performance, a scoring system has been designed using the concept of progressive teaching (PT). The system adjusts the difficulty based on the user’s number of practices and performance history. Using the proposed method and a baseline constant controller, comparative experiments have shown that the PT presents better performance in the initial stage of skill learning. An analysis of the subjects’ perception of comfort, peace of mind, and robot performance has shown a significant difference at the p < .01 level, favoring the PT algorithm.

In this paper, we introduce the achievement of aerial manipulation using the whole body of a transformable aerial robot, instead of attaching an additional manipulator. The aerial robot in our work is composed of two-dimensional multilinks that enable a stable aerial transformation and can be employed as an entire gripper. We propose a planning method to find the optimized grasping form for the multilinks while they are in the air, which is based on the original planar enveloping algorithm, along with the optimization of the internal force and joint torque for the force closure. We then propose an aerial approach and grasp motion strategy, which determines the form and position of the aerial robot so that it can effectively approach and grasp the object from the air. Finally, we present the experimental results of aerial manipulation involving grasping, carrying, and dropping different types of objects. These results validate the performance of aerial grasping based on our proposed whole-body grasp planning and motion control method.

Unmanned rescue, observation, and/or research vehicles with high terrain adaptability, high speed, and high reliability are needed in difficult-to-reach locations. However, for most vehicles, high performance over rough terrain reduces the travel speed and/or requires complex mechanisms. We have developed a blade-type crawler robot with a very simple and reliable mechanism, which traverses uneven terrain at high speed. Moreover, a gyro wheel stabilizes the robot’s motion, ensuring robust traversal. Experiments confirmed that our approach improves traveling speed and robustness over uneven terrain.

Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.
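
The first contribution above is architectural: the policy network consumes the goal as well as the current state, so a new goal requires no retraining. The NumPy sketch below illustrates that goal-conditioned actor-critic idea; the layer sizes, names, and random embeddings are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Toy goal-conditioned actor-critic head: pi(a | state, goal) and V(state, goal).
rng = np.random.default_rng(0)
D, H, A = 128, 64, 4                    # embedding size, hidden units, actions (assumed)
W1 = rng.normal(0, 0.1, (H, 2 * D))     # fuses the state and goal embeddings
W_pi = rng.normal(0, 0.1, (A, H))       # actor head: action logits
W_v = rng.normal(0, 0.1, (1, H))        # critic head: scalar value

def forward(state_emb, goal_emb):
    joint = np.concatenate([state_emb, goal_emb])  # conditioning on the goal
    h = np.tanh(W1 @ joint)
    logits = W_pi @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over actions
    return probs, (W_v @ h)[0]

action_probs, value = forward(rng.normal(size=D), rng.normal(size=D))
```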

A 3 DoF parallel cable-driven body weight support (BWS) system has been developed for the University of Utah’s Treadport Locomotion Interface, for purposes of rehabilitation, simulation of steep slopes, and display of reduced-gravity environments. The Treadport’s large belt (6 by 10 feet) requires a multi-cable support system to ensure that the unloading forces are close to vertical. This paper presents the design and experimental validation, including the system model and force control.

We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies.
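
For intuition about the model-free local optimizer, here is a toy sketch of the path-integral-style update that PI2 methods perform: sample perturbed rollouts, score them, and average the perturbations with cost-based softmax weights. The names, hyperparameters, and the quadratic stand-in cost are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def pi2_update(theta, rollout_cost, n_samples=32, sigma=0.1, temperature=1.0):
    # Sample perturbed parameter vectors and evaluate each by a rollout.
    eps = rng.normal(0.0, sigma, size=(n_samples, theta.size))
    costs = np.array([rollout_cost(theta + e) for e in eps])
    # Softmax over negative costs: low-cost rollouts dominate. No gradient of
    # the dynamics is needed, which is why this style of optimizer tolerates
    # highly discontinuous contact dynamics.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return theta + w @ eps  # cost-weighted average of the sampled noise

# Toy usage with a quadratic stand-in cost (a real rollout would run the robot).
theta = np.zeros(5)
for _ in range(100):
    theta = pi2_update(theta, lambda t: float(np.sum((t - 1.0) ** 2)))
```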

We define a system architecture for a large swarm of miniature quadcopters flying in dense formation indoors. The large number of small vehicles motivates novel design choices for state estimation and communication. For state estimation, we develop a method to reliably track many small rigid bodies with identical motion-capture marker arrangements. Our communication infrastructure uses compressed one-way data flow and supports a large number of vehicles per radio. We achieve reliable flight with accurate tracking (<2 cm mean position error) by implementing the majority of computation onboard, including sensor fusion, control, and some trajectory planning. We provide various examples and empirically determine latency and tracking performance for swarms with up to 49 vehicles.

This paper presents a system that consists of three robots to imitate the motion of top volleyball blockers. In a volleyball match, in order to score by spiking, it is essential to improve the spike decision rate of each spiker. To increase spike decision rates, iterative spiking training with actual blockers is required. Therefore, in this study, a block machine system was developed that can be used continuously in an actual practice field to improve attack practice. In order to achieve the required operating speed and mechanical strength, each robot has five degrees of freedom. Each robot performs high-speed movement on 9 m rails that are arranged in parallel with the volleyball net. In addition, an application with a graphical user interface was developed to enable a coach to manipulate these robots. It enables the coach to control block motions and change parameters such as the robots’ positions and operation timing. Through practical use in the practice field, the effectiveness of this system was confirmed.

This paper contributes to quantifying the notion of robotic fitness by developing a set of necessary conditions that determine whether a small quadruped has the ability to open a class of doors or climb a class of stairs using only quasi-static maneuvers. After verifying that several such machines from the recent robotics literature are mismatched in this sense to the common human scale environment, we present empirical workarounds for the Minitaur quadrupedal platform that enable it to leap up, force the door handle and push through the door, as well as bound up the stairs, thereby accomplishing through dynamical maneuvers otherwise (i.e., quasi-statically) unachievable tasks.

We present a simple probabilistic framework for multimodal sensor fusion that allows a mobile robot to reliably locate and approach the most promising interaction partner among a group of people, in an uncontrolled environment. Our demonstration integrates three complementary sensor modalities, each of which detects features of nearby people. The output is an occupancy grid approximation of a probability density function over the locations of people that are actively seeking interaction with the robot. We show empirically that simply driving towards the peak of this distribution is sufficient to allow the robot to correctly engage an interested user in a crowd of bystanders.
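
As a rough sketch of that fusion scheme (the grid size, modality names, and independence assumption are ours, not the paper's): each modality contributes a per-cell likelihood grid, the grids are multiplied and normalized into an occupancy-grid approximation of the density, and the robot drives toward the peak.

```python
import numpy as np

def fuse_and_pick_target(modality_grids):
    """modality_grids: 2-D arrays of per-cell detection likelihoods, one per sensor."""
    fused = np.ones_like(modality_grids[0])
    for grid in modality_grids:
        fused *= grid                  # treat modalities as independent evidence
    fused /= fused.sum() + 1e-12       # occupancy-grid approximation of the pdf
    return np.unravel_index(np.argmax(fused), fused.shape)

# Stand-in detection grids for three hypothetical modalities.
legs = np.random.rand(50, 50)    # e.g., laser-based leg detector
faces = np.random.rand(50, 50)   # e.g., camera face detector
sound = np.random.rand(50, 50)   # e.g., microphone-array localization
target_cell = fuse_and_pick_target([legs, faces, sound])  # drive toward this cell
```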

Collisions between quadrotor UAVs and the environment often occur, for instance, under faulty piloting, from wind gusts, or when obstacle avoidance fails. Airspace regulations are forcing drone companies to build safer drones; many quadrotor drones now incorporate propeller protection. However, propeller-protected quadrotors still do not detect or react to collisions with objects such as walls, poles, and cables. In this paper, we present a collision recovery pipeline which controls propeller-protected quadrotors to recover from collisions. This pipeline combines concepts from impact dynamics, fuzzy logic, and aggressive quadrotor attitude control. The strategy is validated via a comprehensive Monte Carlo simulation of collisions against a wall, showing the feasibility of recovery from challenging collision scenarios. The pipeline is implemented on a custom experimental quadrotor platform, demonstrating feasibility of real-time performance and successful recovery from a range of pre-collision conditions. The ultimate goal of the research is to implement a general collision recovery solution as a safety feature for quadrotor flight controllers.

State estimation techniques for humanoid robots are typically based on proprioceptive sensing and accumulate drift over time. This drift can be corrected using exteroceptive sensors such as laser scanners via a scene registration procedure. For this procedure, the common assumption of high point cloud overlap is violated when the scenario and the robot’s point of view are not static and the sensor’s field of view (FOV) is limited. In this paper we focus on the localization of a robot with limited FOV in a semi-structured environment. We analyze the effect of overlap variations on registration performance and demonstrate that where overlap varies, outlier filtering needs to be tuned accordingly. We define a novel parameter which gives a measure of this overlap. In this context, we propose a strategy for robust non-incremental registration. The pre-filtering module selects planar macro-features from the input clouds, discarding clutter. Outlier filtering is automatically tuned at run-time to allow registration to a common reference in conditions of non-uniform overlap. An extensive experimental demonstration is presented which characterizes the performance of the algorithm using two humanoids: the NASA Valkyrie, in a laboratory environment, and the Boston Dynamics Atlas, during the DARPA Robotics Challenge Finals.

In this paper, we propose an epoch-making soft sheet actuator called Wavy-sheet. Inspired by gastropod locomotion, Wavy-sheet can generate continuous traveling waves along its whole soft body. It is intended for a mobile soft mat capable of moving and transporting objects without damaging the object or the ground. The actuator, driven by pneumatics, is mainly composed of a pair of flexible rubber tubes and fabrics. Its advantages are: i) many traveling waves can be generated by just three tubes; ii) the whole structure can passively adapt its shape to the outer environment; and iii) it is only 10 mm thick yet can generate waves with amplitudes larger than 10 mm. In this paper, we first describe the basic concept of Wavy-sheet, and then show the configuration and the principle of wave propagation. Next, fabrication methods are illustrated and the design methods are addressed. Several experiments are conducted using a prototype actuator. Finally, we verify the effectiveness of the proposed actuator and its design methods.

Part handling in warehouse automation is challenging if a large variety of items must be accommodated and items are stored in unordered piles. To foster research in this domain, Amazon holds picking challenges. We present our system, which achieved second and third place in the Amazon Picking Challenge 2016 tasks. The challenge required participants to pick a list of items from a shelf or to stow items into the shelf. Using two deep-learning approaches for object detection and semantic segmentation, plus an item model registration method, our system localizes the requested item. Manipulation occurs using suction on points determined heuristically or from 6D item model registration. Parametrized motion primitives are chained to generate motions. We present a full-system evaluation during the APC 2016 and component-level evaluations of the perception system on an annotated dataset.

For collaborative robots to become useful, end users who are not robotics experts must be able to instruct them to perform a variety of tasks. With this goal in mind, we developed a system for end-user creation of robust task plans with a broad range of capabilities. CoSTAR, the Collaborative System for Task Automation and Recognition, is our winning entry in the 2016 KUKA Innovation Award competition at the Hannover Messe trade show, which this year focused on Flexible Manufacturing. CoSTAR is unique in how it creates natural abstractions that use perception to represent the world in a way users can both understand and utilize to author capable and robust task plans. Our Behavior Tree-based task editor integrates high-level information from known object segmentation and pose estimation with spatial reasoning and robot actions to create robust task plans. We describe the cross-platform design and implementation of this system on multiple industrial robots and evaluate its suitability for a wide variety of use cases.

In this paper, we present the mechatronic design of our Tactile Omnidirectional Robot Manipulator (TOMM), a dual-arm wheeled humanoid robot with 6 DoF on each arm, 4 omnidirectional wheels, and 2 switchable end-effectors (1 DoF grippers and 12 DoF hands). The main feature of TOMM is its arms and hands, which are covered with robot skin. We exploit the multi-modal tactile information of our robot skin to provide a rich tactile interaction system for robots. In particular, for the robot TOMM, we provide a general control framework capable of modifying the dynamic behavior of the entire robot, e.g., producing compliance in a non-compliant system. We present the hardware, software, and middleware components of the robot and provide a compendium of the base technologies deployed in it. Furthermore, we show some applications and results that we have obtained using this robot.

We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves the pose accuracy during contact, and provides robustness to occlusions of small objects by the robot’s end effector.

Soft compliant materials and novel actuation mechanisms ensure flexible motions and high adaptability for soft robots, but also increase the difficulty and complexity of constructing control systems. In this work, we provide an efficient control algorithm for a multi-segment extensible soft arm in the 2D plane. The algorithm separates the inverse kinematics into two levels. The first level employs gradient descent to select an optimized arm pose (from task space to configuration space) according to designed cost functions. Taking viscoelasticity into consideration, the second level utilizes neural networks to derive the pressures from each segment’s pose (from configuration space to actuation space). In experiments with a physical prototype, the control accuracy and effectiveness are validated, and the control algorithm is further improved by an optional feedback strategy.

This paper presents a systematic approach for the 3-D mapping of underwater caves. Exploration of underwater caves is very important for furthering our understanding of hydrogeology, managing water resources efficiently, and advancing our knowledge in marine archaeology. Underwater cave exploration by human divers, however, is a tedious, labor-intensive, extremely dangerous operation that requires highly skilled people. As such, it is an excellent fit for robotic technology, which has never before been applied to this task. In addition to the usual underwater vision constraints, cave mapping presents extra challenges in the form of a lack of natural illumination and harsh contrasts, resulting in failure for most state-of-the-art vision-based state estimation packages. A new approach employing a stereo camera and a video light is presented. Our approach utilizes the intersection of the cone of the video light with the cave boundaries: walls, floor, and ceiling, resulting in the construction of a wireframe outline of the cave. Successive frames are combined using a state-of-the-art visual odometry algorithm while simultaneously inferring scale through the stereo reconstruction. Results from experiments at a cave that is part of the Sistema Camilo, Quintana Roo, Mexico, validate our approach. The cave wall reconstruction presented provides an immersive experience in 3-D.

This paper investigates how a robot that can produce contingent listener responses, i.e., backchannels, can deeply engage children as a storyteller. We propose a backchannel opportunity prediction (BOP) model trained on a dataset of children’s dyadic storytelling and listening activities. Using this dataset, we gain a better understanding of which speaker cues children decode to find backchannel timing, and what types of nonverbal behaviors they produce to indicate engagement as a listener. Applying our BOP model, we conducted two studies, within- and between-subjects, using our social robot platform, Tega. Behavioral and self-reported analyses from the two studies consistently suggest that children are more engaged with a contingent backchanneling robot listener. Children perceived the contingent robot as more attentive and more interested in their story compared to a non-contingent robot. We find that children gaze significantly more at the contingent robot while storytelling and speak more, with higher energy, to a contingent robot.

In this paper, we show that visual servoing can be formulated as an acceleration-resolved, quadratic optimization problem. This allows us to handle visual constraints, such as field of view and occlusion avoidance, as inequalities. Furthermore, it allows us to easily integrate visual servoing tasks into existing whole-body control frameworks for humanoid robots, which simplifies prioritization and requires only a posture task as a regularization term. Finally, we show this method working on simulations with HRP-4 and real tests on Romeo.

For the past few years, unmanned aerial vehicles (UAVs) have been successfully employed in several investigation and exploration tasks such as aerial inspection and manipulation. However, most of these UAVs are limited to open spaces distant from any obstacles because of the high risk of falling as a result of an exposed propeller or insufficient protection. On the other hand, a UAV with a passive rotating spherical shell can fly through a complex environment but cannot engage in physical interaction or power tethering because of the passive rotation of the spherical shell. In this study, we propose a new mechanism that allows physical interaction and power tethering while the UAV remains well protected and retains good flight stability, enabling exploration in complex environments such as disaster sites. We address the problem by dividing the whole shell into two separate hemispherical shells that provide a gap unaffected by passive rotation. In this paper, we mainly discuss the concept, general applications, and design of the proposed system. The capabilities of the proposed system for physical interaction and power tethering in a complex space were initially verified through laboratory-based test flights of our experimental prototype.

We present a weakly-supervised approach to segmenting proposed drivable paths in images, with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera, without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather, and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

We present the design, modeling, and implementation of a novel pneumatic actuator, the Pneumatic Reel Actuator (PRA). The PRA is highly extensible, lightweight, capable of operating in compression and tension, compliant, and inexpensive. An initial prototype of the PRA can reach extension ratios greater than 16:1, has a force-to-weight ratio over 28:1, reaches speeds of 0.87 meters per second, and can be constructed from parts totaling less than $4 USD. We have developed a model describing the actuator and have conducted experiments characterizing the actuator’s performance with regard to force, extension, pressure, and speed. We have implemented two parallel robotic applications in the form of a three-degree-of-freedom robot arm and a tetrahedral robot.

Humans utilize their torsos and arms while running to compensate for the angular momentum generated by lower-body movement during the flight phase. To enable this capability in a humanoid robot, the robot should have human-like mass, center-of-mass position, and inertial moment for each link. To mimic these characteristics, we developed an angular momentum control method using a humanoid upper body based on human motion. In this method, the angular momentum generated by the movement of the humanoid lower body is calculated, and the torso and arm motions are computed to compensate for it. We additionally developed a humanoid upper-body mechanism that mimics human link lengths and mass properties by using carbon fiber reinforced plastic and a symmetric structure. As a result, the developed humanoid robot could generate almost the same angular momentum as a human through human-like running motion. Furthermore, when suspended in midair, the humanoid robot produced angular momentum compensation in the yaw direction.


Read the original:

Video Friday: Robot Dance Teacher, Transformer Drone, and Pneumatic Reel Actuator - IEEE Spectrum

Posted in Robotics | Comments Off on Video Friday: Robot Dance Teacher, Transformer Drone, and Pneumatic Reel Actuator – IEEE Spectrum

Super Soaker Inventor Takes Aim at Funding High School Robotics Teams – NBCNews.com

Posted: June 1, 2017 at 10:40 pm

He created one of the most popular toys on the planet, but the inventor of the "Super Soaker" isn't done making a splash.

Lonnie Johnson is now focusing on new battery technology, but his most rewarding pursuit may be sharing his knowledge with a new generation of engineers.

The mild-mannered Johnson grew up in Mobile, Alabama, at the height of the civil rights movement.

"There was a lot of fear, a lot of anxiety, a lot of stress," he remembered. "When I was a child the 'White-only' bathrooms were still very prevalent."

He turned that fear into motivation and a career as a NASA rocket scientist. But his "a-ha" moment came unexpectedly while he was designing a water pump. He was testing the pump in a bathroom when he noticed something.

"I thought to myself, 'Geez, this would make a neat water gun!'" he said. "At that point I decided to put my engineering hat on and design a high performance water gun."

That idea would change his life.

He built the first prototype for what became "The Super Soaker."

The toy, which first went on sale in the early 1990s, eventually topped $1 billion in sales. Johnson also went on to come up with the NERF gun and other toys.

"It's interesting that the Super Soaker gets so much attention," he said. "I really like to think of myself as a serious engineer!"

Now, he's getting serious about giving back. His nonprofit helps fund high school robotics teams. One of them, the DISCbots from the nearby DeKalb International Student Center, is made up of refugees from nine countries.

Kalombo Mukuca fled the Central African Republic a year ago. "Even babies -- they kill them," he said. "So we don't want to get killed."

Emanuel Tezera came to the United States from Ethiopia. "I want to fix something in this world," he said.

Incredibly, in just its second year, the DISCbots qualified for the worldwide robotics competition in Texas.

For Johnson, this idea may be his most rewarding.

"If I can have a positive impact," he said, "clearly it's something I want to do."

See the rest here:

Super Soaker Inventor Takes Aim at Funding High School Robotics Teams - NBCNews.com

Posted in Robotics | Comments Off on Super Soaker Inventor Takes Aim at Funding High School Robotics Teams – NBCNews.com

The No. 1 industry being threatened by robots – MarketWatch

Posted: at 10:40 pm

The robot revolution may not have replaced us yet, but automation is undoubtedly creeping its way into many careers.

Hundreds of jobs now require some level of robotics skills, a new analysis from career website Zippia found. Nearly 1,000 job titles now have robotics-related requirements, the study found, which it defined as “degree of autonomy, a set of intended tasks, and ability to function without human intervention.” Of the 985 such jobs it found in its database, 492 job titles were listed multiple times in fields that had robotics-related requirements.

The manufacturing industry has already been hit by automation, which has impacted employment in the U.S. auto industry. But what other industries are next?

The health field is particularly at risk, with the highest share (68 positions) on the list. The job where you’ll most likely have to work with a robot? Decal applicator, which entails affixing labels to a number of products, including bikes, cars, and bottles. The study found the category had 2,711 listings involving robotics, almost four times as many as any other job. This doesn’t mean robots have taken over those jobs, only that they are assisting; some have suggested the robot revolution could lead to higher wages in industries where automation takes over.

Here are the job titles that most require candidates to interact with robots and automation, according to Zippia: decal applicators, dermatologists, applied anthropologists, urologists, computer-aided drafters, tank drivers, indirect fire infantrymen and women, robotic welders, and robotics engineers.

Technology entrepreneurs like Mark Zuckerberg, chief executive of Facebook, and Bill Gates, co-founder of Microsoft, have warned about the threat of automation and artificial intelligence stealing tens of millions of jobs. A total of 25 million jobs will be eliminated by technology by 2027, a study from market research company Forrester Research found, and a separate study from economists at the Massachusetts Institute of Technology and Boston University argued that six workers will lose their positions for every robot added.

However, the automation revolution isn’t all bad for human workers: 15 million new jobs will be created in the U.S. over the next decade as a result of technology, and 25% of jobs will be transformed rather than replaced. These positions are largely in finance, medicine, and farming, the study found.

Original post:

The No. 1 industry being threatened by robots - MarketWatch

Posted in Robotics | Comments Off on The No. 1 industry being threatened by robots – MarketWatch

Announcing the agenda for TechCrunch Sessions: Robotics … – TechCrunch

Posted: at 10:40 pm

TechCrunch is holding its first-ever event focused solely on robotics on July 17 at MIT’s Kresge Auditorium in Cambridge, Mass., and today we’re really pleased to roll out the agenda for TC Sessions: Robotics.

The event’s purpose is to convene two very different worlds in robotics: the nascent startup and venture scene, and the deeply established research, government, and corporate worlds. We think we’ve got our arms around all that and more. Anyone who attends TC Sessions: Robotics will learn how tomorrow’s robotics companies and technologies are going to populate our workplaces, homes, and everything in between, while also learning where to make smart bets for employment, investment, and education.

The agenda is packed with the top scientists, executives, and companies in the robotics world. And it’s not done yet. Look for announcements in the coming days of workshops, a few remaining speakers, and our pitch-off contestants.

Please set aside July 17 and join TechCrunch, our speakers, and attendees for an amazing day of robotics. Get your tickets while they last. Interested in sponsorship? More information is available here. General questions? Reach out here.

9:00 AM - 9:05 AM Opening Remarks from Matthew Panzarino

9:05 AM - 9:25 AM What's Next at MIT's Computer Science and Artificial Intelligence Laboratory with Daniela Rus (MIT CSAIL)

9:25 AM - 9:50 AM Is Venture Ready for Robotics? with Manish Kothari (SRI), Josh Wolfe (Lux Capital) and Helen Zelman (Lemnos)

10:10 AM - 10:35 AM Collaborative Robots At Work with Clara Vu (VEO), Jerome Dubois (6 River Systems) and Holly Yanco (UMass Lowell)

10:35 AM - 10:55 AM Coffee Break

10:55 AM - 11:20 AM Building a Robotics Startup from Angel to Exit with Helen Greiner (CyPhy Works), Andy Wheeler (GV) and Elaine Chen (Martin Trust Center for MIT Entrepreneurship)

11:20 AM - 11:30 AM Soft Robotics (Carl Vause) Demo

11:30 AM - 11:55 AM Re-imagineering Disney Robotics with Martin Buehler (Disney Imagineering)

12:00 PM - 1:00 PM Lunch and Workshops TBA

1:00 PM - 1:20 PM Robots at Amazon with Tye Brady (Amazon Robotics)

1:20 PM - 1:55 PM When Robots Fly with Buddy Michini (Airware), Andreas Raptopoulos (Matternet) and Anil Nanduri (Intel)

1:55 PM - 2:15 PM Packbot, Roomba and Beyond with Colin Angle (iRobot)

2:15 PM - 2:35 PM Building Better Bionics with Samantha Payne (Open Bionics) and TBA

2:35 PM - 2:45 PM Demo TBA

2:45 PM - 3:05 PM The Future of Industrial Robotics with Sami Atiya (ABB)

3:05 PM - 3:25 PM Coffee Break

3:25 PM - 3:35 PM Demo TBA

3:35 PM - 4:15 PM Robotics Startup Pitch-off (Judges and contestants TBA)

4:15 PM - 4:35 PM The Age Of The Household Robot with Gill Pratt (Toyota Research Institute)

4:35 PM - 4:55 PM Building The Robot Brain with Heather Ames (Neurala), Brian Gerkey (OSRF) and TBA

4:55 PM - 5:20 PM Robots, AI and Humanity with David Barrett (Olin), David Edelman (MIT), Dr. Brian Pierce (DARPA) and TBA

5:20 PM - 5:25 PM Wrap Up

5:25 PM - 7:00 PM Reception

See more here:

Announcing the agenda for TechCrunch Sessions: Robotics ... - TechCrunch

Posted in Robotics | Comments Off on Announcing the agenda for TechCrunch Sessions: Robotics … – TechCrunch

Lego’s new programmable robotics kit is up for preorder – The Verge

Posted: at 10:40 pm

Lego just opened preorders for Lego Boost, the simple programmable robotics kit it announced early this year. The $159.99 kit includes two motors, a color and distance sensor, and the parts needed to build a Lego cat, robot, guitar, vehicle, or imitation 3D printer. Kids can control their creations with an Android or iOS app, using a basic drag-and-drop programming system. Preorders will begin shipping in late July.

Boost is one of several options for programmable Lego. Outside the well-known Lego Mindstorms, there’s also the educational WeDo platform, as well as newly announced support for Apple’s educational programming app, Swift Playgrounds. Boost’s text-free drag-and-drop programming is designed for younger children than Mindstorms is, and it’s meant to be more for play than education. We tried it at CES, and while it’s far more limited than something like Mindstorms (and costs a lot of money), it’s also pretty fun!

See the original post here:

Lego's new programmable robotics kit is up for preorder - The Verge

Posted in Robotics | Comments Off on Lego’s new programmable robotics kit is up for preorder – The Verge

Why Rat-Brained Robots Are So Good at Navigating Unfamiliar Terrain – IEEE Spectrum

Posted: at 10:40 pm

Photo: Dan Saelinger

If you take a common brown rat and drop it into a lab maze or a subway tunnel, it will immediately begin to explore its surroundings, sniffing around the edges, brushing its whiskers against surfaces, peering around corners and obstacles. After a while, it will return to where it started, and from then on, it will treat the explored terrain as familiar.

Roboticists have long dreamed of giving their creations similar navigation skills. To be useful in our environments, robots must be able to find their way around on their own. Some are already learning to do that in homes, offices, warehouses, hospitals, hotels, and, in the case of self-driving cars, entire cities. Despite the progress, though, these robotic platforms still struggle to operate reliably under even mildly challenging conditions. Self-driving vehicles, for example, may come equipped with sophisticated sensors and detailed maps of the road ahead, and yet human drivers still have to take control in heavy rain or snow, or at night.

The lowly brown rat, by contrast, is a nimble navigator that has no problem finding its way around, under, over, and through the toughest spaces. When a rat explores an unfamiliar territory, specialized neurons in its 2-gram brain fire, or spike, in response to landmarks or boundaries. Other neurons spike at regular distances (once every 20 centimeters, every meter, and so on), creating a kind of mental representation of space [PDF]. Yet other neurons act like an internal compass, recording the direction in which the animal’s head is turned [PDF]. Taken together, this neural activity allows the rat to remember where it’s been and how it got there. Whenever it follows the same path, the spikes strengthen, making the rat’s navigation more robust.

So why can’t a robot be more like a rat?

The answer is, it can. At the Queensland University of Technology (QUT), in Brisbane, Australia, Michael Milford and his collaborators have spent the last 14 years honing a robot navigation system modeled on the brains of rats. This biologically inspired approach, they hope, could help robots navigate dynamic environments without requiring advanced, costly sensors and computationally intensive algorithms.

An earlier version of their system allowed an indoor package-delivery bot to operate autonomously for two weeks in a lab. During that period, it made more than 1,100 mock deliveries, traveled a total of 40 kilometers, and recharged itself 23 times. Another version successfully mapped an entire suburb of Brisbane, using only the imagery captured by the camera on a MacBook. Now Milford’s group is translating its rat-brain algorithms into a rugged navigation system for the heavy-equipment maker Caterpillar, which plans to deploy it on a fleet of underground mining vehicles.

Milford, who’s 35 and looks about 10 years younger, began investigating brain-based navigation in 2003, when he was a Ph.D. student at the University of Queensland working with roboticist Gordon Wyeth, who’s now dean of science and engineering at QUT.

At the time, one of the big pushes in robotics was the “kidnapped robot” problem: If you take a robot and move it somewhere else, can it figure out where it is? One way to solve the problem is SLAM, which stands for simultaneous localization and mapping. While running a SLAM algorithm, a robot can explore strange terrain, building a map of its surroundings while at the same time positioning, or localizing, itself within that map.

Wyeth had long been interested in brain-inspired computing, starting with work on neural networks in the late 1980s. And so he and Milford decided to work on a version of SLAM that took its cues from the rat’s neural circuitry. They called it RatSLAM.

There already were numerous flavors of SLAM, and today they number in the dozens, each with its own advantages and drawbacks. What they all have in common is that they rely on two separate streams of data. One relates to what the environment looks like, and robots gather this kind of data using sensors as varied as sonars, cameras, and laser scanners. The second stream concerns the robot itself, or more specifically, its speed and orientation; robots derive that data from sensors like rotary encoders on their wheels or an inertial measurement unit (IMU) on their bodies. A SLAM algorithm looks at the environmental data and tries to identify notable landmarks, adding these to its map. As the robot moves, it monitors its speed and direction and looks for those landmarks; if the robot recognizes a landmark, it uses the landmark’s position to refine its own location on the map.
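
The two streams can be made concrete with a deliberately tiny sketch (plain Python, not any particular SLAM library): a predict step that integrates self-motion, and a correct step that nudges the pose whenever a mapped landmark is re-observed. Real systems do this probabilistically; the fixed gain here is a crude stand-in.

```python
import math

class ToySLAM:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0  # estimated pose
        self.landmarks = {}                          # id -> (x, y) on the map

    def predict(self, speed, turn_rate, dt):
        # Stream 1: the robot's own motion (wheel encoders, IMU).
        # Dead reckoning: error accrues here until a landmark corrects it.
        self.theta += turn_rate * dt
        self.x += speed * math.cos(self.theta) * dt
        self.y += speed * math.sin(self.theta) * dt

    def correct(self, landmark_id, rel_x, rel_y, gain=0.5):
        # Stream 2: environmental sensing; the landmark is seen at
        # (rel_x, rel_y) in the robot's frame, converted to world coordinates.
        obs_x = self.x + rel_x * math.cos(self.theta) - rel_y * math.sin(self.theta)
        obs_y = self.y + rel_x * math.sin(self.theta) + rel_y * math.cos(self.theta)
        if landmark_id in self.landmarks:
            map_x, map_y = self.landmarks[landmark_id]
            # A recognized landmark pulls the pose back into agreement with the map.
            self.x += gain * (map_x - obs_x)
            self.y += gain * (map_y - obs_y)
        else:
            self.landmarks[landmark_id] = (obs_x, obs_y)  # extend the map
```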

But whereas most implementations of SLAM aim for highly detailed, static maps, Milford and Wyeth were more interested in how to navigate through an environment that’s in constant flux. Their aim wasn’t to create maps built with costly lidars and high-powered computers; they wanted their system to make sense of space the way animals do.

“Rats don’t build maps,” Wyeth says. “They have other ways of remembering where they are.” Those ways include neurons called place cells and head-direction cells, which respectively let the rat identify landmarks and gauge its direction. Like other neurons, these cells are densely interconnected and work by adjusting their spiking patterns in response to different stimuli. To mimic this structure and behavior in software, Milford adopted a type of artificial neural network called an attractor network. These neural nets consist of hundreds to thousands of interconnected nodes that, like groups of neurons, respond to an input by producing a specific spiking pattern, known as an attractor state. Computational neuroscientists use attractor networks to study neurons associated with memory and motor behavior. Milford and Wyeth wanted to use them to power RatSLAM.
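
A minimal one-dimensional “ring” attractor conveys the idea: nodes excite near neighbors and inhibit distant ones, so activity settles into a single stable bump whose position encodes a variable such as head direction. The weights and constants below are illustrative assumptions, not RatSLAM's actual parameters.

```python
import numpy as np

N = 100                                         # nodes around the ring
angles = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# "Mexican hat" connectivity: local excitation, broad inhibition.
d = np.abs(angles[:, None] - angles[None, :])
d = np.minimum(d, 2 * np.pi - d)                # wrap-around distance
W = np.exp(-d ** 2 / 0.3) - 0.05

def step(a, external=0.0):
    a = np.maximum(0.0, W @ a / N + external)   # recurrent drive + rectification
    return a / (a.sum() + 1e-9)                 # normalization stabilizes the bump

a = np.random.rand(N)                           # start from noise
for _ in range(100):
    a = step(a)                                 # settles into one attractor state
print("bump encodes heading:", angles[np.argmax(a)])
```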

They spent months working on the software, and then they loaded it into a Pioneer robot, a mobile platform popular among roboticists. Their rat-brained bot was alive.

But it was a failure. When they let it run in a 2-by-2-meter arena, Milford says, it “got lost even in that simple environment.”

Milford and Wyeth realized that RatSLAM didn’t have enough information with which to reduce errors as it made its decisions. Like other SLAM algorithms, it doesn’t try to make exact, definite calculations about where things are on the map it’s generating; instead, it relies on approximations and probabilities as a way of incorporating uncertainties (conflicting sensor readings, for example) that inevitably crop up. If you don’t take that into account, your robot ends up lost.

That seemed to be the problem with RatSLAM. In some cases, the robot would recognize a landmark and be able to refine its position, but other times the data was too ambiguous. Before long, the accrued error was bigger than 2 meters; the robot thought it was outside the arena!

In other words, their rat-brain model was too crude. It needed better neural circuitry to be able to abstract more information about the world.

“So we engineered a new type of neuron, which we called a pose cell,” Milford says. The pose cell didn’t just tell the robot its location or its orientation; it did both at the same time. Now, when the robot identified a landmark it had seen before, it could more precisely encode its place on the map and keep errors in check.
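
In software terms, a pose cell is a unit in a three-dimensional activity grid indexed jointly by (x, y, heading). The toy sketch below (grid sizes and update rules are illustrative assumptions) shows the two operations described here: self-motion shifting the activity packet, and a recognized landmark re-anchoring it to keep error in check.

```python
import numpy as np

NX, NY, NTH = 20, 20, 36       # discretized x, y, and heading (10-degree bins)
pose = np.zeros((NX, NY, NTH))
pose[10, 10, 0] = 1.0          # initial belief: mid-arena, facing 0 degrees

def path_integrate(p, dx, dy, dth):
    # Self-motion shifts the whole activity packet through the (x, y, theta)
    # grid; np.roll stands in for the weighted shift a real network would use.
    return np.roll(p, shift=(dx, dy, dth), axis=(0, 1, 2))

def on_landmark(p, x, y, th, strength=0.5):
    # A recognized landmark injects activity at the pose where it was first
    # seen, pulling the estimate back toward the map.
    p[x, y, th] += strength
    return p / p.sum()

pose = path_integrate(pose, dx=1, dy=0, dth=2)  # robot moved and turned
pose = on_landmark(pose, 10, 10, 0)             # then re-observed a landmark
x, y, th = np.unravel_index(np.argmax(pose), pose.shape)  # best pose estimate
```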

Again, Milford placed the robot inside the 2-by-2-meter arena. “Suddenly, our robot could navigate quite well,” he recalls.

Interestingly, not long after the researchers devised these artificial cells, neuroscientists in Norway announced the discovery of grid cells, which are neurons whose spiking activity forms regular geometric patterns and tells the animal its relative position within a certain area. [For more on the neuroscience of rats, see “AI Designers Find Inspiration in Rat Brains.”]

“Our pose cells weren’t exactly grid cells, but they had similar features,” Milford says. “That was rather gratifying.”

The robot tests moved to bigger arenas with greater complexity. “We did a whole floor, then multiple floors in the building,” Wyeth recalls. “Then I told Michael, ‘Let’s do a whole suburb.’ I thought he would kill me.”

Milford loaded the RatSLAM software into a MacBook and taped it to the roof of his red 1994 Mazda Astina. To get a stream of data about the environment, he used the laptop’s camera, setting it to snap a photo of the street ahead of the car several times per second. To get a stream of data about the robot itself (in this case, his car), he found a creative solution. Instead of attaching encoders to the wheels or using an IMU or GPS, he used simple image-processing techniques. By tracking and comparing pixels on sequences of photos from the MacBook, his SLAM algorithm could calculate the vehicle’s speed as well as direction changes.
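
That pixel-comparison trick can be sketched in a few lines: collapse each frame to a horizontal intensity profile, find the shift that best aligns consecutive profiles (a proxy for rotation), and use the overall profile change as a proxy for speed. This follows the spirit of RatSLAM's published visual odometry, but the constants and names here are assumptions.

```python
import numpy as np

DEG_PER_PIXEL = 0.1  # assumed camera calibration constant

def visual_odometry(prev, curr, max_shift=20):
    """prev, curr: consecutive grayscale frames as 2-D arrays."""
    p = prev.mean(axis=0)                       # 1-D intensity profile per frame
    c = curr.mean(axis=0)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):  # try candidate horizontal shifts
        a = p[max(0, s): len(p) + min(0, s)]
        b = c[max(0, -s): len(c) + min(0, -s)]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    yaw_change = best_shift * DEG_PER_PIXEL     # horizontal slip ~ rotation
    speed_proxy = float(np.mean(np.abs(c - p))) # frame-to-frame change ~ speed
    return yaw_change, speed_proxy
```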

Milford drove for about 2 hours through the streets of the Brisbane suburb of St. Lucia [PDF], covering 66 kilometers. The result wasn’t a precise, to-scale map, but it accurately represented the topology of the roads and could pinpoint exactly where the car was at any given moment. RatSLAM worked.

“It immediately drew attention and was widely discussed because it was very different from what other roboticists were doing,” says David Wettergreen, a roboticist at Carnegie Mellon University, in Pittsburgh, who specializes in autonomous robots for planetary exploration. Indeed, it’s still considered one of the most notable examples of brain-inspired robotics.

But though RatSLAM created a stir, it didn’t set off a wave of research based on those same principles. And when Milford and Wyeth approached companies about commercializing their system, they found many keen to hear their pitch but ultimately no takers. “A colleague told me we should have called it NeuroSLAM,” Wyeth says. “People have bad associations with rats.”

That’s why Milford is excited about the two-year project with Caterpillar, which began in March. “I’ve always wanted to create systems that had real-world uses,” he says. “It took a lot longer than I expected for that to happen.”

“We looked at their results and decided this is something we could get up and running quickly,” Dave Smith, an engineer at Caterpillar’s Australia Research Center, in Brisbane, tells me. “The fact that it’s rat inspired is just a cool thing.”

Underground mines are among the harshest man-made places on earth. They’re cold, dark, and dusty, and due to the possibility of a sudden collapse or explosion, they’re also extremely dangerous. For companies operating in such an extreme environment, improving their ability to track machines and people underground is critical.

In a surface mine, you’d simply use high-precision differential GPS, but that obviously doesn’t work below ground. Existing indoor navigation systems, such as laser mapping and RF networks, are expensive and often require infrastructure that’s difficult to deploy and maintain in the severe conditions of a mine. For instance, when Caterpillar engineers considered 3D lidars, like the ones used on self-driving cars, they concluded that “none of them can survive underground,” Smith says.

One big reason that mine operators need to track their vehicles is to plan how they excavate. Each day starts with a dig plan that specifies the amount of ore that will be mined in various tunnels. At the end of the day, the operator compares the dig plan to what was actually mined, to come up with the next day’s dig plan. “If you’re feeding in inaccurate information, your plan is not going to be very good. You may start mining dirt instead of ore, or the whole tunnel could cave in,” Smith explains. “It’s really important to know what you’ve done.”

The traditional method is for the miner to jot down his movements throughout the day, but that means he has to stop what he’s doing to fill out paperwork, and he’s often guessing what actually occurred. The QUT navigation system will more accurately measure where and how far each vehicle travels, as well as provide a reading of where the vehicle is at any given time. The first vehicle will drive into the mine and map the environment using the rat-brain-inspired navigation algorithm, while also gathering images of each tunnel with a low-cost 720p camera. The only unusual feature of the camera is its extreme ruggedization, which Smith says goes well beyond military specifications.

Subsequent vehicles will use those results to localize themselves within the mine, comparing footage from their own cameras with previously gathered images. The vehicles won’t be autonomous, Milford notes, but that capability could eventually be achieved by combining the camera data with data from IMUs and other sensors. This would add more precision to the trucks’ positioning, allowing them to drive themselves.

The QUT team has started collecting data within actual mines, which will be merged with another large data set from Caterpillar containing about a thousand hours of underground camera imagery. They will then devise a preliminary algorithm, to be tested in an abandoned mine somewhere in Queensland, with the help of Mining3, an Australian mining R&D company; the Queensland government is also a partner on the project. The system could be useful for deep open-pit mines, where GPS tends not to work reliably. If all goes well, Caterpillar plans to commercialize the system quickly. “We need these solutions,” Smith says.

For now, Milford’s team relies on standard computing hardware to run its algorithms, although they keep tabs on the latest research in neuromorphic computing. “It’s still a bit early for us to dive in,” Milford says. Eventually, though, he expects his brain-inspired systems will map well to neuromorphic chip architectures like IBM’s TrueNorth and the University of Manchester’s SpiNNaker. [For more on these chips, see “Neuromorphic Chips Are Destined for Deep Learning or Obscurity,” in this issue.]

Will brain-inspired navigation ever go mainstream? Many developers of self-driving cars, for instance, invest heavily in creating detailed maps of the roads where their vehicles will drive. The vehicles then use their cameras, lidars, GPS, and other sensors to locate themselves on the maps, rather than having to build their own.

Still, autonomous vehicles need to prove they can drive in conditions like heavy rain, snow, fog, and darkness. They also need to better handle uncertainty in the data; images with glare, for instance, might have contributed to a fatal accident involving a self-driving Tesla last year. Some companies are already testing machine-learning-based navigation systems, which rely on artificial neural networks, but it’s possible that more brain-inspired approaches like RatSLAM could complement those systems, improving performance in difficult or unexpected scenarios.

Carnegie Mellon’s Wettergreen offers a more tantalizing possibility: giving cars the ability to navigate to specific locations without having to explicitly plan a trajectory on a city map. Future robots, he notes, will have everything modeled down to the millimeter. “But I don’t,” he says, “and yet I can still find my way around.” The human brain uses different types of models and maps: some are metric, some are more topological, and some are semantic.

A human, he continues, can start with an idea like "Somewhere on the south side of the city, there's a good Mexican restaurant." Arriving in that general area, the person can then look for clues as to where the restaurant may be. Even the most capable self-driving car wouldn't know what to do with that kind of task, but a more brain-inspired system just might.

Some roboticists, however, are skeptical that such unconventional approaches to SLAM are going to pay off. As sensors like lidar, IMUs, and GPS get better and cheaper, traditional SLAM algorithms will be able to produce increasingly accurate results by combining data from multiple sources. "People tend to ignore the fact that SLAM is really a sensor fusion problem and that we are getting better and better at doing SLAM with lower-cost sensors," says Melonee Wise, CEO of Fetch Robotics, a company based in San Jose, Calif., that sells mobile robots for transporting goods in highly dynamic environments. "I think this disregard causes people to fixate on trying to solve SLAM with one sensor, like a camera, but in today's low-cost sensor world that's not really necessary."
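
Wise's point about sensor fusion can be made concrete with the simplest fusion scheme there is, a one-dimensional complementary filter. The sketch below is illustrative, with made-up names and gain: a smooth but drifting relative signal (IMU or odometry) is blended with a noisy but absolute one (GPS or a vision match), so each sensor covers the other's weakness.

```swift
// A toy 1-D complementary filter: illustrative only, all values made up.
// `gain` near 1.0 trusts the smooth-but-drifting odometry; the remainder
// lets the noisy-but-absolute fix pull accumulated drift back out.

struct ComplementaryFilter {
    var estimate: Double  // fused position, metres
    let gain: Double      // e.g. 0.98

    /// `delta`: relative motion since the last update (odometry / IMU).
    /// `absolute`: an independent absolute fix (GPS, beacon, vision match).
    mutating func update(delta: Double, absolute: Double) {
        estimate = gain * (estimate + delta) + (1.0 - gain) * absolute
    }
}

var filter = ComplementaryFilter(estimate: 0.0, gain: 0.98)
filter.update(delta: 0.52, absolute: 0.40)  // odometry: +0.52 m; GPS fix: 0.40 m
print(filter.estimate)                      // ≈ 0.518: mostly odometry, gently corrected
```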

Even if RatSLAM doesn't become practical for most applications, developing such brainlike algorithms offers us a window into our own intelligence, says Peter Stratton, a computer scientist at the Queensland Brain Institute who collaborates with Milford. He notes that standard computing's von Neumann architecture, where the processor is separated from memory and data is shuttled between them, is very inefficient.

"The brain doesn't work anything like that. Memory and processing are both happening in the neuron. It's computing with memories," Stratton says. A better understanding of brain activity, not only as it relates to responses to stimuli but also in terms of its deeper internal processes (memory retrieval, problem solving, daydreaming), is what's been missing from past AI attempts, he says.

Milford notes that a lot of types of intelligence aren't easy to study using only animals. But when you observe how rats and robots perform the same tasks, like navigating a new environment, you can test your theories about how the brain works. You can replay scenarios repeatedly. You can tinker and manipulate your models and algorithms. And unlike with an animal or an insect brain, he says, "we can see everything in a robot's brain."

This article appears in the June 2017 print issue as "Navigate Like a Rat."

See the article here:

Why Rat-Brained Robots Are So Good at Navigating Unfamiliar Terrain - IEEE Spectrum


‘Swarmathon’ Robotics Team Competes at Cape Canaveral – UC Merced University News

Posted: at 10:40 pm

"The future of robotics is not in using one super-powerful robot for all tasks, but rather to use a coalition of simpler robots collaboratively," Carpin explained.

Today's engineers, drawing inspiration from nature, design cooperative robots to accomplish tasks impossible for individual robots.

"A lot of the ideas implemented in robots are inspired by biological algorithms," Meraz explained. "How do ants forage for food? And how do you translate those foraging algorithms to robots?"

But why ants? Ant societies exhibit efficient group problem solving. A kind of higher intelligence, absent in solitary ants, emerges from the swarm. Some biologists even refer to ant societies as "superorganisms."

"Essentially, you can look at the swarm as a single organism with distributed sensors that are all communicating with each other," Meraz explained.

This is what NASA wants from Swarmies: cooperative rovers that mimic the efficient foraging behavior of ant swarms.

With this in mind, UC Merced's team (Meraz, Jose Manuel Gonzalez Hermosillo, Jesus Sergio Gonzalez Castellon, Navvaran Mann, James Nho, Jesus Salcedo and Carlos Diaz) developed code to turn their trio of Swarmies into a tiny colony of robotic ants.
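
The article doesn't publish the team's code, but the kind of foraging loop Swarmathon rovers run can be sketched as a tiny state machine. Everything below is a hypothetical illustration in Swift: a correlated random walk while searching, a return-home state while carrying, and "site fidelity", the ant habit of going back to where food was last found.

```swift
// A toy ant-foraging state machine, purely illustrative -- not the UC Merced
// team's code. Real Swarmies add obstacle avoidance, vision-based cube
// detection, and coordination so rovers don't all search the same ground.

enum ForagerState {
    case searching   // random walk, scanning for a resource cube
    case returning   // carrying a cube back to the home base
}

struct Forager {
    var state: ForagerState = .searching
    var lastFindSite: (x: Double, y: Double)?  // "site fidelity" memory

    mutating func step(sawCube: Bool, atHome: Bool,
                       here: (x: Double, y: Double)) -> String {
        switch state {
        case .searching:
            if sawCube {
                lastFindSite = here        // remember where food turned up
                state = .returning
                return "pick up cube, head home"
            }
            // Correlated random walk: mostly straight with small turns,
            // which covers ground faster than purely random turning.
            let turn = Double.random(in: -0.3...0.3)
            return "turn \(turn) rad, drive forward"
        case .returning:
            if atHome {
                state = .searching
                return lastFindSite != nil
                    ? "drop cube, revisit last find site"  // ants do this too
                    : "drop cube, resume random search"
            }
            return "drive toward home base"
        }
    }
}
```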

But this was no easy task. Nor was it their only obligation. NASA also requires Swarmathon teams to engage in outreach.

The team worked with students from Atwater's Buhach Colony High School to build SumoBots. As the name suggests, SumoBots are robots designed to push each other around, the winner being the SumoBot that forces its opponent out of the ring.

But for many involved, the highlight of the year was the Swarmathon itself. Six team members traveled to NASA's Kennedy Space Center to compete. This was the first year UC Merced participated in the 2-year-old event.

Pitted against 18 other teams, UC Merced came in 11th overall.

"We spent a lot of time on this, and it was really hard," Meraz said. "What we thought we'd progress to and what we did progress to were very different."

However, the team did not come away empty-handed. They won second prize for their technical report describing the methods, experiments and results of their efforts. They also won third prize for their outreach report, which documented their work at Buhach Colony.

Plus, Swarmathon helped two team members secure coveted summer internships. Meraz was invited to spend the summer in the lab of Melanie Moses, the UNM robotics professor who oversees Swarmathon, while Diaz and Gonzalez will stay on campus to work in Carpin's lab on a project funded by USDA/NIFA.

And Meraz and company have no plans to quit now.

"We are definitely competing again next year," Meraz said. "I've already recruited a few of the top students that were in my Intro to Robotics course with Stefano Carpin, so we will go in with much more experience."

Original post:

'Swarmathon' Robotics Team Competes at Cape Canaveral - UC Merced University News


Apple brings dancing robots and backflipping drones into Swift Playgrounds – TechCrunch

Posted: at 10:40 pm

Code rules everything around me. And you.

Really: be it the stoplight you stared at this morning, or the train you rode in on, or the lil robot vacuum keeping your floor ever-so-slightly cleaner while you're away, code is everywhere.

But even for people who've put the time in to learn to program, jumping from software to hardware can be a challenge, even if said challenge is just figuring out where to start. How do you program something real? How do you build something that moves?

Apple is taking steps to tackle that problem by bringing third-party hardware (think robots, drones, and musical instruments) into its learn-to-code platform, Swift Playgrounds.

Unfamiliar with Swift Playgrounds? That's okay; it's only about a year old. Swift Playgrounds is an iPad app that Apple built to teach people (not just kids, Apple notes whenever talking about it) to code. On one half of the screen, users write code: actual, live Swift code (albeit code that's generally executing on top of a more complicated engine behind the curtain) to complete challenges. On the other half, said code runs at the tap of a button. One lesson might have you move a character around a board one step at a time to teach you how functions work; another might have you tweak the mechanics of a brick breaker game to teach you about variables.
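
For flavor, here is roughly what such a lesson looks like. Commands like moveForward() and collectGem() do appear in Apple's Learn to Code content, but the stub implementations below are our own, so the sketch runs outside Playgrounds; the teaching point is wrapping a repeated pattern in a function.

```swift
// Playgrounds-style lesson, sketched so it runs anywhere: the command names
// echo Apple's Learn to Code, but these stub implementations are ours.

var position = 0
var gems = 0

func moveForward() { position += 1 }
func collectGem()  { gems += 1 }

// The lesson's point: notice the repeated pattern, then name it.
func moveThenCollect() {
    moveForward()
    moveForward()
    collectGem()
}

for _ in 1...3 {
    moveThenCollect()
}
print("position: \(position), gems: \(gems)")  // position: 6, gems: 3
```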

A little over a million people have used Swift Playgrounds since launch, Apple tells me. With today's news, Apple is working with a handful of companies to bring hardware into the mix: folks like LEGO, Sphero, Parrot, UBTECH, Wonder Workshop, and Skoog.

Some of these teams had already started tapping into Swift Playgrounds on their own, Apple having opened Playgrounds content creation to third-party developers from the beginning; the aforementioned Wonder Workshop, for example, has been offering Swift Playgrounds lessons for a few months now. Apple embracing the integrations really just formalizes things, makes it simpler to tie third-party Bluetooth devices into Swift Playgrounds, and adds a bunch of support from Apple's end.
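
Under the hood, tying a Bluetooth toy into an iPad app rests on the standard CoreBluetooth pattern: scan for a vendor's service, connect, then write commands to a characteristic. The sketch below shows that generic pattern only; the service UUID is invented, and each vendor's real integration (plus whatever Playgrounds-specific plumbing Apple has added) wraps these steps for the learner.

```swift
// Generic CoreBluetooth scan-and-connect, the layer a robot integration sits
// on. The service UUID is made up; real toys advertise their own.

import CoreBluetooth

final class RobotScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var robot: CBPeripheral?  // must be retained to stay connected
    private let robotService = CBUUID(string: "FFE0")  // hypothetical UUID

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Scanning may only begin once the radio reports it is powered on.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [robotService], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("found \(peripheral.name ?? "unnamed robot"), RSSI \(RSSI)")
        robot = peripheral
        central.stopScan()
        central.connect(peripheral, options: nil)
        // Next: discover services and characteristics, then write movement
        // commands -- the part each vendor's Playgrounds book hides from kids.
    }
}
```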

This is a smart move on Apple's part, and one that's pretty characteristic for the company. Apple has used education as a foot-in-the-door (with varying degrees of success) for decades, from sending Apple I's to schools in the '70s (to prove their prowess over the big ol' mainframes of the time) to donating tens of thousands of iPads to schools just last year.

But this move potentially helps them introduce themselves to a new level of student, the budding hardware engineer, early on.

Take the LEGO Mindstorms integration, for example. Mindstorms is already used in robotics clubs around the world.

LEGO already has its own development tools for Mindstorms, including one for the iPad. But now they get a solid, ultra newbie-friendly coding platform in Swift Playgrounds. One with its own teaching platform built right in, and one where the most complicated bits of the system (the underlying engine) are largely maintained by Apple.

Apple, meanwhile, gets to pop up in those aforementioned robotics clubs (the stomping grounds of many a lifelong engineer) and say, "Hey kids! Learning your first programming language to make that robot dance? Check out our programming language, Swift! Oh, and do it on an iPad!"

It all just makes sense.

These new third-party tie-ins will start working with the release of Swift Playgrounds 1.5, which Apple tells me should hit the App Store on June 5th.

See original here:

Apple brings dancing robots and backflipping drones into Swift Playgrounds - TechCrunch


Europe regulates robotics: Summer school brings together researchers and experts in robotics – Robohub

Posted: at 10:40 pm

After a successful 2016 first edition, our next summer school cohort on "The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications" will take place in Pisa at the Scuola Sant'Anna, from 3-8 July.

When the Robolaw project came to an end and we presented our results before the European Parliament we clearly perceived that a leap was needed not only in some approaches to regulation but also in the way social scientists, as well as engineers, are trained.

Indeed, in order to undertake technical analysis in law and robotics without being lured into science fiction, an adequate understanding of the peculiarities of the systems being studied is required. A bottom-up approach, like the one adopted by Robolaw and its guidelines, is essential.

Social scientists, and lawyers in particular, often lack such knowledge and thus tend either to make unreasonable assumptions about technological developments (ones that are far-fetched or simply unrealistic) or to misperceive what the pivotal point of the analysis is going to be. The notion of autonomy is a fitting example. The consequence, however, is not simply bad scientific literature, but potentially inadequate policies being developed, and hence wrong decisions (even legislative ones) being adopted, impacting research and development of new applications while overlooking relevant issues and impairments.

Similarly, engineers working in robotics are often confronted with philosophical and legal debates involving the devices they research, debates they are not always equipped to understand. Those debates are precious, though, for they make it possible to identify societal concerns and expectations that can be used to orient research strategically, and engineers ought also to participate and have a say.

Ultimately, it is in everybody's interest to better address existing and emerging needs, fulfilling desires and avoiding eliciting often ungrounded fears. This is what the European Union understands as Responsible Research and Innovation, but it is also the prerequisite for the diffusion of new applications in society and the emergence of a sound robotics industry. Moreover, the current tendency in EU regulation favouring "by design" approaches, whereby privacy or other rights need to be enforced through the very functioning of the device, requires technicians to consider such concerns early on, during the development phase of their products.

A common language thus needs to be created, to avoid a Babel-tower effect: one that preserves each discipline's peculiarities and specificities while allowing close cooperation.

A multidisciplinary approach is required, grounded in philosophy (ethics in particular), law and law-and-economics methodologies, economics and innovation management, and engineering.

With that idea in mind, we competed for and won a Jean Monnet grant (a prestigious funding action of the EU Commission, mainly directed towards the promotion of education and teaching activities) with a project titled Europe Regulates Robotics, and organized the first edition of the Summer School "The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications" in 2016.

The opening event of the Summer School saw the participation of MEP (Member of the EU Parliament) Mady Delvaux-Stehres, who presented what was then the draft recommendation (now approved) of the EU Parliament on the Civil Law Rules of Robotics; Mihalis Kritikos, a Policy Analyst of the Parliament who personally contributed to the drafting of that document; and Maria Chiara Carrozza, former Italian minister of University Education and Research, professor of robotics and member of the Italian Senate, who discussed Italian political initiatives. We also had entrepreneurs, such as Roberto Valleggi, and engineers from both industry, such as Arturo Baroncelli from Comau, and academia, such as Fabio Bonsignorio, who also taught the course.

Classes dealt with methodologies (the Robolaw approach); notions of autonomy; liability and different liability models; privacy; benchmarking and robot design; machine ethics and human enhancement through technology; and innovation management and technology assessment. Students also had the chance to visit the Biorobotics Institute laboratories in Pontedera (directed by Prof. Paolo Dario) and see many different applications and research being carried out, explained directly by the people who work on them.

The most impressive part was, however, our class. We managed to put together a truly international group of young and bright minds, many of whom were already enrolled in PhD programs in law, philosophy, engineering and management, coming from universities such as Edinburgh, London School of Economics, Sorbonne, Cambridge, Vienna, Bologna, Suor Orsola, Bicocca, Milan, Hannover, Pisa, Pavia and Freiburg. Others came from prominent European robotics companies, or were practitioners, entrepreneurs and policy makers from EU institutions.

At the end of the Summer School, some presented their research on a broad set of extremely interesting topics, such as driverless car liability and innovation management, machine ethics and the trolley problem, anthrobotics and decision-making in natural and artificial systems.

We had vivid in-class debates. A true community was created that is still in contact today. Five of our students actively participated in the 2017 European Robotics Forum in Edinburgh, and more are working and publishing on such matters.

We can say we truly achieved our goal! However, the challenge has just begun. We want to reach out to more people and replicate this successful initiative. A second edition of the Summer School will take place again this year in Pisa at the Scuola Sant'Anna from July 3rd to 8th, and we are accepting applications until June 7th.

I am certain we will manage again to put together an incredible group of individuals, and I can't wait to meet our new class. On our side, we are preparing a lot of interesting surprises for them, including the participation of policy makers involved in the regulation of robotics at EU level to provide a first-hand look at what is happening in the EU.

More information about the summer school can be found on our website here.

Registration to the summer school can be found here.

More:

Europe regulates robotics: Summer school brings together researchers and experts in robotics - Robohub

