The Prometheus League
Category Archives: Robotics
Welcome to robot university (only robots need apply) – MIT Technology Review
Posted: November 12, 2019 at 6:45 am
One of the unsung heroes of the AI revolution is a little-known database called ImageNet. Created by researchers at Princeton University, ImageNet contains some 14 million images, each of them annotated by crowdsourced text that explains what the image shows.
ImageNet is important because it is the database that many of today's powerful neural networks cut their teeth on. Neural networks learn by looking at the images and the accompanying text, and the bigger the database, the better they learn. Without ImageNet and other visual data sets like it, even the most powerful neural networks would be unable to recognize anything.
Now roboticists say they want to try a similar approach with video to teach their charges how to interact with the environment. Sudeep Dasari at the University of California, Berkeley, and colleagues are creating a database called RoboNet, consisting of annotated video data of robots in action. For example, the data might include numerous instances of a robot moving a cup across a table. The idea is that anybody can download this data and use it to train a robot's neural network to move a cup too, even if it has never interacted with a cup before.
Dasari and co hope to build their database into a resource that can pre-train almost any robot to do almost any task: a kind of robot university, which the team calls RoboNet.
Until now, roboticists have had limited success in teaching their charges how to navigate and interact with the environment. Their approach is the standard machine-learning technique that ImageNet helped popularize.
They start by recording the way a robot interacts with, say, a brush to move it across a surface. Then they take many more videos of its motion and use the data to train a neural network on how best to perform the action.
The trick, of course, is to have lots of data, in other words, countless hours of video to learn from. And once a robot has mastered brush-moving, it must go through the same learning procedure to move almost anything else, be it a spoon or a pair of spectacles. If the environment changes, these learning systems generally have to start all over again.
The common practice of re-collecting data from scratch for every new environment essentially means re-learning basic knowledge about the world, an unnecessary effort, say Dasari and co.
RoboNet gets around this. "We propose RoboNet, an open database for sharing robotic experience," they say. So any robot can learn from the experience of another.
To kick-start the database, the team has already recorded some 15 million video frames of tasks using seven different types of robot with different grippers in a variety of environments.
Dasari and co go on to show how to use this database to pre-train robots for tasks they have never before attempted. And they say robots trained with this approach perform better than those that have been conventionally trained on even more data.
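As a rough sketch of this pre-train-then-fine-tune idea, the example below trains a small network on a large pooled dataset and then fine-tunes it on a handful of samples from a new robot. The feature size, the tiny network, and the random placeholder tensors are assumptions for illustration only; they are not the models or data pipeline used by Dasari and co.

```python
# A hedged sketch of the pre-train-then-fine-tune idea described above.
# Feature size, action size, and the tiny network are illustrative assumptions,
# not the architecture or data pipeline used by the RoboNet authors.
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    """Maps an encoded camera frame to a motion command for the gripper."""
    def __init__(self, feat_dim=128, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim))

    def forward(self, frame_features):
        return self.net(frame_features)

def train(model, frames, actions, epochs=10, lr=1e-3):
    """Simple supervised regression from frame features to recorded actions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(frames), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Pre-train on the large pooled multi-robot dataset (random placeholders here),
# then fine-tune on a small batch collected from the new robot.
pooled_frames, pooled_actions = torch.randn(10_000, 128), torch.randn(10_000, 4)
own_frames, own_actions = torch.randn(200, 128), torch.randn(200, 4)

model = train(ActionPredictor(), pooled_frames, pooled_actions)       # pre-training
model = train(model, own_frames, own_actions, epochs=5, lr=1e-4)      # fine-tuning
```

The point of the sketch is only the two-stage training: most of the experience comes from the shared database, and only a little comes from the robot that will actually perform the task.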
The RoboNet data is available for anyone to use. And of course, Dasari and co hope other research teams will start contributing their own data to make RoboNet a vast resource of robo-learning.
That's impressive work that has significant potential. "This work takes the first step towards creating robotic agents that can operate in a wide range of environments and across different hardware," say the team.
Of course, there are significant challenges ahead. For example, researchers must work out how best to use the data; the jury is still out on the most effective training regimes. "We hope that RoboNet will inspire the broader robotics and reinforcement learning communities to investigate how to scale reinforcement learning algorithms to meet the complexity of the real world," they say.
The result is both impressive and thought-provoking: a kind of robot university that can give any robot the skills it needs to learn.
ImageNet has been a key factor in making machine vision as good as humans at recognizing objects. If RoboNet is only half as successful, it will be an impressive gain.
Ref: arxiv.org/abs/1910.11215: RoboNet: Large-Scale Multi-Robot Learning
High school robotics team works to protect ocean life – MLive.com
Posted: October 27, 2019 at 3:12 pm
STOCKBRIDGE, MI - Bob Richards' robotics class at Stockbridge Senior High School operates like a business, says senior Sylvia Whitt.
Whitt acts as a team leader or CEO, while the other students "serve specific roles on an assembly line," she said.
The assembly line pumps out cutting-edge and award-winning technology. This year, Richards and his students earned a $10,000 grant from Lemelson-MIT InvenTeam to build a deep-sea surveillance robot called the Emperor Micro-Lander.
Stockbridge is one of just 14 schools to receive the honor. They previously won the grant back in 2016 to monitor chemical pollution for the Great Lakes Fishery Commission and U.S. Fish and Wildlife Service. Richards' team also finished eighth in the world at the 2017 International Student Remotely Operated Vehicles Competition.
"This is a group that has demonstrated success over time with Mr. Richards," said Anthony Perry, the project liaison for the Lemelson-MIT Program. "We knew we could trust that they would produce technology with a practical purpose."
The Emperor Micro-Lander is designed to submerge to 300 feet underwater to take video, images and sensory readings of aquatic life. Most importantly, Richards said, it will be compact enough to travel in small kayaks near coral reefs, which would ground larger survey vessels. The class plans to utilize the Micro-Lander off the coast of American Samoa.
"The problem with the big landers, you need a crane...on the back of your research vessel to lower it into the ocean and recover it," Richards said. "Just the vessel time itself would be about $40,000 a day... So what we're proposing is to use that same type of technology to build a smaller lander."
In Richards' 1 p.m. class, each student plays a role with specific crafts. Sophomores Brooklyn Rochow, Hythem Beydoun and Julianna Rook work on an electrical fuse to activate the lander's recovery, while fellow students laser-cut the HDPE plastic for a rover that will help the lander navigate the ocean floor.
Photo (Mary Lewandowski | MLive.com): Brooklyn Rochow works on building the robot Emperor Micro-Lander during Bob Richards' robotics class at Stockbridge High School on Thursday, Oct. 24, 2019. The students received a $10,000 grant to build the Emperor Micro-Lander, which will be able to monitor aquatic life in the National Marine Sanctuary of American Samoa.
Whitt said Richards fosters the assembly line environment and trusts younger students to pick up complex concepts. He had her working on batteries before she even reached high school.
"When they asked me to be a part of that high school class, I was in eighth grade," Whitt said. "I was like, 100 percent, sign me up. I started working on really complicated stuff, but after working on it for a while, it becomes second nature."
Richards previously served 20 years as an ordnance specialist in the Army before Stockbridge hired him to teach in 2002. That military experience has provided him with connections to secure the proper equipment and material to build the aquatic technology.
In addition, they have saved equipment from previous trips to National Marine Sanctuaries off the coasts of American Samoa and Palau. Alro Plastics in Jackson has provided high-density polyethylene plastic for the devices' buoyancy, and Richards' class has earned separate grants such as the Marshall Plan for Talent grant from the University of Michigan.
The material and organization of the class makes it feel like a factory, Whitt said, supplying aquatic monitoring technology needed in the South Pacific.
Off the coast of an island such as American Samoa, extensive coral reefs provide a food web that connects all living things, the EPA said in 2017. Warmer waters bleach the coral, making it inedible for the local fish and various ecosystems, the EPA stated.
The Emperor Micro-Lander can collect data on water temperature, as well as other factors such as salinity or pollution. For Whitt, addressing problems like these is where the project starts.
"Hey, we have this problem, or we noticed a problem ourselves," she said about the brainstorming process. "We think we can do something for that. From there, we keep testing to see if we can get to the point where (our project) can be used."
Richards describes every incoming class as a group on a mission. His Stockbridge students look to be on task, protecting waters that sit 7,000 miles away near American Samoa.
Local robotics team promotes STEM with pumpkin drop – WTAP-TV
Posted: at 3:12 pm
PARKERSBURG, W. Va. - Dark Side Robotics, Wood County's first robotics competition team, hosted a pumpkin drop with an educational twist focusing on STEM.
Students from several schools around Wood County gathered at WVUP for the staple fall event, although this is the first year the robotics team held its own.
The goal of the competition is to design an enclosure to protect a pumpkin from damage when dropped from several hundred feet in the air. The students were hoisted into the air with the assistance of a Vienna fire truck ladder to see if their pumpkin both survives the fall and lands closest to a target.
Though the event was fun-filled, organizers say it's a way for the kids to experience STEM education, which is simply problem solving.
"The important end result of participating in STEM is the idea of giving them a valid problem to solve and then respecting their ability to research and think critically and come up with a solution and execute", said Amy Stewart, advisor of Dark Side Robotics.
The Dark Side Robotics competition team is made up of middle and high school students from around the area. It is one of only four operating teams in the state of West Virginia.
The best of the BEST shine bright at WSU robotics competition – KSN-TV
Posted: at 3:12 pm
WICHITA, Kan. (KSNW) A robotics competition put on by Wichita State University at Koch Arena for high school students drew hundreds of little inventors with bright, new ideas for disaster relief.
The best of the Kansas BEST, Boosting Engineering Science and Technology, attracted more than 500 student competitors from different schools.
The contestants were given a raw set of materials to see what they could build to help in a natural disaster where power lines are down. The goal: clean up debris and get power back up and running.
"We figured we did pretty good, in the top 20 at least. It is hard to be up in the top five or top ten but we might be able to make it," said Layne Hiebert, a Kansas BEST competitor. "It really helps us understand what people actually do and how we could help them in the future."
There was no shortage of creativity, excitement, or entertainment during the event. Teams stayed charged from the support of cheering sections and motivated by band performances.
Minimal navigation solution for a swarm of tiny flying robots to explore an unknown environment – Science
Posted: at 3:11 pm
Abstract
Swarms of tiny flying robots hold great potential for exploring unknown, indoor environments. Their small size allows them to move in narrow spaces, and their light weight makes them safe for operating around humans. Until now, this task has been out of reach due to the lack of adequate navigation strategies. The absence of external infrastructure implies that any positioning attempts must be performed by the robots themselves. State-of-the-art solutions, such as simultaneous localization and mapping, are still too resource demanding. This article presents the swarm gradient bug algorithm (SGBA), a minimal navigation solution that allows a swarm of tiny flying robots to autonomously explore an unknown environment and subsequently come back to the departure point. SGBA maximizes coverage by having robots travel in different directions away from the departure point. The robots navigate the environment and deal with static obstacles on the fly by means of visual odometry and wall-following behaviors. Moreover, they communicate with each other to avoid collisions and maximize search efficiency. To come back to the departure point, the robots perform a gradient search toward a home beacon. We studied the collective aspects of SGBA, demonstrating that it allows a group of 33-g commercial off-the-shelf quadrotors to successfully explore a real-world environment. The application potential is illustrated by a proof-of-concept search-and-rescue mission in which the robots captured images to find victims in an office environment. The developed algorithms generalize to other robot types and lay the basis for tackling other similarly complex missions with robot swarms in the future.
Swarms of tiny autonomous flying robots hold great promise. Tiny flying robots can move in narrow spaces, can be so cheap that they may become disposable, and are safe in the presence of humans (1, 2). Moreover, whereas the individual robots may be inherently limited in their abilities both in terms of cognition and in terms of actions, together they may solve very complex problems. This kind of problem-solving ability is abundant in nature. Two well-known examples are the shortest path finding by swarms of ants (3) and collective selection of profitable food resources by honeybees through waggle dances (4).
The core principle of swarm robotics is that the individual robots obey relatively simple control rules, merely based on their local sensory inputs and local communication with their neighbors. This principle fits well with the limited resources of tiny robots. Moreover, not relying on any central processing promises a high robustness of the system. A single failing robot will not endanger task execution because its role will be fulfilled by one of the many other robots. In addition, together, small robots will potentially be able to perform tasks, such as surveillance, construction, or exploration, quicker and more robustly. In the past few decades, a large body of research investigating swarm robotics has formed. For instance, in the Swarm-Bots project, controllers have been evolved for small driving robots to complete tasks such as gap crossing, which required them to attach themselves to each other to form a bridge, and movement of objects bigger than each individual (5). Moreover, swarms of robots have been demonstrated in applications ranging from constructing small preplanned structures (6) to forming shapes with their own bodies (7, 8) and to performing tasks such as dispersion, aggregation, and surveillance (9).
Concerning flying robot swarms, the major challenge lies in achieving autonomous robot navigation and coordination between the robots in real-world environments. There have been impressive shows with many simultaneously flying robots, such as Intel's Shooting Star drones (10), which were used in the 2017 Super Bowl halftime and the 2018 Winter Olympics. However, those robots purely followed preprogrammed Global Positioning System (GPS)-based trajectories, so they did not make local decisions based on their surroundings. In contrast, in (11, 12) and, more recently, (13), swarms of flying robots performed coordinated swarming behaviors together in outdoor environments. In the latter study, the main behavioral parameters were optimized with an evolutionary process such that robots stayed together, even in the presence of no-fly zones. The studies (11-13) all still crucially relied on GPS. The flying robots communicated their GPS locations to each other to determine the relative locations to other robots that serve as input to the local controllers. In all above studies, the swarms essentially flew in open environments or, in the case of (13), had access to a global map of no-fly zones.
Navigation of a swarm of tiny flying robots in a cluttered, GPS-denied environment has been an unsolved problem. The major challenge derives from the highly restricted nature of these tiny robots. The mainstream solution to navigation consists of simultaneous localization and mapping (SLAM) based on camera images (14) or laser range finders (15). However, typical, metric SLAM methods make detailed three-dimensional (3D) maps, which is very demanding in terms of computational and memory resources. State-of-the-art SLAM methods, like large-scale direct monocular SLAM (LSD-SLAM) (16), often need to be computed by an external ground station computer (17). Multirobot SLAM, in which a group of robots jointly creates and maintains a shared map of the environment (18), places an additional load on the communication bandwidth. One can also opt for the slightly more efficient visual inertial odometry (19). However, this is subject to drift. To illustrate the challenge of navigating with tiny flying robots, we show (Fig. 1) the processing power of the robot used in our experiments (Bitcraze's Crazyflie 2.0) beside two recent, state-of-the-art embedded processing units used in SLAM approaches. Flying robots like the Crazyflie use about 7 W to fly. To not substantially affect their flight time, the processing should therefore use only a fraction of this power. The Crazyflie carries an STM32F4 microprocessor, with a clock speed of 168 MHz and 192 kB of random-access memory (RAM). Typical state-of-the-art robots used for autonomous flight [e.g., (20, 21)] use processors like the NVIDIA TX2, which has a six-core central processing unit, each with a clock speed of 2 GHz, a 256-core NVIDIA graphics processing unit, and 8 GB of RAM. Hence, we need to solve the navigation problem with orders of magnitude less memory and processor speed. This calls for a completely different navigation strategy.
(A) Crazyflie 2.0 with the flow and multi-ranger expansion decks and (B) the autopilot (STM32F4) compared with the specifications of the NVIDIA TX2, the Odroid-C2, and a laptop (Dell Latitude E7450). Note that the Dell specifications do not fit within the chart (as indicated with the top triangles).
One potential approach to efficient navigation is to draw inspiration from biology, for instance, by looking at honeybee navigation strategies. Honeybees navigate by combining path integration with landmark recognition (22). Path integration is well understood and can be implemented with very limited systems, as in the recently presented AntBot (23). Whereas walking insects can count their steps for path integration, flying insects rely more heavily on the integration of optical flow (24). Path integration alone does not suffice for navigation because it drifts over time. This drift can be cancelled by means of landmark detection and visual homing, but it is not obvious how landmark recognition is performed by biological systems. The dominant model is the snapshot model (25), in which pictures are stored of the surroundings and later compared with the visual inputs. Unfortunately, current implementations of landmark recognition still require substantial processing and memory [e.g., (26)], making it unsuitable for navigation by tiny robots. Moreover, they mostly thrive on texture-rich environments, which are commonly found in nature but not in repetitive man-made environments.
Biological systems provide interesting suggestions for arriving at the minimal requirements for navigation. The maps created by metric SLAM can be used for navigating from any point to any other point in the map. The navigation strategies followed by insects suggest that it may be possible to save on computation and memory by requiring less accurate maps. Biological navigation strategies show a parallel with topological SLAM (27), in which a robot only stores important landmarks and their relations in terms of distance and direction. This no longer allows a robot to travel anywhere in the explored space with high accuracy, but this may not be necessary for successful behavior. In some cases, it may only be important to explore and come back to the nest, i.e., to only perform accurate homing. A navigation strategy that only demands homing and does not rely on computationally complex visual navigation has strong potential for downscaling to tiny robots.
The main contribution of this article is a minimal autonomous navigation solution for a swarm of tiny flying robots to explore an unknown, unstructured environment and subsequently to come back to the departing point. Exploration here means to move through as large a part of an unknown environment as possible, with a goal to gather application-dependent information. The proposed navigation solution was implemented in a swarm of tiny flying robots and shown to work in a large real-world indoor environment that has no external infrastructure for exact positioning. Moreover, in the same environment, we illustrate how the solution enabled a specific proof-of-concept search-and-rescue exploration mission, in which the swarm gathered images to find victims in the environment.
Particularly, we introduce the swarm gradient bug algorithm (SGBA). As the name suggests, the method is inspired by bug algorithms (28-30), which originated as simple maze-solving algorithms. The core concept is that navigating from A to B is performed not by planning in a global map with known obstacles but by reacting to obstacles as they come within range of the sensors. This way of dealing with obstacles results in a highly computationally efficient navigation. However, existing bug algorithms in the literature remain rather theoretical and are not suitable for application to navigation in real, GPS-denied environments because they typically rely on either a known global position or perfect odometry. For example, in (31, 32), real-world robots used their wheel odometry for navigation within an indoor environment; nevertheless, the testing environments were too small to experience the full extent of the possible odometry drift. A flying robot typically relies on visual odometry and, due to the vibrations and texture dependence, is even more prone to odometry inaccuracies than a driving robot. When realistic levels of odometry drift were introduced, the navigation performance of bug algorithms from the literature dropped steeply (33).
The bug algorithm proposed in this article, SGBA, departs substantially from existing bug algorithms because it has been designed explicitly for allowing a swarm of tiny robots to explore a real-world, GPS-denied environment. Figure 2B shows the main concept: A swarm of robots departs their base station for outbound travel. Each robot has a different preferential direction toward which it will try to go. When robots encounter obstacles, they follow the obstacles' contours. This process is called wall following in the bug algorithm literature. When a robot's preferred direction is free of obstacles again, it will continue to follow its preferred direction. As soon as the robot's battery is around 60%, it starts inbound travel (Fig. 2C). To come back to the original location, the robots use a mix of (coarse) odometry and, on longer time scales, an observable gradient to the base station. In our experiments, we used the received signal strength intensity (RSSI) to a radio beacon located at the base station. Because bug algorithms do not make maps, there is a danger that they get stuck in loops without realizing it. For instance, this can happen in rooms where there is only one way out, which may be missed when the robot leaves the wall in its preferred direction, just to navigate to the opposite wall again. Hence, during both outbound and inbound travel, the odometry was also exploited to detect short-term loops that could result in robots getting stuck in a particular part of the environment. The robots also needed to avoid each other and to communicate their desired direction to each other. In the experiments, we used wireless onboard inter-robot communication to both these ends. Specifically, for the intra-swarm collision avoidance, the inter-robot RSSI was used instead of communicating a global position (which is not known by the robots). Moreover, when robots noticed the presence of other robots in the direction of their preferred heading, they adapted their preference, thus enhancing the exploration for the outbound flight.
(A) A simplified state machine of SGBA derived from the one presented in Materials and Methods. (B) Outbound travel of SGBA. The purple shading illustrates the local signal strength around each drone used for intra-swarm avoidance. (C) Inbound travel. The pink shading represents the signal strength of the wireless beacon at the ground station to which the drones navigate. The interswarm avoidance is still active on the inbound flight but is not depicted. The fuchsia arrow at each drone's position illustrates the robot's estimated direction to the beacon.
A simplified version of the finite state machine (FSM) of SGBA can be found in Fig. 2A. The SGBA method and the FSM are presented in more detail in Materials and Methods. The innovation of SGBA lies in its suitability for the real-world properties of tiny, exceptionally computationally restricted robots and in the combination of the various subcomponents. Many of the subcomponents themselves have already been proposed in the literature. For instance, traveling toward a wireless beacon with a bug algorithm was also proposed in (34, 35). However, the proposed methods in those studies were too sensitive to the real-world noise of RSSI measurements. Because of the difficulties of real-world interference, refraction, and scattering of the signal, the experiments eventually involved an infrared beacon instead, which was visible from all locations in the environment. This setup would not be useful in a real scenario. The work in (36) used the gradient of real-world, noisy RSSI values to guide exploration on a real robot (without a bug algorithm type of behavior). However, the platform still required a full SLAM method on board because the precise positions of the RSSI samples were needed to estimate the location of the Wi-Fi source. In contrast to these methods, we have implemented a home beacon search tactic that dealt with real-world, noisy, 2.4-GHz, Wi-Fi RSSI values and did not rely on exact positioning. Another asset is SGBA's use of multiple robots. The idea of using multiple robots for bug algorithms was first forwarded and studied in simulation in (37). However, they used it to explore the local obstacle boundary and not for efficiently exploring the environment. The swarming mechanisms of SGBA involve (i) imprinting of different initial preferential directions, (ii) collision avoidance, and (iii) adaptation of preferential directions when robots notice that their preferred direction overlaps too much with that of another robot. More elaborate swarming mechanisms are possible, but the results show that these straightforward mechanisms, which do not require accurate relative positions between the robots, already significantly increase exploration efficiency.
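To make the behavioral transitions concrete, here is a minimal sketch of the high-level SGBA decision logic as described above. The state names, the function interface, and the exact battery trigger are paraphrased from the text; the real implementation is the C firmware referenced in (40), not this Python.

```python
# Minimal sketch of SGBA's high-level decision logic (not the Crazyflie firmware).
# Phase = outbound exploration vs. inbound homing; Mode = heading-following vs.
# wall-following. Interfaces and thresholds are illustrative assumptions.
from enum import Enum, auto

class Phase(Enum):
    OUTBOUND = auto()   # explore along the preferred heading
    INBOUND = auto()    # head back toward the home beacon

class Mode(Enum):
    FOLLOW_HEADING = auto()   # fly toward the current goal heading
    WALL_FOLLOWING = auto()   # follow an obstacle's contour until the heading is free

def sgba_step(phase, mode, battery_fraction, obstacle_ahead, goal_heading_free,
              preferred_heading, beacon_heading_estimate):
    """One high-level decision step; returns (phase, mode, goal_heading)."""
    if battery_fraction <= 0.60:
        phase = Phase.INBOUND                      # start inbound travel at ~60% battery
    # Outbound: steer toward the imprinted preferred heading.
    # Inbound: steer toward the coarse RSSI-gradient estimate of the beacon direction.
    goal = preferred_heading if phase is Phase.OUTBOUND else beacon_heading_estimate
    if obstacle_ahead:
        mode = Mode.WALL_FOLLOWING                 # react to obstacles as they appear
    elif mode is Mode.WALL_FOLLOWING and goal_heading_free:
        mode = Mode.FOLLOW_HEADING                 # resume flying toward the goal heading
    return phase, mode, goal
```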
From the explanation above, it can be deduced that SGBA requires five main functionalities: (i) following a given direction; (ii) wall following; (iii) odometry; (iv) inter-robot detection, communication, and avoidance; and (v) a gradient-based search toward the departure point. Although we mainly focused on flying robots in this article, the five functionalities can be implemented with various types of hardware and software on different types of robots. We used flying robots for the real-world experiments and driving robots in the simulation experiments detailed below. The difference in implementation of SGBA includes, for instance, that functionality (iii) is performed with optical flow-based odometry on the flying robot and wheel-based odometry on the driving robot. Hence, the SGBA algorithm can be applied to different types of mobile robots with limited resources, as long as they are endowed with the above required functionalities.
We performed both simulation and real-world experiments to gauge the performance of SGBA as an autonomous navigation solution for exploration missions. Navigation here means spreading out in the environment, covering the environment as much as possible, and coming back to the departure point. The two main performance metrics are (i) the area coverage and (ii) the return rate of the robots. With these metrics, we mainly assessed the navigational characteristics of SGBA because these are important to exploration missions in general. We varied the number of robots to investigate the advantages of the swarming aspect of SGBA. After the main simulation and real-world experiments focusing on the navigation performance, we also investigated a search-and-rescue scenario in which the robots had to find possible victims in the environment. For this scenario, the robots carried onboard cameras and secure digital (SD) cards for storing images of the environment because transmitting live streams was not feasible. When returning to the base station, the robots could upload the images to the base station, and a human end user could look at the images to find the victims. Hence, only returning robots provided useful information on the task. Because this will be the case in many real-world exploration scenarios where SGBA is a suitable method, we considered as a third performance metric, (iii) the area covered by returning robots, termed coverage returned. In the specific search-and-rescue experiment, we evaluated whether the victims were present in the images.
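For concreteness, the three metrics could be computed from logged data roughly as follows; representing covered area as sets of visited grid cells is an assumption made for this example, not necessarily the computation used by the authors (theirs is described in their supplementary text).

```python
# Illustrative computation of the three metrics from logged data. Using sets of
# visited grid cells as a stand-in for "covered area" is an assumption here.
def exploration_metrics(visited_cells_per_robot, returned_flags, total_free_cells):
    """visited_cells_per_robot: one set of (x, y) grid cells per robot.
    returned_flags: True for each robot that made it back to the base station."""
    coverage_total = len(set().union(*visited_cells_per_robot)) / total_free_cells
    returned = [cells for cells, ok in zip(visited_cells_per_robot, returned_flags) if ok]
    coverage_returned = len(set().union(*returned)) / total_free_cells
    return_rate = sum(returned_flags) / len(returned_flags)
    return coverage_total, coverage_returned, return_rate

# Example: three robots, two of which return.
cells = [{(0, 0), (0, 1)}, {(0, 1), (1, 1)}, {(2, 2)}]
print(exploration_metrics(cells, [True, True, False], total_free_cells=9))
# -> roughly (0.44, 0.33, 0.67)
```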
We first implemented the SGBA in simulation. The goal of the simulation experiments was to investigate the performance of the algorithm in many artificially generated environments. Moreover, in simulation, we could perform extensive experiments for gathering sufficient statistics on trends such as the relation between the performance and the number of robots. As a simulator, we have chosen ARGoS (Fig. 3A) because it has especially been developed for multirobot systems (38). A Robot Operating System (ROS) environment was used to connect the SGBA controller, the automatic environment generator, and the simulator together, in a similar manner as in (33), by using ARGoS for ROS (39) [all code repositories for the simulation experiments can be found in (40)]. In simulation, the robots were adapted ARGoS foot-bots, which were originally modeled on the MarXbot (41). These ground-based nonholonomic robots are different from the airborne holonomic robots used in the real-world experiments (see text S5 for an overview of how we implemented SGBAs five functionalities on the simulated foot-bot and on the flying robots used in the real-world experiments). The use of ground-based robots in the simulation experiments illustrates that SGBA can be applied to different types of robots.
The results of the simulation environments with (A) a representation of the ARGoS simulator and the modified simulated foot-bot. Two example environments and trajectories are shown for (B) six robots and (C) for four robots. (D and E) The results of 2, 4, 6, 8, or 10 robots in 100 procedurally generated environments for each configuration, in the coverage (not including nonaccessible areas), and the return rate. Three types of coverage are shown in (D): coverage total (area covered by all robots), coverage returned (area covered only by the robots that have returned), and coverage per robot (area that a single robot has covered). The exact computation of the covered area can be found in text S4.1. Last, in (E), the return rate is shown, i.e., the portion of robots that successfully returns to the base station after exploration. Both bar graphs of (D) and (E) show the mean and the SD, of which the specific values can be found in text S4.1.
The simulated robots started around the home beacon in the middle of the environment. With SGBA, each of them sequentially started moving into their preferred direction, which in this case were the angles 45°, 135°, -135°, and -45° [the robot's identification number (ID) modulo 4 determines the preferred direction]. In simulation, the outbound travel lasted for 5 min. After spreading out into the environment, the simulated robots tried to head back to the home beacon within another 5 min (10 min of total simulation time), for which they used the noisy and locally perturbed RSSI of the base station beacon. Figure 3 (B and C) shows two examples of the simulated experiments with four and six robots. We experimented with 2, 4, 6, 8, and 10 robots per simulated environment. Per test configuration, 100 environments were produced with the procedural environment generator, as developed in (33). The coverage statistics can be found in Fig. 3D and the return rates in Fig. 3E. Despite their extremely restricted onboard resources, 10 small robots were able to explore, on average, 90% of a simulated 20 m by 20 m environment in 10 min [dark blue bar in Fig. 3D, example trajectory in Fig. 3 (B and C)].
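A small sketch of how the preferred outbound heading could be derived from the robot ID and, as described earlier, adapted when a nearby higher-priority robot already claims the same direction. The heading list follows the angles quoted above (reconstructed with degree signs); the 45-degree overlap threshold and the lower-ID-wins priority rule are assumptions for the example, not the actual implementation.

```python
# Hypothetical sketch of heading imprinting (ID modulo 4) and heading adaptation.
# The 45-degree overlap threshold and lower-ID-wins priority are assumptions.
PREFERRED_DEG = [45, 135, -135, -45]

def heading_difference(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def preferred_heading(robot_id):
    """Imprint the initial outbound heading from the robot's ID."""
    return PREFERRED_DEG[robot_id % 4]

def adapt_heading(my_id, my_heading, neighbors):
    """neighbors: list of (peer_id, peer_heading_deg) heard over the radio nearby."""
    for peer_id, peer_heading in neighbors:
        if peer_id < my_id and heading_difference(peer_heading, my_heading) < 45:
            # A higher-priority robot claims this direction: pick a free alternative.
            for candidate in PREFERRED_DEG:
                if all(heading_difference(candidate, h) >= 45 for _, h in neighbors):
                    return candidate
    return my_heading
```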
The utility of the collective aspect of SGBA is shown by the dark blue bars in Fig. 3D; adding more robots leads to a higher coverage in the same amount of time. The trend of the bars indicates that the coverage is subject to a law of diminishing returns; adding two more robots has more effect when going from two to four robots than when going from 8 to 10 robots. The results suggest that this effect is mainly due to covering the same areas in the environment and not due to robots interfering with each other. Namely, the coverage per robot (Fig. 3D, yellow bars) and the return rate (Fig. 3E, light blue bars) did not decrease for the studied number of robots. The return rate is also of interest for the envisaged proof-of-concept search-and-rescue mission, in which the robots would store images onboard and only robots that return to the base station provide information on the task. The return rate was lower than 100%, mainly because SGBA can lead to suboptimal paths back to the base station (too slow for the total mission time). The coverage by the group of returned robots is shown in turquoise in Fig. 3D. Last, the variances of all performance characteristics became smaller for higher number of drones, showing that adding more agents increased the certainty of the outcome. This seems mainly due to the reduced effect of variations in individual performance on total coverage. For instance, with two robots, an early failing robot could halve the coverage, whereas with 10 robots, the effect would be much less noticeable, underlining swarm robustness. The logging data can be found in (42), and statistical tests of the simulation results can be found in text S4.1, which show that the trend of the total coverage and coverage returned are significantly related to the total number of robots. This is not the case for the coverage per robot and the return rate.
Last, we have studied the contributions of the different swarming mechanisms in SGBA: (i) sending them off in different directions, (ii) performing an avoidance maneuver when close to another robot, and (iii) changing preferential direction. All three mechanisms contributed to reducing collisions and increasing coverage. For example, for the full SGBA with six robots, there were, on average, 0.2 collisions per exploration trial. When sending off all robots in the same direction, this rose to 1.7 collisions. When only switching off the avoidance maneuvers, there were, on average, 1.2 collisions per trial. Text S6 contains these test results.
Subsequently, we performed real-world experiments. The goal of these experiments was to show that SGBA works in the real world and to investigate whether the results align with the findings from simulation. Particularly, we implemented SGBA on the small commercial off-the-shelf (COTS) Crazyflie 2.0 drone developed by Bitcraze AB (43). The hardware package of the drones consisted of the following modules. The multi-ranger deck (44) was used for obstacle detection and wall following. It has four tiny laser rangers that point to the front, left, right, and back. The flow deck (45) was used for coarse visual odometry. It consists of a downward looking camera that determines translational optical flow and a downward pointing laser ranger that scales the flow to obtain height and translational velocity. Bitcraze's own communication hardware, Crazyradio, with the 2.4-GHz Wi-Fi band (46) was used for ranging to other drones and to the wireless beacon. It also served as a communication channel between the drones for exchanging desired headings. The firmware running on the Crazyflies can be found in (40). These three light-weight and low-power hardware modules were sufficient for our navigation solution. We performed the experiments in an empty hallway of the Faculty of Aerospace Engineering at Delft University of Technology because it allowed us to perform extensive testing (Fig. 4A; see text S1 for a more detailed description). We conducted real-world tests with two, four, and six Crazyflies at the same time. For each number of drones, five different flights were performed.
The results of the real-world experiments with (A) a representation of the environment used and the Crazyflie 2.0s with the necessary expansion decks. Several example trajectories are shown for (B) four robots and (C) for six robots from their onboard odometry (adjusted by means of the external cameras). The results of two, four, or six robots for five flights in each configuration are shown in (D) for the coverage (not including the nonaccessible areas in gray) and in (E) for the return rate (cyan bars), with a pie chart additionally indicating the percentages of real-world-related issues, which prevented a successful return to the base station. Both bar graphs of (D) and (E) show the mean and the SD, of which the specific values can be found in text S4.2.
As in the simulation experiments, the robots started in the middle of the environment. From here, they flew with SGBA in their own preferred outbound flight direction for about one third of their battery life, which was about 2 min. Afterward, they needed to return again using the RSSI of the home beacon. Along the entire path, they avoided each other by using the RSSI of the inter-robot connection, which was handled by the communication scheme presented in Materials and Methods. Because we did not have access to ground-truth global coordinates, we determined the coverage performance in terms of the number of visited rooms, excluding the hallway.
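The inbound gradient search can be pictured with a small sketch: headings along which the beacon's RSSI tended to increase are weighted more heavily when estimating a coarse direction home. The sample values and the weighting scheme are made up for illustration; this shows the idea behind the behavior, not the onboard implementation.

```python
# Hypothetical sketch of estimating a coarse heading toward the home beacon from
# noisy RSSI samples gathered along the path; not the onboard implementation.
import math

def beacon_heading_estimate(samples):
    """samples: list of (heading_deg, rssi_dbm) measured along the recent path."""
    x = y = 0.0
    for (h0, r0), (_, r1) in zip(samples, samples[1:]):
        gain = r1 - r0                       # positive if the signal got stronger
        x += gain * math.cos(math.radians(h0))
        y += gain * math.sin(math.radians(h0))
    return math.degrees(math.atan2(y, x))    # direction in which RSSI tends to increase

# Example: the signal improves while heading ~90 deg and worsens while heading ~270 deg,
# so the estimate points roughly toward 90 deg.
print(beacon_heading_estimate([(90, -70), (90, -66), (270, -66), (270, -71)]))
```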
SGBA also allowed tiny robots in the real world to explore the environment, with six tiny drones, on average, flying into 83% of the open rooms in the 40 m by 12 m environment within 7.5 min (Fig. 4D, dark blue bar). Figure 4 (B and C) shows two example trajectories with four and six Crazyflies, respectively. The trajectories show that when a drone entered a room, it typically flew along its complete boundaries. Hence, upon entry, we considered the room covered. Note that the rather accurate trajectories in Fig. 4 (B and C) have only been reconstructed for visualization purposes and did not play any role in the navigation. The trajectories were plotted on the basis of the coarse onboard odometry, adjusted with the video footage of the external cameras on the scene using post-processing (text S3 explains the procedure and shows the difference between the original and adjusted odometry). The trajectories show how, generally, the drones explored different parts of the environment, thanks to the different preferential directions. In Fig. 4C, an example can be seen where drone 5 lost connection with the beacon in the far top-left room of the environment, so the external camera had to provide the additional trajectory information. Losing the connection was no problem for autonomous navigation because the FSM ran on board the Crazyflie. The visual odometry and wall-following behaviors allowed the drone to escape the room and to reconnect with the home beacon. Video compilations of the flight from Fig. 4C can be found in movie S1.
Figure 4D shows that, as in simulation, a clear trend can be seen that the total coverage increases with the number of drones. However, the increase of the coverage by returned drones seems less steep than in simulation. The reason for this is that both the coverage per robot and the return rate (Fig. 4E) slightly decreased when adding more Crazyflies. This is due to many issues, such as hardware malfunctions, sensing failures, and (even for six drones) a collision between two drones (see the pie chart of Fig. 4E). Having a limited battery capacity is a real-world problem as well but was taken into account in the simulation in the form of a time limit for the results in Fig. 3. Concerning the collision avoidance, in all 15 real-world experiments, there were 54 encounters between drones and only one collision. This corresponds to a 98% success rate of the implemented avoidance maneuver. The logged data can be found in (42), and statistical tests of the real-world results can be found together with a table with the numbers of encounters and collisions in text S4.2. In addition, the real-world results (Fig. 4D) show that the trend of the total coverage is significantly related to the total number of drones.
Last, to illustrate the potential application of SGBA, we applied it to a search-and-rescue exploration mission. The light-weight Crazyflies carried a mission-relevant payload: a forward-looking camera and an SD card (see Materials and Methods for the exact setup), which resulted in a flight time of 5.5 min in total. This extra camera allowed storage of images captured during flight for inspection by a human. Although the proposed navigation solution worked with COTS Crazyflie modules, for the proof-of-concept search-and-rescue mission, we had to make a custom, lighter weight, and lower power laser-ranging module to accommodate the extra camera and SD card. This custom ranging module replaced the Crazyflie's multi-ranger module.
The experiment simulated a search-and-rescue scenario, in which two human-size wooden figures were placed in two different rooms in the hallway. The same starting position was used for the four camera-equipped Crazyflies as in the previous test. To cope with the limited flight time of the prototypes, we chose to perform only the outbound flight. The trajectories in Fig. 5C were inferred from the onboard camera footage in combination with the external cameras.
The results of the experiment in which the Crazyflies carry a camera to detect victims in the environment. (A and B) Screenshots of the external cameras capturing the Crazyflies during their flight. (C) Trajectory of the four Crazyflies (inferred from the onboard and external camera). (D and E) Screenshots of the onboard Hubsan camera, with the two human-shaped silhouettes captured during the exploration flight.
Both victims were found by the drones. In Fig. 5 (A and B), we can see that Crazyflies 1 and 4 were flying in the rooms where the victims were located. Drone 4 was able to capture the victim on its onboard camera (Fig. 5E). However, Crazyflie 1 stopped recording right before it flew into the room with the victim. Luckily, the victim was spotted by drone 3 from another angle (Fig. 5D). This example shows the advantage of using swarming, which can yield redundant observations in an exploration task. A video compilation can be found in movie S2.
As explained, the drones did not make a map of their environment for navigation. After the drones came back, their collected images could be downloaded to the base station. A human end user can then go through the images, and, when finding a victim, look at the video of the robot in fast-forward to find the location. If a map is desired by the end user, the base station computer could also generate a map based on the onboard images with state-of-the-art SLAM methods [e.g., (16)]. This map generation may be challenging due to real-world factors such as quick motions, motion blur from vibrations, lack of texture, etc.
The purpose of this work was to develop an alternative for navigating a group of tiny, resource-limited flying robots in an unknown, GPS-denied environment. We presented the SGBA, which fit onboard the Crazyflie robot, weighing a mere 33 g. Owing to the STM32F4 microprocessor and added sensing capabilities (multi-ranger and flow-deck expansion decks), it navigated through a real office environment. Moreover, while communicating with its peers, it avoided other Crazyflies and increased coverage in the overall exploration task. The algorithm enabled a group of small and limited flying robots to fully autonomously navigate in a real environment by using their onboard sensors and processing capabilities. Still, there are several elements to consider for extending this work to bigger and more complex environments as seen, for instance, in real-world search-and-rescue scenarios.
There are several options that would improve navigation performance. First, we expect that using Ultra Wide Band (UWB) instead of the Crazyradio PA would substantially improve both localization with respect to the beacon and sensing of other drones [see e.g., (47)]. The Crazyradio is heavily influenced by other 2.4-GHz sources (Wi-Fi), and the inter-Crazyflie chatter and the RSSI-based distance measurements are particularly noisy and heading dependent. The good overall results show the robustness of SGBA. Still, communication could be considerably improved by using UWB communication with time-of-flight ranging. For instance, a single DecaWave DWM1000 module can provide ranging to another such module for a distance up to 290 m, through walls and obstacles, with ~10-cm accuracy (48). Because SGBA only needs ranging to a single beacon, it can use the full extent of the UWB range to notably improve the inbound flight.
In addition, the collision avoidance between drones may benefit from using UWB. The experimental results showed an increase in coverage when adding more drones to the swarm. However, the increase follows a law of diminishing returns. We expect this phenomenon to be fundamental because having more robots necessarily means covering more of the same area when they start from the same point, and it also means that robots will spend more time avoiding each other. Still, in our current implementation, using too many drones also led to communication overload during the flight. In our current communication scheme (see Materials and Methods), the more drones there were, the slower they communicated with each other. Increasing the number of drones may cause problems of miscoordination or, even worse, a higher likelihood of interdrone crashes. We saw in the real-world results (Fig. 4E) that the latter did not occur with two and four drones, but it did once with six drones. The optimal number of drones will depend on the size of the environment and, with the current implementation of interswarm avoidance, the communication hardware and protocol. Because of UWBs greater robustness with respect to interference and its higher throughput, we expect it to also improve the scalability of the current proposed scheme for drone collision avoidance.
During the real-world experiments, there was almost always a connection between the home beacon and all drones. In Fig. 4C, a disconnected drone kept executing its tasks autonomously because the FSM runs fully on board. It was therefore able to get out of a communication dead zone eventually. However, the question still arises: How will the robots be able to get home if the beacon is lost completely? Even with the earlier mentioned UWB improvement, there are situations where the environment is larger than the range of the beacon. A useful addition to cope with the home beacon loss problem in bigger environments is to make more use of the swarm. As the Crazyflies are communicating with each other, they can also be used as a beacon themselves. As soon as a drone loses connection with the homing beacon, it could try to find another Crazyflie that is still connected to the beacon and navigate toward that position first, reconnect with the original home beacon, and resume its navigation to the starting position [a strategy reminiscent of chains of robots as used in, e.g., (49), but one that would be more economical in terms of the number of used robots]. Yet, this requires that at least several Crazyflies always stay connected to the home beacon and are therefore limited in their own missions.
Improving the robots' sensing capabilities would also improve the results. Specifically, the multi-ranger deck proved to be sufficient for our test environment, yet there are limitations. For instance, it cannot see very thin objects and relies on the flow deck to work properly. Although the flow deck and the existing sensor fusion provided stable velocity-driven flight, the dark floor in the office environment turned out to be challenging. Therefore, a Crazyflie would occasionally drift and move into a direction where obstacles were present in the blind spots of the multi-ranger. A higher robustness to collisions from a protective cage, as proposed in (50), would help. The wall following and obstacle detection can also be made more robust. A possible solution is to add a light-weight vision system, as in (51, 52). Vision can provide distance estimates in an entire field of view and aid the velocity estimation and odometry by means of frontal optical flow [see (53)]. This would reduce problems with textureless floors. Even so, a fundamental limitation of using the multi-rangers, cameras, or optical flow sensors is that they will be ineffective if the surroundings are filled with smoke. In that case, different sensors such as sonar or radar can be used, whereas the navigation can remain identical.
The high efficiency of SGBA in terms of sensing, computation, and memory comes at the cost of navigation efficiency. Not building a global map and not performing computationally expensive optimal path planning results in suboptimal paths. Drones can revisit rooms multiple times or can visit rooms that were already visited by other drones. This could perhaps be solved in a relatively efficient manner, e.g., involving visual landmark recognition. Still, the experimental results have shown that multiple measurements from the same area can be beneficial. Camera footage can get temporarily occluded or even lost, as happened with drone 1 in Fig. 5C. Moreover, the fact that the drones' views overlap with each other can make a substantial difference in the data collection if not all robots are able to return.
We illustrated the potential of SGBA by implementing it on the smallest possible commercially available quadrotor. However, the discussion above suggests that the method would perhaps be even more successful on a custom-designed drone. One option is to implement SGBA on a smaller platform while keeping a similar performance. Making SGBA work on a smaller drone is possible because a custom design would not have to be as modular and easy to use as the Crazyflie decks. That this is possible is already shown by the lighter and more energy-efficient custom laser ranger deck that was made for the proof-of-concept search-and-rescue mission. Another option is to implement SGBA on a slightly bigger drone for better performance. We expect that using a slightly bigger drone with better sensing, communication devices, and more battery capacity would notably improve the return rate of the drones because it would reduce collisions both with obstacles and with other drones, make the inbound flight more efficient, and extend the flight time available for returning. Even if such a drone could have a bit more processing available, the current proposed navigation solution remains of high interest because it will leave much room for other types of functionalities. This may be used by vision algorithms to enhance the navigation or by other algorithms performing mission-specific tasks.
In the future, more processing power will become available to small robots [see, e.g., (54)]. In comparison with 3D SLAM, SGBA will always be available to smaller robots. For instance, it is not unthinkable that SGBA may be applied to the 80-mg RoboBee (55). Furthermore, small robots will have to use their onboard computing power for all tasks that they need to perform autonomously. It is essential for small robots to have computationally efficient algorithms for all tasks they perform. Using SGBA implies that there is more computational power and memory available for other mission-relevant tasks. Hence, we expect SGBA to remain relevant even with the further progress in the miniaturization of computing devices.
Last, we suggest application scenarios for which the developed swarm exploration seems suitable. We performed a preliminary investigation into SGBA's use for search and rescue. However, the proposed method is also suitable for other tasks because it allows a swarm of small robots to quickly explore a potentially unknown environment. Hence, we believe that it is also suitable for exploring an unknown cave or inspecting the inside of a building that is about to collapse. Apart from unknown environments, SGBA can also be applied to known environments, for instance, in surveillance applications. In the case of surveillance for security, one may typically think of robots performing regular trajectories, e.g., along the property's perimeter, but a less predictable swarm-based surveillance may be better in countering unwanted intruders. In many applications, robots will need to operate for longer times. For example, in an inventory-tracking scenario, a swarm of safe, tiny drones may buzz around the warehouse, continuously flying out to scan products and then returning to base to recharge. Similar setups may also serve blue algae monitoring by little robot boats or floor cleaning by small garbage collection robots. A swarm of tiny robots has benefits because the robots are safe, cheap, navigate in narrow spaces, and, as a group, can quickly cover relatively large areas.
To conclude, we presented a minimal navigation solution, SGBA, which allows tiny flying robots to successfully explore a real-world environment. In our experiments, the Crazyflie robots only used their inertial measurement unit, four tiny 1D laser range finders, an optical flow deck, and a very light 2.4-GHz radio chip. The processing fit easily, alongside all flight control code, in the Crazyflie's single 32-bit, 168-MHz microcontroller with 192 kB of RAM. Instead of building a map of the environment like conventional SLAM techniques, our navigation solution consists of a combination of simple behaviors and behavioral transitions to accomplish the complex tasks of autonomous exploration and homing. The guiding principle here is to trade off properties such as path optimality and accuracy against resource efficiency, allowing for autonomous swarm navigation. We believe that this principle can offer inspiration for solving other complex robotic tasks as well with swarms of cheap and safe tiny robots.
Here, we explain the exploration and homing strategy of SGBA, starting with the navigation of a single robot and then expanding to larger numbers of robots. Afterward, we explain the hardware used for the real-world experiments.
We start our explanation of the FSM with the outbound travel of a single robot. Figure 6 (A and B) illustrates the entire FSM, where the robot starts at Init. For the outbound travel, it is important to realize that the robot just needs to explore the available space and does not need to go to a specific location. Therefore, it will only be assigned a preferred heading. After it encounters, follows, and then leaves an obstacle, it will follow that same heading again (Fig. 6C). Of course, there will be heading drift over time. In the case of the Crazyflie robots used in the real-world experiments, the drift was ~0.1°/s (48° over the 8-min flight time). Still, because the main goal of the heading estimate was to send multiple robots into roughly different directions, the drift did not significantly affect SGBA's performance.
(A) FSM of the SGBA with (B) a legend of symbols. Its individual subsystems are illustrated as follows: (C) outbound navigation, in which the robot attempts to follow its goal heading, following obstacle contours on the way, and (D) local direction preferences based on the angle of attack, i.e., the angle that the robot's trajectory makes with the wall, and the principle of the loop detection. The addition to the state machine of the gradient search toward the beacon at the home location for the inbound travel is given in blue, with (E) the gradient search method during the straight parts of the wall following. Here, the robot tries to estimate the direction toward the beacon by integrating information on the received signal strength, based on its heading, along the way. The swarm coordination addition to the state machine for the outbound flight is given in green, where (F) shows that the robot will change its goal heading if another drone (with higher priority) has its preferred heading in the same direction. In case the drones are even closer, the one with the lower priority will move out of the way completely, for both inbound and outbound travel.
After the robot detects an obstacle with its front laser range sensor, it will start the wall-following behavior. First, it chooses an initial local direction, which determines whether to follow the wall on the right- or left-hand side. We chose a local direction policy based on the strategies of DistBug (56) and FuzzyBug (57), namely, basing the choice on the angle of attack as the robot approaches the wall. With the current hardware, which has four laser range sensors covering the four directions of the horizontal plane, the robot can easily determine the angle of the wall by evaluating whether the side range sensors are triggered in combination with the front one. The main assumption here is that the wall is straight. If this is not the case, the strategy does not necessarily fail: if the chosen local direction ends up being a less optimal one, this will be corrected for at a later time. From here on, the robot starts following the boundary of the obstacle or wall.
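To make the rule concrete, here is a minimal Python sketch of how such an angle-of-attack-based choice of wall-following side could be made from the front and side laser rangers; the function name, threshold, and exact mapping from sensor readings to side are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only: pick the wall-following side from which side
    # ranger is triggered together with the front one. Threshold is assumed.
    def choose_local_direction(front_m, left_m, right_m, detect_m=1.0):
        """Return 'left' or 'right' (the side on which to keep the wall), or None."""
        if front_m > detect_m:
            return None                 # nothing ahead: keep the preferred heading
        if left_m < detect_m and right_m >= detect_m:
            return 'left'               # wall extends to the left: keep it on the left
        if right_m < detect_m and left_m >= detect_m:
            return 'right'              # wall extends to the right: keep it on the right
        return 'right'                  # ambiguous (head-on or corner): default side

    # Example: wall 0.6 m ahead, also seen 0.8 m to the left, clear to the right
    print(choose_local_direction(0.6, 0.8, 2.5))   # -> 'left'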
SGBA uses memory for loop detection. Memoryless bug algorithms are prone to getting stuck in loops: they may encounter an obstacle, perform wall following, and then leave the obstacle in a direction that leads them straight back to the same obstacle, resulting in an endless loop that devastates navigation performance. An example can happen in a room where a robot's preferred direction points away from the room's only door. The robot may enter the room and travel through it until it detects the wall on the opposite side. Subsequently, it will follow the walls of the room until it reaches the wall containing the door. However, because its preferred direction points away from the door, it may leave that wall again before reaching the door, traveling to the opposite wall once more. The occurrence of this type of loop is why, during wall following, SGBA keeps track of its position relative to the location where the robot first detected the obstacle (termed the hit point). If the robot tracks back, due to the environment characteristics, and crosses the area behind the hit point, it will detect that as a loop. This means that once the robot leaves the obstacle and encounters another, which is usually the same one as last time, it will not base its local direction on the current wall angle but on the reverse of the direction chosen at the previously saved hit point. This position tracking is illustrated in Fig. 6D and is done completely with relative position estimates from the onboard odometry. Because this procedure is only used for local decision-making within small rooms, it was a sufficient tactic to handle loops within our experiment environment; it would probably not prevent a potential loop in large areas, where the drift would be too severe. We have studied the effect that SGBA's loop detection has on the return rate (text S6). The results show that for one robot, the return rate dropped substantially without loop detection, but for six robots, the effect was less evident. Upon detailed inspection, we noticed that inter-robot encounters are responsible for getting stuck robots out of a loop, so the robots can cope with the lack of proper loop detection, which is an interesting feature of the swarming element of SGBA.
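The following is a minimal sketch, under assumed geometry and names, of how hit-point-based loop detection could be tracked with relative odometry; the "corridor behind the hit point" test is one plausible reading of the description above, not the exact onboard code.

    # Illustrative sketch only: detect when the robot re-crosses the area
    # behind the point where it first hit the obstacle (the "hit point").
    import math

    class LoopDetector:
        def __init__(self, corridor_width_m=0.5):
            self.hit_point = None        # (x, y) of the first obstacle detection
            self.hit_heading = None      # preferred heading at that moment (rad)
            self.corridor_width_m = corridor_width_m

        def register_hit(self, x, y, preferred_heading):
            self.hit_point = (x, y)
            self.hit_heading = preferred_heading

        def looped(self, x, y):
            """True if the robot has tracked back behind the saved hit point."""
            if self.hit_point is None:
                return False
            dx, dy = x - self.hit_point[0], y - self.hit_point[1]
            # Displacement along and across the preferred heading at the hit point.
            along = dx * math.cos(self.hit_heading) + dy * math.sin(self.hit_heading)
            across = -dx * math.sin(self.hit_heading) + dy * math.cos(self.hit_heading)
            return along < 0 and abs(across) < self.corridor_width_m

    det = LoopDetector()
    det.register_hit(2.0, 1.0, preferred_heading=0.0)
    print(det.looped(1.5, 1.2))   # behind the hit point -> True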
After a few minutes, either when a time threshold has passed or based on the remaining voltage of the battery, the robot needs to return to its base station. This is extremely important for robots that store their measurements onboard and do not stream their results to the operator. To achieve this, SGBA keeps track of the gradient of the filtered RSSI (see text S2 for raw measurements) while it is performing the wall following, as seen in Fig. 6E. During the straight parts of the procedure, it maintains a circular buffer, indexed by the heading of the robot, whose values track the directions in which the RSSI has increased over time. In the direction of an RSSI increase, the buffer value is incremented, whereas the value in the opposite (180°) direction is decremented to give it a lower influence. For an RSSI decrease, the exact opposite procedure is applied, and for no RSSI change, the buffer values stay the same. Both incrementation and decrementation, based on the RSSI's derivative, are done every N meters, where N is a decimal number defined by the user. Every N·k meters, where k is a scalar value, a vanishing function is applied to decrease the influence of older RSSI measurements. This RSSI change as a function of the heading allows the robots to estimate the direction to the home beacon, which they will use for the return travel any time they are not forced to follow an obstacle or wall. Because the RSSI increase is noisy and irregular, this will usually not be an exact angle but a coarse indication of where the beacon is. This proved to be enough for the robot to return to its home base. Any drift in the robot's heading estimate is not problematic for the inbound travel because the direction to the home beacon is determined with respect to this internal heading representation.
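As a concrete illustration, a heading-binned buffer of this kind could look like the sketch below; the number of bins, the unit increments, and the vanishing factor are assumptions made for illustration, since the text only specifies the user-defined distances N and N·k.

    # Illustrative sketch only: accumulate RSSI-gradient evidence per heading bin.
    import math

    N_BINS = 8                    # assumed: eight 45-degree heading bins
    buf = [0.0] * N_BINS

    def bin_of(heading_rad):
        return int(round(heading_rad / (2 * math.pi) * N_BINS)) % N_BINS

    def update(heading_rad, rssi_delta):
        """Call every N meters of travel with the change in filtered RSSI."""
        fwd = bin_of(heading_rad)
        bwd = (fwd + N_BINS // 2) % N_BINS       # opposite (180 deg) direction
        if rssi_delta > 0:                        # signal grew along this heading
            buf[fwd] += 1.0
            buf[bwd] -= 1.0
        elif rssi_delta < 0:                      # signal shrank along this heading
            buf[fwd] -= 1.0
            buf[bwd] += 1.0
        # no change: leave the buffer as it is

    def vanish(factor=0.5):
        """Apply every N*k meters to fade the influence of older measurements."""
        for i in range(N_BINS):
            buf[i] *= factor

    def beacon_heading():
        """Coarse estimate of the heading toward the home beacon."""
        best = max(range(N_BINS), key=lambda i: buf[i])
        return best * 2 * math.pi / N_BINS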
A single robot could use the SGBA-FSM by itself to navigate. However, it only has a limited battery capacity and therefore will not be able to explore the entire environment. For this reason, it is more advantageous to use a swarm of robots. However, using multiple robots poses a new set of problems. First, the robots need to avoid each other, and second, they should coordinate the search with each other to achieve maximum dispersion and avoid conflicts. This was done with their communicated information and range measurements (range or RSSI) and implemented by the move-out-of-way state in Fig. 6A. For collision avoidance, if two robots come very close to each other (Rother < Rth_coll), the robot with the higher priority (in our case, the lower ID, IDlow) has the right of way. The low-priority robot (IDhigh) performs an action enabling IDlow to smoothly move past it. After staying out of IDlow's way until Rother > Rth_coll, IDhigh resumes navigation. For the coordination of the search, the robots dynamically adapt their preferred heading. Initially, each robot is assigned a preferred heading, chosen out of K different directions. If during outbound travel a robot comes near another one (Rother < Rth) and these robots have a similar preferred heading, then the low-priority robot IDhigh changes the sign of its preferred heading and carries on (Fig. 6F). The next time it leaves the obstacle, it will therefore move away from the search area of the robot with the higher priority, IDlow. A change in preferred heading is triggered earlier than a collision avoidance action (Rth_coll < Rth), and it is only performed during outbound travel. During inbound travel, the preferred heading is the direction toward the home beacon; hence, during inbound travel, the range between robots is only used for collision avoidance. The different implementations for simulated and real-world robotic platforms can be found in text S5.
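In pseudocode form, the two inter-robot rules could be sketched as below; the threshold values and the heading-similarity test are placeholders, and the real firmware of course runs this inside the FSM rather than as a standalone function.

    # Illustrative sketch only: priority-based avoidance and heading coordination.
    R_TH_COLL = 1.0    # m, collision-avoidance range (assumed value)
    R_TH = 2.0         # m, coordination range; note R_TH_COLL < R_TH

    def headings_similar(a_rad, b_rad, tol_rad=0.5):
        return abs(a_rad - b_rad) < tol_rad

    def coordinate(my_id, my_heading, outbound, other_id, other_heading, other_range):
        """Return (new_preferred_heading, move_out_of_way) for this robot."""
        higher_priority = my_id < other_id            # lower ID = higher priority
        move_out_of_way = other_range < R_TH_COLL and not higher_priority
        new_heading = my_heading
        if (outbound and other_range < R_TH and not higher_priority
                and headings_similar(my_heading, other_heading)):
            new_heading = -my_heading                 # flip the sign of the heading
        return new_heading, move_out_of_way

    # Example: drone 3 meets drone 1 heading the same way at 1.5 m during outbound travel
    print(coordinate(3, 0.4, True, 1, 0.3, 1.5))      # -> (-0.4, False)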
For the experiments, we used Bitcraze's Crazyflie 2.0 platform (43), augmented with the flow deck v2.0 (45) and the multi-ranger (44) expansion decks, which can be seen in Fig. 7A. An alternative battery with more capacity was used for a longer flight time, namely, the Turnigy nano-tech 300-mAh (1S 45-90C) LiPo battery, providing an average supply voltage of 3.7 V. To make sure the entire path in front of the drone is free of obstacles using the 20° field-of-view laser ranger, the minimum required detection range is 50 cm, for which the VL53L1x (58) laser ranger on the multi-ranger is sufficient. The flow deck contains a PMW3901MB optical flow sensor (59) to detect motion, with an additional VL53L1x for height detection and control. With the existing state estimation (60), the Crazyflie achieved excellent hover and velocity control, and optical flow was detected on most surfaces. Nevertheless, dark colors should be avoided: the dark, low-texture floors in our real-world environment were challenging (Fig. 4A), and the flyable height at which motion detection was still reliable was only 0.5 m.
(A) Crazyflie used for outbound and inbound travel and (B) the assembly used for the video recording of the environment. (C) Components on the Crazyflie, including weight and power consumption. (D) The total of six Crazyflies used, including six Crazyradio PAs, and (E) the communication scheme shown for the six-drone experiment. Here, a counter regulates when each drone will transmit a message (msg) to another drone (for counter 1: drone 1 to 2, drone 2 to 3, etc.). Between the regulated counter increments, each drone transmits its message with a time offset based on its ID. Six Crazyradio PAs were used on the six communication channels to receive logging from the Crazyflies for statistics; however, these can be replaced by one if no telemetry is required.
For the onboard video recording experiments, we designed a custom expansion board, which included a configuration of the lower-power VL53L0x time-of-flight sensors (61) (the predecessor of the VL53L1x on Bitcraze's multi-ranger deck) and a camera module sold as a spare part for the Hubsan X4 H107C RC quadcopter (62). This camera module carries an SD card to record the videos captured during the SGBA navigation of the Crazyflies. This configuration is displayed in Fig. 7B. The weight of the platforms and the average power consumption per expansion board are shown in Fig. 7C; these resulted in approximate flight times of 7.5 min for the left-hand Crazyflie configuration and 5.5 min for the right-hand Crazyflie configuration in Fig. 7.
To fully execute SGBA, a communication protocol (Fig. 7E) was flashed into the nRF51 microcontroller, which handles the Crazyradio communication (a 2.4-GHz radio protocol and Bluetooth) and the power distribution. Each drone has its own unique ID number; we consider numbers one to six in this explanation. This ID also indicates on which channel (ID × 10 + 20) the Crazyflie communicates with the computer for logging the onboard variables, as can be seen in Fig. 7E. This separation of channels was done to reduce interference between the Crazyflies. The variable logging of each Crazyflie to the computer was done at 0.5 Hz and includes the position estimate, the RSSI of the beacon and other robots, the SGBA state machine status, etc. In our experiments, because we needed to receive the onboard data of each Crazyflie separately for the statistics in this paper, each drone had its own individual Crazyradio PA (Fig. 7D). This reduced the possibility of losing telemetry packages; however, technically, only one beacon is necessary. If the Crazyradio PA quickly switches and transmits empty packages on all the available channels, SGBA does not require any additional knowledge except for the RSSI value.
The communication between the Crazyflies was done with a counter to prevent package loss due to message collisions (Fig. 7E). The counter, which is incremented every 0.5 s, regulates when one drone will send a package to another drone. To do so, the drone switches to the primary transmitter (PT) mode, changes its communication channel to the other drone's channel, sends the message within a short time frame, and immediately switches back to its own channel in primary receiver mode to receive messages from the other Crazyflies and an RSSI of the home beacon. Between the regulating counter increments, the moment to switch to PT depends on the drone's ID. This should prevent the Crazyflies from sending messages simultaneously, thereby reducing the possibility of inter-drone package loss. The information that the Crazyflies send to each other is their ID and preferred heading, which is necessary for changing direction on the outbound travel. At the same time, the receiving drone also knows the signal strength and thereby has an indication of the proximity of the other robot. Not all Crazyflies send a similar number of messages: the highest-priority robot (lowest ID = highest priority) transmits to every channel because all others would need to avoid it, and the lowest-priority robot does not send a message at all because it needs to avoid everybody else.
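A toy sketch of such a counter-based schedule is given below; the slot width, the ring-style choice of target, and the exact offset formula are assumptions used only to illustrate how an ID-based time offset keeps the drones from transmitting at the same moment.

    # Illustrative sketch only: who sends to whom, and when, for a given counter value.
    COUNTER_PERIOD_S = 0.5      # the counter is incremented every 0.5 s
    NUM_DRONES = 6

    def send_slot(drone_id, counter, slot_width_s=0.05):
        """Return (target_id, send_time_s) for this drone at this counter value.

        At counter c, drone i sends to drone i+c (wrapping around), and its
        transmit moment is offset within the period by its own ID so that no
        two drones switch to transmit mode at the same time.
        """
        target_id = (drone_id - 1 + counter) % NUM_DRONES + 1
        send_time_s = counter * COUNTER_PERIOD_S + (drone_id - 1) * slot_width_s
        return target_id, send_time_s

    for counter in (1, 2):
        for drone in range(1, NUM_DRONES + 1):
            print(counter, drone, send_slot(drone, counter))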
In the experiments, the earlier-mentioned counter was regulated by the computer; however, each Crazyflie would be able to do this by itself after clock synchronization of the autopilots. The separation of channels on the Crazyradio modules was necessary to enable stable communication between Crazyflies. It should be possible to put multiple Crazyflies on one channel; however, the number of usable channels and the number of robots per channel are limited. This restricts the total number of robots for which the 2.4-GHz Crazyradio protocol is useful; however, the approach could be scaled further by using ultra-wideband (UWB) radios instead and a more sophisticated scheduling protocol, such as self-organized time-division multiple access (STDMA) as in (63).
robotics.sciencemag.org/cgi/content/full/4/35/eaaw9710/DC1
Text S1. Real-world test environment
Text S2. RSSI measurements
Text S3. From odometry to trajectory
Text S4. Analysis and statistics
Text S5. SGBA implementation details simulation versus real world
Text S6. SGBA submodule analysis
Fig. S1. Overview of the real-world environment.
Fig. S2. RSSI measurements.
Fig. S3. Odometry versus trajectory.
Fig. S4. Transcripts hallway video.
Fig. S5. Odometry correction.
Fig. S6. Coverage calculation simulation.
Fig. S7. SGBA simulation versus real world.
Fig. S8. Simulated collisions SGBA.
Fig. S9. Loop detection check.
Table S1. Statistics of the simulation tests.
Table S2. End status of the real-world tests with two drones.
Table S3. End status of the real-world tests with four drones.
Table S4. End status of the real-world tests with six drones.
Table S5. Coverage of the real-world tests with two drones.
Table S6. Coverage of the real-world tests with four drones.
Table S7. Coverage of the real-world tests with six drones.
Table S8. Statistics of the real-world tests.
Table S9. Real-world collisions.
Movie S1. Video six-drone test configuration.
Movie S2. Video four-drone victim search.
R. Menzel, J. Fuchs, A. Kirbach, K. Lehmann, U. Greggers, Navigation and communication in honey bees, in Honeybee Neurobiology and Behavior (Springer, 2012), pp. 103116.
K. H. Petersen, R. Nagpal, J. K. Werfel, Termes: An autonomous robotic system for three-dimensional collective construction (Robotics: Science and Systems VII, 2011).
G. Vásárhelyi, C. Virágh, G. Somorjai, N. Tarcai, T. Szörényi, T. Nepusz, T. Vicsek, Outdoor flocking and formation flight with autonomous aerial robots, paper presented at the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, 14 to 18 September 2014.
S. Hauert, S. Leven, M. Varga, F. Ruini, A. Cangelosi, J.-C. Zufferey, D. Floreano, Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate, paper presented at the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, 25 to 30 September 2011.
M. T. Lazaro, L. M. Paz, P. Pinies, J. A. Castellanos, G. Grisetti, Multi-robot SLAM using condensed measurements, paper presented at the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3 to 7 November 2013.
J. Engel, T. Schöps, D. Cremers, LSD-SLAM: Large-scale direct monocular SLAM, paper presented at the 2014 European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6 to 12 September 2014.
C. Forster, S. Lynen, L. Kneip, D. Scaramuzza, Collaborative monocular SLAM with multiple micro aerial vehicles, paper presented at the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3 to 7 November 2013.
A. Denuelle, M. V. Srinivasan, A sparse snapshot-based navigation strategy for UAS guidance in natural environments, paper presented at the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16 to 21 May 2016.
A. Sankaranarayanan, M. Vidyasagar, A new path planning algorithm for moving a point object amidst unknown obstacles in a plane, paper presented at the IEEE International Conference on Robotics and Automation (ICRA), Cincinnati, OH, 13 to 18 May 1990.
Q. M. Nguyen, L. N. M. Tran, T. C. Phung, A study on building optimal path planning algorithms for mobile robot, paper presented at the 2018 4th International Conference on Green Technology and Sustainable Development (GTSD), Ho Chi Minh City, Vietnam, 23 to 24 November 2018.
K. Taylor, S. M. LaValle, I-Bug: An intensity-based bug algorithm, paper presented at the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12 to 17 May 2009.
J. N. Twigg, J. R. Fink, L. Y. Paul, B. M. Sadler, RSS gradient-assisted frontier exploration and radio source localization, paper presented at the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, 14 to 18 May 2012.
M. Chinnaaiah, K. Anusha, B. Bharat, M. Divya, P. S. Raju, S. Dubey, Deliberation of curvature type Obstacles: A new approach using FPGA based robot, paper presented at the 2018 International Conference on Control, Power, Communication and Computing Technologies (ICCPCCT), Kannur, India, 23 to 24 March 2018.
M. Bonani, V. Longchamp, S. Magnenat, P. Rétornaz, D. Burnier, G. Roulet, F. Vaussard, H. Bleuler, F. Mondada, The marXbot, a miniature mobile robot opening new perspectives for the collective-robotic research, paper presented at the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18 to 22 October 2010.
C. De Wagter, S. Tijmons, B. D. Remes, G. C. de Croon, Autonomous flight of a 20-gram flapping wing MAV with a 4-gram onboard stereo vision system, paper presented at the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May to 7 June 2014.
N. T. Jafferis, E. F. Helbling, M. Karpelson, R. J. Wood, Untethered flight of an insect-sized flapping-wing microscale aerial vehicle. Nature 570, 491–495 (2019).
I. Kamon, E. Rimon, E. Rivlin, Range-sensor based navigation in three dimensions, paper presented at the Proceedings 1999 IEEE International Conference on Robotics and Automation (ICRA), Detroit, MI, 10 to 15 May 1999.
Acknowledgments: We thank V. Trianni for helpful comments on the early manuscript. We also thank L. Schrover and J. Siebring for allowing us to use a portion of the building of the TU Delft faculty of Aerospace Engineering for our experiments. We also acknowledge the great help of E. van der Horst for his advice on setting up the communication protocol. Moreover, we are grateful to Bitcraze AB for its help with multiple hardware-related issues with the Crazyflie. Funding: This work was financed by the Dutch Research Council (NWO) under grant no. 656.000.006 within the Natural Artificial Intelligence Programme. Author contributions: All authors contributed to the conception of the project and were involved in the analysis of the results and in revising and editing the manuscript. G.C.H.E.d.C. and K.N.M. put forward the idea of a swarm bug algorithm with a beacon for homing. K.N.M. designed and implemented the SGBA specifics for the experiments with support of C.D.W. and G.C.H.E.d.C. The manuscript was primarily written by K.N.M., C.D.W., G.C.H.E.d.C., and K.T., and H.J.K. revised the intermediate versions. K.N.M. contributed to this work while at Delft University of Technology and has since moved to Bitcraze AB. K.T. contributed to this work while at the University of Liverpool and has since moved to DeepMind. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the article are present in the paper or the Supplementary Materials.
Robotics Industry Insights – Robots Embrace the Daily … – Robotics Online
Posted: at 3:11 pm
by Winn Hardin, Contributing Editor, Robotic Industries Association | Posted 10/25/2019
Deburring and Deflashing Versus Sanding and Polishing: Understanding Material Terminology
Material removal applications vary significantly, and so do their automated solutions. Knowing the terms can help guide potential customers to the best sources for information, technology, and service.
__________________________________
Deburring is about removing extraneous material from parts manufactured by subtractive processes, such as cutting, grinding, or milling, according to 3M's Scott Barnett. Removing large burrs is often a two-step process. For some deburring applications, Barnett suggests a chamfer, or angled cut, can be used. For deburring or deflashing applications where material stresses should be avoided, Barnett recommends a radius, where the force and position of the rotating surface of a compliant abrasive are actively controlled through electrical or pneumatic means.
__________________________________
Deflashing is used to remove material from parts made by additive processes, such as injection molding of plastic. Depending on the material, size, and other application requirements, flash material can be removed by any number of rotational media, from orbital sanders to grinders and other hard tools.
__________________________________
Sanding is a well-known surface finishing technique used in a multitude of ways, such as refining the finish of manufactured stone for sinks and tubs. However, understanding how media interact with different materials, including the details of wear, heat, and finish, can require an expert, says Britta Iwen, global marketing manager at 3M.
__________________________________
Polishing applications appear to be more cosmetically driven than grinding and dimensioning applications. But 3M's Carl Doeksen says to beware of that common wisdom. Polishing might appear to always be a cosmetic thing, but that's not always the case. A 6 Ra finish on a tungsten carbide-coated actuator is critically important from a functional/structural integrity standpoint. The surface roughness properties, achieved through precise polishing, allow the hydraulics to operate under extreme conditions and interact properly with the gaskets. Polishing might also require a compound, coolant, or other dispensed material to ensure no thermal damage during the production process.
__________________________________
In truth, a single work cell can require one, some, or all of these automated processes. "Both sanding and grinding are multistep processes," explains PushCorp's Max Falcone. "For sanding, you start by knocking down high spots and then go to a finer grit. Polishing is the last step. With grinding or deflashing, you might use a mill or a diamond abrasive, cut or snag the flashing out, and then finish with a grinding wheel."
There should be a lot of praise for today's robots, not the least of which is that robots never get rubbed the wrong way or complain about tough working conditions, grinding work environments, or abrasive bosses. This is true particularly in material removal and finishing applications, such as grinding and sanding, where their human counterparts face safety risks on multiple fronts.
"When it comes to grinding and sanding applications, a major driver to automate the process is worker safety," explains Max Falcone, business development manager at PushCorp, Inc. "Especially when you look at an application like random orbital sanding, which is a high-vibration application. Many customers have stated to us during visits that applications such as random orbital sanding and grinding have contributed to their work loss days. Nerve damage and carpal tunnel syndrome have been mentioned numerous times as a driver to automate these processes."
Sanding and grinding applications can also adversely affect air quality, even when used in conjunction with industrial vacuums and air filtration systems. "Sanding applications, in particular, can create very dusty environments," adds Louis Bergeron, integration coach at Robotiq. "As a result, it can be very hard to find good workers and to keep them. And until recently, there were automation solutions for only a few areas of the finishing operation."
Workforce challenges are accelerating this trend. "We've been helping customers use abrasives in robotics for 30 years," says Scott Barnett, abrasives application engineering manager at 3M, "but it's really taken off in the past 3 years: our customers just can't find workers."
Risk, Reward
While robot sanding, grinding, deflashing, and deburring have been around for 30 years or more, market adoption has been slow, mainly due to the complexity of automating the task, which leads to high costs. According to Robotiq's Bergeron, most metalworking shops aren't laid out with robotics in mind. "So when you want to automate the process, you have to start from scratch," he says. "Many times, everything has to be changed. The way you handle the parts. The floor plan has to change if you're adding industrial robots. It's a big step to go from manual to an automated solution. . . . It takes a lot of skill to [automate surface finishing] well. The process has a lot of variables that affect the end result. You need to handle the robot speed, the force applied, the media, the kind of media you have, media wearing and changeout, the number of passes to achieve a specified result, the angle of the tooling, part position, and more."
So why even try to automate material removal and finishing operations? Ask Ron Potter, director of robotics technology for Factory Automation Systems (FAS), and his client MTI Baths, maker of solid stone tubs and sinks. "A tub that used to take them four to five hours now takes less than two hours for a more than 150% increase in productivity," Potter says.
Part of that productivity increase comes from the robot's greater strength compared to human sanders, grinders, and metalworkers. "One of the things you can do with automation is you can run larger, heavier, stronger types of media than, say, a manual grinding wheel," adds PushCorp's Falcone. "I recently visited a customer who was running an 8-inch fiber disk for grinding. By moving to a robot, they moved to a 24-inch diamond saw, which reduces heat put into the part and eliminated warpage. So when you run a robot with larger media, you're getting productivity improvement along with safer operations while enabling applications that weren't possible before. There's no way a manual operator could safely operate a 24-inch disk in all of the different orientations needed, for example. Overall the whole process is now better."
Finding a Good Application
"Everybody knows you can teach a robot to weld or do pick-and-place, but very few people broadly feel that you can teach a robot to do grinding or polishing operations," notes Carl Doeksen, global robotics and automation leader at 3M. "There's a perception in the industry that grinding and sanding, especially when you're doing high-end finishes, is a black art, that [the business] only has two or three people in the plant that really know how to finish these parts. [These workers] are in charge of the quality. But that's changing. Today, any polishing, grinding, or finishing application can be automated."
Doeksen continues, "However, the question remains: Can an integrator make money on the job? That's a separate question. It may not be economically practical to do it. And that's certainly where we help integrators and industry to make those tough calls."
"Sometimes a client decides on a robot, and they want to use it on 100% of their parts," says Robotiq's Bergeron. "We have to explain that you spend 20% of the money to handle 80% of the parts. Going to 100% of the parts can ruin your margin."
Robot Programming Advances
Programming a robot's movements, regardless of whether the application requires deflashing, deburring, sanding, or polishing, can account for a significant portion of the work cell's cost, especially if the part is contoured or complex.
Material removal robots are typically programmed in one of two ways: off-line programming, possibly automated based on a CAD model of the part; and teach pendant or teach point programming. "We see off-line programming improving and getting better and better, but in many cases, it requires extensive programming skills and cannot suitably account for part variation," says 3M's Doeksen. "As off-line programming [leveraging CAD models] improves, we see the masses being able to do more of this type of application. If your application requires you to teach an extensive number of points on a grinding job, that can be prohibitive."
While off-line programming based on CAD models has advanced to the point that robots can use a milling tool to create celebrity statues and Stanley Cup reproductions, Robotiq is among the companies working to simplify the teach pendant programming process.
Robotiq's new Universal Robots-based sanding kit reduces the time required to program an application for the robotic sanding of contoured surfaces from hours to minutes. "All of these applications need to be tuned because there are a lot of factors to program the robot path," says Bergeron. "By moving the robot to only six points that define the contoured surface and telling the system the number of passes and interval between passes, we can generate a sanding robot program in minutes. If you're not satisfied with the results, you can increase the number of passes, for example, and perform another trial. To do that with teach pendant programming, you'd have to use a ruler, intervals, find points, and ID each angle from that point. If you want to make a change, start all over again."
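To give a feel for what such path generation involves, here is a toy Python sketch that builds a raster of back-and-forth passes from a few taught corner points and a pass interval; it is only an illustration of the concept on a flat patch and is in no way Robotiq's actual software, which also handles curvature, force, and tool orientation.

    # Toy sketch only: raster passes from taught corner points and a pass interval.
    import numpy as np

    def raster_passes(corner_a, corner_b, corner_c, interval_m, pts_per_pass=20):
        """corner_a->corner_b sets the pass direction, corner_a->corner_c the
        stepping direction; returns a list of passes, each a list of XYZ points."""
        a, b, c = (np.asarray(p, float) for p in (corner_a, corner_b, corner_c))
        step_vec = c - a
        n_passes = int(np.linalg.norm(step_vec) // interval_m) + 1
        passes = []
        for i in range(n_passes):
            offset = step_vec * (i / max(n_passes - 1, 1))
            start, end = a + offset, b + offset
            pts = [start + (end - start) * t for t in np.linspace(0.0, 1.0, pts_per_pass)]
            passes.append(pts if i % 2 == 0 else pts[::-1])   # sweep back and forth
        return passes

    path = raster_passes((0, 0, 0), (1.0, 0, 0), (0, 0.5, 0), interval_m=0.05)
    print(len(path), "passes of", len(path[0]), "points each")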
Force Compliance for Precision Work
In addition to programming, determining the correct amount of force to apply to the material removal tool is an important consideration for any robotic work cell.
For example, when MTI Baths asked Factory Automation Systems to automate its stone tub and sink sanding operations, Potter opted to use the low-profile FANUC force sensor on the end of FANUC's M-710iC/45M robot. "When we analyzed the application, speed wasn't critical, but reach was," says Potter. "The robot we chose had 2,600 millimeters of reach, while the low-profile force sensor made it easier to provide active force compliance control in sinks and other compact internal surface areas. Using a larger active compliance tool on the end of the robot would have been too tight for sanding sinks and smaller products."
To program the application, FAS scanned the tub and imported the 3D model into FANUC's RoboGuide robot programming software. They then broke the tub down into nine separate surfaces, both inside and outside the tub, each with its own programming segment. It was determined that two passes with 80-grit paper, followed by another pass with 180 grit, resulted in a more uniform finish than manual sanders could achieve because they have to constantly move around the tub. "If you're moving around the object, you inevitably miss areas and the consistency suffers," said Potter. To accommodate changes in part positioning and part variations, the robot starts the process by touching the tub at eight programmed points to determine its position in the workspace.
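That touch-off step amounts to estimating the part's pose from a handful of probed points. One standard way to do this, shown below purely as an illustration and not as FANUC's actual method, is the Kabsch/Procrustes best-fit rigid transform between the nominal model points and the probed points.

    # Hedged sketch: best-fit rigid transform (Kabsch) from probed points.
    import numpy as np

    def best_fit_transform(nominal_pts, probed_pts):
        """Return rotation R and translation t that map nominal -> probed points."""
        P = np.asarray(nominal_pts, float)
        Q = np.asarray(probed_pts, float)
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)                  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q_mean - R @ p_mean
        return R, t

    # Example: the probed part is the nominal model shifted by 3 mm in x.
    nominal = np.random.rand(8, 3)
    probed = nominal + np.array([0.003, 0.0, 0.0])
    R, t = best_fit_transform(nominal, probed)
    print(np.round(t, 4))   # ~[0.003, 0, 0]; R is ~identity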
According to 3M's Barnett, flat parts make for much simpler automated solutions. "Often you can use position-based programming and not worry about force control beyond passive compliance, such as a foam pad [to accommodate part variation]. . . . Variability in parts is one of the biggest challenges for robotic applications." Today, in-line scanning systems, used to measure actual parts in 3D to accommodate part variation, are finding their way into commercial practice when applications can support the additional cost.
"Passive [force] compliance is great," adds PushCorp's Falcone. "It's a significant cost saving where a passive unit makes sense. They're good for long planar moves, like the top of a table and the edge. But if you're sanding a gas tank on a motorcycle, you need active compliance because the robot needs to move unhindered and control the force at all times. Active force compliance is like cruise control on a car. You set it for 50 mph, and cruise control tries to keep it there regardless of whether you're going uphill or down. Active force compliance allows you to set it for 5 pounds of force, for example, regardless of whether you're sanding the top of the table or the underside; the force will stay consistent."
"When you look at other approaches, such as through-arm torque sensors and other collaborative solutions, they're trying to control hundreds of pounds through six axes of motion while maintaining only 5 pounds of force," continues Falcone. "Having the force compliance unit on the end of the robot makes for much faster response times, much more nimble solutions."
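The cruise-control analogy maps naturally onto a simple feedback loop that regulates contact force by nudging the compliance actuator; the sketch below is a generic PI loop with made-up gains, offered only to illustrate the idea, not PushCorp's controller.

    # Toy sketch only: PI regulation of contact force for an active-compliance tool.
    def make_force_controller(setpoint_n, dt=0.01, kp=0.0002, ki=0.002):
        """Return a function mapping measured force (N) to an actuator adjustment (m)."""
        integral = 0.0
        def step(measured_n):
            nonlocal integral
            error = setpoint_n - measured_n       # positive: pressing too lightly
            integral += error * dt
            # Positive output extends the tool toward the part; negative retracts it.
            return kp * error + ki * integral
        return step

    ctrl = make_force_controller(setpoint_n=22.0)   # ~5 pounds of force
    print(ctrl(18.0))    # too little force -> small positive (extend) adjustment
    print(ctrl(26.0))    # too much force   -> adjustment swings negative (retract)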
What's Next: CAD to Completion
Despite the challenges of robotic material removal applications, there are good reasons to keep an eye on this market. According to 3M's Doeksen, the big robot applications, such as welding, assembly, handling, and painting, are saturated markets, but there are huge opportunities for customers and integrators in the white space of material removal. "In some areas, it comes down to: Do you want to compete against 10 companies for a welding application or be the only show in town for material removal?"
And the application base may get a huge shot in the arm in the near future, continues Doeksen. "The most exciting area for these applications to me is 3D-printed parts and additive manufacturing, which is another industry that's accelerating like crazy. When they get into metal and mission-critical parts, the post-processing of those parts is the process constraint. Typically, if you're going to print a metal part that's going into a jet engine, there's a ton of post-processing that needs to be done. In the future, you can visualize an additive manufacturing cell, where the robot takes the output and post-processes it in place. That's what's coming. And it's going to create some new challenges and opportunities for all of us."
Originally published by RIA via http://www.robotics.org on 10/25/2019
WPI gets $3m grant to explore concept of robots in the workplace – Worcester Telegram
Posted: at 3:11 pm
Telegram & Gazette Staff
Sunday, Oct 27, 2019 at 9:48 AM
WORCESTER Worcester Polytechnic Institute has received a five-year, $3 million National Science Foundation grant to conduct research and training in the area of robotic workplace assistants.
Cagdas Onal, a mechanical engineering professor at WPI and principal investigator on the grant, said the funding specifically will support an interdisciplinary research program called "Future of Robots in the Workplace - Research & Development (FORW-RD)," which he said is intended to help students "attain diverse skills needed to navigate opportunities and challenges to shape, guide, and lead the transition to a robot-assisted workplace."
The project is expected to train 120 master's and PhD students from the mechanical engineering, robotics, computer science, materials science, and user experience design fields at WPI.
Onal said the idea for the project came from talks he had with WPI humanities and arts professor Yunus Telliel, who is also a co-principal investigator on the grant.
"In our discussions, we talked about the impact and what this means for the future of how we work," Onal said. "For example, if the worker isnt there physically, are they actually responsible for the actions of this robot? Could they still find meaning in their job? There are so many different aspects to consider."
In addition to delving into interdisciplinary technical and professional skills, the program will also explore the ethical, social, economic, and legal considerations of using robots in the workplace.
"We need to continually ask questions so that advisors and students can think about larger issues," Onal said. "This program is not just about doing technology development and figuring out new algorithms. Its about making sure the programs are perceived correctly and done right for everyones benefit."
Sphero's STEM robotics kit goes on sale – TechCrunch
Posted: at 3:11 pm
Following a crowdfunding campaign that raised an impressive $1 million earlier this year, Sphero's STEM/STEAM kit RVR is now on sale. Announced back in February as part of the Colorado company's first-ever Kickstarter campaign, RVR presents a bit of a change for Sphero in a number of key ways.
For starters, it's a move away from the remote control ball design that has defined the majority of its projects since the Orbotix days. RVR is a four-wheel system, and more than that, in keeping with Sphero's relatively recent focus on education, it is aimed at helping kids learn languages like Python and JavaScript.
It's also designed to help teach some of the fundamentals of robotics. Though, as Sphero notes, it's still drivable right out of the box, with minimal assembly required. Beyond that, users can hook in third-party boards like Raspberry Pi and Arduino through the USB port, along with products from the recently acquired littleBits.
"When we launched RVR on Kickstarter earlier this year, we were blown away by the response," co-founder and Chief Creative Officer Adam Wilson said in a release. "Our community of makers, developers, and teachers all rallied around RVR to make it a huge success even before they could get their hands on one. RVR has significantly extended our reach to makers of all ages, and of all coding abilities. We can't wait to see what everyone creates with RVR."
The system starts at $250 through Sphero's online shop and select retailers.
Meet Mammoth, the massive Battlebot that’s stomping its way to combat robotics fame – Technical.ly Brooklyn
Posted: at 3:11 pm
Ricky Willems had been building robots since he was in kindergarten, mainly for pure enjoyment. But once he stumbled upon combat robotics several years ago, Willems saw an opportunity to bring his talents to a bigger stage.
"As soon as I realized it wasn't just things I was going to build out of spare parts in my grandparents' shed, I thought, 'Something's going to happen one day,'" Willems said.
Willems came up with the idea to form a group dedicated to building a combat robot two years ago while working at Stanley Black & Decker in Towson.
He built a few small robots himself before teaming up with coworker Brice Farrell, who previously made an appearance on robot-fighting TV series BattleBots.
Farrell also fell in love with combat robotics at an early age. His first experience building a robot sparked a motivation to learn more, despite a first attempt that was less than calculated.
"I put a chainsaw on top of a Roomba, which was a terrible robot, but so much fun, so it grew from there," Farrell said. "I'd always loved watching robots fight each other, just the weird what-ifs of what happens if a giant flipper smacks into a spinning blade of death."
Anna Goodridge worked in The Makerspace by Stanley Black & Decker at the time and had experience building race cars. She brought that experience to the team early on, as well.
"Our team motto is: We just show up and build things. So people who are good at that are good team members," Goodridge said.
The first Mammoth was built last year. The group built a smaller version of the bot for a small, untelevised showcase in Orlando as a proof of concept. They received widespread support from peers in the robotics community and quickly got the attention of BattleBots producers.
The group took two months to beef up Mammoth, transforming it into the largest Battlebot ever. Mammoth stands at a staggering 6 feet 2 inches tall, spans 6 feet 4 inches wide, and is 8 feet 6 inches long.
It's equipped with a spinner that allows it to flip opponents or launch other bots out of the arena during battle, and its wide base gives it a massive range of attack.
Building this behemoth didn't come without challenges, though. The group had to balance work life and bot life, prioritize which parts of the robot to build, and nail down sponsorships in order to boost marketing.
And they navigated each of those areas without assurance from BattleBots that they would actually appear on the show.
"We didn't know if this was actually going to happen or not," Willems said. "We knew the general design we wanted, but in terms of the components and design of the subsystems, that was almost entirely motivated by, 'What can we do very inexpensively, very quickly and not hyper-experimental?'"
The show, filmed in Long Beach, California, was shot over a two-week period in April before airing over the summer. The team fought every other day, but their 2-2 record was not good enough to advance to the finals.
Despite the exit, the team continues to build, and bring Mammoth to the world. Two weeks ago, they made an appearance at Dev Day, the Baltimore Innovation Week event produced by Harbor Designs and Manufacturing and 1100 Wicomico.
They showed off Mammoth's impressive stature and let attendees operate their miniature version of the bot.
"We really opened a lot of people's eyes to BattleBots and what combat robotics can be," Goodridge said.
Now, the team is preparing Mammoth to face off in the same Orlando competition where they debuted the first prototype. The event, titled Robot Ruckus this year, takes place on Nov. 9.
"We're hoping to show up with a Mammoth that's twice as powerful as what we brought on television and really make a splash down there," Willems said.
Farrell said the team is looking to get back on BattleBots next season with a revamped Mammoth.
"We have general plans to make it twice as fast, three times as powerful and 10 times more fire," Farrell said.
The team is also looking to expand its outreach through networking with local sponsors and organizations.
"There isn't much representation in combat robotics in the Maryland area, so we'd like to grow that and reach out more," Willems said.
New Robots and Recurring Sales Push Intuitive Surgical Even Higher – Motley Fool
Posted: at 3:11 pm
Robotics, artificial intelligence, and augmented reality get a lot of press these days, some of it espousing the good while some of it decrying its inevitable toll on human jobs. Technology at its best doesn't replace people, though; it helps them perform better. That's on display every quarter when Intuitive Surgical (NASDAQ:ISRG) and its da Vinci robots report earnings.
The company shipped 275 new da Vinci systems during the third quarter of 2019, raising the total number in operation worldwide by 12% to 5,406. Robotic-assisted surgeries are still a small fraction of the total number of procedures, and as adoption continues to steadily rise, this robotics company's recurring-sales engine will continue to get even stronger.
Intuitive Surgical is soon to get some competition via the world's largest medical device maker, Medtronic (NYSE:MDT). Intuitive is also early on in promoting two new da Vinci systems: the da Vinci SP (single-port) designed for single incision deep-tissue access, and the da Vinci Ion for lung biopsies. Only four SP systems were placed in operation for a total installed base of 38, and the first three Ion systems were sold.
Competition and the slow rollout of new robots didn't faze results, though, with revenue growth accelerating to 23% year over year in the quarter, bringing year-to-date growth up to 19%.
Metric | Nine Months Ended Sept. 30, 2019 | Nine Months Ended Sept. 30, 2018 | Change
Revenue | $3.20 billion | $2.68 billion | 19%
Adjusted operating income | $1.28 billion | $1.13 billion | 13%
Adjusted earnings per share | $9.28 | $8.03 | 16%
Data source: Intuitive Surgical.
The bulk of the 275 new system placements were still made up of the flagship X and Xi models for more-general procedures, and innovating new uses for those machines will continue to be the driving force for the company. Case in point: Total procedures performed in Q3 were up 20%, even though the total installed base grew by just 12%. Surgeons using da Vinci are finding new ways to put the system to work, and all of that is fueling the steady and predictable sales model at Intuitive.
Image source: Intuitive Surgical.
Purchasing a da Vinci robot can be a big commitment for hospitals. A new system runs as much as a couple of million dollars. To help encourage adoption, Intuitive has been offering lease options, with 33% of new system placements being made under such terms in Q3.
Getting as many da Vinci robots in operation as possible is a good strategy, because system sales don't make up the bulk of revenue. Most of that comes from instruments and accessories, as well as services.
Sales Segment | Revenue, Nine Months Ended Sept. 30, 2019 | YOY Change
System sales | $930 million | 18%
Instruments and accessories | $1.74 billion | 22%
Services | $534 million | 14%
YOY = year over year. Data source: Intuitive Surgical.
As for services, CEO Gary Guthart had this to say on the Q3 earnings call:
We're working on computing and real-time cloud technologies to allow for tests from telementoring to augmented reality. We now have over 20 active telementoring sites that together have supported hundreds of cloud-enabled, real-time surgery sessions as we progress in building real-time cloud capabilities. Feedback on the utility of these sites for case observations and mentoring has been supportive. In augmented reality, we're working through logistics and installation of our first IRIS accounts [augmented reality that gives surgeons a 3D view of a patient's anatomy] to gather customer and clinical feedback. We expect first clinical cases on the IRIS system in the next few months. Lastly, our surgical simulation products have become widely adopted in the installed base with more than 3,200 da Vinci simulators in the field.
And on the instruments and accessories side, Intuitive also got U.S. Food and Drug Administration approval to start selling a couple of new single-use staplers in Q3. The acquisition of endoscope and camera maker Schölly Fiberoptic was also completed in the period for an undisclosed (and likely small) sum of money, further strengthening the company's portfolio of attachments available for use with its machines.
In short, the last quarter was a strong one, and the new systems rollout and a steadily climbing installed base mean Intuitive's real moneymaker, recurring instruments and accessory sales, will keep getting better. For investors looking for a consistent growth play, this medical robotics company is a must-have.