The Prometheus League
Breaking News and Updates
Category Archives: Robotics
How the U.S. Army Is Turning Robots Into Team Players – IEEE Spectrum
Posted: September 24, 2021 at 10:57 am
This article is part of our special report on AI, The Great AI Reckoning.
"I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous; it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.
The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules: if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.
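The "if you sense this, then do that" style can be sketched in a few lines. The rule and action names below are invented for illustration and are not from any real robot; the point is only that a rule table covers exactly the inputs its authors anticipated, and nothing else:

```python
# Illustrative rule table for a structured environment such as a
# factory cell; every expected sensed condition maps to a fixed action.
RULES = {
    "part_on_conveyor": "pick_and_place",
    "bin_full": "signal_operator",
    "path_clear": "advance",
}

def decide(sensed: str) -> str:
    # Anticipated inputs hit a rule; anything the designers never
    # planned for falls through to a do-nothing default.
    return RULES.get(sensed, "halt_no_rule")

print(decide("part_on_conveyor"))    # pick_and_place
print(decide("fallen_tree_branch"))  # halt_no_rule: never planned for
```

An unanticipated input like a fallen tree branch simply has no entry, which is the brittleness the article describes.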
RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks, a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
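The "trained by example" idea can be illustrated with something far simpler than a deep network. The nearest-centroid sketch below (entirely illustrative, with made-up feature vectors) still shows the essential behavior: the model ingests annotated examples, then labels novel inputs that are similar, but not identical, to what it has seen:

```python
# Toy stand-in for learning from annotated examples: a nearest-centroid
# classifier. Far simpler than a neural network, but it shares the key
# property of generalizing to similar-but-not-identical inputs.

def train(examples):
    # examples: list of (feature_vector, label) pairs
    sums, counts = {}, {}
    for vec, label in examples:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    # average the examples of each label into one centroid
    return {lab: [x / counts[lab] for x in s] for lab, s in sums.items()}

def classify(centroids, vec):
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: d2(centroids[lab], vec))

model = train([([1.0, 1.0], "branch"), ([0.9, 1.1], "branch"),
               ([0.0, 0.1], "rock"), ([0.1, 0.0], "rock")])
print(classify(model, [0.95, 0.9]))  # branch — novel but similar input
```

A real deep network replaces the hand-chosen features and averaging with many learned layers, but the train-on-annotated-data, generalize-to-novel-data loop is the same.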
Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference, the "black box" opacity of deep learning, poses a potential problem for robots like RoMan and for the Army Research Lab.
This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. "When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well," says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. "The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?" Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.
After a couple of minutes, RoMan hasn't moved; it's still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.
The "go clear a path" task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That's a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.
This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. "The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard," he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like "every drivable road in San Francisco," the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.
ARL's robots also need to have a broad awareness of what they're doing. "In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent (basically a narrative of the purpose of the mission) which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise," Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. "I can't think of a deep-learning approach that can deal with this kind of information," Stump says.
While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult, if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
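As a rough illustration of the trade-off, here is a minimal sketch of the perception-through-search idea; the model database, feature vectors, and threshold are all invented for illustration, not taken from the CMU system. The system can only ever recognize objects whose models are already in its database:

```python
import math

# Hypothetical sketch: compare a sensed feature vector against a small
# database of known object models and return the best match. One entry
# per object is enough ("training is fast"), but an object absent from
# the database can never be recognized.

MODEL_DB = {
    "branch": [0.9, 0.1, 0.4],
    "rock":   [0.2, 0.8, 0.5],
    "barrel": [0.5, 0.5, 0.9],
}

def match(sensed, threshold=0.3):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(sensed, v)) for n, v in MODEL_DB.items()),
                  key=lambda t: t[1])
    # below the threshold we trust the match; otherwise report nothing
    return name if d <= threshold else None

print(match([0.88, 0.12, 0.42]))  # branch
print(match([0.0, 0.0, 0.0]))     # None — not in the model database
```

A learned classifier inverts this trade-off: it needs far more training data per category, but it can generalize to appearances it was never explicitly given a model for.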
Perception is one of the things that deep learning tends to excel at. "The computer vision community has made crazy progress using deep learning for this stuff," says Maggie Wigness, a computer scientist at ARL. "We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art."
ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. "When we deploy these robots, things can change very quickly," Wigness says. "So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior." A deep-learning technique would require "a lot more data and time," she says.
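The spirit of updating a behavior from just a few soldier corrections can be sketched as follows. This toy update rule, with invented terrain features and learning rate, is only loosely inspired by inverse reinforcement learning and is not ARL's actual algorithm: reward is assumed linear in terrain features, and each demonstration nudges the reward weights toward what the human did and away from what the robot chose:

```python
# Toy correction-from-demonstration update, loosely in the spirit of
# inverse reinforcement learning. Feature names and numbers are
# illustrative. Assumed reward model: r(s) = w . phi(s).

FEATURES = ["grass", "mud", "gravel"]

def update_weights(w, demo_phi, robot_phi, lr=0.5):
    # shift weights toward the demonstrated features and away from
    # the features of the robot's own (corrected) choice
    return [wi + lr * (d - r) for wi, d, r in zip(w, demo_phi, robot_phi)]

w = [0.0, 0.0, 0.0]
demo  = [1.0, 0.0, 0.0]   # soldier drove over grass
robot = [0.0, 1.0, 0.0]   # the robot had chosen mud
for _ in range(3):        # just a few examples flip the preference
    w = update_weights(w, demo, robot)
print(dict(zip(FEATURES, w)))  # {'grass': 1.5, 'mud': -1.5, 'gravel': 0.0}
```

The appeal over retraining a deep network is visible even in the toy: three examples, not thousands, are enough to reorder the robot's terrain preferences.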
It's not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. "These questions aren't unique to the military," says Stump, "but it's especially important when we're talking about systems that may incorporate lethality." To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.
Safety is an obvious priority, and yet there isn't a clear way of making a deep-learning system verifiably safe, according to Stump. "Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question." ARL's modules, whether a perception module that uses deep learning, an autonomous driving module that uses inverse reinforcement learning, or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. "If other information comes in and changes what we need to do, there's a hierarchy there," Stump says. "It all happens in a rational way."
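The hierarchy Stump describes, in which a simple, verifiable module sits above an opaque learned one, might be sketched like this (the module names, speed limit, and learned policy are all stand-ins, not ARL's system):

```python
# Sketch of the modular idea: a learned module proposes a command, and
# a plain rule-based safety module (which we CAN verify independently)
# may override it. All names and limits are illustrative.

SPEED_LIMIT = 2.0  # m/s, a constraint checkable without the network

def learned_policy(observation):
    # stand-in for an opaque deep-learning module
    return {"speed": observation.get("urgency", 0.0) * 5.0}

def safety_supervisor(command):
    # verifiable rule layered on top of whatever the learned module says
    if command["speed"] > SPEED_LIMIT:
        return {"speed": SPEED_LIMIT, "overridden": True}
    return {**command, "overridden": False}

cmd = safety_supervisor(learned_policy({"urgency": 0.9}))
print(cmd)  # {'speed': 2.0, 'overridden': True}
```

The learned module can misbehave in ways nobody predicted, but the supervisor's guarantee (speed never exceeds the limit) holds regardless, which is the point of putting the verifiable logic at the higher level.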
Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as "somewhat of a rabble-rouser" due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges that the Army has to be prepared for. "The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing," Roy says. "So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem."
Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. "I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning," Roy says. "I think it comes down to the notion of combining multiple low-level neural networks to express higher level concepts, and I do not believe that we understand how to do that yet." Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. "Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind."
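Roy's red-car example is easy to make concrete on the symbolic side. The detectors below are trivial rule-based stand-ins, not neural networks; that is exactly why composing them is a one-line conjunction, whereas merging two separately trained networks into one "red car" network is an open problem:

```python
# Symbolic composition of two concepts. Each "detector" here is a
# trivial rule, standing in for what would be a trained network in
# Roy's example; the symbolic combination step is just logical AND.

def is_car(obj):
    return obj.get("shape") == "car"

def is_red(obj):
    return obj.get("color") == "red"

def is_red_car(obj):
    # with structured rules, higher-level concepts compose directly
    return is_car(obj) and is_red(obj)

print(is_red_car({"shape": "car", "color": "red"}))   # True
print(is_red_car({"shape": "car", "color": "blue"}))  # False
```

With neural networks there is no analogous operator: the "car" and "red" concepts live in two sets of learned weights with no shared logical interface, which is the gap Roy says the field has not yet closed.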
For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, "we'd already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad."
RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience into a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.
Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you'd start to have issues with trust, safety, and explainability.
"I think the level that we're looking for here is for robots to operate on the level of working dogs," explains Stump. "They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don't expect them to do creative problem-solving. And if they need help, they fall back on us."
RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
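The hierarchical fallback described for APPL might be sketched as follows; the parameter names, thresholds, and similarity score are invented for illustration, and this is not the actual APPL code. A classical planner always runs; learning only tunes its parameters, and when the environment looks too unlike the training data, the system falls back to safe human-set defaults:

```python
# Illustrative sketch of a learned parameter-tuner sitting underneath a
# classical planner, with a human-tuned fallback. Names, numbers, and
# the similarity measure are all hypothetical.

HUMAN_DEFAULTS = {"max_speed": 1.0, "obstacle_margin": 0.5}

def learned_tuner(env_similarity):
    # stand-in for learned parameter adaptation (the argument would
    # drive a real model; here the output is fixed for illustration)
    return {"max_speed": 2.5, "obstacle_margin": 0.2}

def plan_parameters(env_similarity, threshold=0.7):
    if env_similarity < threshold:   # too far from the training data:
        return dict(HUMAN_DEFAULTS)  # fall back to human tuning
    return learned_tuner(env_similarity)

print(plan_parameters(0.9))  # learned, more aggressive parameters
print(plan_parameters(0.3))  # {'max_speed': 1.0, 'obstacle_margin': 0.5}
```

Because the learned component only adjusts parameters of an otherwise classical navigation stack, the system stays predictable even when the learning is out of its depth, which matches the safety-plus-adaptability goal the article attributes to APPL.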
It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, "there are lots of hard problems, but industry's hard problems are different from the Army's hard problems." The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. "That's what we're trying to build with our robotics systems," Stump says. "That's our bumper sticker: 'From tools to teammates.' "
This article appears in the October 2021 print issue as "Deep Learning Goes to Boot Camp."
Blue White Robotics Announces $37M Series B Funding Led by Insight Partners to Revolutionize Autonomous Farming – Business Wire
Posted: at 10:57 am
TEL AVIV, Israel--(BUSINESS WIRE)--Blue White Robotics, a Robots-as-a-Service (RaaS) platform that enables farms to run themselves autonomously, today announced $37M in Series B funding, led by New York-based global private equity and venture capital firm Insight Partners. Entrée Capital co-led this Series B after having seeded Blue White Robotics and participated in its Series A round. They are joined by Clal Insurance, Jesselson Family Office, Peregrine VC, and Regah Ventures, who also made significant contributions in this round. With the trust of its market-leading clients and partners, the company will use this new funding to increase the rapid adoption of these technologies, drive new US sales, and attract key talent for its all-star international team.
Blue White Robotics creates a cohesive experience across farming operations year-round, from spraying and harvesting to disking and seeding. By retrofitting existing infrastructure with intelligent autonomous algorithms, the robot tractors improve farm productivity, precision, and worker safety. Additionally, the Blue White Robotics platform collects and distributes data that creates new services to increase yields and reduce inputs for the growing autonomous operation. The company's values of "Fellowship, Love of the land, and Innovation" carry through all aspects of its mission to revolutionize agriculture through autonomy. The autonomous farming technology company has now raised $50M in investment since its inception in 2017.
"With this new round of investment by some truly world-changing leaders, we have the power to continue our vision for a safer, smarter, and more productive autonomous farm for the 21st century," explains Ben Alfi, co-founder and CEO of Blue White Robotics. "Our amazing team is excited by this renewed commitment to solve the many issues facing our modern farmer and the food system as a whole."
"Farming is an industry that has seen little progress since the advent of the tractor, and it's time for farmers to enjoy the same advancements in technology as others. Blue White Robotics' value proposition is unparalleled in agriculture technology, and they truly stand by the need for 'Autonomy, Now,'" said Daniel Aronovitz, Vice President at Insight Partners. "The company's ease of adoption will allow the product to quickly scale within the industry and enter new markets. We're excited to partner with Blue White Robotics as they grow."
"At Entrée we have a tagline: 'Partnering with the exceptional to build the impossible,'" explains Avi Eyal, Managing Partner of Entrée Capital. "The exceptional competence of the Blue White Robotics team with their Robots-as-a-Service (RaaS) has made the disruption of the agricultural industry now possible. We've backed Blue White Robotics from the start and are proud to continue backing them now."
About Blue White Robotics
Blue White Robotics heralds the revolution in agriculture with our autonomous farm platform, creating an easily adopted system for Robots-as-a-Service (RaaS). We believe that the biggest issues facing agriculture - diminishing labor resources, increasing climate uncertainty, and climbing costs - can be solved through a combination of software and hardware that complements existing farming infrastructure. This network of interconnected technologies allows for an increase in precision, safety, and productivity.
About Insight Partners
Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised, through a series of funds, more than $30 billion in capital commitments. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture around a belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.
About Entrée Capital
Entrée Capital manages over $650M across six funds and has invested in startups such as monday.com, Snap, Stripe, Deliveroo, PostMates, Riskified, FundBox, Toka Cyber, Kuda Bank, Stash, PillPack, Gusto, Cazoo, Coupang, Glovo, and over 100 other companies. With offices in Israel, the UK, and the US, Entrée Capital has realized 26 exits and IPOs, and its portfolio has 14 unicorns.
About Clal Insurance
Clal Insurance is one of Israel's leading investment groups; it owns and manages a diversified portfolio that encompasses leading industrial, technology, biotech, and retail companies. Clal Insurance is responsible for managing assets including Clal policyholders' pension, provident, advanced-training, and profit-sharing (Manager's Insurance) funds, as well as the Clal Group's nostro (equity and insurance reserves).
About Jesselson Family Office
Jesselson is a family office based in Tel Aviv. Investing in a variety of industries, including real estate and private equity in the venture space, Jesselson has been focusing on food and agriculture technology ventures.
About Peregrine
Founded in 2001, Peregrine Ventures is a leading Israeli venture capital fund that focuses on high-tech companies at various stages in the fields of life sciences, pharma, digital health, and more. We specialize in identifying opportunities before they are sufficiently mature for ordinary investors, and we provide our know-how in guiding early-stage companies through the growth stage, with the ability to leverage government financial support.
From fighter pilot to robotics pioneer: An interview with Missy Cummings – McKinsey
Posted: at 10:57 am
In this episode of the McKinsey Global Institute's Forward Thinking podcast, co-host Michael Chui speaks with Mary "Missy" Cummings, one of the first female fighter pilots in the US Navy and now a professor in the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, as well as the director of Duke's Humans and Autonomy Laboratory.
Cummings talks about her life as a fighter pilot and her journey into automation and robotics.
An edited transcript of this episode follows. Subscribe to the series on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you get your podcasts.
Michael Chui (co-host): Hi, and welcome to Forward Thinking. I'm Michael Chui. Well, the first order of business is to say welcome, Janet. Janet is an MGI senior editor and she is joining me to become co-host of our Forward Thinking podcast.
Janet Bush (co-host): Thanks very much, Michael. It's a wonderful chance to talk to some amazing people. Speaking of which, today's interviewee is extraordinary. Why don't you tell us a bit about her?
Michael Chui: Today's interview is with Missy Cummings. She was one of the US Navy's first-ever female fighter pilots. Now she is a leading academic in the fields of engineering, automation, and robotics. In this podcast, she shares her thoughts, among other things, on automation in airplanes and cars.
Janet Bush: Fascinating and very much an area that MGI has been researching. And Missy is also a trailblazer for women in two areas dominated by men: the military and, as she says in the podcast, technology. Can't wait to hear what she says. Over to you, Michael, for this one.
Michael Chui: Missy Cummings is a professor in the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, and is the director of the Humans and Autonomy Laboratory. Missy Cummings, welcome to the podcast.
Missy Cummings: Thank you for having me.
Michael Chui: First of all, I'd love to hear a little bit about how you got to where you are today.
Missy Cummings: I was a military brat, which means that I moved around a lot as a kid. I spent the later part of my childhood, high school, in Memphis, Tennessee. I sort of consider that home, but my parents are no longer alive. North Carolina is my home right now. I've been here for the last eight years.
My dad was in the Navy. I don't think it's that big of a surprise that I went into the Navy. I went to the US Naval Academy for college. And then after I graduated, I went to flight school. The year was 1988. Top Gun had come out in 1986. It's not too surprising that I was very motivated to become one of the best of the best. And I flew A-4s and F-18s for about ten years with the military. I also went to graduate school in the middle of that time frame, and got my PhD in space systems engineering.
After I had flown fighter jets as one of the first female fighter pilots for three years (I'm really glad I did it, but it was a really rough ride, with a lot of difficulties making that cultural transition), I decided to get out and go back to school to get my PhD. I did that at the University of Virginia. And MIT found out that I was close to being done with my PhD, and they were very excited to get a female faculty member who was an expert in aviation. I was there for ten years. And then Duke made me a really good offer I couldn't turn down to move my lab down south, which I did. And that's where I am now.
Michael Chui: Well, congratulations on that. Clearly a pioneer in a number of different areas. Any reflection on diversity now being discussed in a lot of different fields?
Missy Cummings: It's funny to me because, being one of the first female fighter pilots, I saw all the problems that come around with being one of a minority that's trying to break into a majority. And I think that the military has improved. What's kind of concerning to me (and I have written an op-ed for CNN about this) is how much Silicon Valley today, in the year 2021, still resembles the fighter pilot mafia in 1993-94, meaning that it's still a good ol' boys club. There's still a lot of fraternity-like behavior and just outward discrimination against women.
And this is one of the reasons that we see women in tech, particularly in Silicon Valley... you know, it's a rough ride for them. It's a rough ride for them now, like it was a rough ride for me 20 years ago. So I do worry that we're not making as much progress as I had hoped that we would.
Michael Chui: What can be done?
Missy Cummings: There's a magic percentage of 15 percent. It's called critical mass. If you can get your minority to 15 percent, they start to make major inroads in the culture of the majority. So female fighter pilots are still in the single-digit percentages in the year 2021.
To actually fundamentally move that needle, you need to get that up. Those percentages are probably very reflective of what you might see in a lot of tech cultures in Silicon Valley as well. We need to get more women in. And we need to get them at all levels. Because if you don't have women at senior levels being able to provide some high cover for the junior women, then it becomes very difficult.
Michael Chui: What's magical about 15?
Missy Cummings: Well, I'm not a social scientist who came up with that. And I'm not sure how hard the science is that backs that number up. But certainly if, anecdotally, I look around at all the places that I've been in where there were small and/or larger numbers of women, you do see that the more women that are in a unit, whether that unit is a squadron or an academic department, the more natural it is to see women around.
And women attract other women. So I always have a lot more women in my lab than most men do. The importance of role models cannot be overstated here. I think that's [also] true for people of color. The more that people see people like themselves in positions of leadership, then it's inspiring and people realize that they can achieve the positions that they want.
Michael Chui: That's a remarkable insight, and amazing that you're able to continue contributing in that area, with regard to diversity. Talk more about what you're researching now. You're running the Humans and Autonomy Laboratory. What goes on there? By the way, it has the interesting acronym: HAL.
Missy Cummings: I chose that on purpose, because if you're a [2001: A] Space Odyssey buff you will know that HAL is the computer that was warring with the humans, trying to keep them under control. And I'll let you see the movie to see what that's all about.
By the way, I think were due for a remake of that movie. So yes, there is a joke in the name. We are really here to build bridges between humans and autonomy. I consider myself the human advocate. All the research that I do is looking at how humans are, or are not, integrated correctly into complex systems with either straight-up automation or varying levels of autonomy and artificial intelligence.
I read the tea leaves correctly many years ago that humans inside these systems would become more difficult to plan for, design around, test, certify. And we see that today in spades, with self-driving and potentially driver-assist cars. Humans have become an issue in terms of both designing the technology and as users of the technology.
Michael Chui [laughs]: It is kind of funny to view that it's the humans that are the issue, right? But I hear what you're saying. I am curious... maybe we just talk a little bit about the history of autonomy with regard to vehicles, et cetera. Take me back. You mentioned that you read the tea leaves. Where did you see things starting to move along this dimension that you found interesting?
Missy Cummings: When I say I read the tea leaves, in the '90s, when I was in the military, I saw the rise of GPS. I saw the planes, the automation in the planes that I was flying. And it was clear to me that something was changing, in terms of more and more automation taking over parts of jobs for humans. And I also saw a lot of deaths of humans as a result.
In the time that I flew fighters (about three years), on average one person died a month every year I was there. And they were all training accidents. Nothing was in wartime. And it was a stark reality that we had started to design aircraft that far exceeded the capabilities, the cognitive capabilities, of humans. So that's what motivated me to get into this field and say, "We can do this better. There's got to be a way to take a principled approach to designing technology to consider the strengths and limitations of the humans." And that is my area in a nutshell.
Michael Chui: Say more about what you found, or the conclusion that you came to, that these tragic losses of life had to do with exceeding the human cognitive capability in the cockpit or what was happening.
Missy Cummings: This particular day, it was late in the bounce pattern, and they decided that the wives and girlfriends could come and watch. Mostly the men do the bounce pattern. And on this one particular day, one of my peers did a touch-and-go, everything looked good, and he got the infamous "deedle deedle." It was a sound that was made in the cockpit. And one of the flight control axes had thrown an error. So we were trained, pretty much like monkeys, to hit the reset button, very much like cycling the power on your computer, except that it happens really fast.
And in this particular case the system rebooted, the software rebooted, but the hardware was not connected to the software correctly. And it caused the rudder to actually be out of limits, even though the software set the center point of the rudder back to zero. The aircraft flipped and killed the pilot. He had no chance for escape, and right in front of his fiancée.
And this problem of... you know, the pilot did not understand that there had been a software-hardware error. There actually was a place where he could've figured it out, but in a menu that was buried several layers deep inside the system. This is one of the problems with a single pilot. He didn't have time to troubleshoot the system. And in the end, that accident report blamed the pilot. The accident report said, "Pilot error. It was all his fault."
I think that we need to move away from that. In the end, maybe he could've figured it out, but he was set up to fail. And we need to be much more careful in how we're designing these safety-critical systems with automation so we're not setting people up like this.
Michael Chui: Are there things that car designers should be learning as they're starting to implement more and more levels of automated technology and driver assistance that the aerospace industry has learned, or vice versa? What's your assessment of the possible cross-fertilization there?
Missy Cummings: One of the big areas that researchers learned about as a consequence of automation in aviation is a problem known as mode confusion. This is when the aircraft was in one automated mode, but pilots thought it was in a different mode and consequently took different, oftentimes very incorrect actions, which sometimes had catastrophic consequences.
Indeed, that was the case for my peer who flipped his airplane doing touch-and-goes. He didn't realize that the aircraft was actually in a fail mode. He thought the aircraft was in a safe mode and he didn't need to do anything different with the aircraft.
We've known about this for a long time in aviation. But this is new learning for the automotive world. And we are seeing this problem, mode confusion, quite a bit. We see this, for example, when people think that the words "autopilot" and "full self-driving" actually mean those things. And people climb in the back seat or take their hands off the steering wheel just for a little bit, and then they don't realize the trouble that they're in, and then the car crashes.
People will often think that the car's doing a great job tracking between the lanes and doing navigate-on-autopilot. And then they'll say, "Well, I can just... ooh, oops, I dropped my phone," or "I dropped my french fry," or whatever it is they were doing in the car. Unfortunately, in cars really bad things can happen in a tenth of a second, especially in heavy, dense traffic. And so people are getting lulled into a false sense of security about how well the car's performing.
I think that the automotive community is just now starting to get it. Now whether they're going to be able to do anything to mitigate this, I think that's another big question.
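The mode-confusion hazard Cummings describes can be reduced to a simple invariant: danger arises whenever the automation's actual mode and the operator's believed mode silently diverge. The sketch below is a toy illustration only; the mode names and classes are invented for this example and do not correspond to any real vehicle or avionics API.

```python
# Toy model of mode confusion: the failure is a divergence between the
# automation's actual mode and the operator's mental model of that mode.
from dataclasses import dataclass

@dataclass
class Automation:
    mode: str  # actual mode, e.g. "LANE_KEEP", "FAILED", "MANUAL"

@dataclass
class Operator:
    believed_mode: str  # what the human thinks the system is doing

def mode_confusion(auto: Automation, op: Operator) -> bool:
    """True when the operator's belief no longer matches reality."""
    return auto.mode != op.believed_mode

# The automation silently drops into a fail mode after an error,
# but the driver still believes lane keeping is active.
assert mode_confusion(Automation("FAILED"), Operator("LANE_KEEP"))
assert not mode_confusion(Automation("MANUAL"), Operator("MANUAL"))
```

In this framing, the design remedy is not smarter automation but a salient, unmissable annunciation every time the actual mode changes, so the two values cannot drift apart unnoticed.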
Michael Chui: It is interesting that planes fly so fast, but there's a lot of room up in the air, I guess. I think you once mentioned how much time in a commercial flight a pilot actually spends manipulating the flight controls and on the stick. It's a surprisingly short amount of time.
Missy Cummings: There's so much automation in commercial aircraft now. It is surprising to people. [It] really depends on whether your pilot is flying an Airbus or a Boeing. If the plane is an Airbus, most pilots touch the stick only for about three and a half minutes out of any flight. And if it's a Boeing, then the pilots touch the stick for about seven minutes.
This happens because the automation is so good at manipulating the throttles that if we let humans do it, planes waste a lot more gas and create a much bigger carbon footprint when humans fly them. Humans are in planes now really as babysitters. And that introduces a whole other set of issues with people being bored.
Michael Chui: That's a different failure mode than mode confusion.
Missy Cummings: That's correct. That's correct. It's not a failure mode, but it exacerbates failure modes.
Michael Chui: Weve done quite a bit of research on automation at MGI. From your point of view, what is the perfect use case?
Missy Cummings: In the military, in the Department of Defense, they like to use the mantra "dull, dirty, dangerous." It's a pretty good mantra. The tasks that are boring, the tasks that are dirty, meaning maybe some kind of chem-bio situation, where human health may be at risk. Mining, for example, is a good reason that we have robots. Nobody needs all that coal dust in their lungs.
And then of course dangerous: wartime. We see not just far fewer military casualties when we use drones for warfare, but also fewer civilian casualties. And the reason is that when you send a drone in to drop a bomb as opposed to a human, the human controlling the drone from 2,000 or 4,000 miles away is no longer at physical risk. The human can take their time. They have a lot of people they can talk to on the radio. They are not physically under stress. And they've had plenty of sleep. You would be surprised at the number of drone operators that have a Starbucks cup of coffee sitting right next to them in their drone trailer.
There are all these practical considerations. And I think that we need to appreciate that the sweet spot is a collaboration between humans and machines. Let the machines augment the humans, so that robots can extend our capabilities and let humans do what we do best, which is problem-solving.
Michael Chui: So just to push the question, do we need human pilots in commercial passenger aviation?
Missy Cummings: Passenger aviation is a little bit trickier, not because of the science of flying. But there is this issue that we know as shared fate. Shared fate is the comfort that passengers get from knowing that there's a human in the cockpit, and that if anything goes really wrong, that human is doing everything he or she can to save their own life, and thus saving the lives of the passengers.
I think there's some social control issues that we have to think about as well. I don't see us getting away from that anytime soon. I think for passenger airlines, we're going to need at least one person who's the captain for my lifetime.
That is not true for packages. I think right now we should turn all package [delivery services], FedEx, UPS, DHL... I think these could and should become drone aircraft.
Michael Chui: Let's turn to automobiles. We talked about it a little bit already, but clearly it's one of those areas where there's a lot of interest and also a place where there at least has been a lot of investment in the past five, ten years. I'm curious about the history.
There was that DARPA Grand Challenge back in 2004. DARPA, the Defense Advanced Research Projects Agency, had a contest. And a number of people could sign up to win a prize, to drive an autonomous vehicle 150 miles in the desert. Nobody finished. And then the following year, in 2005, I think there were five finishers. And there's a winning car, I guess from Stanford, if I recall correctly. You were in the field at the time. I think prior to that people thought, "It's impossible." In 2004 they said, "Look, this definitely didn't work." And by 2005 it did kind of work. What were your thoughts as you saw this playing out, coming from the field that you did?
Missy Cummings: At the time all this was happening, I was working with the MIT DARPA Grand Challenge team on a parallel project doing robotic forklifts. I saw what was happening at the ground level. I think everyone was blown away with how quickly this technology transitioned out of the academic sphere and into the commercial sphere. It literally did happen overnight.
But I think what we're seeing now are the consequences of that. Because the technology was still extremely immature, still very much experimental. And Silicon Valley decided to try to commercialize a technology that was still very much in its infancy. I think that this is why you've seen so many self-driving cars, Waymo, Cruise. They're still really struggling to try to get the technology to a point of maturation that's safe enough in commercial settings, in robo-taxi settings.
We're not going to get this anytime soon. I have yet to see any real advancements that I think can allow these cars to reason under uncertainty. That's where I draw the line. The cars must be able to reason under uncertainty, whether that uncertainty is weather, human behavior, or different failure modes in the car. If cars cannot figure out what to do at least to fail gracefully, then it's going to be a long time before this technology is ready.
Michael Chui: Just reflecting on it, it's a funny thing about human expectation, right? At first, we thought this problem was super hard. Then these breakthroughs you mentioned happened overnight. And maybe everybody thought, "Oh, it's a lot easier than we thought." And now we're rediscovering that the last X percent is really, really hard. Is that something that you see in common in other technologies as well? Is this just something we should expect, this hype cycle that happens, and we overshoot in both directions?
Missy Cummings: I think that this hypercompetitive race has made people focus on the wrong parts of the technology, instead of the parts that need the care and feeding to make it work and/or to get derivative technologies that can be useful. We may be able to get slow-speed technology (slow, meaning autonomous shuttles) out of this. Maybe some very limited geo-fenced operations for robo-taxis.
But we're not going to get the kind of full reach that either the car companies or the Silicon Valley companies like Cruise and Waymo think we're going to get. We're going to fall short of that goal. I'm not sure yet what the spinout technologies will be. But I think that one day we are going to look back, and the self-driving race to the bottom is going to become a really important set of Harvard Business [School] case studies.
Michael Chui: You mentioned that these companies are focusing on the wrong problems. What are the right problems to solve?
Missy Cummings: The right problems, specifically when we're talking about the surface transportation world, including trucking and cars, involve collaboration between sensors and humans: letting the system prevent humans from doing stupid things like falling asleep at the wheel or texting while driving, because if humans do these things, the car can then react and keep itself in a safe place. If nothing else, pull over to the side of the road and flash the blinkers until the human can take charge in the way that they need to. So this is the idea of technology acting as a guardian.
How can we have humans and technology work together to buffer the limitations that both have, while capitalizing on the strengths of one another?
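The guardian pattern Cummings describes, warn first, then execute a minimal-risk maneuver such as pulling over with hazards on, can be sketched as a simple escalation policy. The thresholds, signal names, and action labels below are illustrative assumptions for this sketch, not any production driver-monitoring system.

```python
# Hypothetical sketch of "technology as guardian": sensors watch the driver
# and the car escalates its intervention instead of replacing the driver.
def guardian_action(eyes_off_road_s: float, hands_on_wheel: bool) -> str:
    """Pick an intervention from simple inattention signals.

    eyes_off_road_s: seconds since the driver last looked at the road.
    hands_on_wheel: whether the steering wheel reports hand contact.
    """
    if eyes_off_road_s < 2.0 and hands_on_wheel:
        return "monitor"            # driver engaged: do nothing
    if eyes_off_road_s < 5.0:
        return "alert"              # brief lapse: audible warning
    return "pull_over_hazards_on"   # sustained inattention: minimal-risk stop

assert guardian_action(0.5, True) == "monitor"
assert guardian_action(3.0, False) == "alert"
assert guardian_action(10.0, False) == "pull_over_hazards_on"
```

The design choice here mirrors the interview's point: the automation buffers human limitations (fatigue, distraction) rather than attempting full autonomous reasoning under uncertainty.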
Michael Chui: I think you've also mooted another taxonomy of things that you'd want to see happen for these things to be on the road. What do you want?
Missy Cummings: I know there's a whole set of parents out there that are with me. I call myself Eeyore sometimes about the status of self-driving cars in the future.
What I truly want, being the mother of a 14-year-old, is for self-driving cars to be here. I don't want my 14-year-old daughter behind the wheel of a car, ever. I do research in this field. I do understand how bad human drivers are. What I want would be for this technology to work, but understanding that my job is to do research in this field, I recognize that it's not going to happen.
What I foresee, not just in automotive transportation but also in medicine, the military, and finance, is a very distinct shift away from replacing human reasoning and toward augmenting it. That's where the real future is. That's where the money is going to be made. Because people think they're doing it, but they're not really doing it, and they're not doing it in the right way.
If I hear somebody try to tell me how they're doing explainable AI one more time, I'm just going to go bonkers. Explainable AI is so much more than just trying to show weights or thresholds or various capabilities of an underlying algorithm. Explainable AI means different things to the different users that may come into contact with AI. I think that there's an entire growth area in real explainable AI.
There's also going to be a huge growth area in the maintenance of any kind of computer system that has underlying AI in it, including robots. I think robot maintenance is going to be one of the biggest growth areas in the next 20 years. We cannot keep all the robots that we have right now working. And we're not thinking about maintenance in a way that's streamlined, that can pull from the resources of the typical socioeconomic class that's doing maintenance now on regular forklifts. We're going to have to figure out how education needs to change so that we can help people lift themselves up by their bootstraps.
Michael Chui: So if you don't mind, I'll jump into the lightning round of quick questions, quick answers. Here we go. What is your favorite source of information to keep up to date on autonomy?
Missy Cummings: Wow. That's really hard. I have a lot of sources of information. I don't have one. I am very fortunate that I have a rich network of former students and peers and colleagues, and I work in academia. So whether it's an email or a newsletter or a headline or a Twitter feed, it takes a lot of different sources for me to stay on top, that's for sure.
Michael Chui: What excites you most about advances in technology?
Missy Cummings: I know this is going to sound a little weird. I love it when humans start to figure out ways around technology. So I've been a real critic of Tesla. But every time I see a new way that somebody's figured out to defeat the Tesla Autopilot nag, it does kind of tickle me a little bit, because I always appreciate that even if there's no real purpose for it, humans are always trying to figure out, even if they don't recognize it, how to outsmart a computer. I just love that about humans [laughs].
Michael Chui: What worries you most about technology advancement?
Missy Cummings: AI illiteracy, tech illiteracy. I see so much. I hung out in the World Economic Forum as a head of one of their committees for a couple years. I spent a lot of time with C-suite people. Tech illiteracy scares the pants off me.
Michael Chui: If you could fly any machine, what would it be?
Missy Cummings: A Warthog, an A-10 [laughs].
Michael Chui: If you were the head of R&D for an aerospace company, what would you prioritize?
Missy Cummings: Well, I need a qualification. What kind of aerospace company?
Michael Chui: Civilian.
Missy Cummings: Like, commercial passenger?
Michael Chui: Yes.
Missy Cummings: The passenger experience.
Michael Chui: If you were head of R&D for an automotive company, what would you prioritize?
Missy Cummings: Guardian technology in the car.
Michael Chui: What regulation is most important to implement?
Missy Cummings: Certification and testing for AI.
Michael Chui: In what company would you be most interested in investing?
Missy Cummings: I am definitely not going to wade into those waters. No, no, no. I'll have everybody and his brother calling me out for a conflict of interest [laughs].
Michael Chui: OK. What would you recommend that a college student study?
Missy Cummings: Computer science.
Michael Chui: What would you be doing if you werent a professor?
Missy Cummings: I would be leading outdoor adventures.
Michael Chui: What kind of adventures?
Missy Cummings: Oh, kind of a smorgasbord: hiking, snowboarding, whitewater rafting.
Michael Chui: That's great. And what one piece of advice do you have for our listeners?
Missy Cummings: Stay curious.
SVT Robotics Named a 2021 Cool Vendor by Gartner – GlobeNewswire
NORFOLK, Va., Sept. 23, 2021 (GLOBE NEWSWIRE) -- SVT Robotics, a disrupter in the industrial robotics space whose software accelerates and simplifies the integration and deployment of robotics, today announced that it has been named a Cool Vendor by Gartner in the September 9, 2021, report titled "Cool Vendors in Supply Chain Execution Technologies, 2021."
The Gartner report points out, "These cool vendors target the need for better supply chain execution productivity from digital investments in difficult economic conditions. Supply chain technology leaders should use this research to identify emerging supply chain execution technology vendors that can drive enhanced business value."1
"It is a unique honor to be named a Gartner Cool Vendor," said A.K. Schultz, co-founder and CEO of SVT Robotics. "The future of supply chain is adoption of robotics, and there needs to be a direct focus on solving the issue of multirobot orchestration and seamless interoperability. Our revolutionary platform is on the forefront of meeting this need, and we believe it's leading to industry recognition such as the Cool Vendor."
"Our SOFTBOT Platform continues to gain momentum due to its drag-and-drop connectivity and processing orchestration across many industry-leading automation and software systems, enabling companies to rapidly deploy fully integrated solutions that work in concert with each other," said Michael Howes, co-founder and CPO of SVT Robotics. "It's a game-changer like never before for piloting and adopting new technologies or adding more automation capacity."
The information provided in the Gartner report is of value to all supply chain executives and industry professionals who are seeking new ways to streamline and expedite their supply chains.
SVT Robotics has been riding a surge of interest in their SOFTBOT Platform as well as receiving numerous awards for their groundbreaking technology. They won both the June 2021 Supply Chain Top Projects Award by Supply and Demand Chain Executive as well as the RILA 2021 Startup Innovation Award. They were also named a top-ten Best Product of 2021 by MHI.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartners research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
1. Gartner, "Cool Vendors in Supply Chain Technologies, 2021," by Dwight Klappich, published Sept. 9, 2021.
About SVT Robotics
SVT Robotics is a software company that's revolutionizing robot deployments in the warehousing and manufacturing industries. SVT's SOFTBOT Platform empowers companies to compete in a quickly changing marketplace by enabling them to rapidly integrate any enterprise system with any robot or automation for any task, without custom code. Learn more about SVT Robotics at svtrobotics.com, Twitter, Facebook, and LinkedIn.
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/c73d5895-3d48-4403-8768-07c5628c008a
Afghan girls robotics team design their future in Qatar – Yahoo News
The nine members of an all-girl Afghan robotics team evacuated from Kabul to Qatar have built on their star status and captured hearts since fleeing their homeland.
Now back in education and working on their entries for a global robotics competition, the girls worry about their immediate future but hope they can one day return to Afghanistan.
Team member Ayda Haydarpour, 17, who got hooked on digital engineering playing Super Mario as a child, said it was "too hard" to follow events in Afghanistan but hopes to return to open the first STEM (science, technology, engineering, and mathematics) school.
"My grandfather used to ask me lots of questions about his tablet and phone," she said with a smile.
"In Afghanistan, robotics is new, especially for women," said Haydarpour, who has three sisters back in Afghanistan.
Her mother had worked as a teacher at a girls' high school, but the facility is yet to reopen following last month's fall of the government to the Taliban.
The Taliban had banned women from work and education, confining them to homes during their brutal rule of Afghanistan between 1996 and 2001.
On Tuesday, the Taliban vowed girls would be allowed to return to school although they have so far been effectively excluded, with a spokesman saying "more time is needed".
While Haydarpour dreams of one day working for tech giant Microsoft, she is adamant that she wants "to go back and serve my people".
In the robotics laboratory at Texas A&M, one of several US universities with an outpost in Qatar, Haydarpour hunched over a laptop decorated with colourful badges while her teammates assembled components.
- 'Weather station' -
The girls evacuated to Qatar were placed in one of three institutions depending on their needs with full scholarships granted by Doha.
Some of their teammates remain in Afghanistan while others are in Mexico and the UAE.
But the nine girls in Qatar all get together after school to work on their entries for the First Global Challenge robotics competition.
Asked how their second day of school had gone, following their arduous departure from Afghanistan on a Qatari military plane and 10 days quarantine amid the coronavirus pandemic, the girls chimed back "all good" in chorus.
Eight of the girls had a spirited exchange about a faulty component on a circuit board they were building to use on a CubeSat weather station.
"For one week we couldn't fix it -- so we changed the wire," said Haydarpour, holding up a printed circuit board trailing cables onto a lab bench dotted with toolboxes.
Beside her, another member of the team assembled the plastic housing for the weather station while periodically checking her phone.
On the other side of the table, half of the girls worked to build a robot capable of scooping up plastic balls and firing them away.
"It's for the shooter for the balls," said 18-year-old Somaya Faroqi, the team leader, as she and her teammate Florans conferred on how to fix a motor.
She had earlier told AFP that she was "so sad because we lost our family, our (robotics) coaches, our life" by abruptly leaving Afghanistan.
- 'Robotics award' -
Roya Mahboob, founder of an Afghan software company, helped form the team which went on to develop a low-cost ventilator last year at the height of the pandemic.
The girls made headlines in 2017 after being denied visas to take part in a robotics competition in Washington -- before then-president Donald Trump intervened and they were allowed to travel.
The same year the girls won a prestigious robotics award.
US Secretary of State Antony Blinken dropped in on the girls during a visit to Qatar earlier this month.
"I was honored to meet several remarkable women & girls of the Afghan Robotics Team. Their journeys & STEM aspirations are inspiring," he tweeted.
Benjamin Cieslinski, a laboratory manager at the university, called the girls' skills "a really high level" despite their ordeal.
But Haydarpour still worries about the future and education of girls in her country.
"What will happen in Afghanistan?" she asks. "It's too hard to see your country in that situation."
Friendly robots are more helpful than authoritative ones – Earth.com
A new study published in the journal Science Robotics has found that socially interactive robots built to assist seniors with their daily living should be constructed as collaborative and peer-oriented rather than dominant and authoritative. This will maximize humans' acceptance of their advice and assistance.
"When robots present themselves as human-like social agents, we tend to play along with that sense of humanity and treat them much like we would a person," explained study lead author Shane Saunderson, a specialist in human-robot interaction at the University of Toronto.
"But even simple tasks, like asking someone to take their medication, have a lot of social depth to them. If we want to put robots in those situations, we need to better understand the psychology of robot-human interactions."
According to Saunderson and his team, in order to understand how to build better robots, scientists should first understand the concept of authority and how to incorporate it into their machines. The researchers found that this concept can be divided into two: formal authority and real authority.
"Formal authority comes from your role: if someone is your boss, your teacher or your parent, they have a certain amount of formal authority," Saunderson explained. "Real authority has to do with the control of decisions, often over entities such as financial rewards or punishments."
Saunderson and his colleagues used a humanoid robot named Pepper to help 32 volunteer participants complete a series of tests. For some subjects, Saunderson was presented as the formal authority figure, with Pepper as a simple helper, while for others Pepper was introduced as the authoritative leader of the experiment.
The researchers discovered that Pepper was less persuasive when presented as a strong authority figure than when introduced as a peer helper. "Social robots are not commonplace today, and in North America at least, people lack both relationships and a sense of shared identity with robots. It might be hard for them to come to see robots as a legitimate authority," explained Saunderson.
Thus, the scientists concluded that in order to create positive experiences in contexts of human-robot interaction, robots should be constructed as peer-oriented and collaborative rather than dominant and authoritative.
"This ground-breaking research provides an understanding of how persuasive robots should be developed and deployed in everyday life, and how they should behave to help different demographics, including our vulnerable populations such as older adults," concluded study co-author Goldie Nejat.
By Andrei Ionescu, Earth.com Staff Writer
Read the original post:
Friendly robots are more helpful than authoritative ones - Earth.com
Posted in Robotics
Comments Off on Friendly robots are more helpful than authoritative ones – Earth.com
Honeywell Introduces New Robotic Technology To Help Warehouses Boost Productivity, Reduce Injuries – Yahoo Finance
Posted: at 10:56 am
-- Smart Flexible Depalletizer minimizes the need for manual labor in roles that carry risk of injury and are currently experiencing high turnover
-- Solution offers the latest in machine learning and gripping technologies to help unload pallets without requiring any pre-programming or operator intervention
-- Robotic technology will be showcased at PACK EXPO in Las Vegas Sept. 27-29
CHARLOTTE, N.C., Sept. 23, 2021 /PRNewswire/ -- Honeywell (Nasdaq: HON) announced today its latest innovation in robotic technology designed to help warehouses and distribution centers automate the manual process of unloading pallets, reducing the operational risks of potential injuries and labor shortages.
Driven by sophisticated machine learning and advances in perception and gripping technologies, Honeywell's Smart Flexible Depalletizer minimizes the need for manual labor to break down pallet loads, roles that carry risk of injury, experience high turnover and are currently difficult to staff. A typical medium- to large-sized distribution center unpacks up to several thousand pallets per day, with the probability of errors, injuries and worker fatigue increasing during each shift.
"Even when manual operations are running smoothly without injuries, the physical, repetitive task of unloading pallets is variable and limited by human constraints," said Dr. Thomas Evans, Chief Technology Officer of Honeywell Robotics. "Our Smart Flexible Depalletizer helps improve throughput by operating consistently without interruption over multiple shifts with minimal human interaction. With the labor constraints warehouses and distribution centers are seeing in filling these manual roles, this solution can be an ideal fit to help keep up with daily order volumes."
The depalletizer's articulated robotic arm is guided by advanced vision and perception technologies, which allow cases to be picked from a single- or mixed-SKU pallet on a fixed or mobile location. The latest computer vision technology identifies the exact location of every case on the pallet, while perception software automatically recognizes a wide variety of packaging. This technology allows for seamless handling of a continuous flow of pallets in any sequence without requiring any pre-programming or operator intervention.
The advanced machine learning and motion planning optimize the movements of the robotic arm to ensure maximum picking speed and efficiency. The control logic also senses the weight of each item as the robot lifts it and automatically updates its gripping response to transfer each product securely and effectively. The more the solution picks, the more it learns and the better it gets at quickly and efficiently unloading pallets.
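The weight-sensing grip adjustment described above can be sketched as a simple proportional rule. The class name, parameters, and the linear force model below are illustrative assumptions, not Honeywell's actual control logic:

```python
class AdaptiveGripper:
    """Sets grip force from the weight sensed as an item is lifted."""

    def __init__(self, base_force_n=20.0, force_per_kg=9.8, safety_factor=1.5):
        self.base_force_n = base_force_n    # minimum grip force in newtons
        self.force_per_kg = force_per_kg    # added force per kilogram of payload
        self.safety_factor = safety_factor  # margin against slippage

    def grip_force(self, sensed_weight_kg):
        """Return the grip force for the weight sensed during the lift."""
        return (self.base_force_n
                + self.safety_factor * self.force_per_kg * sensed_weight_kg)

gripper = AdaptiveGripper()
light = gripper.grip_force(1.0)   # a light case
heavy = gripper.grip_force(10.0)  # a heavy case, gripped harder
```

In a real controller the update would run inside the lift itself, tightening the grip as the sensed load settles; the sketch only shows the mapping from sensed weight to commanded force.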
"Reliable depalletizing rates are of growing importance as consumer preferences continue to accelerate the pace of packing and increase the product mix that warehouses and distribution centers handle every day," said Evans. "These major technology improvements are driving fully automated solutions capable of meeting or exceeding the throughput of manual operations. Not only do these solutions offer significant benefits to modern distribution centers and other fulfillment operations, the business case for their utilization is also increasingly attractive."
A recent Honeywell study revealed more than half of companies are more willing to invest in automation because of the pandemic and its lasting effects. The same study showed companies see increased speed of tasks, greater productivity and increased employee utilization and productivity as the top three potential benefits from automation.
The robot can work in conjunction with pallet conveyance autonomous mobile robots, enabling continuous operation of the system while providing the flexibility to stage pallets and empty stacks virtually anywhere the robot is capable of traveling.
The Smart Flexible Depalletizer will be showcased at this year's PACK EXPO International trade show in Las Vegas at booth C-4436.
From concept to integration, Honeywell Intelligrated draws on its expanding portfolio and deep industry expertise to help warehousing, distribution and fulfillment companies optimize and manage their processes. The business offers integrated end-to-end automation systems, warehouse automation software and lifecycle support services regardless of the manufacturer to improve throughput and keep workers safe. For more information on Honeywell Intelligrated services and offerings, visit sps.honeywell.com/us/en/products/automation.
Honeywell Safety and Productivity Solutions (SPS) provides products, software and connected solutions that improve productivity, workplace safety and asset performance for our customers across the globe. We deliver on this promise through industry-leading mobile devices, software, cloud technology and automation solutions, the broadest range of personal protective equipment and gas detection technology, and custom-engineered sensors, switches and controls. For more information, please visit: sps.honeywell.com.
Honeywell (www.honeywell.com) is a Fortune 100 technology company that delivers industry-specific solutions that include aerospace products and services; control technologies for buildings and industry; and performance materials globally. Our technologies help aircraft, buildings, manufacturing plants, supply chains, and workers become more connected to make our world smarter, safer, and more sustainable. For more news and information on Honeywell, please visit http://www.honeywell.com/newsroom.
Contacts:
Media: Whitney Ellis, (803) 835-8137, whitney.ellis@honeywell.com
View original content to download multimedia:https://www.prnewswire.com/news-releases/honeywell-introduces-new-robotic-technology-to-help-warehouses-boost-productivity-reduce-injuries-301383819.html
SOURCE Honeywell
Read more here:
Posted in Robotics
Comments Off on Honeywell Introduces New Robotic Technology To Help Warehouses Boost Productivity, Reduce Injuries – Yahoo Finance
Hy-Vee adopts AI robots to reduce out-of-stocks – Chain Store Age
Posted: at 10:56 am
A regional Midwest grocer is rolling out autonomous robots that automatically scan tens of thousands of products.
Hy-Vee Inc. will deploy Simbe Robotics' autonomous shelf-scanning Tally robots in stores across Iowa, Nebraska and Missouri. The robots will scan items in the grocery, health and wellness aisles up to three times per day to ensure products are in stock, in the correct location, and correctly priced.
According to Simbe Robotics, by providing more frequent and accurate inventory, pricing and promotion information, Tally robots equip store associates with actionable insights that can reduce out-of-stocks by up to 30%. In addition, associates can spend less time on inventory management and focus more time on engaging with shoppers.
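The kind of shelf audit described above, comparing what the scan sees against what the store expects, can be sketched as a planogram diff. The data layout and field names below are assumptions for illustration; Simbe's actual APIs are not described in this article:

```python
def find_issues(planogram, scan):
    """Compare the expected shelf state (planogram) with a scan result.

    planogram: {sku: {"aisle": str, "price": float}}
    scan:      {sku: {"aisle": str, "price": float, "facings": int}}
    Returns a list of (sku, issue) pairs for associates to act on.
    """
    issues = []
    for sku, expected in planogram.items():
        observed = scan.get(sku)
        if observed is None or observed["facings"] == 0:
            issues.append((sku, "out_of_stock"))   # nothing on the shelf
            continue
        if observed["aisle"] != expected["aisle"]:
            issues.append((sku, "wrong_location"))
        if observed["price"] != expected["price"]:
            issues.append((sku, "price_error"))
    return issues

planogram = {"A1": {"aisle": "3", "price": 2.99},
             "B2": {"aisle": "5", "price": 4.49}}
scan = {"A1": {"aisle": "3", "price": 3.49, "facings": 4}}
issues = find_issues(planogram, scan)
# issues == [('A1', 'price_error'), ('B2', 'out_of_stock')]
```

Running such a comparison after each of Tally's up-to-three daily scans is what turns raw shelf images into the "actionable insights" the article mentions.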
By leveraging this rich data and combining it with Tally's plug-and-play software platform and APIs, Hy-Vee hopes to obtain unprecedented insights into the state of its stores, giving employees real-time recommendations to improve store operations and maximize customer satisfaction.
Tally requires no infrastructure changes to the store environment to operate effectively and is designed to strategically navigate store aisles during normal store hours, safely maneuvering alongside shoppers and employees.
Tally robots are already in Hy-Vee stores in Ankeny, Iowa, and Lincoln, Neb.; and will be rolling out to locations in Lee's Summit, Mo.; Omaha, Neb.; and Altoona, Iowa, in the coming weeks.
Another regional Midwest grocery retailer, Schnucks Markets, recently committed to a multi-year, full-scale rollout that will bring Simbe Robotics' autonomous shelf-scanning Tally robots to all 111 Schnucks locations in the U.S. The deployment, which builds on previous expansions, will make Schnucks the first grocer in the world to utilize AI-powered inventory management technology at scale.
By incorporating Simbe's solution into chainwide operations, Schnucks said it will gain even greater visibility into store conditions, with deeper levels of business insights as the retailer prepares to adjust to the quickly evolving landscape of a post-pandemic world.
"Hy-Vee has a strong reputation for excellent customer service and an employee-first culture," said Luke Tingley, senior VP and CIO at Hy-Vee. "By employing Tally, we can continue providing that excellent service by reducing out-of-stocks and empowering our store teams with real-time insights to ensure the best customer experience across the board."
"No other retail solution supports store teams the way Tally does. The pandemic truly created a new normal for grocery that has illuminated the need for a greater frequency and fidelity of in-store data," said Brad Bogolea, CEO and co-founder of Simbe Robotics. "Hy-Vee is the perfect example of thoughtfully adopting technology to improve the store experience for both customers and their teams. As retailers face a growing number of considerations, Tally provides a cost-effective solution that ensures they can continue to provide excellent customer service and create a valuable, more enjoyable working environment for their employees."
Headquartered in West Des Moines, Iowa, Hy-Vee Inc. is an employee-owned corporation operating more than 275 retail stores across eight Midwestern states.
Excerpt from:
Hy-Vee adopts AI robots to reduce out-of-stocks - Chain Store Age
Posted in Robotics
Comments Off on Hy-Vee adopts AI robots to reduce out-of-stocks – Chain Store Age
Robots and machine learning researchers combine forces to speed up the drug development process – TechRepublic
Posted: at 10:56 am
IBM Research and Arctoris announce a research collaboration to test a closed-loop platform.
Ulysses is the world's first fully automated drug discovery platform, developed and operated by Arctoris, a company based in Oxford, Boston and Singapore.
Image: Arctoris
IBM Research and Arctoris are bringing the power of artificial intelligence and robotic automation to the process of developing new drugs. The two companies aim to make smarter choices early on in the process, iterate faster and improve the odds of finding an effective treatment.
IBM Research contributed two platforms to the project. RXN for Chemistry uses natural language processing to automate synthetic chemistry and artificial intelligence to make predictions about which compound has the highest chance of success. That information is passed on to RoboRXN, an automated platform for molecule synthesis.
Arctoris, a drug discovery company, brought Ulysses to the project. The company's automated platform uses robots and digital data capture to conduct lab experiments in cell and molecular biology, biochemistry, and biophysics. Experiments conducted with Ulysses generate 100 times more data points per assay compared to industry-standard manual methods, according to Arctoris.
IBM Research will design and synthesize new chemical matter that Arctoris will test and analyze. The resulting data will inform the next iteration of the experiment.
SEE: Drug discovery company works with ethnobotanists and data scientists
Thomas A. Fleming, Arctoris co-founder and COO, described this project as "a world-first closed-loop drug discovery project" that combines AI and robotics-powered drug discovery.
"This collaboration will showcase how the combination of our unique technology platforms will lead to accelerated research based on better data enabling better decisions," he said in a press release.
A research paper about closed-loop drug discovery describes the process as a centralized workflow controlled by machine learning. The system generates a hypothesis, synthesizes a lead drug candidate, tests it and then stores the data. This comprehensive process could "reduce bottlenecks and standards discrepancies and eliminate human biases in hypothesis generation," according to the paper.
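The loop the paper describes (generate a hypothesis, synthesize a candidate, test it, store the data, refine) can be sketched with toy stand-ins. The assay function and the simple refinement rule below are illustrative assumptions, not the RXN or Ulysses APIs:

```python
def propose(center, step, n=5):
    """Propose n candidates evenly spaced around the current best guess."""
    return [center + step * (i - n // 2) for i in range(n)]

def assay(x):
    """Toy stand-in for a wet-lab assay: activity peaks at x = 0.8."""
    return 1.0 - abs(x - 0.8)

def closed_loop(cycles=8):
    """Propose, 'synthesize and test', store every result, then refine."""
    history = []
    center, step = 0.5, 0.2
    for _ in range(cycles):
        for x in propose(center, step):
            history.append((x, assay(x)))             # store every measurement
        center = max(history, key=lambda r: r[1])[0]  # recenter on the best hit
        step /= 2                                     # tighten the search space
    return center, history

best_x, history = closed_loop()
```

Here the recentering step stands in for retraining: in the actual collaboration that role is played by a generative model updated on the accumulated assay data, but the shape of the loop, with no human in the hypothesis-generation path, is the same.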
Automating lab work results in better data, which in turn means less rework and savings of time and money, Poppy Roworth, head of laboratory at Arctoris, explained in a blog post. She described the benefits of automation this way: "I no longer have to manually pipette a 96- or 384-well plate one well at a time, which is highly beneficial for my sanity when there is a stack of more than 5 or 10 to get through." By automating the protocol, scientists can use time previously spent in the lab on "planning the next experiment, designing new projects with clients, reading literature and keeping up to date with other projects."
Matteo Manica, a research scientist at IBM Research Europe, Zurich, is coordinating the project and said in a press release that this work is a unique opportunity to quantify the impact of AI and automation technologies in accelerating scientific discovery.
"In our collaboration, we demonstrate a pipeline to perform iterative design cycles where generative models suggest candidates that are synthesized with RoboRXN and screened with Ulysses," he said. "The data produced by Ulysses will then be used to establish a feedback loop to retrain the generative AI and improve the proposed leads in a completely data-driven fashion."
More than 3,000 researchers in 16 locations on five continents work for IBM Research. Arctoris is a biotech company headquartered in Oxford with offices in Boston and Singapore. The collaboration is ongoing.
More here:
Posted in Robotics
Comments Off on Robots and machine learning researchers combine forces to speed up the drug development process – TechRepublic
AMP Robotics Installs its First Recycling Robots in the United Kingdom and Ireland with Recyco – Business Wire
Posted: at 10:56 am
DENVER--(BUSINESS WIRE)--AMP Robotics Corp. (AMP), a pioneer in AI, robotics, and infrastructure for the waste and recycling industry, has installed its first AI-guided robotics systems in the UK and Ireland with Recyco, a leading recycling and waste management business in Northern Ireland.
The project includes two robots, a single AMP Cortex unit along with a tandem unit, installed on Recyco's fibre lines for quality control to improve pick rates and bale purity. Recyco is the first deal closed with REP-TEC Advanced Technologies, AMP's recently appointed official reseller and integrator for customers in the UK and Ireland, with several other projects in negotiation.
"We're delighted to bring our first robots to the UK and Ireland as we continue to see strong demand for our AI and automation solutions and build our pipeline in Europe," said Gary Ashburner, European general manager for AMP. "Recyco has been a superb partner in this process, and recognizes our technology addresses chronic staffing challenges it and many recyclers face, while aligning with its goals of maximizing recovery, increasing landfill diversion, and advancing sustainability."
AMP's proprietary technology applies computer vision and deep learning to guide high-speed robotics systems to precisely identify, differentiate, and recover recyclables found in the waste stream by color, size, shape, opacity, and more, storing data about each item it perceives. AMP's sorting technology can pick upwards of 80 items per minute, about twice the pace of a human sorter, and the company has recorded up to 150 picks per minute with its tandem units. The company's AI platform can quickly adapt to packaging introduced into the recycling stream, with recognition capabilities down to the brand level, increasingly critical as demand for sufficient quantities of high-quality recycled material grows to meet consumer packaged goods companies' commitments to use post-consumer recycled content.
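The identify-and-recover flow above can be illustrated as a routing step from a predicted material class to a target bin, with every perceived item logged for later analysis. The material names, bin map, and pick-rate budget below are hypothetical, not AMP's actual taxonomy or control code:

```python
# Map from predicted material class to the bin the robot should place it in.
BIN_MAP = {"PET": "bin_1", "HDPE": "bin_2", "cardboard": "bin_3"}

def route_items(detections, max_picks_per_min=80):
    """Assign detected items to bins, up to the robot's pick-rate budget.

    Every detection is logged even when it is not picked, mirroring the
    article's point that data is stored about each item perceived.
    """
    picks, log = [], []
    for item in detections:
        log.append(item)                      # store data about every item
        bin_id = BIN_MAP.get(item["material"])
        if bin_id is not None and len(picks) < max_picks_per_min:
            picks.append((item["id"], bin_id))
    return picks, log

detections = [
    {"id": 1, "material": "PET", "color": "clear"},
    {"id": 2, "material": "film", "color": "grey"},   # not a target material
    {"id": 3, "material": "cardboard", "color": "brown"},
]
picks, log = route_items(detections)
# picks == [(1, 'bin_1'), (3, 'bin_3')]; log keeps all three detections
```

The pick-rate budget is why the tandem-unit figure matters: doubling the arms roughly doubles `max_picks_per_min` without changing the perception side at all.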
"AMP's robots have quickly doubled the pick rates we were accustomed to, maintaining and even improving the purity of our bales, which we depend on to maximize prices with our end-market buyers," said Michael Cunningham, owner and CEO, Recyco. "We're proud to be leading the way for AI-powered recycling in these islands, and look forward to continued gains in productivity and efficiency."
"It's great to have our first robotics installation go live this month, with the project running very smoothly from start to finish," said Colm Grimes, founder and CEO of REP-TEC. "We started the installation on a Friday evening, and by Sunday evening, we had both Cortex systems ready for the shift starting the next morning. We were blown away by the identification accuracy of the AI given this is AMP's first installation in the UK."
AMP now has more than 160 systems installed globally, covering North America, Asia, and Europe. The company's AI platform, AMP Neuron, encompasses the largest known real-world dataset of recyclable materials for machine learning, with the ability to classify more than 100 different categories and characteristics of recyclables across single-stream recycling; e-scrap; and construction and demolition debris, reaching an object recognition run rate of more than 10 billion items annually.
About AMP Robotics Corp.
AMP Robotics is modernizing the world's recycling infrastructure by applying AI and automation to increase recycling rates and economically recover recyclables reclaimed as raw materials for the global supply chain. The AMP Cortex high-speed robotics system automates the identification and sorting of recyclables from mixed material streams. The AMP Neuron AI platform continuously trains itself by recognizing different colors, textures, shapes, sizes, patterns, and even brand labels to identify materials and their recyclability. Neuron then guides robots to pick and place the material to be recycled. Designed to run 24/7, all of this happens at superhuman speed with extremely high accuracy. AMP Clarity provides data and material characterization on what recyclables are captured and missed, helping recycling businesses and producers maximize recovery. With deployments across North America, Asia, and Europe, AMP's technology recovers recyclables from municipal collection, precious commodities from electronic scrap, and high-value materials from construction and demolition debris.
Continue reading here:
Posted in Robotics
Comments Off on AMP Robotics Installs its First Recycling Robots in the United Kingdom and Ireland with Recyco – Business Wire