Monthly Archives: June 2024

Apple’s next nebulous idea: smart home robots – The Verge

Posted: June 15, 2024 at 7:52 pm

Humanoid robots are one of those dreams that sometimes feel like we're on the precipice of realizing. Boston Dynamics has its Atlas robot, Tesla is pursuing robotics, and companies like Mercedes, Amazon, and BMW are testing or will test robots for industrial use. But those are all very expensive robots performing tasks in controlled environments. In the home, they might still be far off.

Enter Apple. Mark Gurman at Bloomberg has said Apple's robotics projects are under the purview of former Google employee John Giannandrea, who has been in charge of Siri and, for a time, the Apple Car. With the car project canceled, the Vision Pro launched, and Apple Intelligence around the corner, is robotics the next big thing?

According to his reporting, any humanoid Apple robot is at least a decade away. Still, simpler ideas may be closer: a smaller robot that might follow you around, or a large iPad-style display on a robotic arm that emotes along with the caller on the other end, with head nods and the like.

Many, if not most, homes are dens of robot-confounding chaos

A mobile robot is tricky, though; what in the world would Apple do with a home robot that follows me around? Will it play music? Will it have wheels, or will it walk? Will I be expected to talk to Ajax or SiriGPT or whatever the company names its chatbot? Or, given Apple's rumored OpenAI deal, some other chatbot?

For that matter, what form will it take? Will it fly? Will it have wheels? Will it be a ball? Can I kick it?

Its form factor will be at least as important as its smarts. Houses have stairs, furniture that sometimes moves, clothes that end up on the floor, pets that get in the way, and kids who leave their stuff everywhere. Doors that opened or closed just fine yesterday don't do so today because it rained. A haphazard kitchen remodel 20 years ago might mean your refrigerator door slams into the corner of the wall by the stairs because why would you put the refrigerator space anywhere else, Dave? But I digress.

Based on what little detail has trickled out, Apple's robotics ideas seem to fit a trend of charming novelty bots we've seen lately.

One recent example is Samsung's Bot Handy concept, which looks like a robot vacuum with a stalk on top and a single articulating arm, meant to carry out tasks like picking up after you or sorting your dishes. There's also the cute ball bot named Ballie that Samsung has shown off at a couple of CES shows. The latest iteration follows its humans and packs a projector that can be used for movies, video calls, or entertaining the family dog.

Meanwhile, Amazon's $1,600 home robot with a tablet for a face, Astro, is still available by invitation only. It is charming, in a late-'90s Compaq-computer-chic sort of way, but it's not clear that it's functionally more useful than a few cheap wired cameras and an Echo Dot.

LG says its Q9 AI Agent is a roving smart home controller that can guess your mood and play music based on how it supposes you're feeling. I'm very skeptical of all of that, but it has a handle, and I do love a piece of technology with a built-in handle.

I still want a sci-fi future filled with robotic home assistants that save us from the mundane tasks that keep us from the fun stuff we would rather do. But we don't all live in the pristine, orderly abode featured in Samsung's Ballie video or the videos Apple produces showing its hardware in personal spaces. Many normal homes are dens of robot-confounding chaos that tech companies will have a hard time accounting for when they create robots designed to follow us or autonomously carry out chores.

There are other paths to take. Take the Ring Always Home Cam, which will be very noisy judging from the demo videos, but which could also be useful and even good. Putting aside the not insignificant privacy implications for a moment, it seems promising to me mostly because of its mobility and because it's designed only to be a patrolling security camera.

That kind of focused functionality means it's predictable, which is what makes single-purpose gizmos and doodads work. After some experimentation, my smart speakers sit where they hear me consistently or are the most useful, and I can put my robot vacuums in the rooms I know I'll keep clean enough that they won't get trapped or break something (usually).

The robot vacuums I have (a Eufy RoboVac L35 and a Roomba j7) do an okay job, but they sometimes need rescuing when they find my cat's stringy toys or eat a paperclip (which are somehow always on the floor even though I never, ever actually need one or even know where we keep them).

I have a kid, see, and preparing the way for the vacuums in other parts of the house just adds more work to the mix. That's fine for me because the two rooms in their charge are the ones that need vacuuming the most, so they're still solving a problem, but it waves at the broader hurdles robotic products face.

And it's not all that clear that AI can solve those problems. A New York Times opinion piece recently pointed out that despite all the hand-wringing about the tech over the last year and a half, generative AI hasn't proven that it will be any better at making text, images, and music than the mediocre vacuum robot that does a passable job.

Given the generative AI boom and rumors that Apple is working on a HomePod with a screen, a cheerful, stationary smart display that obsequiously turns its screen to face me all the time seems at least vaguely within the company's wheelhouse. Moving around the house and interacting with objects is a trickier problem, but companies like Google and Toyota have seen success using generative AI training approaches for robots that learn how to do things like make breakfast or quickly sort items with little to no explicit programming.

It'll be years, maybe even decades, before Apple or anyone else can bring us anything more than clumsy, half-useful robots that blunder through our homes, being weird, frustrating, or broken. Heck, phone companies haven't even figured out how to make notifications anything but the bane of our collective existence. They've got their work cut out for them with homes like mine, where we're just one busy week away from piles of clutter gathering like snowdrifts, ready to ruin some poor robot's day.


Helping Robots Grasp the Unpredictable – The Good Men Project

Posted: at 7:52 pm

By Alex Shipps | MIT CSAIL | MIT News

When robots come across unfamiliar objects, they struggle to account for a simple truth: Appearances aren't everything. They may attempt to grasp a block, only to find out it's a literal piece of cake. The misleading appearance of that object could lead the robot to miscalculate physical properties like the object's weight and center of mass, using the wrong grasp and applying more force than needed.

To see through this illusion, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers designed the Grasping Neural Process, a predictive physics model capable of inferring these hidden traits in real time for more intelligent robotic grasping. Based on limited interaction data, their deep-learning system can assist robots in domains like warehouses and households at a fraction of the computational cost of previous algorithmic and statistical models.

The Grasping Neural Process is trained to infer invisible physical properties from a history of attempted grasps, and it uses the inferred properties to predict which grasps would work well in the future. Prior models often identified robot grasps from visual data alone.

Typically, methods that infer physical properties build on traditional statistical techniques that require many known grasps and a great amount of computation time to work well. The Grasping Neural Process enables these machines to execute good grasps more efficiently, using far less interaction data and finishing its computation in less than a tenth of a second, as opposed to the seconds (or minutes) required by traditional methods.

The researchers note that the Grasping Neural Process thrives in unstructured environments like homes and warehouses, since both house a plethora of unpredictable objects. For example, a robot powered by the MIT model could quickly learn how to handle tightly packed boxes with different food quantities without seeing the inside of the box, and then place them where needed. At a fulfillment center, objects with different physical properties and geometries would be placed in the corresponding box to be shipped out to customers.

Trained on 1,000 unique geometries and 5,000 objects, the Grasping Neural Process achieved stable grasps in simulation for novel 3D objects generated in the ShapeNet repository. The CSAIL-led group then tested its model in the physical world on two weighted blocks, where the work outperformed a baseline that considered only object geometries. Limited to 10 experimental grasps beforehand, the robotic arm successfully picked up the boxes on 18 and 19 out of 20 attempts apiece, while the machine yielded only eight and 15 stable grasps when unprepared.

While less theatrical than an actor, robots that complete inference tasks also have a three-part act to follow: training, adaptation, and testing. During the training step, robots practice on a fixed set of objects and learn how to infer physical properties from a history of successful (or unsuccessful) grasps. The new CSAIL model amortizes the inference of the object's physics, meaning it trains a neural network to learn to predict the output of an otherwise expensive statistical algorithm. Only a single pass through a neural network with limited interaction data is needed to simulate and predict which grasps work best on different objects.

Then, the robot is introduced to an unfamiliar object during the adaptation phase. During this step, the Grasping Neural Process helps a robot experiment and update its position accordingly, understanding which grips would work best. This tinkering phase prepares the machine for the final step: testing, where the robot formally executes a task on an item with a new understanding of its properties.
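The article doesn't spell out the model's architecture, but the amortized-inference idea can be sketched in a few lines: encode the history of (grasp, outcome) pairs into a latent summary of the object's hidden physics, then score a candidate grasp against that summary in a single forward pass. Everything below (dimensions, layers, names) is illustrative, not the CSAIL implementation:

```python
import torch
import torch.nn as nn

class GraspNeuralProcess(nn.Module):
    """Minimal sketch of amortized grasp inference (assumed architecture)."""
    def __init__(self, grasp_dim=6, latent_dim=32):
        super().__init__()
        # Encoder: one attempted grasp + its binary outcome -> embedding.
        self.encoder = nn.Sequential(
            nn.Linear(grasp_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder: candidate grasp + latent physics summary -> success score.
        self.decoder = nn.Sequential(
            nn.Linear(grasp_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, past_grasps, past_outcomes, candidate):
        # past_grasps: (N, grasp_dim); past_outcomes: (N, 1) in {0, 1}.
        context = torch.cat([past_grasps, past_outcomes], dim=-1)
        # Averaging embeddings amortizes inference: no per-object
        # statistical fitting, just one pass through the network.
        latent = self.encoder(context).mean(dim=0)
        score = self.decoder(torch.cat([candidate, latent], dim=-1))
        return torch.sigmoid(score)  # estimated P(grasp succeeds)

# Adaptation phase in miniature: ten exploratory grasps, then score a new one.
gnp = GraspNeuralProcess()
past = torch.randn(10, 6)
outcomes = torch.randint(0, 2, (10, 1)).float()
print(gnp(past, outcomes, torch.randn(6)))
```

An untrained network outputs noise, of course; the point is the shape of the computation, with all object-specific reasoning folded into one cheap forward pass.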

"As an engineer, it's unwise to assume a robot knows all the necessary information it needs to grasp successfully," says lead author Michael Noseworthy, an MIT PhD student in electrical engineering and computer science (EECS) and CSAIL affiliate. Without humans labeling the properties of an object, robots have traditionally needed to use a costly inference process. According to fellow lead author, EECS PhD student, and CSAIL affiliate Seiji Shaw, their Grasping Neural Process could be a streamlined alternative: "Our model helps robots do this much more efficiently, enabling the robot to imagine which grasps will inform the best result."

"To get robots out of controlled spaces like the lab or warehouse and into the real world, they must be better at dealing with the unknown and less likely to fail at the slightest variation from their programming. This work is a critical step toward realizing the full transformative potential of robotics," says Chad Kessens, an autonomous robotics researcher at the U.S. Army's DEVCOM Army Research Laboratory, which sponsored the work.

While their model can help a robot infer hidden static properties efficiently, the researchers would like to augment the system to adjust grasps in real time for multiple tasks and objects with dynamic traits. They envision their work eventually assisting with several tasks in a long-horizon plan, like picking up a carrot and chopping it. Moreover, their model could adapt to changes in mass distributions in less static objects, like when you fill up an empty bottle.

Joining the researchers on the paper is Nicholas Roy, MIT professor of aeronautics and astronautics and CSAIL member, who is a senior author. The group recently presented this work at the IEEE International Conference on Robotics and Automation.

Reprinted with permission of MIT News



Tesla restarts hiring beyond AI and robotics in a big way – Electrek.co

Posted: at 7:52 pm

Tesla has fully restarted its hiring effort beyond AI and robotics in a big way after a hiring freeze amid several waves of layoffs throughout the entire organization.

As we previously reported, Elon Musk came into Tesla like a wrecking ball earlier this quarter and fired an estimated 15-20% of Tesla's staff.

Sources said Musk had greatly reduced his involvement at Tesla over the last year, but the CEO reasserted himself amid a proxy battle over his CEO compensation plan that was rescinded by a judge earlier this year.

Musk fired many of Tesla's top executives, and others left.

With these layoffs, Tesla effectively put in place a hiring freeze.

A few weeks ago, we reported on Tesla restarting hiring, but only in the AI and robotics department.

The hiring freeze now seems to be officially over in the US, as Tesla has posted hundreds of new positions across several departments, mainly service and sales.

Tesla's layoffs affected its service, sales, and delivery departments despite those teams being at capacity in many regions.

As we previously reported, this greatly affected morale: Tesla workers not only lost colleagues but also saw their already heavy workloads increase.

In some cases, Tesla is expected to rehire some of the employees it let go, which has been the case after previous rounds of layoffs and, more recently, after Musk fired Tesla's entire charging team.

On top of service and sales jobs, the automaker also posted several new positions at its lithium refinery under construction in Robstown, Texas.



Tesla jumps on talent of Koch company making direct drive for robots – Electrek.co

Posted: at 7:52 pm

Tesla appears to be behind the acquisition, or acqui-hire, of a company developing direct drive motors for robots as it is being liquidated by Koch Engineered Solutions.

Besides the SolarCity acquisition, Tesla has avoided large acquisitions despite a significant cash position.

However, the automaker is known to have made several smaller acquisitions, especially in the manufacturing industry, either to secure manufacturing automation technologies or to acqui-hire, which refers to acquiring a company's talent.


Now, Electrek has learned that Tesla might be adding another company, or at the very least its talent, to that list of acquisitions.

Genesis Motion Solutions is an engineering firm based in British Columbia, Canada, that specializes in direct drive motors.

In 2018, it received a strategic, controlling investment from fossil fuel giant Koch Industries.

Earlier this year, Koch announced that it was stopping the company's operations and liquidating it.

Shortly after, Electrek noted that many Genesis engineers started to join Tesla. Now, Electrek has found 18 former employees of Genesis who have joined Tesla.

The first engineers to join Tesla from Genesis were Matt Balisky and Nick di Lello, who moved last year, prior to Koch liquidating the company. Their role designing actuators for humanoid robots at Tesla hints at the company's interest in Genesis.

Genesis' main product was LiveDrive. The company described the product on its website before taking it down last year:

"Introducing LiveDrive: housed and frameless direct drive rotary motors engineered with patented electromagnetic technology for more torque to mass than competing direct drive motors, resulting in maximum productivity and efficiency for your machinery."

Direct drive motors offer highly dynamic acceleration and a high level of positional precision, often resulting in high efficiency.

Their main downsides are generally their costs and their limited torque.

That makes them interesting as actuators for robots, which is one of the applications envisioned by Genesis founder and LiveDrive inventor James Klassen.

In the latest generation of its Optimus robot, Tesla has noted that it started to incorporate its own actuators designed in-house.

We couldn't confirm whether Tesla is only acquiring Genesis talent amid the liquidation, much like an acqui-hire, or whether the automaker is acquiring some or all of the company's assets.

Electrek checked the company's Canadian patents, and Genesis is still the owner of record.

Tesla recently announced that it deployed its first two Optimus robots inside its factories and it plans to sell them to customers as soon as next year.

Interestingly, most of the new hires from Genesis came amid the big wave of layoffs earlier this year.

However, Tesla's AI and robotics department, which leads the development of Tesla's self-driving effort and the Optimus humanoid robot, has been one of the rare departments spared in the round of layoffs.

Musk is making it clear that Tesla's priorities are self-driving and humanoid robots. After layoffs of as much as 20% of the staff, it's clear that those are the safest jobs at Tesla.



Generative AI takes robots a step closer to general purpose – TechCrunch

Posted: at 7:52 pm

Most coverage of humanoid robotics has understandably focused on hardware design. But given the frequency with which their developers toss around the phrase "general-purpose humanoids," more attention ought to be paid to the first bit. After decades of single-purpose systems, the jump to more generalized systems will be a big one. We're just not there yet.

The push to produce a robotic intelligence that can fully leverage the wide breadth of movements opened up by bipedal humanoid design has been a key topic for researchers. The use of generative AI in robotics has been a white-hot subject recently, as well. New research out of MIT points to how the latter might profoundly affect the former.

One of the biggest challenges on the road to general-purpose systems is training. We have a solid grasp on best practices for training humans how to do different jobs. The approaches to robotics, while promising, are fragmented. There are a lot of promising methods, including reinforcement and imitation learning, but future solutions will likely involve combinations of these methods, augmented by generative AI models.

One of the prime use cases suggested by the MIT team is the ability to collate relevant information from these small, task-specific datasets. The method has been dubbed policy composition (PoCo). Tasks include useful robot actions like pounding in a nail and flipping things with a spatula.

"[Researchers] train a separate diffusion model to learn a strategy, or policy, for completing one task using one specific dataset," the school notes. "Then they combine the policies learned by the diffusion models into a general policy that enables a robot to perform multiple tasks in various settings."

Per MIT, the incorporation of diffusion models improved task performance by 20%. That includes the ability to execute tasks that require multiple tools, as well as learning/adapting to unfamiliar tasks. The system is able to combine pertinent information from different datasets into a chain of actions required to execute a task.
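MIT's write-up doesn't detail how the policies are merged, but a common way to compose diffusion models is to mix their per-step noise predictions during sampling. The sketch below is a loose illustration of that idea under simplified assumptions; the model names, weights, and update rule are placeholders, not PoCo's actual formulation:

```python
import torch

def composed_noise(models, weights, x_t, t):
    # Each policy is a diffusion model that predicts the noise present in
    # a candidate action trajectory x_t at denoising step t. A weighted
    # sum of predictions steers sampling toward trajectories that are
    # plausible under every policy at once.
    return sum(w * m(x_t, t) for m, w in zip(models, weights))

# Toy stand-ins for two trained diffusion policies, e.g. one trained on
# real-world demonstrations and one trained in simulation.
real_policy = lambda x, t: 0.9 * x
sim_policy = lambda x, t: 1.1 * x

x = torch.randn(1, 16)  # start from a noisy action trajectory
for step in reversed(range(50)):
    eps = composed_noise([real_policy, sim_policy], [0.5, 0.5], x, step)
    x = x - 0.02 * eps  # simplified denoising update
print(x)  # the "general policy" sample reflecting both datasets
```

The appeal of composing at sampling time is that each policy can keep training on its own dataset; combining them requires no retraining, only a weighted mix during the reverse diffusion.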

"One of the benefits of this approach is that we can combine policies to get the best of both worlds," says the paper's lead author, Lirui Wang. "For instance, a policy trained on real-world data might be able to achieve more dexterity, while a policy trained on simulation might be able to achieve more generalization."

The goal of this specific work is the creation of intelligence systems that allow robots to swap different tools to perform different tasks. The proliferation of multi-purpose systems would take the industry a step closer to the general-purpose dream.


Robotics Manufacturing Hub to help small and midsize U.S. manufacturers compete – Robot Report

Posted: at 7:51 pm


The Robotics Manufacturing Hub is modular, adaptable, and multi-use, with OEM diversity. Source: The ARM Institute

When the ARM Institute launched its Robotics Manufacturing Hub about a year ago, it quickly realized that U.S. manufacturers weren't avoiding robotics and automation because they lacked interest in the technology. Instead, the barriers to automation loomed so large that small and midsize firms couldn't tell where to start.

When the ARM Institute announced its no-cost Robotics Manufacturing Hub for manufacturers in the Pittsburgh region, its pipeline of interested manufacturers rapidly filled. With the ARM Institute offering a pathway to minimize the risks they associate with robotics and automation, U.S. manufacturers were, and still are, eager to explore the possibilities.

Larger manufacturing firms can more easily navigate the process of implementing automation. With greater general resources, in-house R&D, financing to cover the upfront costs, and more time to explore solutions, they've been more successful at seeing the process through from start to finish.

Small and midsize manufacturers (SMMs) have to navigate more risk. They need to spend more time understanding how the changes will affect their operations. They often lack in-house robotics expertise, and they need systems that will dynamically meet their needs without requiring constant upkeep when, in many cases, their workforce is already strained.

The ARM Institute's Robotics Manufacturing Hub is a free resource to help manufacturers navigate these barriers and others by identifying the best business cases for robotics, testing the systems within the manufacturer's budget, and offering a path to implementation. Part of this solution includes the ability for SMMs in Southwestern Pennsylvania to work directly with the institute's team of robotics engineers and get hands-on with advanced technologies in the institute's Pittsburgh facility.


Since the Robotics Manufacturing Hubs creation, the ARM Institute has worked with several manufacturers in the Pittsburgh region to explore their challenges and help them understand where robotics can address these challenges.

For example, the ARM Institute worked with a manufacturer of castings and forgings to automate its manual quality-inspection process. Partnering with FARO and NEFF Automation through the Robotics Manufacturing Hub, the ARM Institute performed a proof-of-concept of a Universal Robots cobot controlling a FARO laser scanner. The manufacturer plans to pursue implementation.

The ARM Institute also worked with a company that needed to package heavy iron and steel parts into shipping containers, an ergonomically uncomfortable task for a human worker. In this situation, the requirements for the robotic end effector were highly specific, and it was critical to calculate the correct pick position on the parts and the robot's speed limits for moving heavy parts to prevent failure or injury.

The ARM Institute is working with its member CapSen Robotics on a solution.

CapSen Robotics has designed end effectors to sort metal parts. Source: CapSen Robotics

Much of this work is completed using the ARM Institute's headquarters as a neutral ground for exploration and prototyping, giving manufacturers access to equipment before they commit to installing any system.

This facility is modular, adaptable, and multi-use, with OEM diversity to directly meet each manufacturer's individual needs. ARM Institute engineers work directly in the lab and interface between suppliers and manufacturers to act in the SMM's best interest and ensure that the work addresses the specific challenges the company is facing.

The Robotics Manufacturing Hub's equipment includes both collaborative and industrial robots, each of which can be configured for a range of applications.

Small and midsize manufacturers in the Pittsburgh region can get a free automation assessment and use the Robotics Manufacturing Hub at no cost, thanks to funding from the Southwestern Pennsylvania Region's Build Back Better Regional Challenge Award. Now is a great time to get started with the hub, as the ARM Institute is looking to work with more manufacturers.

In the future, the ARM Institute hopes to expand these services to manufacturers beyond this region and encourages those interested in using or housing these services to reach out. In addition, the ARM Institute's member ecosystem can use the Robotics Manufacturing Hub as a benefit of membership.

According to the ARM Institute's Future of Work study released last week, industry trends include keeping people in the loop and the need for organizations to learn how to use data as artificial intelligence grows in importance. As a result, the institute noted that manufacturers and training centers must develop programs to help workers build the skills needed to stay competitive and adapt to new technologies.

U.S. manufacturing resiliency is the cornerstone of our national security. The ARM Institute's Robotics Manufacturing Hub addresses a critical need by providing SMMs with the resources they need to explore and implement automation, enhancing their competitiveness and benefiting the full manufacturing ecosystem.

Larry Sweet last year became director of engineering at the Advanced Robotics for Manufacturing (ARM) Institute in Pittsburgh. He has experience in bringing emerging technologies into production by increasing their Technology Readiness Level, concurrent with improvements in factory floor processes and workforce skills.

Sweet was previously the director for worldwide robotics deployment at Amazon Robotics, leading technology transition and system integration for all internally developed automation across Amazon's global network. He has also held senior manufacturing and technology roles at Symbotic, Frito-Lay, United Technologies, ABB, FANUC, and GE. Sweet spoke at the 2024 Robotics Summit & Expo in May.

Editor's note: This article is syndicated from The Robot Report sibling site Engineering.com.


DeepMind experimenting with ‘Shadow Hand’ that can withstand a severe beating in the name of AI research – Livescience.com

Posted: at 7:51 pm

A U.K. robotics startup has claimed its new robot hand designed for artificial intelligence (AI) research is the most dexterous and robust out there.

The Shadow Robot Company's "Shadow Hand," built in collaboration with Google's DeepMind, can go from fully open to closed within 0.5 seconds and can perform a normal fingertip pinch with up to 10 newtons of force.

It's primarily built for AI research, specifically "real-world" machine learning projects that focus on robotic dexterity; OpenAI, for example, has used a Shadow Hand device for dexterity training, teaching it to manipulate objects in its hand. However, the Shadow Hand's durability is its key selling point, with the device able to endure extreme punishment, such as aggressive force and impacts.

"One of the goals with this has been to make something that is reliable enough to do long experiments," Rich Walker, one of Shadow Robots directors, said May 30 in a blog post. "If youre doing a training run on a giant machine learning system and that run costs $10 million, stopping halfway through because a $10k component has failed isnt ideal.

"Initially we said that we could try and improve the robustness of our current hardware. Or, we can go back to the drawing board and figure out what would make it possible to do the learning you need. Whats an enabling approach here?"


What exactly makes the Shadow Hand so robust isn't entirely clear: the company website states only that it is "resistant against repeated impacts from its environment and aggressive use from an untrained policy," which does little to explain the methods and materials used. But in his blog post, Walker suggested trial and error was the key to the robotic hand's sturdiness.


"We spent a huge amount of time and effort testing the various components, iterating the design, trying various things," Walker explained."It was a very integrated project in terms of collaboration and iterative development. The end result is something quite special. Its not a traditional robot by any means."

The Shadow Robot Company previously demonstrated an earlier robot hand at Amazon re:MARS; the Shadow Hand, however, is its latest model. It has been built with precise torque control, and each of its fingers is driven by motors at its base, connected via artificial tendons.

Each finger is a self-contained unit with sensors and stereo cameras simulating a sense of touch. The segments that make up the fingers are fitted with tactile sensors, and a stereo camera setup provides high-resolution, wide-dynamic-range feedback. The cameras point toward the inside surface of the silicone-covered fingertips so that they can capture the moment a fingertip touches something and convert that visual data into touch data.
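The article doesn't describe how the camera images become touch data, but vision-based tactile sensors commonly do it by comparing live fingertip images against an at-rest reference frame: contact deforms the soft surface and changes the image. Here's a minimal sketch of that generic technique, not Shadow Robot's implementation; the frames, threshold, and function name are all illustrative:

```python
import numpy as np

def contact_map(reference: np.ndarray, live: np.ndarray, threshold: float = 12.0):
    # Subtract the at-rest reference frame from the live frame; pixels
    # that changed markedly are treated as part of the contact region.
    diff = np.abs(live.astype(np.int16) - reference.astype(np.int16))
    mask = diff.max(axis=-1) > threshold  # per-pixel contact flag
    return mask, float(mask.mean())       # where it touches, and how much

# Toy frames: contact "presses" a bright square into the live image.
ref = np.full((64, 64, 3), 100, dtype=np.uint8)
live = ref.copy()
live[20:30, 20:30] += 40
mask, coverage = contact_map(ref, live)
print(coverage)  # fraction of the fingertip surface in contact
```

Real systems add calibration, lighting compensation, and models that map deformation to force, but reference subtraction is the core trick.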

Should any of the appendages endure significant damage, they can simply be removed from the base model and replaced. The sensors can also be replaced if need be, with the internal network able to identify when a sensor has been removed and a new one added.


Collaborative Robotics Opens Seattle Office After $100MM Raise – The Registry Seattle

Posted: at 7:51 pm

In a move that's set to reshape the landscape of robotics, Collaborative Robotics unveiled its ambitious plans for the future this week. The company, known for its innovative collaborative robots (cobots), is stepping into the forefront of AI with the formation of a dedicated foundation models AI team. This powerhouse team, led by Michael Vogelsong, a veteran of Amazon's Deep Learning Tech team, will be based in a new Seattle office, further solidifying the city's reputation as a tech hub.

"Our cobots are already doing meaningful work in production on behalf of our customers," said Brad Porter, CEO of Collaborative Robotics, in an article by The Robot Report. "Our investment in building a dedicated foundation models AI team for robotics represents a significant step forward as we continue to increase the collaborative potential of our cobots."

The focus of this new AI team is clear: to explore the cutting edge of AI in enhancing robotic capabilities, especially in areas like bimanual manipulation and low-latency multimodal models. Their goal is to create robots that can understand and respond to complex tasks and environments with a level of comprehension and control never before seen.

The company announced on its website that it has secured a new Seattle office and a research grant for University of Washington professor Sidd Srinivasa to support advanced AI research. Industry reports indicate that roughly 30 employees will work at the company's new offices at 100 NE Northlake Way.

This strategic move follows a successful $100 million Series B funding round in April, which will be used to commercialize Collaborative Robotics' autonomous mobile manipulator. Details about the system remain tightly under wraps, but snippets of information reveal a wheeled collaborative robot with omnidirectional motion and the ability to handle totes and boxes in warehouse settings.

"The opportunity surrounding foundation models AI in the robotics industry can be significant. These models hold the promise of generalizing behaviors and streamlining the development and maintenance of special-purpose models," the report stated. Collaborative Robotics is prioritizing work in this arena, integrating advanced machine learning techniques into its production robots. This approach, coupled with novel research and partnerships, looks to revolutionize the adaptability and precision of robotic tasks.


Top 6 examples of humanoid robots – TechTarget

Posted: at 7:51 pm

Humanoids are a fusion of AI and robotics. They typically have a body structure similar to humans, often sport skin and eyes, and are equipped with sensors and cameras to recognize human faces, respond to voice commands and engage in conversations.

Also embedded with the technology to mimic human traits, humanoids can learn and adapt in real time. The most recent versions of these robots may even exhibit a wide spectrum of human emotions and move and talk like people.

Besides captivating human imagination, these anthropomorphic creations also serve as groundbreaking tools across various industries. According to a Goldman Sachs report, the global market for humanoid robots could reach $38 billion by 2035, underscoring their importance across numerous industries.

Various sources, such as Interesting Engineering, Business Today and Built In, identify the following as the top examples of humanoid robots:

Sophia is an emotionally intelligent, AI-powered social robot that David Hanson and a team of AI experts at the Hong Kong-based company Hanson Robotics developed. It was activated on February 14, 2016, and unlike previous humanoid models, Sophia can imitate human expressions and engage in conversations.

Sophia is a service robot developed to fulfill specific roles such as caring for the elderly, serving customers, engaging with kids and handling crowds at events. Sophia's exceptional natural language processing skills, fueled by AI and neural networks, enable it to maintain eye contact, answer questions, converse and synchronize body language with its voice. Sophia is also skilled at reading the emotions and body language of humans. Sophia has been featured at numerous events and conferences, such as the Consumer Electronics Show (CES) 2019 and is scheduled to appear at the Global AI Show and Global Blockchain Show in December 2024.

Interesting facts about Sophia: Sophia's look is an ideal fusion of science fiction and historical elegance and was inspired by the Hollywood actress Audrey Hepburn, Amanda Hudson (the wife of Hanson) and the ancient Egyptian queen Nefertiti.

In 2019, Sophia displayed the ability to create drawings, including portraits. Notably, a non-fungible token (NFT) self-portrait created by Sophia sold for nearly $700,000 at an auction in Hong Kong, China, in 2021.

Developed by the American robotics design company Boston Dynamics and funded by the Defense Advanced Research Projects Agency (DARPA), Atlas made its public debut on July 11, 2013. Measuring 5 feet tall and weighing 196 pounds, the first iteration of Atlas relied on a robust and intricate hydraulic system that enhanced its agility. Capable of backflips and bending down, the robot was designed to undertake hazardous tasks in search and rescue missions. Atlas also aids real-world applications such as industrial automation and mobile manipulation: tasks that integrate navigation with interaction with the environment, such as welding, screwing, and quality control.

In April 2024, Boston Dynamics revealed intentions to replace the hydraulic Atlas with an electric version to boost its strength and provide a wider range of motion.

Interesting facts about Atlas: The retired hydraulic version of Atlas was the most agile humanoid around. It effortlessly lifted and transported items such as boxes and crates. However, its signature moves were running, jumping and performing backflips.

Ameca's designer and vendor, Engineered Arts, claims that Ameca is the world's most advanced humanoid robot. Originally conceived as a foundation for advancing robotics technologies in human-robot interaction and as a development platform for testing AI and machine learning systems, this humanoid incorporates embedded microphones, binocular eye-mounted cameras, a chest camera and facial recognition software for engaging with the public.

Ameca was developed at Engineered Arts' base in Falmouth, Cornwall, UK, in 2021. It quickly captured the spotlight on X (formerly known as Twitter) and TikTok before its debut demonstration at CES 2022, where it attracted vast coverage from various media outlets.

Interesting facts about Ameca: Since Ameca has cameras in each of its eyes, it can recognize and track faces, identify objects and respond appropriately when a hand is placed in front of its face. It also has humanlike shoulder motions and can extend its hand to the side of its head.

Geminoid DK is a teleoperated android boasting a metallic skeleton covered in silicone skin and complemented by human and artificial hair. When it debuted in 2011, the world was taken aback by its lifelike appearance and facial expressions.

The Geminoid DK also bears an uncanny resemblance to its creator, the Danish professor Henrik Scharfe of Aalborg University, who collaborated on the project with Japanese engineer Hiroshi Ishiguro, his team at the Advanced Telecommunications Research Institute International, and Sanrio Group's robot manufacturer Kokoro.

Geminoid DK's goal is to study human-robot interactions, especially how people respond to robotic representations of real humans.

Interesting facts about Geminoid DK: Geminoid DK can establish eye contact, exhibit various expressions and perform involuntary muscle and breathing movements. It's also the first humanoid robot to sport a beard, which, along with other facial hair, was manually implanted and trimmed using Henrik Scharfe's personal trimmer.

Nadine is a gynoid social robot, also known as a fembot, that was created in 2013. It was modeled after Professor Nadia Magnenat Thalmann, one of Nadine's creators and a visiting professor at Nanyang Technological University (NTU). Japanese firm Kokoro developed Nadine's hardware, while Thalmann's team at NTU crafted the software and articulated the robot's hands to achieve natural grasping.

Nadine was designed to interact with humans in social settings, displaying empathy, answering queries and remembering conversations. Nadine is equipped with 3D depth cameras and microphones to ensure seamless operation.

Interesting facts about Nadine: Nadine is full of personality, returns greetings, makes eye contact and interacts with arm movements. It assists individuals with special needs by reading stories and helping with other communication tasks. Additionally, Nadine has served as an office receptionist or a personal coach.

Pepper was developed by SoftBank Robotics and made its debut in 2014. This advanced and commercially available social humanoid robot stands at approximately 4 feet tall and features a tablet display on its chest for enabling interactions with users.

Pepper was created to serve various functions and industries. For example, it has served as a companion in various settings such as homes, schools, hospitality, healthcare and retail. It is equipped with several cameras, touch sensors and microphones that enable it to engage with humans through speech, touch and emotion recognition.

Interesting facts about Pepper: Pepper's voice can be adjusted depending on preferences. Pepper utilizes tactile sensors in its hands that enable it to perform human actions such as gently picking up and setting down objects. Pepper uses these sensors during activities such as playing games or engaging in social interactions. These sensors are also present in Pepper's head to perceive touch and interactions.

Kinza Yasar is a technical writer for WhatIs with a degree in computer networking.


Quantum computers are like kaleidoscopes why unusual metaphors help illustrate science and technology – The Conversation

Posted: at 7:50 pm

Quantum computing is like Forrest Gump's box of chocolates: You never know what you're gonna get. Quantum phenomena, the behavior of matter and energy at the atomic and subatomic levels, are not definite, one thing or another. They are opaque clouds of possibility or, more precisely, probabilities. When someone observes a quantum system, it loses its quantum-ness and collapses into a definite state.

Quantum phenomena are mysterious and often counterintuitive. This makes quantum computing difficult to understand. People naturally reach for the familiar to attempt to explain the unfamiliar, and for quantum computing this usually means using traditional binary computing as a metaphor. But explaining quantum computing this way leads to major conceptual confusion, because at a base level the two are entirely different animals.

This problem highlights the often mistaken belief that common metaphors are more useful than exotic ones when explaining new technologies. Sometimes the opposite approach is more useful. The freshness of the metaphor should match the novelty of the discovery.

The uniqueness of quantum computers calls for an unusual metaphor. As a communications researcher who studies technology, I believe that quantum computers can be better understood as kaleidoscopes.

The gap between understanding classical and quantum computers is a wide chasm. Classical computers store and process information via transistors, which are electronic devices that take binary, deterministic states: one or zero, yes or no. Quantum computers, in contrast, handle information probabilistically at the atomic and subatomic levels.

Classical computers use the flow of electricity to sequentially open and close gates to record or manipulate information. Information flows through circuits, triggering actions through a series of switches that record information as ones and zeros. Using binary math, bits are the foundation of all things digital, from the apps on your phone to the account records at your bank and the Wi-Fi signals bouncing around your home.

In contrast, quantum computers use changes in the quantum states of atoms, ions, electrons or photons. Quantum computers link, or entangle, multiple quantum particles so that changes to one affect all the others. They then introduce interference patterns, like multiple stones tossed into a pond at the same time. Some waves combine to create higher peaks, while some waves and troughs combine to cancel each other out. Carefully calibrated interference patterns guide the quantum computer toward the solution of a problem.

The term "bit" is a metaphor. The word suggests that during calculations, a computer can break up large values into tiny ones, bits of information, which electronic devices such as transistors can more easily process.

Using metaphors like this has a cost, though. They are not perfect. Metaphors are incomplete comparisons that transfer knowledge from something people know well to something they are working to understand. The bit metaphor ignores that the binary method does not deal with many types of different bits at once, as common sense might suggest. Instead, all bits are the same.

The smallest unit of a quantum computer is called the quantum bit, or qubit. But transferring the bit metaphor to quantum computing is even less adequate than using it for classical computing. Transferring a metaphor from one use to another blunts its effect.

The prevalent explanation of quantum computing is that while classical computers can store or process only a zero or one in a transistor or other computational unit, quantum computers supposedly store and handle both zero and one and other values in between at the same time through the process of superposition.

Superposition, however, does not store one or zero or any other number simultaneously. There is only an expectation that the values might be zero or one at the end of the computation. This quantum probability is the polar opposite of the binary method of storing information.

Driven by quantum science's uncertainty principle, the probability that a qubit stores a one or zero is like Schroedinger's cat, which can be either dead or alive, depending on when you observe it. But the two different values do not exist simultaneously during superposition. They exist only as probabilities, and an observer cannot determine when or how frequently those values existed before the observation ended the superposition.
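To make that concrete, here is a minimal numpy sketch of a simulated qubit; a classical simulation for illustration, not a real quantum device. One Hadamard gate puts a definite state into an equal superposition whose measurement is pure chance; a second Hadamard makes the amplitude paths interfere so the outcome becomes certain again, the kind of calibrated interference quantum algorithms exploit:

```python
import numpy as np

# State vector: complex amplitudes for |0> and |1>. Start in |0>.
state = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps a definite state to an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

after_one = H @ state
print(np.abs(after_one) ** 2)  # [0.5 0.5] -- a 50/50 coin flip on measurement

# Apply it again: the |1> amplitude contributions cancel and the |0>
# contributions reinforce, so the measurement outcome is certain.
after_two = H @ after_one
print(np.abs(after_two) ** 2)  # [1. 0.] -- interference restores certainty
```

Note that until a measurement happens, the program tracks only amplitudes, numbers that yield probabilities; nowhere does the qubit hold "both zero and one" as stored values.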

Leaving behind these challenges to using traditional binary computing metaphors means embracing new metaphors to explain quantum computing.

The kaleidoscope metaphor is particularly apt to explain quantum processes. Kaleidoscopes can create infinitely diverse yet orderly patterns using a limited number of colored glass beads, mirror-dividing walls and light. Rotating the kaleidoscope enhances the effect, generating an infinitely variable spectacle of fleeting colors and shapes.

The shapes not only change but can't be reversed. If you turn the kaleidoscope in the opposite direction, the imagery will generally remain the same, but the exact composition of each shape or even their structures will vary as the beads randomly mingle with each other. In other words, while the beads, light and mirrors could replicate some patterns shown before, these are never absolutely the same.

Using the kaleidoscope metaphor, the solution a quantum computer provides, the final pattern, depends on when you stop the computing process. Quantum computing isn't about guessing the state of any given particle but using mathematical models of how the interaction among many particles in various states creates patterns, called quantum correlations.

Each final pattern is the answer to a problem posed to the quantum computer, and what you get in a quantum computing operation is a probability that a certain configuration will result.

Metaphors make the unknown manageable, approachable and discoverable. Approximating the meaning of a surprising object or phenomenon by extending an existing metaphor is a method that is as old as calling the edge of an ax its "bit" and its flat end its "butt." The two metaphors take something we understand from everyday life very well and apply it to a technology that needs a specialized explanation of what it does. Calling the cutting edge of an ax a bit suggestively indicates what it does, adding the nuance that it changes the object it is applied to. When an ax shapes or splits a piece of wood, it takes a bite from it.

Metaphors, however, do much more than provide convenient labels and explanations of new processes. The words people use to describe new concepts change over time, expanding and taking on a life of their own.

When encountering dramatically different ideas, technologies or scientific phenomena, it's important to use fresh and striking terms as windows to open the mind and increase understanding. Scientists and engineers seeking to explain new concepts would do well to seek out originality and master metaphors, in other words, to think about words the way poets do.
