Unity moves robotics design and training to the metaverse – VentureBeat

Posted: November 17, 2021 at 1:11 pm


Unity, the San Francisco-based platform for creating and operating games and other 3D content, on November 10 announced the launch of Unity Simulation Pro and Unity SystemGraph to improve modeling, testing, and training complex systems through AI.

With robotics usage in supply chains and manufacturing increasing, such software is critical to ensuring efficient and safe operations.

Danny Lange, senior vice president of artificial intelligence for Unity, told VentureBeat via email that Unity SystemGraph uses a node-based approach to model the complex logic typically found in electrical and mechanical systems. This makes it easier for roboticists and engineers to model small systems and to group them into larger, more complex ones, enabling them to prototype systems, test and analyze their behavior, and make optimal design decisions without requiring access to the actual hardware, Lange said.
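The node-based idea is straightforward to sketch. The Python snippet below is a conceptual illustration only, not SystemGraph's actual API: each node computes outputs from its inputs, and wiring nodes together lets small systems compose into larger ones.

```python
# Conceptual sketch of node-based system modeling (not SystemGraph's actual API).
# Each node computes outputs from inputs; connecting nodes composes small
# systems into larger ones, which can then be evaluated without hardware.

class Node:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn            # maps a dict of inputs to a dict of outputs
        self.inputs = {}        # input name -> (upstream node, output name)

    def connect(self, input_name, upstream, output_name):
        self.inputs[input_name] = (upstream, output_name)

    def evaluate(self, cache):
        if self.name not in cache:
            args = {k: up.evaluate(cache)[out]
                    for k, (up, out) in self.inputs.items()}
            cache[self.name] = self.fn(args)
        return cache[self.name]

# A toy electrical example: a voltage source feeding a resistor (Ohm's law).
source = Node("source", lambda _: {"volts": 12.0})
resistor = Node("resistor", lambda a: {"amps": a["volts"] / 4.0})  # R = 4 ohms
resistor.connect("volts", source, "volts")

print(resistor.evaluate({}))  # {'amps': 3.0}
```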

Unity's execution engine, Unity Simulation Pro, offers headless rendering, which eliminates the need to project each image to a screen, increasing simulation efficiency by up to 50% and lowering costs, the company said.
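For context, standard Unity builds can already run without a visible window using Unity's documented -batchmode command-line argument; the sketch below (the build name is a hypothetical placeholder) shows such a launch and notes how it differs from skipping GPU rendering entirely.

```python
# Launching a Unity player without a visible window, via Unity's documented
# -batchmode argument. The build path is a hypothetical placeholder.
# Note: adding -nographics would skip graphics-device initialization entirely,
# which is different from headless rendering as described here, where frames
# are still rendered on the GPU but never presented to a display.
import subprocess

subprocess.run(["./WarehouseSim.x86_64", "-batchmode"], check=True)
```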

Unity Simulation Pro is the only product built from the ground up to deliver distributed rendering, enabling multiple graphics processing units (GPUs) to render the same Unity project or simulation environment simultaneously, either locally or in a private cloud, the company said. This means multiple robots with tens, hundreds, or even thousands of sensors can be simulated faster than real time on Unity today.
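As a rough illustration of the distribution idea, not the Unity Simulation Pro API, the sketch below partitions sensor views across GPU workers round-robin, so each GPU renders only its share of the views at every simulation step.

```python
# Conceptual sketch of distributing sensor rendering across GPUs (not the
# Unity Simulation Pro API): sensor views are partitioned round-robin so
# each GPU renders only its share per simulation step.

def partition_sensors(sensor_ids, num_gpus):
    """Round-robin assignment of sensor views to GPU workers."""
    shards = [[] for _ in range(num_gpus)]
    for i, sensor in enumerate(sensor_ids):
        shards[i % num_gpus].append(sensor)
    return shards

sensors = [f"lidar_{i}" for i in range(8)] + [f"cam_{i}" for i in range(8)]
for gpu, shard in enumerate(partition_sensors(sensors, num_gpus=4)):
    print(f"GPU {gpu} renders {len(shard)} views: {shard}")
```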

According to Lange, users in markets like robotics, autonomous driving, drones, and agriculture technology are building simulations containing environments, sensors, and models: million-square-foot warehouses, dozens of robots, and hundreds of sensors. With these simulations, they can test software against realistic virtual worlds, teach and train robot operators, or try physical integrations before real-world implementation. Because it all takes place in the metaverse, this is faster, more cost-effective, and safer.

A more specific use case is using Unity Simulation Pro to investigate collaborative mapping and mission planning for robotic systems in indoor and outdoor environments, Lange said. He added that some users have built a simulated 4,000-square-foot building sitting within a larger forested area and are attempting to identify ways to map the environment using a combination of drones, off-road mobile robots, and walking robots. The company says it has been working to enable creators to build and model the sensors and subsystems of mechatronic systems and run them in simulation.
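Collaborative mapping of this kind is commonly done by fusing each robot's occupancy grid; the sketch below shows the standard log-odds fusion technique in Python, independent of any Unity API.

```python
# Minimal sketch of collaborative mapping: fusing per-robot occupancy grids
# in log-odds form, a common technique (not a Unity API). Each robot's grid
# holds log-odds of cell occupancy; summing aligned grids combines
# independent evidence from the different platforms.
import numpy as np

def fuse_grids(log_odds_grids):
    """Combine aligned log-odds occupancy grids from multiple robots."""
    return np.sum(log_odds_grids, axis=0)

def to_probability(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

drone = np.array([[0.0, 1.2], [-0.5, 0.0]])  # evidence from a drone
rover = np.array([[0.3, 0.9], [-1.0, 0.2]])  # evidence from a ground robot
fused = fuse_grids([drone, rover])
print(to_probability(fused))                 # per-cell occupancy estimate
```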

A major application of Unity SystemGraph is enabling those building simulations with physically accurate camera and lidar models to use SensorSDK to take advantage of SystemGraph's library of ready-to-use models and easily configure them for their specific use cases.
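To make the configuration idea concrete, here is an illustrative lidar parameterization in Python; the field names are hypothetical and do not reflect the actual SensorSDK schema.

```python
# Illustrative lidar model parameterization (field names are hypothetical,
# not the actual SensorSDK schema): a ready-made preset is specialized to a
# use case by overriding a few physical parameters.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LidarModel:
    channels: int = 32           # vertical beams
    rotation_hz: float = 10.0    # spins per second
    max_range_m: float = 100.0
    vertical_fov_deg: float = 30.0

# Start from a generic preset, then adapt it to a warehouse robot.
preset = LidarModel()
warehouse_lidar = replace(preset, channels=16, max_range_m=40.0)
print(warehouse_lidar)
```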

Customers can now simulate at scale, iterate quickly, and test more to drive insights at a fraction of current simulation costs, Unity says. The company adds that customers like Volvo Cars, the Allen Institute for AI, and Carnegie Mellon University are already seeing results.

While several companies have built simulators targeted at AI applications like robotics or synthetic data generation, Unity claims that the ease of use of its authoring tools sets it apart from rivals, including top competitors like Roblox, Aarki, Chartboost, MathWorks, and Mobvista. Lange says this is evident in the size of Unity's existing user base of over 1.5 million creators using its editor tools.

Unity says its technology is aimed at the industrial metaverse, where organizations continue to push the envelope with cutting-edge simulations.

"As these simulations grow in complexity, in terms of the size of the environment, the number of sensors used in that environment, or the number of avatars operating in that environment, the need for our product increases. Our distributed rendering feature, which is unique to Unity Simulation Pro, enables you to leverage the increasing amount of GPU compute resources available to customers, in the cloud or on-premise networks, to render this simulation faster than real time. This is not possible with many open source rendering technologies or even the base Unity product, all of which will render at less than 50% real time for these scenarios," Lange said.
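The "50% real time" figure refers to the real-time factor: how many simulated seconds advance per wall-clock second. A small helper makes the measure explicit.

```python
# Real-time factor (RTF): simulated seconds advanced per wall-clock second.
# An RTF below 1.0 (e.g., the "less than 50% real time" Lange cites) means
# the simulation runs slower than the world it models; above 1.0 is faster.

def real_time_factor(sim_seconds, wall_seconds):
    return sim_seconds / wall_seconds

print(real_time_factor(sim_seconds=60.0, wall_seconds=120.0))  # 0.5: half real time
print(real_time_factor(sim_seconds=60.0, wall_seconds=20.0))   # 3.0: faster than real time
```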

Moving into 2022, Unity says it expects to see a steep increase in the adoption of AI-powered technologies, with two key adoption motivators. "On one side, companies like Unity will continue to deliver products that help lower the barrier to entry and help increase adoption by wider ranges of customers. This is combined with the decreasing cost of compute, sensors, and other hardware components," Lange said. "Then on the customer adoption side, the key trends that will drive adoption are broader labor shortages and the demand for more operational efficiencies, all of which have the effect of accelerating the economics that drive the adoption of these technologies on both fronts."

Unity is doubling down on purpose-built products for its simulation users, enabling them to mimic the real world by simulating environments with various sensors, multiple avatars, and agents, delivering significant performance gains at lower cost. The company says this will help its customers take the first step into the industrial metaverse.

Unity will showcase Unity Simulation Pro and Unity SystemGraph through in-depth sessions at the forthcoming Unity AI Summit on November 18, 2021.
