Intel expands AI developer toolkit to bring more intelligence to the edge – ZDNet

Posted: February 28, 2022 at 8:20 pm

Intel on Wednesday announced that it's updating its OpenVINO AI developer toolkit, enabling developers to use it to bring a wider range of intelligent applications to the edge. Launched in 2018 with a focus on computer vision, OpenVINO now supports a broader range of deep learning models, adding support for audio and natural language processing use cases.

"With inference taking over as a critical workload at the edge, there's a much greater diversity of applications" under development, Adam Burns, Intel VP and GM of Internet of Things Group, said to ZDNet.

Since its launch, hundreds of thousands of developers have used OpenVINO to deploy AI workloads at the edge, according to Intel. A typical use case would be defect detection in a factory. Now, with broader model support, a manufacturer could use it to build a defect spotting system, plus a system to listen to a machine's motor for signs of failure.

Besides the expanded model support, the new version of OpenVINO offers more device portability choices and an updated, simplified API.

OpenVINO 2022.1 also includes a new automatic optimization process. The new capability discovers the compute devices and accelerators on a given system, then dynamically load-balances and parallelizes AI workloads based on available memory and compute capacity.
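For developers, that selection is meant to be largely transparent. The minimal sketch below shows what device auto-selection looks like in the OpenVINO 2022.1 Python API; the model file, input shape and performance hint are placeholder assumptions for illustration, not Intel's published sample.

```python
# Rough sketch (not Intel's exact sample) of device auto-discovery in
# OpenVINO 2022.1's Python API; model path and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Devices found:", core.available_devices)   # e.g. ['CPU', 'GPU']

model = core.read_model("model.xml")               # placeholder IR model file
# "AUTO" asks the runtime to pick among the discovered devices; the
# THROUGHPUT hint biases it toward parallel, batched execution.
compiled = core.compile_model(
    model, device_name="AUTO", config={"PERFORMANCE_HINT": "THROUGHPUT"}
)

request = compiled.create_infer_request()
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
results = request.infer({0: dummy})
```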

"Developers create applications on different systems," Burns said. "We want developers to be able to develop right on their laptop and deploy to any system."

Intel customers already using OpenVINO include automakers like BMW and Audi; John Deere, which uses it for welding inspection; and companies making medical imaging equipment like Samsung, Siemens, Philips and GE. The software is easily deployed into Intel-based solutions -- which is a compelling selling point, given that most inference workloads already run on Intel hardware.

"We expect a lot more data to be stored and processed at the edge," Sachin Katti, CTO of Intel's Network and Edge Group, said to ZDNet. "One of the killer apps at the edge is going to be inference-driven intelligence and automation."

Ahead of this year's Mobile World Congress, Intel on Thursday also announced a new system-on-chip (SoC) designed for the software-defined network and edge. The new Xeon D processors (the D-2700 and D-1700) are built for demanding use cases, such as security appliances, enterprise routers and switches, cloud storage, wireless networks, AI inferencing and edge servers -- use cases where compute processing needs to happen close to where the data is generated. The chips deliver integrated AI and crypto acceleration, built-in Ethernet, and support for time-coordinated computing and time-sensitive networking.

More than 70 companies are working with Intel on designs that utilize the Xeon D processors, including Cisco, Juniper Networks and Rakuten Symphony.

Intel also said Thursday that its next-gen Xeon Scalable platform, Sapphire Rapids, includes unique instruction enhancements to support 5G and RAN-specific signal processing. This will make it easier for Intel customers to deploy vRAN (virtualized radio access networks) in demanding environments.
