Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming

Posted: July 25, 2017 at 12:16 pm

In the race to get AI working faster on your smartphone, companies are trying all sorts of things. Some, like Microsoft and ARM, are designing new chips that are better suited to running neural networks. Others, like Facebook and Google, are working to reduce the computational demands of AI itself. But for chipmaker Qualcomm, whose processors account for 40 percent of the mobile market, the current plan is simpler: adapt the silicon that's already in place.

To this end, the company has developed what it calls its Neural Processing Engine. This is a software development kit (or SDK) that helps developers optimize their apps to run AI applications on Qualcomm's Snapdragon 600 and 800 series processors. That means that if you're building an app that uses AI for, say, image recognition, you can integrate Qualcomm's SDK and it will run faster on phones with compatible processors.

Qualcomm first announced the Neural Processing Engine a year ago as part of its Zeroth platform (which has since been killed off as a brand). Since last September it's been working with a few partners on developing the SDK, and today it's opening it up for anyone to use.

"Any developer, big or small, that has already invested in deep learning, meaning they have access to data and trained AI models: they are the target audience," Gary Brotman, Qualcomm's head of AI and machine learning, told The Verge. "It's simple to use. We abstract everything under the hood so you don't have to get your hands dirty."

Qualcomm says one of the first companies to integrate its SDK is Facebook, which is currently using it to speed up the augmented reality filters in its mobile app. By using the Neural Processing Engine, says Qualcomm, Facebook's filters load five times faster than a generic CPU implementation.

How exactly developers will use the SDK will vary from job to job, but the basic function of the software is to allocate tasks to different parts of Qualcomm's Snapdragon chipset. Depending on whether they want to optimize for battery life or processing speed, for example, developers can draw on compute resources from different parts of the chip, e.g., the CPU, GPU, or DSP. "It allows you to choose your core of choice relative to the power performance profile you want for your user," explains Brotman.
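To give a sense of what that core selection looks like in practice, here is a minimal sketch of building a network with a preferred runtime order on Android. It is modeled loosely on the builder-style Java API in Qualcomm's published Snapdragon NPE examples, but the class and method names (SNPE.NeuralNetworkBuilder, setRuntimeOrder, and so on) should be treated as illustrative assumptions rather than the SDK's exact surface.

```java
import android.app.Application;
import java.io.File;

import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

public class RuntimeSelectionSketch {
    // Hypothetical helper: builds a network that prefers the DSP for power
    // efficiency, falling back to the GPU and then the CPU if a runtime is
    // unavailable on the device. API names are assumptions based on the
    // SDK's published Android examples, not verified signatures.
    static NeuralNetwork buildNetwork(Application app, File dlcModel) {
        return new SNPE.NeuralNetworkBuilder(app)
                // The order expresses the power/performance trade-off Brotman
                // describes: DSP first for battery life, CPU as the safe default.
                .setRuntimeOrder(NeuralNetwork.Runtime.DSP,
                                 NeuralNetwork.Runtime.GPU,
                                 NeuralNetwork.Runtime.CPU)
                // A .dlc file is the SDK's converted on-device model format.
                .setModel(dlcModel)
                .build();
    }
}
```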

The SDK works with some of the most popular frameworks for developing AI systems, including Caffe, Caffe2, and Google's TensorFlow. Qualcomm says it's designed not just to optimize AI on mobile devices, but also in cars, drones, VR headsets, and smart home products.
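The workflow this framework support implies is: train a model in TensorFlow or Caffe offline, convert it to the SDK's on-device format, then feed it tensors at runtime. Below is a hedged sketch of that last step, continuing the assumed API from the example above; the tensor names ("input", "prob"), the execute call, and the tensor read/write methods are all illustrative assumptions, not confirmed signatures.

```java
import java.util.HashMap;
import java.util.Map;

import com.qualcomm.qti.snpe.FloatTensor;
import com.qualcomm.qti.snpe.NeuralNetwork;

public class InferenceSketch {
    // Hypothetical forward pass: wrap preprocessed image pixels in the
    // network's input tensor, run the network, and read back class scores.
    static float[] classify(NeuralNetwork network, float[] pixels) {
        // Allocate an input tensor matching the converted model's shape.
        FloatTensor input = network.createFloatTensor(
                network.getInputTensorsShapes().get("input"));
        input.write(pixels, 0, pixels.length);

        Map<String, FloatTensor> inputs = new HashMap<>();
        inputs.put("input", input);

        // The SDK dispatches this to whichever runtime (CPU, GPU, or DSP)
        // was selected when the network was built.
        Map<String, FloatTensor> outputs = network.execute(inputs);

        // Copy the output scores back into a plain float array.
        FloatTensor scores = outputs.get("prob");
        float[] result = new float[scores.getSize()];
        scores.read(result, 0, result.length);
        return result;
    }
}
```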

"What we're seeing is a tidal wave of AI workloads."

But deploying frameworks that adapt existing silicon is only the beginning. "What we're seeing is a tidal wave of AI workloads that are creating more demand for compute," says Brotman. To meet this demand, companies are working on entirely new architectural designs for AI-optimized chips. Microsoft, for example, is building a custom machine learning processor for the HoloLens 2, while British chipmaker Graphcore recently raised $30 million to build its own Intelligence Processing Units for mobile devices.

For Qualcomm, this switch is further down the line, but it's definitely coming. "When we're baking something into silicon, that's a very deliberate bet for us, and it doesn't come easy," says Brotman. "Compute's compute, and if we can optimize now what we've already got in our portfolio, then we're doing our job well. Longer term, though, is there going to be a need for dedicated neural computing? I think that's going to be the case, and the question is just: when do we place that bet?"
