AI Weekly: Why Google still needs the cloud even with on-device ML – VentureBeat

Posted: October 20, 2019 at 10:32 pm

Google held its big annual hardware event Tuesday in New York to unveil the Pixel 4, Nest Mini, Pixelbook Go, Nest Wifi, and Pixel Buds. It was mostly predictable, because details about virtually every piece of hardware the company revealed had leaked months in advance. But if Google's biggest hardware event of the year had an overarching theme, it was the many applications of on-device machine learning. Most of the hardware Google introduced includes a dedicated chip for running AI, continuing an industry-wide trend that powers services consumers will no doubt enjoy but that also carries privacy implications.

The new Nest Mini's on-device machine learning recognizes your most commonly used voice commands to speed up Google Assistant response times compared to the first-generation Home Mini.

In Pixel Buds, due out next year, machine learning helps recognize ambient sound levels and raises or lowers the volume accordingly, the same way your smartphone dims or brightens when it's in sunlight or shade.

Google Assistant on Pixel 4 is faster thanks to an on-device language model. The Pixel 4's Neural Core will power facial recognition for payment verification, Face Unlock, and Frequent Faces, a feature that trains your camera to recognize the faces of people you photograph often and then coaches you on how to take the best picture.

Traditionally, edge deployment of on-device machine learning means an AI assistant can function without maintaining a connection to the internet, an approach that can eliminate the need to share user data online or to collect the kind of voice recordings that became one of the most controversial privacy concerns for the better part of 2019.

Due to privacy concerns that stem from the routine recording of users' voices, phrases like on-device machine learning and edge computing have become synonymous with privacy. That's why a handful of edge assistants, like Snips, have made privacy a selling point.

Among Google's many AI services, some, like speech recognition powered by the Neural Core processor, can operate entirely on-device, whereas others, like the new Google Assistant, require connecting to the cloud and sending your data back to the Google mothership.

Today, on-device AI for Google hardware is primarily meant to provide speed gains, Google Nest product manager Chris Chan told VentureBeat.

Tasks like speech recognition and natural language processing can be completed on-device, but they still need the cloud to deliver personalization and stitch together an ecosystem of smart home devices and streaming services like YouTube or Spotify.

"It's a hybrid model," Chan said. "If you focus too much on commands existing only on that single device, the user then doesn't benefit from the context of that usage to even other devices, let alone, say, Nest or Google services when they're on the go, when they're in the car, and other environments."

In the case of on-device ML for Nest Mini, you still need an internet connection to complete a command, he said.

"There are other architectures we could definitely explore over time that might be more distributed or based in the home, but we're not there yet," Chan said.
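To make the hybrid split concrete, here is a minimal sketch of the pattern Chan describes: speech is transcribed and parsed locally, but completing the command still requires a cloud call for personalization and for reaching other services. All function and type names here are hypothetical stand-ins, not Google's actual architecture or API.

```python
# Hypothetical sketch of a hybrid on-device/cloud assistant pipeline.
# The local steps mirror what a chip like the Neural Core handles;
# the final step is why the device still needs an internet connection.
from dataclasses import dataclass, field


@dataclass
class Intent:
    action: str                      # e.g. "play_music"
    slots: dict = field(default_factory=dict)  # e.g. {"service": "spotify"}


def transcribe_on_device(audio: bytes) -> str:
    """Run the local speech recognition model (no network needed)."""
    ...


def parse_intent_on_device(text: str) -> Intent:
    """Local natural language parsing; still fully offline."""
    ...


def fulfill_in_cloud(intent: Intent, user_id: str) -> str:
    """Cloud call that adds personalization and stitches together
    other devices and services (smart home, YouTube, Spotify)."""
    ...


def handle_command(audio: bytes, user_id: str) -> str:
    text = transcribe_on_device(audio)         # fast, private, offline
    intent = parse_intent_on_device(text)      # still offline
    return fulfill_in_cloud(intent, user_id)   # requires the cloud
```

The design choice the sketch illustrates is that the latency-sensitive work happens locally, while anything that needs account context or other devices goes over the network.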

The hybrid approach, as opposed to edge computing that operates fully offline, raises a question: the hardware package is powerful, so why not go all the way with an offline Google Assistant?

The answer may lie in that controversial collection of people's voice data.

Leaders of the global smart speaker and AI assistant markets have moved in unison to address people's privacy concerns.

In response to controversy over humans reviewing voice recordings from popular digital assistants like Siri, Cortana, Google Assistant, and Alexa, Google and Amazon both introduced voice commands that let people delete each day's voice recordings. They also gave users the ability to automatically delete voice data every three months or every 18 months.
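As a rough illustration of what such an auto-delete setting amounts to, the snippet below purges recordings older than the chosen retention window on a rolling basis. It is purely hypothetical and not how Google or Amazon actually implement the feature.

```python
# Hypothetical rolling auto-delete for voice recordings, keyed to the
# 3-month and 18-month retention options described above.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "3_months": timedelta(days=90),    # roughly one season
    "18_months": timedelta(days=548),  # multiple seasons
}


def purge_old_recordings(recordings, setting="18_months", now=None):
    """Keep only recordings newer than the retention window.

    `recordings` is a list of (timestamp, audio_ref) tuples with
    timezone-aware timestamps.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION[setting]
    return [(ts, ref) for ts, ref in recordings if ts >= cutoff]
```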

So why make it easy to delete data but settle on retention windows of three months or 18 months?

When VentureBeat asked Alexa chief scientist Rohit Prasad this question, he said that Amazon wants to continue to track trends and follow seasonal changes in queries, and that there's still more work to do to improve Alexa's conversational AI models.

A Google spokesperson also said the company keeps data to understand seasonal or multi-season trends, but that this could be revisited in the future.

"In our research, we found that these time frames were preferred by users as they're inclusive of data from an entire season (a three-month period) or multiple seasons (18 months)," the spokesperson said.

Chan said Google users may find more privacy benefits from on-device machine learning in the future.

"It's our hope that over the coming years things go entirely local, because then you're going to get a massive speed benefit, but we're not there yet," he said.

As conversational computing becomes a bigger part of people's lives, why and when tech giants connect assistants to the internet is likely to shape people's perceptions of edge computing and privacy with AI. But if the competition between tech giants ever becomes about making smart home usage more private to meet consumer demand, then consumers can win.

As always, if you come across a story that merits coverage, send news tips to Khari Johnson and Kyle Wiggers, and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter.

Thanks for reading,

Khari Johnson

Senior AI staff writer
