AI heading back to the trough – Network World

Posted: July 11, 2017 at 10:11 pm

I like Gartner's concept of the technology hype cycle. It assumes that expectations of new technologies quickly ramp to an inflated peak, drop into a trough of disillusionment, then gradually ascend a slope of enlightenment until they plateau. Of course, not all technologies complete the cycle or transition through the stages at the same pace.

Artificial intelligence (AI) has arguably been in the trough for 60 years. I am thinking of Kubrick's HAL and Roddenberry's computer that naturally interact with humans. That's a long trough, and despite popular opinion, the end is nowhere in sight.

There's so much excitement and specialized research taking place that AI has fragmented into several camps, such as heuristic programming for game-playing AI, natural language processing for conversational AI, and machine learning for statistical problems. The hype is building again, and just about every major tech company and countless startups are racing toward another inflated peak and subsequent trough.

Expectations are so high because of breakthroughs in three broad categories: compute, data and algorithms. The compute innovations refer to general cloud services and specific improvements in processing arrays and graphics processing units (GPUs).

The availability of huge data sets has also been important for machine learning. Large labeled and annotated data sets have enabled progress in computer vision, natural language and speech recognition. There are numerous public data sets available, plus many of the larger firms are also using their own private data.

The third ingredient is advanced algorithms that, combined with compute power and data, provide responses or predictions. For example, algorithms are used to recommend movies to watch, stocks to trade and updates to include on a timeline. The concept is as old as computing itself, but it has suddenly, vastly improved.
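
To make the idea concrete, here is a minimal sketch of the kind of recommendation logic described above: score unseen items by their similarity to something the user already rated. The item names, ratings and function names are invented for illustration; this is not any particular company's algorithm.

```python
import math

# ratings[item] = {user: rating}; a tiny invented dataset for illustration.
ratings = {
    "Movie A": {"alice": 5, "bob": 3, "carol": 4},
    "Movie B": {"alice": 4, "bob": 1},
    "Movie C": {"bob": 5, "carol": 5},
}

def cosine(a, b):
    """Cosine similarity between two {user: rating} vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(liked_item, ratings):
    """Rank the remaining items by similarity to one the user liked."""
    scores = {item: cosine(ratings[liked_item], ratings[item])
              for item in ratings if item != liked_item}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recommend("Movie A", ratings))
```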

Or is it? While a computer beat a human chess champion 21 years ago, it wasn't until two months ago that a different computer beat a human champion at Go. There was an impressive milestone on Jeopardy in 2011, and more recently a breakthrough regarding Ms. Pac-Man.

AI will definitely change the world, but just don't hold your breath, at least not for general-purpose AI. Specialized AI, such as self-driving cars, is progressing quickly. The general AI stuff is almost useless.

I have yet to find any general AI solution to be helpful. For example, Google Assistant often suggests to me the best time to leave for the airport. It's invariably wrong. It largely bases its recommendation on my current location and traffic conditions. My personal algorithm for determining the best time to leave for the airport involves relatively big data. I consider variables such as how I intend to get there (car, bus or shuttle). If by car, then I factor in where I intend to park. Then there's gate and concourse information; whether I have PRE on my boarding pass; and whether I intend to eat at the airport before departure.
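
For the sake of argument, here is what that personal algorithm might look like as code. It is a hedged sketch only: the function name, parameters and buffer values are my own assumptions, not a real product feature.

```python
from datetime import datetime, timedelta

def when_to_leave(departure: datetime,
                  mode: str = "car",            # "car", "bus" or "shuttle"
                  remote_parking: bool = False,
                  far_concourse: bool = False,
                  has_precheck: bool = False,
                  eat_at_airport: bool = False,
                  travel_minutes: int = 30) -> datetime:
    """Work backward from the departure time, stacking up buffers."""
    buffer = timedelta(minutes=travel_minutes)
    if mode == "car" and remote_parking:
        buffer += timedelta(minutes=20)         # shuttle in from the remote lot
    if mode in ("bus", "shuttle"):
        buffer += timedelta(minutes=15)         # waiting on a fixed schedule
    buffer += timedelta(minutes=15 if has_precheck else 40)  # security line
    if far_concourse:
        buffer += timedelta(minutes=15)         # walk or train to the gate
    if eat_at_airport:
        buffer += timedelta(minutes=30)
    return departure - buffer

print(when_to_leave(datetime(2017, 7, 12, 9, 0), mode="bus",
                    has_precheck=True, eat_at_airport=True))
# 2017-07-12 07:30:00
```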

Usually, I take the bus to the airport and query Google about the bus schedule. The famed Google Assistant can't recognize that pattern. Telling me when to leave to catch the airport bus could be more helpful.

But having the data to answer the question isn't Google's problem. The difficulty lies in understanding the question. Emmanuel Mogenet, head of Google Research Europe, recently highlighted the limitations of natural language processing with a similar example. Google Assistant can't answer "Will it be dark when I get home?" Let me put that in context: Google can't answer this question even when it knows where the user is, where the user lives, and when the sun sets at that location.
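
Note that once the data is in hand, the computation is trivial. Here is a minimal sketch, assuming the assistant already has a commute estimate and the local sunset time (both values below are invented): the answer is a single comparison, and understanding that the spoken question maps to that comparison is the hard part.

```python
from datetime import datetime, timedelta

def dark_when_home(now: datetime, commute: timedelta, sunset: datetime) -> bool:
    """True if the user will arrive home after sunset."""
    return now + commute > sunset

now = datetime(2017, 7, 11, 20, 15)       # where/when the user is now
commute = timedelta(minutes=45)           # estimated trip home
sunset = datetime(2017, 7, 11, 20, 42)    # local sunset time
print(dark_when_home(now, commute, sunset))  # True: arrival at 21:00 is after sunset
```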

This is not a question whose answer Google can look up. It needs to pull all this information together, and doing that requires truly understanding the relationship between the question and the data. That's a hard puzzle to solve. Now consider that Google Assistant is six times more likely to correctly answer a question than Amazon's Alexa.

Alexa now boasts more than 15,000 skills. These skills are largely simple web queries. The AI part is using speech instead of a keyboard.
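
To illustrate the point, here is a schematic of what many of these skills boil down to once the speech layer has produced an intent: an HTTP lookup read back to the user. The intent name, endpoint and response fields are invented for illustration; this is not the actual Alexa Skills Kit API.

```python
import json
from urllib.request import urlopen

def handle_intent(intent: str, slots: dict) -> str:
    """Map a recognized intent to a simple web query and a spoken reply."""
    if intent == "GetBusSchedule":
        # Hypothetical endpoint used purely for illustration.
        url = "https://example.com/api/bus?stop=" + slots.get("stop", "airport")
        with urlopen(url) as resp:
            data = json.loads(resp.read().decode("utf-8"))
        return "The next bus leaves at " + data["next_departure"]
    return "Sorry, I can't help with that."

# handle_intent("GetBusSchedule", {"stop": "airport"})
```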

AI has a ways to go, but that's not even the whole problem. As with my airport example above, AI works best when it has access to contextual data. That often means exposing personal and confidential data to the service, a practice riddled with concerns and liabilities. It's not as if security breaches are rare.

There's also the little issue that AI is very hard to test. Developing self-driving cars requires driving them millions of miles. That just doesn't scale, so we keep discovering gaps with each new application. Even self-driving car behavior can be surprising. Volvo recently found that its self-driving cars cannot recognize kangaroos. Oops.

I think it's important to reset expectations about AI. It's fantastic that some people find Siri, Google Assistant and Alexa helpful sometimes. We should briefly celebrate the tremendous progress in kitchen timer technology, and then get back to work.
