Artificial intelligence: The good, the bad and the ugly

Image credit: Depositphotos

Welcome to TechTalks' AI book reviews, a series of posts that explore the latest literature on AI.

It wouldn't be an overstatement to say that artificial intelligence is one of the most confusing and least understood fields of science. On the one hand, we have headlines warning of deep learning systems outperforming medical experts, creating their own languages and spinning fake news stories. On the other hand, AI experts point out that artificial neural networks, the key innovation behind current AI techniques, fail at some of the most basic tasks that any human child can perform.

Artificial intelligence is also marked by some of the most divisive disputes and rivalries. Scientists and researchers constantly quarrel over the virtues and shortcomings of different AI techniques, further adding to the confusion and chaos.

Between tales of killer drones and robots that can't brew a cup of coffee, a book that sheds light on the real capabilities and limits of artificial intelligence is an invaluable and necessary read. And that is exactly how I would describe Artificial Intelligence: A Guide for Thinking Humans, a book by computer science professor Melanie Mitchell.

With a rich background in artificial intelligence and computer science, Mitchell sorts out, in her own words, how far artificial intelligence has come, and elucidates AI's disparate, and sometimes conflicting, goals. As AI draws growing attention from investors, governments and the media, Mitchell's Guide for Thinking Humans lays out the good, the bad and the ugly of artificial intelligence.

Most of us reading news headlines view artificial intelligence in the context of the sensational articles that have started to appear in mainstream media in the past few years. But there's much more to AI, which has a history that dates back to the early days of computing.

A Guide for Thinking Humans demystifies some of the least understood aspects of artificial intelligence. As you read through the chapters, Mitchell eloquently takes you through the six-decade history of AI. You become acquainted with the original vision of AI, the early efforts at creating symbolic AI and expert systems, and the parallel efforts to develop artificial neural networks.

You go through the AI winters, when overpromising and underdelivering dampened interest and funding in artificial intelligence. One of the most important parts of the book is the chapter on convolutional neural networks (CNNs), the AI technique that triggered the deep learning revolution in the early 2010s. While digging into the inner workings of CNNs, Mitchell also explains how other scientific fields such as neuroscience and cognitive science have played a crucial role in advancing AI.

Today, convolutional neural networks, and deep learning in general, are a key component of many applications we use every day.
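For readers who have never seen one, here is what a convolutional neural network can look like in code: a minimal, illustrative sketch in PyTorch, not an implementation from the book. The layer sizes and the TinyCNN name are my own assumptions; real ImageNet-era models are far deeper, but the structure, stacked learned filters followed by a classifier, is the same.

```python
# A minimal sketch of a convolutional neural network for image classification.
# Layer sizes are illustrative only.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual features (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        # A fully connected head maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a batch of four fake 32x32 RGB images.
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The point is simply that a CNN is a stack of learned filters feeding a classifier; whatever intelligence it displays comes from statistics learned over huge numbers of labeled examples.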

"It turns out that the recent success of deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, internet!) and very fast parallel computer hardware," Mitchell notes in A Guide for Thinking Humans. "These factors, along with improvements in training methods, allow hundred-plus-layer networks to be trained on millions of images in just a few days."

"The trend, sparked by the ImageNet competition, has gradually morphed the field from an academic contest to a high-profile sparring match for tech companies commercializing computer vision," Mitchell explains.

The commercialization of AI has had bad effects on the field (as I've also argued in these pages). Mitchell points to some of the other negative effects of the race to beat tests and benchmarks. High scores on ImageNet have become a de facto ticket to getting funding and boosting stock prices and product sales. The race has also led some companies and organizations to cheat their way to better test results without proving robustness in real-world situations.

"These systems can make unpredictable errors that are not easy for humans to understand," Mitchell said in written comments to TechTalks. "The machines often are not able to deal with input that is different from the kind of input they have been trained on." A Guide for Thinking Humans provides several examples of AI's failures.

There are several studies that show deep learning models optimized for ImageNet do not necessarily perform well when faced with objects in real life. There are also numerous papers that show how neural networks can make dangerous mistakes.

Mitchell also points out that, while very efficient at processing vast amounts of data, current AI models lack the generalization abilities of human intelligence, which makes them vulnerable to the long-tail problem: the vast range of possible unexpected situations an AI system could face. Unfortunately, current approaches try to address these shortcomings merely by throwing more data and compute at the problem.

"Often these benchmark datasets don't force the learning systems to solve the actual full problem that humans want them to solve (e.g., object recognition) but allow the learning systems to use shortcuts (e.g., distinguishing textures) that work well on the benchmark dataset, but don't generalize as well as they should," Mitchell says.

The obsession with creating bigger datasets and bigger neural networks has sidelined some of the important questions and areas of research in AI. These topics include causality, reasoning, common sense, learning from few examples and other fundamental capabilities that today's AI technology lacks.

But the ImageNet race has at least taught us one thing. Says Mitchell: "It seems that visual intelligence isn't easily separable from the rest of intelligence, especially general knowledge, abstraction, and language... Additionally, it could be that the knowledge needed for humanlike visual intelligence can't be learned from millions of pictures downloaded from the web, but has to be experienced in some way in the real world."

Fortunately, these are topics that have been gaining increasing attention in the past year. In his 2019 NeurIPS keynote speech, deep learning pioneer Yoshua Bengio discussed "system 2" deep learning, which aims to address some of these fundamental problems. While not everyone agrees with Bengio's approach (and it's not clear which approach will work), the fact that these issues are being discussed at all is a positive development.

One of the least understood aspects of artificial intelligence is its handling of human language. The advances in the field have been tremendous. Machine translation has taken leaps and bounds thanks to deep learning. Search engines are producing much more meaningful results. There are AI algorithms that can pass science tests. And of course, there's the OpenAI text-generation algorithm that threatens to create a massive fake news crisis.

There has also been remarkable progress in speech recognition, an area where neural networks perform especially well (Mitchell calls it "AI's most significant success to date in any domain"). It is thanks to deep learning that you can utter commands to Alexa, Siri and Cortana. Gmail's Smart Compose and sentence-completion features are powered by AI. And the numerous chatbot applications that have found a stable user base all leverage advances in natural language processing (NLP).

As Mitchell told TechTalks, "I think some of these advances are very positive developments; applications such as automated translation, speech recognition, etc. certainly make life better." Indeed, human-machine communication is much better today than at any time in the past.

But what's less understood is how much today's artificial intelligence systems actually understand the meaning of language.

"Understanding language, including the parts that are left unsaid, is a fundamental part of human intelligence," Mitchell explains in A Guide for Thinking Humans. Language relies on commonsense knowledge and an understanding of the world, two areas where today's AI is sorely lacking.

"Today's machines lack the detailed, interrelated concepts and commonsense knowledge that even a four-year-old child brings to understanding language," Mitchell writes.

And it's true. Even the most sophisticated language models start to break as soon as you test their limits. For the moment, AI is limited to handling small amounts of text. Alexa can perform thousands of tasks, but it can't hold a meaningful conversation. Smart Compose provides interesting reply suggestions, but they're only short answers to basic queries. Google Translate produces decent results when you want to translate simple sentences, but it can't translate an article that contains the rich and complicated nuances of language and culture. And the text generated by OpenAI's famous GPT-2 language model loses coherence as it becomes longer.

This is because today's AI still lacks an understanding of language. So how is AI performing such feats? It is basically the same pattern matching that neural networks perform on images (though in a different manner and with some added tricks). Again, recent years have shown that bigger datasets and larger neural networks will help push the limits of NLP applications. But they won't result in breakthroughs.
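To make the pattern-matching point concrete, here is a toy sketch of a bigram language model: it generates text purely from counted word co-occurrences, with no notion of meaning. This is a deliberately crude stand-in of my own, not Mitchell's example and not how GPT-2 actually works; modern neural language models do something far more powerful, but the underlying principle is still statistical prediction from training data rather than understanding.

```python
# A bigram "language model": predict the next word purely from observed
# word-to-word transitions in a tiny training corpus.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        # Sample the next word from the observed patterns, nothing more.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output is locally fluent and globally meaningless, which is a miniature version of the coherence problem the article describes in long GPT-2 samples.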

"What's stunning to me is that speech-recognition systems are accomplishing all this without any understanding of the meaning of the speech they are transcribing... Many people in AI, myself included, had previously believed that AI speech recognition would never reach such a high level of performance without actually understanding language. But we've been proven wrong," Mitchell writes in A Guide for Thinking Humans.

But as she later explains in the book (and as the failures of AI show), there's only so much you can achieve with statistics and pattern matching. For the moment, AI systems might have solved 90 percent of the language problem. But that last 10 percent, dealing with the implicit subtleties and hidden meanings of language, remains unsolved.

"What's needed to power through that last stubborn 10 percent? More data? More network layers? Or, dare I ask, will that last 10 percent require an actual understanding of what the speaker is saying?" Mitchell reflects. "I'm leaning toward this last one, but I've been wrong before."

So while we enjoy the applications of artificial intelligence in natural language processing, there's no reason to worry that robots will soon replace human writers or interpreters. "While neural machine translation can be impressively effective and useful in many applications, the translations, without post-editing by knowledgeable humans, are still fundamentally unreliable," Mitchell observes.

A Guide for Thinking Humans delves into many more topics, including the ethics of AI, the intricacies of the human mind, and the meaning of intelligence. Like several other scientists, Mitchell notes in her book that intelligence, the central notion of AI, remains ill-defined, with various meanings: "an over-packed suitcase, zipper on the verge of breaking."

"The attempt to create artificial intelligence has, at the very least, helped elucidate how complex and subtle are our own minds," Mitchell writes.
