Discussing the limits of artificial intelligence – TechCrunch

Posted: April 2, 2017 at 8:03 am

Alice Lloyd George, Contributor

Alice Lloyd George is an investor at RRE Ventures and the host of Flux, a series of podcast conversations with leaders in frontier technology.

It's hard to visit a tech site these days without seeing a headline about deep learning for X, or a claim that AI is on the verge of solving all our problems. Gary Marcus remains skeptical.

Marcus, a best-selling author, entrepreneur, and professor of psychology at NYU, has spent decades studying how children learn. He believes that throwing more data at problems won't necessarily lead to progress in areas such as understanding language, let alone get us to AGI (artificial general intelligence).

Marcus is the voice of anti-hype at a time when AI is all the hype, and in 2015 he translated his thinking into a startup, Geometric Intelligence, which uses insights from cognitive psychology to build better-performing, less data-hungry machine learning systems. The team was acquired by Uber in December to run Uber's AI labs, where his cofounder Zoubin Ghahramani has now been appointed chief scientist. So what did the tech giant see that was so important?

In an interview for Flux, I sat down with Marcus, who discussed why deep learning is "the hammer that's making all problems look like a nail" and why his alternative sparse-data approach is so valuable.

We also got into the challenges of being an AI startup competing with the resources of Google, how corporates aren't focused on what society actually needs from AI, his proposal to revamp the outdated Turing test with a multi-disciplinary AI triathlon, and why programming a robot to understand harm is so difficult.

AMLG: Gary, you are well known as a critic of this technique; you've said that it's over-hyped. That there's low-hanging fruit that deep learning's good at, specific narrow tasks like perception and categorization, and maybe beating humans at chess, but you felt that this deep learning mania was taking the field of AI in the wrong direction, that we're not making progress on cognition and strong AI. Or as you've put it, we wanted Rosie the Robot and instead we got the Roomba. So you've advocated for bringing psychology back into the mix, because there are a lot of things that humans do better, and we should be studying humans to understand why. Is this still how you feel about the field?

GM: Pretty much. There was probably a little more low-hanging fruit than I anticipated. I saw somebody else say it more concisely, which is simply: deep learning does not equal AGI (artificial general intelligence). There's all the stuff you can do with deep learning, like making your speech recognition better or your object recognition better. But that doesn't mean it's intelligence. Intelligence is a multi-dimensional variable. There are lots of things that go into it.

In a talk I gave at TEDx CERN recently, I made this kind of pie chart and I said: look, here's perception, that's a tiny slice of the pie. It's an important slice, but there are lots of other things that go into human intelligence, like our ability to attend to the right things at the same time, to reason about them, to build models of what's going on in order to anticipate what might happen next, and so forth. Perception is just a piece of it, and deep learning is really just helping with that piece.

In a New Yorker article that I wrote in 2012, I said: look, this is great, but it's not really helping us solve causal understanding. It's not really helping with language. Just because you've built a better ladder doesn't mean you've gotten to the moon. I still feel that way. I still feel like we're actually no closer to the moon, where the moonshot is intelligence that's really as flexible as human beings. We're no closer to that moonshot than we were four years ago. There's all this excitement about AI and it's well deserved. AI is a practical tool for the first time and that's great. There's good reason for companies to put in all of this money. But just look, for example, at a driverless car. That's a form of intelligence, modest intelligence; the average 16-year-old can do it, as long as they're sober, with a couple of months of training. Yet Google has worked on it for seven years, and as far as I can tell (they don't publish the data) their car can still only drive on, like, sunny days without too much traffic…

AMLG: And isn't there the whole black-box problem, that you don't know what's going on? We don't know the inner workings of deep learning; it's kind of inscrutable. Isn't that a massive problem for things like driverless cars?

GM: It is a problem. Whether it's an insuperable problem is an open empirical question. It is a fact, at least for now, that we can't readily interpret what deep learning is doing. The way to think about it is: you have millions of parameters and millions of data points. That means that if I, as an engineer, look at this thing, I have to contend with millions or billions of numbers that have been set based on all of that data. Maybe there is a kind of rhyme or reason to it, but it's not obvious, and there are good theoretical arguments to think that sometimes you're never really going to find an interpretable answer there.

There's an argument now in the literature, which goes back to some work I was doing in the '90s, about whether deep learning is just memorization. One paper came out saying it is, and another says no, it isn't. Well, it isn't literally, exactly memorization, but it's a little bit like that. If you memorize all these examples, there may not be some abstract rule that characterizes everything that's going on, and it may be hard to say what's there. So if you build your system entirely with deep learning, which is something Nvidia has played around with, and something goes wrong, it's hard to know what's going on, and that makes it hard to debug.
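To make the memorization point concrete, here is a minimal sketch (ours, not something from the interview): a network with far more parameters than training examples can perfectly fit labels that are pure noise, so a flawless training score by itself tells you nothing about whether an interpretable rule was learned. The model size and data here are arbitrary illustrative choices.

```python
# Minimal sketch of the "memorization" point: an over-parameterized
# network can fit completely random labels. All sizes are arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # 200 random input vectors
y = rng.integers(0, 2, size=200)    # binary labels with no structure at all

# Roughly 270,000 weights for 200 data points, echoing the
# "millions of parameters, millions of data points" framing above.
net = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000)
net.fit(X, y)

# Typically ~1.0: the net has "memorized" noise rather than found a rule.
print("train accuracy on pure noise:", net.score(X, y))
```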

AMLG: Which is a problem if your car just runs into a lamppost and you can't debug why that happened.

GM: You're lucky if it's only a lamppost and not too many people are injured. There are serious risks here. Somebody did die, though I think it wasn't a deep learning system in the Tesla crash; it was a different kind of system. We actually have engineering problems on both ends. I don't want to say that classical AI has fully licked these problems; it hasn't. I think it's been abandoned prematurely and people should come back to it. But the fact is we don't have good ways of engineering really complex systems. And minds are really complex systems.

AMLG: Why do you think these big platforms are reorganizing around AI, and specifically deep learning? Is it just that they've got data moats, so you might as well train on all of that data if you've got it?

GM: Well, there's an interesting thing about Google, which is that they have enormous amounts of data, so of course they want to leverage it. Google has the power to build new resources that they give away free, and they build the resources that are particular to their problem. Because they have this massive amount of data, they have oriented their AI around the question: how can I leverage that data? Which makes sense from their commercial interests. But it doesn't necessarily answer the questions you'd ask from society's perspective: does society need AI? What does it need it for? What would be the best way to build it?

I think if you asked those questions you would say: well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say: let's not leave it all in the hands of these companies. Let's have an international consortium, kind of like we had for CERN and the Large Hadron Collider. That's seven billion dollars. What if you had $7 billion that was carefully orchestrated towards a common goal? You could imagine society taking that approach. It's not going to happen right now, given the current political climate.

AMLG: Well, they are at least sort of coming together on AI ethics. So that's a start.

GM: It is good that people are talking about the ethical issues, and there are serious issues that deserve consideration. The only thing I would say there is that some people are hysterical about it, thinking that real AI is around the corner, and it probably isn't. But I think it's still OK that we start thinking about these things now, even if real AI is further away than people think. If real AI takes 20 years to arrive but the action itself takes 20 years, then now is the right time to start thinking about it.

AMLG: I want to get back to your alternative approach to solving AI, and why it's so important. You've come up with what you believe is a better paradigm, taking inspiration from cognitive psychology. The idea is that your algorithms are a much quicker study: they're more efficient and less data-hungry, less brittle, and they can have broader applicability. And in a brief amount of time you've had impressive early results. You've run a bunch of image recognition tests comparing the techniques and have shown that your algorithms perform better using smaller amounts of data, often called sparse data. So deep learning works well when you have tons of data for common examples and high-frequency things, but in the real world, in most domains, there's a long tail of things where there isn't a lot of data. While neural nets may be good at low-level perception, they aren't as good at understanding integrated wholes. Tell us more about your approach, and how your training in cognitive neuroscience has informed it.

GM: My training was with Steve Pinker. Through that training I became sensitive to the fact that human children are very good at learning language, phenomenally good, even when they're not that good at other things. Of course I read about that as a graduate student; now I have some human children of my own, a four-year-old and a two-and-a-half-year-old, and it's just amazing how fast they learn.

AMLG: The best AIs you've ever seen.

GM: The best AIs I've ever seen. Actually, my son shares a birthday with Rodney Brooks, who's one of the great roboticists; I think you know him well. For a while I was sending Rodney an e-mail every year saying: happy birthday, my son is now a year old, and I think he can do this and your robots can't. It was kind of a running joke between us.

AMLG: And now he's vastly superior to all of the robots.

GM: And I didn't even bother this year. The four-year-olds of this world are far ahead of what robots can do in terms of motor control and language. I started thinking about that kind of question really in the early '90s, and I've never fully figured out the answer. But part of the motivation for my company was: hey, we have these systems now that are pretty good at learning if you have gigabytes of data, and that's great work if you can get it, and you can get it sometimes. So for speech recognition, if you're talking about white males asking search queries in a quiet room, you can get as much labelled data as you want, and labels are critical for these systems: this is how somebody says something, and this is the word written out. But my kids don't need that. They don't have labelled data, they don't have gigabytes of labelled data; they just kind of watch the world and figure all this stuff out.
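As a rough illustration of the labelled-data dependence Marcus is describing, here is a short sketch using a generic off-the-shelf classifier (our example, not Geometric Intelligence's algorithms): test accuracy on a standard digits dataset falls away sharply as the number of labelled training examples shrinks, which is exactly the sparse-data regime he cares about.

```python
# Sketch: how a conventional supervised learner degrades as labelled
# data gets scarce. Dataset and model are arbitrary illustrative choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for n in (20, 50, 100, 500, len(X_train)):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:n], y_train[:n])     # train on only the first n labels
    print(f"{n:4d} labelled examples -> test accuracy "
          f"{clf.score(X_test, y_test):.2f}")
```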
