A Nobel Prize-Winning Economist Explains Why Good AI Will Always Outsmart Humans – Inc.

Posted: December 15, 2021 at 9:58 am

When it comes to answering difficult questions, well-built artificial intelligence will always have us beat.

That was a key takeaway from a conversation between economist Daniel Kahneman and MIT professor of brain and cognitive science Josh Tenenbaum at the Conference on Neural Information Processing Systems (NeurIPS) last week. The pair spoke during the virtual event about the shortcomings of human thinking and what we can learn from them while building AI.

Kahneman, a Nobel Prize winner in Economic Sciences and the author of Thinking, Fast and Slow, noted an instance in which humans use judgment heuristics--shortcuts, essentially--to answer questions they don't know the answer to. In the example, people are given a small amount of information about a student: She's about to graduate, and she was reading fluently when she was four years old. From that, they're asked to estimate her grade point average.

Based on this information, many people will estimate the student's GPA to be 3.7 or 3.8. To arrive there, Kahneman explained, they assign her a percentile on the intelligence scale--usually very high, given what they know about her reading ability at a young age. Then they assign her a GPA in what they estimate to be the corresponding percentile.
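For readers who want to see the arithmetic, here is a minimal sketch of that percentile-matching shortcut. The specific numbers (the perceived intelligence percentile, and the GPA mean and spread) are assumptions chosen for illustration, not figures from Kahneman's example.

```python
# Illustrative sketch of the percentile-matching heuristic Kahneman describes.
# All numbers below are assumed for demonstration purposes.
from scipy.stats import norm

# Suppose the early-reading detail places the student near the 90th percentile
# of perceived intelligence (an assumption for illustration).
perceived_percentile = 0.90

# Assumed GPA distribution: mean 3.0, standard deviation 0.5, capped at 4.0.
gpa_mean, gpa_sd = 3.0, 0.5

# The heuristic: map the same percentile directly onto the GPA distribution.
heuristic_gpa = min(4.0, gpa_mean + norm.ppf(perceived_percentile) * gpa_sd)
print(f"Heuristic (percentile-matched) estimate: {heuristic_gpa:.2f}")  # roughly 3.6-3.7
```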

Of course, the person answering the question doesn't consciously realize that they're following this process. "It's automatic," said Kahneman. "It's not deliberate. It's something that happens to you."

And the guess they offer isn't likely to be a particularly good one. "The answer that came to your mind is ridiculous, statistically," said Kahneman. "The information that you've received is very, very uninformative."

A student's reading ability at age four, in other words, doesn't have a high correlation with their GPA fourteen years later. But when we're faced with a question we can't answer, said Kahneman, we tend to answer a simpler one instead.

"We're rarely stumped," he said."The answer to a related questionwill come to our mind, and we may not be fully aware of the fact that we're substituting one questionfor another."

In reality, the best way to estimate the student's GPA would be to start with an average GPA--say, 3.0 or slightly higher--and make a minor upward adjustment based on what we know about the girl. But research shows that most people don't think this way. They tend to lean too heavily on the information they have (in this case, the girl's reading ability at a young age) and not realize how much information they don't have.
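Here is a minimal sketch of that sounder approach: anchor on the average GPA and shrink the adjustment by the (weak) correlation between early reading and later GPA. The correlation value and distribution parameters are assumed purely for illustration.

```python
# Sketch of the statistically sounder estimate: start at the mean GPA and
# scale the adjustment by the weak correlation between the cue and GPA.
# The correlation and distribution values are assumed for illustration.
from scipy.stats import norm

gpa_mean, gpa_sd = 3.0, 0.5          # assumed GPA distribution
perceived_percentile = 0.90          # same cue as in the heuristic sketch
correlation = 0.2                    # assumed weak link: early reading vs. later GPA

cue_z = norm.ppf(perceived_percentile)            # z-score implied by the cue
regressed_gpa = gpa_mean + correlation * cue_z * gpa_sd
print(f"Regression-adjusted estimate: {regressed_gpa:.2f}")  # roughly 3.1
```

Under these assumed numbers, the adjusted estimate lands just above the 3.0 average, far below the 3.7 or 3.8 that percentile matching produces.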

A soundly engineered AI system, on the other hand, isn't likely to make the same mistake. Properly built AI will use all the data it has and won't over-adjust based on one piece of new information.

Engineers should keep this in mind when building AI, said Tenenbaum. "If there are ways in which human thinking is a model to be followed, we should be following it," he said. "If there are ways in which human thinking is flawed, we should be figuring out how to avoid those in the AIs we build."
