Humans and AI: Problem finders and problem solvers

Last week's announcement of AlphaCode, DeepMind's source-code-generating deep learning system, created a lot of excitement, some of it unwarranted, surrounding advances in artificial intelligence.

As I've mentioned in my deep dive on AlphaCode, DeepMind's researchers have done a great job in bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.

However, the sometimes-bloated coverage of AlphaCode by the media highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions meant for humans.

For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence.

Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.

Think of every competition as a function that maps a problem to a solution. You're provided with a problem, whether it's a chessboard, a go board, a programming challenge, or a science question, and you must map it to a solution. The size of the solution space depends on the problem. For example, go has a much larger solution space than chess because it has a larger board and more possible moves at each turn. Programming challenges have an even vaster solution space: there are hundreds of possible instructions that can be combined in nearly endless ways.
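To get a sense of these numbers, here's a rough back-of-envelope sketch in Python. The branching factors used (about 35 for chess, about 250 for go) are common approximations, not exact figures:

```python
def tree_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences of a given depth."""
    return branching_factor ** depth

# ~35 (chess) and ~250 (go) are rough, commonly cited averages.
for game, b in [("chess", 35), ("go", 250)]:
    print(f"{game}: ~{tree_size(b, 10):.2e} sequences at depth 10")
```

Even at a lookahead of just 10 moves, go's tree is several hundred million times larger than chess's.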

But in each case, a problem is matched with a solution, and the solution can be weighed against an expected outcome, whether it's winning or losing a game, answering a question correctly, maximizing a reward, or passing the test cases of a programming challenge.

When it comes to us humans, these competitions really test the limits of our intelligence. Given the computational limits of the brain, we can't brute-force our way through the solution space. No chess or go player can evaluate thousands, let alone millions, of moves at each turn in a reasonable amount of time. Likewise, a programmer can't randomly try every possible sequence of instructions until one solves the problem.

Instead, we start with a reasonable intuition (abduction), match the problem to previously seen patterns (induction), and apply a set of known rules (deduction), iterating until we arrive at an acceptable solution. We hone these skills through training and practice, and we gradually become better at finding good solutions to these competition problems.

In the process of mastering these competitions, we develop many general cognitive skills that can be applied to other problems, such as planning, strategizing, design patterns, theory of mind, synthesis, decomposition, and critical and abstract thinking. These skills come in handy in other real-world settings, such as business, education, scientific research, product design, and the military.

In more specialized fields, such as math or programming, tests take on more practical implications. For example, in coding competitions, the programmer must decompose a problem statement into smaller parts, then design an algorithm that solves each part and put it all back together. The problems often have interesting twists that require the participant to think in novel ways instead of using the first solution that comes to mind.

Interestingly, a lot of the challenges you'll see in these competitions have very little to do with the types of code programmers write daily, such as pulling data from a database, calling an API, or setting up a web server.

But you can expect a person who ranks high in coding competitions to have many general skills that require years of study and practice. This is why many companies use coding challenges as an important tool to evaluate potential hires. In other words, competitive coding is a good proxy for the effort that goes into making a good programmer.

When competitions, games, and tests are applied to artificial intelligence, the computational limits of the brain no longer apply. And this creates the opportunity for shortcuts that the human mind can't achieve.

Take chess and go, two board games that have received much attention from the AI community in the past decades. Chess was once called the drosophila of artificial intelligence. In 1997, IBM's Deep Blue defeated chess grandmaster Garry Kasparov. But Deep Blue did not have the general cognitive skills of its human opponent. Instead, it used the sheer computational power of IBM's supercomputer to evaluate millions of moves every second and choose the best one, a feat that is beyond the capacity of the human brain.
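As an illustration of what that kind of exhaustive game-tree search looks like, here is a minimal alpha-beta minimax sketch in Python. The `game` interface (`moves`, `apply`, `evaluate`, `is_terminal`) is a hypothetical stand-in; Deep Blue's real engine ran on custom chess hardware and was far more sophisticated:

```python
def minimax(state, game, depth,
            alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Exhaustively search the game tree to a fixed depth,
    pruning branches that cannot affect the final choice."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        best = float("-inf")
        for move in game.moves(state):
            best = max(best, minimax(game.apply(state, move), game,
                                     depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: opponent won't allow this branch
                break
        return best
    best = float("inf")
    for move in game.moves(state):
        best = min(best, minimax(game.apply(state, move), game,
                                 depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

The intelligence here is raw speed: the quality of play scales with how many positions per second the hardware can evaluate, not with anything resembling human insight.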

At the time, scientists and futurists thought that the Chinese board game go would remain beyond the reach of AI systems for a good while because it had a much larger solution space and required computational power that would not become available for several decades. They were proven wrong in 2016 when AlphaGo defeated go grandmaster Lee Sedol.

But again, AlphaGo didn't play the game like its human opponent. It took advantage of advances in machine learning and computation hardware. It had been trained on a large dataset of previously played games, far more than any human could play in an entire lifetime. It used deep reinforcement learning and Monte Carlo Tree Search (MCTS), plus, once again, the computational power of Google's servers, to find optimal moves at each turn. It didn't do a brute-force survey of every possible move like Deep Blue, but it still evaluated millions of moves at every turn.
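For flavor, here is a bare-bones MCTS loop in Python showing its four phases. The `game` interface is hypothetical, and this sketch leaves out the learned policy and value networks that guided AlphaGo's search, as well as the reward bookkeeping for alternating players:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    """Upper Confidence Bound: trade off exploitation vs. exploration."""
    if node.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_state, game, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend to a leaf, always taking the best UCB child.
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: add children for the leaf's legal moves.
        if not game.is_terminal(node.state):
            node.children = [Node(game.apply(node.state, m), node)
                             for m in game.moves(node.state)]
            node = random.choice(node.children)
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not game.is_terminal(state):
            state = game.apply(state, random.choice(game.moves(state)))
        reward = game.reward(state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Pick the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).state
```

AlphaGo's advance, roughly, was to guide this search with deep networks trained on games instead of relying on uniform expansion and purely random playouts.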

AlphaCode is an even more impressive feat. It uses transformers, a type of deep learning architecture that is especially good at processing sequential data, to map a natural language problem statement to thousands of possible solutions. It then uses filtering and clustering to choose the 10 most promising solutions proposed by the model. Impressive as it is, however, AlphaCode's solution-development process is very different from that of a human programmer.
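Schematically, that pipeline looks something like the sketch below. Every callable here (`model.generate`, `run_tests`, `cluster_by_behavior`) is a hypothetical placeholder, not DeepMind's actual code:

```python
def solve(problem, model, example_tests,
          run_tests, cluster_by_behavior, n_samples=100_000):
    """Sample-filter-cluster pipeline in the spirit of AlphaCode."""
    # 1. Sample a large number of candidate programs from the model.
    candidates = [model.generate(problem) for _ in range(n_samples)]

    # 2. Filter: keep only candidates that pass the example tests
    #    included in the problem statement.
    passing = [c for c in candidates if run_tests(c, example_tests)]

    # 3. Cluster candidates that behave identically on generated inputs,
    #    then submit one representative from each of the largest clusters.
    clusters = cluster_by_behavior(passing)
    return [cluster[0] for cluster in clusters[:10]]  # up to 10 submissions
```

Note what's missing: there is no decomposition, no planning, no stepwise reasoning about the problem, just massive sampling followed by mechanical selection.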

When thought of as the equivalent of human intelligence, advances in AI lead us to all kinds of wrong conclusions, such as robots taking over the world, deep neural networks becoming conscious, and AlphaCode being as good as an average human programmer.

But when viewed in the framework of searching solution spaces, these advances take on a different meaning. In each of the cases described above, even if the AI system produces outcomes that are similar to or better than those of humans, the process it uses is very different from human thinking. In fact, these achievements prove that when you reduce a competition to a well-defined search problem, then with the right algorithm, rules, data, and computational power, you can create an AI system that finds the right solution without going through any of the intermediate skills that humans acquire when they master the craft.

Some might dismiss this difference as long as the outcome is acceptable. But when it comes to solving real-world problems, those intermediate skills, taken for granted and not measured in the tests, are often more important than the test scores themselves.

What does this mean for the future of human intelligence? I like to think of AI, at least in its current form, as an extension of rather than a replacement for human intelligence. Technologies such as AlphaCode cannot think about and design their own problems, one of the key elements of human creativity and innovation, but they are very good problem solvers. This creates unique opportunities for very productive cooperation between humans and AI: humans define the problems and set the rewards or expected outcomes, and the AI helps by finding potential solutions at superhuman speed.

There are several interesting examples of this symbiosis, including a recent project in which Google's researchers formulated a chip floorplanning task as a game and had a reinforcement learning model evaluate numerous potential solutions until it found an optimal arrangement. Another popular trend is the emergence of tools like AutoML, which automate aspects of developing machine learning models by searching for optimal configurations of architectures and hyperparameter values. AutoML is making it possible for people with little experience in data science and machine learning to develop ML models and apply them to their applications. Likewise, a tool like AlphaCode will free programmers to think more deeply about specific problems, formulate them into well-defined statements and expected results, and have the AI system generate novel solutions that might suggest new directions for application development.
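To give a flavor of what such tools automate, here is a minimal random-search sketch over a hypothetical hyperparameter space; `train_and_score` stands in for a real training-and-validation loop, and the search-space values are made up for illustration:

```python
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "num_layers": [2, 4, 8],
    "hidden_units": [64, 128, 256],
}

def random_search(train_and_score, trials=50):
    """Try random configurations and keep the best-scoring one."""
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_score(config)  # e.g., validation accuracy
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

The human decides what "good" means (the scoring function) and what is worth searching over; the machine grinds through the space.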

Whether these incremental advances in deep learning will eventually lead to AGI remains to be seen. But what's for sure is that the maturation of these technologies will gradually create a shift in task assignment, where humans become problem finders and AIs become problem solvers.
