AI, the brain, and cognitive plausibility – TechTalks

Posted: February 5, 2022 at 5:09 am

By Rich Heimann

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Is AI about the brain?

The answer is: often, but not always. Many insiders and most outsiders believe that if a solution looks like a brain, it might act like a brain, and that if a solution acts like a brain, it will solve other problems the way humans solve them. What insiders have learned is that solutions that are not cognitively plausible teach them nothing about intelligence, or at least nothing more than they knew before they started. This is the driving force behind connectionism and artificial neural networks.

That is also why problem-specific solutions designed to play to their strengths (strengths that are not psychologically or cognitively plausible) fall short of artificial intelligence. For example, Deep Blue is not real AI because it is not cognitively plausible and will not solve other problems. The accomplishment, while profound, is an achievement in problem-solving, not intelligence. Nevertheless, chess-playing programs like Deep Blue have shown that the human mind can no longer claim superiority over a computer on this task.

Let's consider approaches to AI that are not based on the brain but still seek cognitive plausibility. Shane Legg and Marcus Hutter are both part of Google DeepMind. They explain the goal of artificial intelligence as an autonomous, goal-seeking system, [for which] "intelligence measures an agent's ability to achieve goals in a wide range of environments."
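For readers who want the formal version, Legg and Hutter's 2007 paper distills this definition into a single "universal intelligence" measure. The formula below paraphrases their notation rather than quoting the article: E is a set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected cumulative reward agent π earns in μ.

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Simpler environments receive more weight, so an agent scores well by achieving goals across many environments rather than by excelling at one.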

This definition is an example of behaviorism. Behaviorism was a reaction to 19th-century philosophy of mind, which focused on the unconscious, and to psychoanalysis, which was ultimately difficult to test experimentally. John Watson, professor of psychology at Johns Hopkins University, spearheaded the movement in the first half of the twentieth century. Watson's 1913 "Behaviorist Manifesto" sought to reframe psychology as a natural science by focusing only on observable behavior, hence the name.

Behaviorism aims to predict human behavior by appreciating the environment as a determinant of that behavior. By concentrating only on observable behavior and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about the brain. In fact, to the behaviorist, intelligence does not have mental causes. All the real action is in the environment, not the mind. Ironically, DeepMind embraces the philosophy of operant conditioning, not the mind.

In operant conditioning, also known as reinforcement learning, an agent learns that getting a reward depends on action within its environment. The behavior is said to have been reinforced when the action becomes more frequent and purposeful. This is why DeepMind does not define intelligence: it believes there is nothing special about it. Instead, intelligence is stimulus and response. While an essential component of human intelligence is the input it receives from the outside world, and learning from the environment is critical, behaviorism purges the mind and other internal cognitive processes from intellectual discourse.
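To make the operant-conditioning framing concrete, here is a minimal, self-contained sketch of reward-driven learning: an epsilon-greedy agent on a three-armed bandit. The reward probabilities, exploration rate, and variable names are illustrative assumptions, not anything taken from DeepMind's systems.

```python
# A minimal sketch of reward-driven (operant-conditioning-style) learning:
# an epsilon-greedy agent on a 3-armed bandit. All values are illustrative.
import random

REWARD_PROBS = [0.2, 0.5, 0.8]   # hidden payout probability of each action
EPSILON = 0.1                    # exploration rate
STEPS = 5000

values = [0.0] * len(REWARD_PROBS)  # estimated value of each action
counts = [0] * len(REWARD_PROBS)

for _ in range(STEPS):
    # Explore occasionally, otherwise exploit the best-known action.
    if random.random() < EPSILON:
        action = random.randrange(len(REWARD_PROBS))
    else:
        action = max(range(len(REWARD_PROBS)), key=lambda a: values[a])

    # The environment returns a reward; the agent never "understands" why.
    reward = 1.0 if random.random() < REWARD_PROBS[action] else 0.0

    # Incremental average update: rewarded actions become more likely to recur.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print("Estimated action values:", [round(v, 2) for v in values])
```

Notice that the agent holds no model of why an action pays off; actions that are rewarded simply become more frequent, which is exactly the behaviorist picture described above.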

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton of DeepMind, titled "Reward Is Enough." The authors argue that maximizing reward is enough to drive behavior that exhibits most, if not all, attributes of intelligence. However, reward is not enough. The claim is simplistic, vague, and circular, and it explains little because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: suppose you or your team talk about how intelligent or cognitively plausible your solution is. I see this kind of solution-centric argument quite a bit. If so, you are not thinking enough about a specific problem or the people impacted by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and its solutions are never cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal and you seek an optimal solution to accomplish that goal, then how cognitively plausible a solution is does not matter. How a problem is solved is always secondary to whether it is solved, and if you don't care how, you can solve just about anything. The goal itself and how well a solution fits the problem matter more than how the goal is accomplished, whether the solution was self-referencing, or what the solution looked like after you failed to solve the problem.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, what it is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.
