4 Main Types of Artificial Intelligence – G2

Posted: March 31, 2020 at 7:03 am

Although AI is undoubtedly multifaceted, there are specific types of artificial intelligence under which extended categories fall.

What are the four types of artificial intelligence?

There is a plethora of terms and definitions in AI, which can make it difficult to navigate the differences between categories, subsets, and types of artificial intelligence; and no, they're not all the same. Some subsets of AI include machine learning, big data, and natural language processing (NLP); however, this article covers the four main types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

These four types of artificial intelligence comprise smaller aspects of the general realm of AI.

Reactive machines are the most basic type of AI system. This means that they cannot form memories or use past experiences to influence present decisions; they can only react to currently existing situations, hence "reactive." An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer developed by IBM, with roots in research dating to the mid-1980s.

Deep Blue was created to play chess against a human competitor with the intent of defeating that competitor. It was programmed with the ability to identify the chessboard and its pieces and to understand the pieces' functions. Deep Blue could make predictions about what moves it should make and the moves its opponent might make, giving it an enhanced ability to predict, select, and win. In a series of matches played between 1996 and 1997, Deep Blue defeated Russian chess grandmaster Garry Kasparov, winning the 1997 rematch 3½ to 2½ and becoming the first computer system to defeat a reigning world champion in a match under standard tournament conditions.

Deep Blue's unique skill of accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also means it has no concept of past or future; it only comprehends and acts on the present world and the components within it. To simplify, reactive machines are programmed for the here and now, but not the before and after.

Reactive machines have no concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors.
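To make the idea concrete, here is a minimal sketch, in Python, of a purely reactive agent. It is an illustration of the concept only, not Deep Blue's actual code: the scoring rule, the move names, and the candidate positions are invented placeholders. The point is that the agent is a pure function of the current position and retains nothing between calls.

```python
# A minimal sketch (not Deep Blue's actual code) of a reactive agent:
# a pure function from the current position to a move, with no memory,
# so identical inputs always produce identical outputs.

def evaluate(position: dict) -> float:
    """Hypothetical scoring rule: material plus a small mobility bonus."""
    return position["material"] + 0.1 * position["mobility"]

def choose_move(candidate_positions: dict) -> str:
    """Pick the move whose resulting position scores highest right now.

    candidate_positions maps each legal move to the position it would
    produce; nothing from earlier games or earlier calls is consulted.
    """
    return max(candidate_positions, key=lambda move: evaluate(candidate_positions[move]))

# Invented example positions for three legal opening moves.
positions_after_move = {
    "e2e4": {"material": 0, "mobility": 30},
    "d2d4": {"material": 0, "mobility": 28},
    "g1f3": {"material": 0, "mobility": 25},
}
print(choose_move(positions_after_move))  # -> "e2e4", every single time
```

Run the same input a thousand times and the output never changes, which is exactly the "stagnation in recurring actions and behaviors" described above.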

Limited memory comprises machine learning models that derive knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory machines learn from the past by observing actions or data fed to them in order to build experiential knowledge.

Although limited memory builds on observational data in conjunction with the pre-programmed data the machines already contain, these observations are fleeting: they are retained only long enough to inform a decision. An existing form of limited memory is the autonomous vehicle.

Autonomous vehicles, or self-driving cars, use the principle of limited memory in that they depend on a combination of observational and pre-programmed knowledge. To observe and understand how to properly drive and function among human-dependent vehicles, self-driving cars read their environment, detect patterns or changes in external factors, and adjust as necessary.

Not only do autonomous vehicles observe their environment, but they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited memory AI could take as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited memory AI.
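As a rough illustration of the limited-memory idea, the sketch below keeps a short rolling buffer of recent observations and uses the trend across that buffer to decide what to do. It is a toy, not any manufacturer's actual driving stack; the speed thresholds and the "lead vehicle" framing are assumptions made purely for the example.

```python
from collections import deque

# A simplified limited-memory agent (illustrative only, not a real
# autonomous-driving system): it retains a short window of recent
# observations and bases its decision on the trend across that window.

class LimitedMemoryAgent:
    def __init__(self, window: int = 5):
        # Recent speed readings of the vehicle ahead; older readings
        # automatically fall out of the buffer, so the memory is fleeting.
        self.recent_speeds = deque(maxlen=window)

    def observe(self, lead_vehicle_speed: float) -> None:
        self.recent_speeds.append(lead_vehicle_speed)

    def decide(self) -> str:
        # With fewer than two observations there is no trend to read.
        if len(self.recent_speeds) < 2:
            return "maintain speed"
        trend = self.recent_speeds[-1] - self.recent_speeds[0]
        if trend < -2.0:   # lead vehicle is slowing noticeably
            return "brake"
        if trend > 2.0:    # lead vehicle is pulling away
            return "accelerate"
        return "maintain speed"

agent = LimitedMemoryAgent()
for speed in [30.0, 29.0, 27.5, 25.0]:  # the car ahead is braking
    agent.observe(speed)
print(agent.decide())  # -> "brake"
```

Unlike the reactive sketch earlier, the same single observation can lead to different actions depending on what the agent saw a few moments before, which is the essence of limited memory.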


Theory of mind describes machines with decision-making ability comparable to that of a human mind. While some machines currently exhibit humanlike capabilities (voice assistants, for instance), none are fully capable of holding conversations to human standards. One component of human conversation is emotional capacity: sounding and behaving the way a person would within the standard conventions of conversation.

This future class of machine would need to understand that people have thoughts and emotions that affect behavioral output and thus influence a theory of mind machine's own thought process. Social interaction is a key facet of human interaction, so to make theory of mind machines a reality, the AI systems controlling these now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

From there, theory of mind machines would have to use the information derived from people to adapt how they communicate with and respond to different people and situations. Theory of mind is a highly advanced form of proposed artificial intelligence: it would require machines to thoroughly recognize rapid shifts in human emotional and behavioral patterns and to understand that human behavior is fluid, so theory of mind machines would have to be able to learn at a moment's notice.
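Because theory of mind AI does not yet exist, the following is only a toy sketch of the loop described above: classify the other person's emotional state, retain it in a simple model of that person, and condition the response on it. The emotion labels, keyword lists, and canned responses are all invented for illustration.

```python
# A toy sketch of the theory-of-mind idea (hypothetical; no such system
# exists today): keep a simple model of the other person's emotional
# state and choose responses conditioned on it.

EMOTION_KEYWORDS = {
    "frustrated": ["annoyed", "stuck", "again", "broken"],
    "happy": ["great", "thanks", "love", "awesome"],
}

RESPONSES = {
    "frustrated": "I can tell this is frustrating. Let's slow down and fix it together.",
    "happy": "Glad to hear it! Want to keep going?",
    "neutral": "Understood. What would you like to do next?",
}

def infer_emotion(utterance: str) -> str:
    """Crude keyword-based guess at the speaker's emotional state."""
    text = utterance.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in text for word in keywords):
            return emotion
    return "neutral"

def respond(utterance: str, person_model: dict) -> str:
    """Update the model of the other person, then choose a response."""
    emotion = infer_emotion(utterance)
    person_model["last_emotion"] = emotion            # remember how they felt
    person_model.setdefault("history", []).append(emotion)
    return RESPONSES[emotion]

model = {}
print(respond("This is broken again and I'm annoyed.", model))
print(model)  # the agent retains what it inferred about the person
```

Real theory of mind AI would of course go far beyond keyword matching, but the sketch shows the structural difference from the earlier types: the agent maintains a model of another mind, not just of its own inputs.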

Some elements of theory of mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in 2000 and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate those emotions with its own face, which was structured with humanlike facial features: eyes, lips, ears, eyebrows, and eyelids.

Sophia, on the other hand, is a humanoid robot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being, as well as her ability to see (via image recognition) and to respond to interactions with appropriate facial expressions.


These two humanlike robots are examples of movement toward full theory of mind AI systems materializing in the near future. While neither can fully hold a human conversation with an actual person, both robots have aspects of emotive ability akin to that of their human counterparts, one step toward assimilating seamlessly into human society.

Self-aware AI involves machines that have human-level consciousness. This form of AI does not currently exist, but it would be considered the most advanced form of artificial intelligence known to man.

Facets of self-aware AI include the ability not only to recognize and replicate humanlike actions, but also to think for itself, have desires, and understand its own feelings. Self-aware AI is, in essence, an advancement and extension of theory of mind AI: where theory of mind focuses only on comprehending and replicating human practices, self-aware AI goes a step further by implying that it can and will have self-guided thoughts and reactions.

With early elements of tier three already emerging, the idea that we could eventually reach the fourth (and final?) tier of AI doesn't seem far-fetched.

But for now, it's important to focus on perfecting all aspects of the second and third tiers of AI. Sloppily speeding through each AI tier could be detrimental to the future of artificial intelligence for generations to come.

TIP: Find out what AI software exists today and see how it can help with your business processes.

Ready to learn more in-depth information about artificial intelligence? Check out articles on the benefits and risks of AI as well as the innovative minds behind the first genderless voice assistant!
