What is Artificial Intelligence? – Definition & History | Study.com

Brief History

The field of artificial intelligence as we know it today began in the 1940s. World War II, with its need for rapid technological advancement to fight the enemy, spurred the creation of this field thanks to the likes of mathematician Alan Turing and neurophysiologist Grey Walter. These men, and many others like them, began exchanging ideas about the various possibilities of intelligent machines and what would count as an intelligent machine.

It wasn't until the 1950s, however, that the actual term 'artificial intelligence' was coined, by computer scientist John McCarthy. During this time, scientist Marvin Minsky's ideas on how to pre-program computers with rules of intelligence came to dominate the field, and would do so for decades. In fact, he and McCarthy received a great deal of funding to develop AI in the hopes of gaining the upper hand against the Soviet Union. However, Minsky's predictions about artificial intelligence (namely, the pace of its advancement) fell woefully flat over time.

It was also in the late 1960s that the first mobile, decision-making robot capable of various actions was made. Its name was Shakey. Shakey could create a map of its surroundings before moving, but it was very slow at sensing the environment around it. Shakey was a fitting example of the shaky ground AI was on at the time.

This is because in the 1970s, owing to a derisive and ultimately mistaken conclusion about AI's capabilities by mathematician Sir James Lighthill, AI hit a snag. Funding for AI projects was massively slashed, and very little development occurred during this decade.

But by the early 1980s, AI started to receive funding for commercial projects, as companies noticed that AI had uses in specific niches that could save them money. In the 1990s, AI had a mini-revolution of sorts. Many in the field discarded Minsky's approach to AI and instead adopted the approach pushed by roboticist Rodney Brooks. Instead of pre-programming a computer with algorithms of intelligence, as Minsky advised, Brooks advocated building AI with neural networks that worked like brain cells and thus learned new behaviors. Brooks didn't come up with this idea himself, but he did help bring it back to life. In fact, you can thank Brooks' company for the first widely used home robot, the Roomba vacuum.
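The contrast between Minsky's and Brooks's approaches can be made concrete with a small illustration. The code below is not from the article; it is a minimal sketch of the learning-from-examples idea, using a classic single-neuron "perceptron" (the simplest artificial neural network) that learns the logical AND function from examples rather than having the rule hard-coded. All names and parameters here are illustrative choices, not anything specified by the source.

```python
# Instead of hard-coding a rule, a single artificial neuron
# adjusts its connection weights from examples until it
# behaves correctly - the core idea behind neural networks.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a one-neuron network from (inputs, target) pairs."""
    w = [0.0, 0.0]   # connection weights, start with no knowledge
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # the neuron "fires" (outputs 1) if its weighted sum crosses zero
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # nudge the weights toward the correct answer
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the neuron logical AND purely from examples
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_examples])  # → [0, 0, 0, 1]
```

A single neuron like this can only learn very simple patterns; systems like the ones described later in the article stack many such units into large networks, but the learn-from-data principle is the same.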

Besides the Roomba vacuum, the 2000s had a lot going on in AI. Maybe you've seen YouTube clips of the robot BigDog? It looks like a big, scary, metallic dog-horse of some sort. It was built to function as an artificial pack animal for the military in rough terrain. Or perhaps you've heard of PackBot? This is a bomb-disposal robot that has been used in the Middle East by U.S. troops.

Even if you haven't heard of these incredible machines, you've almost certainly encountered speech recognition on your cell phone, the kind that learns your voice and gets better over time. That's another great example of AI in the modern world.

If you're a fan of Jeopardy!, then you saw AI at work under the name 'Watson', a machine system that beat the two top Jeopardy! champions of all time at answering a wide variety of questions. Watson's technology now helps give doctors recommendations about their patients.

Today's artificial intelligence touches almost every aspect of society, from the military and entertainment to your cell phone and driverless cars, from real-time voice translation to a vacuum that knows where and how to clean your floor without you, from your own computer to your doctor's office.

So where is AI going in the future? No one can tell you for sure, but here are some possible ideas:

Some people claim that, no matter what, machines will never be truly intelligent. However, it's a matter of debate what intelligence actually is and how you can actually gauge it. So far, AI has been limited to very specific tasks, and in some of those tasks, such as playing chess, it has become better than humans. In more complex tasks, like speech recognition, it's not as good as you and I (at least not yet). In some limited ways, computers are already more intelligent than people. For instance, unlike people, they aren't influenced by unintelligent superstitions (unless programmed to be). Whether a machine will ever truly surpass all of our intellectual abilities and be able to learn new things and make decisions on par with or better than humans is simply unknown. Many will argue yes, and many will argue no. Perhaps there will be no actual delineation between AI and human in the future; we may simply, albeit slowly, merge into one and become completely inseparable.

Artificial intelligence (AI) is the ability of a computer to perform tasks similar (at least in a limited sense) to human learning and decision making. AI's roots go back to the 1940s, with Alan Turing and Grey Walter. In the 1950s, John McCarthy coined the term 'artificial intelligence', and Marvin Minsky was a well-known scientist in the field. In the 1980s, companies began using AI to save money, and in the 1990s and 2000s the field of AI really took off with the likes of Watson, speech recognition, and much more.
