What Does Artificial Intelligence Really Mean, Anyway?

The great promise--and great fear--of Artificial Intelligence has always been that some day, computers would be able to mimic the way our brains work. However, after years of progress, AI isn't just a long way from HAL 9000; it has gone in an entirely different direction. Some of the biggest tech companies in the world are beginning to implement AI in some form, and it looks nothing like we thought it would.

In a piece for the BBC's website, writer Tom Chatfield examines recent AI initiatives from companies like Facebook--which announced last week that it would partner with NYU to build an artificial intelligence team aiming to develop a computer that can draw insights from enormous data sets--and argues that such developments run completely contrary to the classic definition of AI as a field.

Chatfield's argument centers on a feature in the Atlantic about cognitive scientist Douglas Hofstadter, who believes that what Facebook is doing, along with other recent advances like IBM's Watson, doesn't qualify as "intelligence." Writes Chatfield:

For Hofstadter, the label "intelligence" is simply inappropriate for describing insights drawn by brute computing power from massive data sets because, from his perspective, the fact that results appear smart is irrelevant if the process underlying them bears no resemblance to intelligent thought. As he put it to interviewer James Somers, "I don't want to be involved in passing off some fancy program's behaviour for intelligence when I know that it has nothing to do with intelligence. And I don't know why more people aren't that way."

To that end, Chatfield argues that we've created something entirely different. Instead of machines that think like humans, we now have machines that think in an entirely different, perhaps even alien, way. Continuing to shoehorn them into replicating our natural thought processes could be limiting.

Some are inclined to agree. Writing for the MIT Technology Review, Tom Simonite reiterates just how bad computers are at tasks that are easy for brains, like image recognition. Simonite attributes this to the way we've been building computer chips: it will be impossible for computers to imitate non-linear thought processes as long as we continue to use hardware designed to execute linear sequences of instructions--the CPU-RAM design known as the von Neumann architecture. Instead, an answer may lie with neuromorphic chips like IBM's SyNAPSE, which are specifically designed to work the way our brains do.

The problem, Simonite writes, will be making them work on a larger scale: "It is still unclear whether scaling up these chips will produce machines with more sophisticated brainlike faculties. And some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities."

As it turns out, copying biology is really damn hard. While scientists like Hofstadter hold up the Platonic ideal of AI as a computer that functions the same way our brains do, perhaps the deep learning approach embraced by Google is the means by which we get there. Maybe you don't need neuromorphic chips to build a real-life HAL. Maybe you just need lots and lots of data.
