How logic games have advanced AI thinking – ComputerWeekly.com

Posted: August 6, 2017 at 3:42 am

Since the first industrial revolution, inventors have been driven by the idea that an automaton could mimic human intelligence.

There was even an attempt at a chess-playing automaton, the Mechanical Turk. This later turned out to be a hoax, as its inventor had someone sit inside the machine to make the supposedly intelligent chess moves against its human opponent.

Just over two decades since the world's first robot chess champion, Deep Blue, took its bow, artificial intelligence (AI) is breaking new ground technologically.

In March 2016, AlphaGo, from Google's DeepMind subsidiary, proved that an AI could beat the best at the ancient game of Go, an achievement many had predicted would take AI many more years.

AlphaGo's success suggests that the pace of AI advancement is accelerating. In time, it seems inevitable that AI will test what it means to be human.

There have been many heroic attempts at AI over the past 70 years, leading to several breakthroughs in machine intelligence.

But beating world champion Garry Kasparov in a chess match, which is what IBM achieved with Deep Blue, was arguably more a matter of raw processing power than of AI prowess and logical reasoning.

In the UK, the first proper machine that was tasked with playing a game was the Hollerith Electronic Computer (HEC), which is currently on display at The National Museum of Computing (TNMOC) at Bletchley Park.

The machine was displayed to the public in 1953 at the Business Efficiency Exhibition in London. Raymond Bird, the electronics engineer who was tasked with developing the HEC, described the demonstration of the noughts and crosses game as a great success in showing the potential power of computers.

Andrew Herbert, chairman of TNMOC, says HEC became an instrument of the Cold War and used AI to help it achieve this. "It was programmed to do automatic machine translation," he says. The computer was set up to convert written Russian into English, the sort of demo that Microsoft often does today to show off the idea of a Star Trek-like Universal Translator.

In the 1950s, AI was also developed to support image recognition for the analysis of satellite photos during the Cold War. Again, the idea of learning to identify cats, or any other object, in a series of images is widely used today. "AI was about clever pattern recognition," says Herbert.

"But by the 1970s, AI scientists were attempting to second-guess how the human brain worked," he says. That is something neuroscientists still do not truly understand.

During the 1980s, Japan announced what it called the Fifth Generation computer initiative. This was the dawn of workstations, and Japan saw that powerful computers could be made more intelligent. "It led to the development of expert systems: machines that could become domain experts in areas such as medical diagnosis," says Herbert.

Such expert systems captured a finite amount of information on a given subject domain, allowing less expert users to work on more complex problems without needing to have a specialist on hand.
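
As a rough illustration of the idea, and not a reconstruction of any particular system of the era, an expert system can be thought of as a set of if-then rules written down by a specialist, which the machine then matches against whatever facts a user supplies. A minimal Python sketch, with invented rules and findings, might look like this:

```python
# Minimal sketch of a rule-based expert system: domain knowledge is captured
# as if-then rules, and conclusions are drawn by matching those rules against
# the facts supplied by a (possibly non-expert) user. The rules are invented
# purely for illustration.
RULES = [
    ({"fever", "cough", "loss_of_smell"}, "suspected viral infection"),
    ({"fever", "stiff_neck"}, "refer urgently: possible meningitis"),
    ({"cough"}, "advise rest and monitor symptoms"),
]

def diagnose(findings):
    """Return the conclusion of every rule whose conditions are all present."""
    findings = set(findings)
    return [conclusion for conditions, conclusion in RULES if conditions <= findings]

print(diagnose({"fever", "cough", "loss_of_smell"}))
# ['suspected viral infection', 'advise rest and monitor symptoms']
```

Real expert systems of the period chained together hundreds or thousands of such rules and could usually explain which rules had fired, which is what made them usable by non-specialists.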

With ubiquitous internet access, much more data became available, which led to what is now called machine learning. A big driver was search engine development by the likes of Bing, Google and AltaVista and, later, the recommendation engines, all of which are based on pattern recognition technology.

The original man versus machine contest concluded on 11 May 1997, when an IBM computer called Deep Blue defeated the reigning world chess champion, Garry Kasparov, grabbing the world's attention and imagination. The six-game match lasted several days and ended with two wins for IBM, one for Kasparov and three draws.

But as with the Mechanical Turk of the 18th century, AI did not play much of a role in early logic game conquests.

Deep Blue was not a true AI because it relied on a brute-force algorithm, searching through vast numbers of possible chess moves rather than reasoning about the game.
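
To see what brute force means here, consider a far smaller game. The sketch below, purely illustrative and nothing like Deep Blue's specialised search hardware, exhaustively explores every continuation of a noughts and crosses position, the same game HEC demonstrated in 1953, and picks the move with the best guaranteed outcome:

```python
# Illustrative brute-force game-tree search (minimax) for noughts and crosses.
# The program "understands" nothing about the game: it simply tries every
# possible sequence of moves and scores the final positions.
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) for `player`, scoring +1 if X wins and -1 if O wins."""
    win = winner(board)
    if win is not None:
        return (1 if win == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                       # board full: a draw
    best = None
    for m in moves:
        board[m] = player                    # try the move...
        score, _ = best_move(board, "O" if player == "X" else "X")
        board[m] = " "                       # ...then undo it
        if best is None or (player == "X" and score > best[0]) \
                        or (player == "O" and score < best[0]):
            best = (score, m)
    return best

print(best_move([" "] * 9, "X"))   # (0, 0): with perfect play the game is a draw
```

Noughts and crosses has only a few hundred thousand lines of play, so this exhaustive approach finishes in moments. Chess is astronomically larger, which is why Deep Blue needed purpose-built hardware evaluating hundreds of millions of positions per second, and why the same trick fails entirely for Go.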

Primary Key Associates co-founder Andrew Lea has had an interest in AI for 35 years. His company uses the technology in data analytics to identify "unknown knowns" in datasets.

Lea says the reason logical games such as chess and Go are strongly associated with AI is that they are closed domains. "People were so much better than computers at playing these games," he says. "Now we have the conundrum where computers are getting much better."

Lea wrote his first chess program for the BBC Model B, and recently developed a version for the Arduino microcontroller board. "Writing good chess programs hasn't really increased our understanding of how people think," he says. "I wrote a chess program 30 years ago. I remember writing chess on the BBC B microcomputer and it's about how to make it smart on a small 8-bit computer. I think what makes AI is the ability to be smart and big, where big equals knowledge and experience."

For Lea, being smart is the opposite of brute force, where sheer computational power is thrown at the problem of identifying the best possible move for the robot chess player to make. "It's about pattern recognition, knowing intuitively what you learnt from a previous game, and how this can make a difference in the current game," he says.

During the first decade of the new millennium, a step-change occurred as computational power increased to the point where neural networks and deep learning algorithms could be applied to AI.

Deep learning, to use IBM's definition, is based on the human brain's decision-making process. By building multiple layers of abstraction, deep learning technology can solve complex semantic problems.
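
In practical terms, those layers of abstraction are functions stacked on top of one another, with each layer's output feeding the next. A toy sketch using NumPy, with random rather than learned weights and shown purely to illustrate the layered structure, looks like this:

```python
import numpy as np

# Toy illustration of "multiple layers of abstraction": each layer transforms
# the previous layer's output, so later layers work with progressively more
# abstract features. In a real deep learning system the weights would be
# learned from data by backpropagation; here they are random placeholders.
rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """A fully connected layer followed by a ReLU non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # raw input -> first abstraction
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # first -> second abstraction
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)   # second abstraction -> prediction

x = rng.normal(size=(1, 4))      # one example with four raw input values
h1 = layer(x, W1, b1)
h2 = layer(h1, W2, b2)
prediction = h2 @ W3 + b3        # final linear layer, no ReLU on the output
print(prediction.shape)          # (1, 2)
```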

In 2011, IBM showcased its deep learning technology with the Watson computer, which beat two of the most successful human contestants on the long-running US TV game show Jeopardy!. The game show requires participants to provide a question in response to general knowledge clues. In the event, Watson marked a breakthrough in AI with its understanding of natural language and ability to make sense of vast amounts of written human knowledge.

In March 2016 in Seoul, the Go-playing computer program AlphaGo, developed by Google's DeepMind division, defeated Lee Sedol, the best Go player of the past decade, winning the five-game match 4-1 and taking the opening game by resignation after 186 moves. Go is regarded as one of the hardest games for computers to master because of its sheer complexity. There are roughly 200 possible moves for a given turn, compared with about 20 in chess, and more possible board configurations than the number of atoms in the universe.
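
The back-of-the-envelope arithmetic behind that claim is easy to check. Taking the article's approximate branching factors of 200 for Go and 20 for chess, the number of lines of play in Go races out of reach after only a few moves, which is why the brute-force search that worked for chess cannot simply be scaled up:

```python
# Rough comparison of game-tree growth using branching factors of roughly
# 20 (chess) and 200 (Go): count the lines of play after a few plies.
for depth in (2, 4, 6, 8):
    chess_lines = 20 ** depth
    go_lines = 200 ** depth
    print(f"depth {depth}: chess ~{chess_lines:.1e} lines, "
          f"Go ~{go_lines:.1e} lines ({go_lines // chess_lines:,}x more)")
```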

Many thought it would take another 20 years for a computer to beat a top human player at Go, but Elon Musk, CEO of Tesla and SpaceX, believes AlphaGo's mastery of the game shows just how quickly AI is evolving.

TNMOC's Herbert says: "We are making great strides in enabling computers to perceive things, so we can build amazing applications that can mimic human behaviour, but it is not intelligence in the way of a human."

The risk to humanity that Musk fears is an AI's ability not only to outpace human intelligence, but to exploit an intelligent network in a way that could undermine society in order to achieve a seemingly benevolent objective.

Speaking at the National Governors Association on 15 July, Musk said: "The pace of progress is remarkable. Now AlphaGo can play the top 50 Go players and crush them all."

There are now AI systems capable of learning without ever having been taught the fundamental principles or given a basic understanding of the subject matter. "You can see robots that can learn to walk from nothing within hours, which is way faster than any biological being," Musk told US state governors at the event.

One of the most recent breakthroughs came in June, when Facebook published research introducing dialog agents with the ability to negotiate. Similar to how people have differing goals, run into conflicts and then negotiate to come to an agreed-upon compromise, the researchers demonstrated that it is possible for dialog agents with differing goals, implemented as end-to-end-trained neural networks, to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes, according to Facebook's blog.

While the ability to exhibit human-like negotiation tactics is certainly a big step forward, the Facebook bots gave a very public demonstration of an inherent risk in self-learning technology. They were switched off after they invented their own language for communicating, a language that could not be understood by the human researchers.
