Artificial Intelligence: Learning to Learn

2011 VIRTUAL SCIENCE FAIR ENTRY

The purpose of this project was to determine the best algorithm for strategy games.

Computer Science

9th Grade

Requires technical knowledge

There are no costs associated with this project.

There are no safety hazards associated with this project.

The goal of endowing inanimate objects with human-like intelligence has a long history. Modern computers can perform millions of calculations per second, but even with all of this remarkable speed, true logic has yet to be achieved. With every year that passes, computers come closer to achieving this goal, or at least to mimicking true logic. Game strategy is one of the most common applications of artificial intelligence. An algorithm is a set of instructions a computer follows to achieve a task or goal. There are three main types of algorithms for intelligence in games: Alpha-beta, learning, and hybrids. Chess was one of the first games to implement artificial intelligence, with the development of the Alpha-beta algorithm in 1958 by scientists at Carnegie Mellon University (Friedel, n.d.). The Alpha-beta algorithm was the first feasible algorithm that could be used for strategy in games. As artificial intelligence in games evolved and became more complex, a more modern learning approach was adopted. Even though there have been major advancements in both learning-style algorithms and Alpha-beta algorithms, a hybrid utilizing elements of both results in a stronger, more efficient, and faster program. At the forefront of the quest for artificial intelligence, these algorithms play vastly important roles.

The Alpha-beta algorithm has a long history of success. The first use of the algorithm in a game was in the 1970s and 1980s by the Belle computer. Belle remained the champion of computer chess until being superseded by the Cray supercomputer (Friedel, n.d.). Belle was the first computer to succeed using early forms of the Alpha-beta algorithm. Deep Blue later used the algorithm to defeat chess grandmaster Garry Kasparov; this was a major development for the artificial intelligence community, as it was the first time in history a computer had beaten a reigning world champion in a standard match. Over time, the algorithm has been revised, updated, and modified to the point where several versions of it exist that all use the same core principles.

The Alpha-beta algorithm uses brute-force calculations (thousands every second) to make decisions. It relies on the minimax principle (one player tries to maximize the score while the other tries to minimize it) and efficient evaluation techniques to achieve its logic. Alpha-beta is a game-tree searcher; in other words, it forms a hierarchy of possible moves down to a defined level (e.g., six moves). In some variations, eliminating symmetries and rotations is used to reduce the size of the game tree (Lin, 2003). After the tree is formed, the algorithm evaluates each position in the tree based on a set of rules intended to make the computer play more strongly; these rules are called heuristics. The reason Alpha-beta is fast yet strong is that it ignores portions of the game tree (Lin, 2003). It decides which portions to ignore by finding the best move at each level and discarding the moves that aren't best, along with all the moves beneath them. With roughly 30 legal moves per position, Alpha-beta can calculate two levels of moves (about 900 positions) in 0.018 seconds, three levels (27,000 positions) in 0.54 seconds, four levels (810,000 positions) in 16.2 seconds, and so on: each added level multiplies the work by the number of moves per position. These efficiency-improving techniques are responsible for the small calculation times and the improved game strategy that the algorithm provides.
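
To make the pruning rule concrete, the sketch below shows minimax search with alpha-beta cutoffs. It is a minimal illustration rather than the project's actual program, and the Game interface (legal_moves, apply, undo, evaluate, is_over) is an assumed placeholder that any two-player, turn-based game could supply.

```python
# A minimal sketch of minimax search with alpha-beta pruning.
# The Game interface used here (legal_moves, apply, undo,
# evaluate, is_over) is hypothetical.

def alphabeta(game, depth, alpha, beta, maximizing):
    """Return the heuristic value of the current position, searching
    `depth` levels ahead and pruning branches that cannot matter."""
    if depth == 0 or game.is_over():
        return game.evaluate()  # heuristic score of this position

    if maximizing:
        best = float("-inf")
        for move in game.legal_moves():
            game.apply(move)
            best = max(best, alphabeta(game, depth - 1, alpha, beta, False))
            game.undo(move)
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent already has a better option,
                break           # so the remaining moves here can be ignored
        return best
    else:
        best = float("inf")
        for move in game.legal_moves():
            game.apply(move)
            best = min(best, alphabeta(game, depth - 1, alpha, beta, True))
            game.undo(move)
            beta = min(beta, best)
            if alpha >= beta:   # same cutoff, from the minimizer's side
                break
        return best
```

The cutoff test (alpha >= beta) is exactly the "ignoring" described above: once a branch is provably no better than one already found, nothing beneath it is searched.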

Learning-style algorithms are another popular type of algorithm for game use. They aren't necessarily a recent creation; they have been in use for approximately thirty years, but were met with limited success until recently. In this approach, an algorithm uses its own experiences, or a large database of pre-played games, to determine the best moves. Unfortunately, learning algorithms have also incorporated the bad strategies utilized by novice players. Over time, improvements have been made so that such an algorithm can be a threat to intermediate players in most action games; however, learning algorithms are often unsuccessful in games requiring strategic play. The Chinook program uses the most notable learning algorithm. The program spent eighteen years calculating every possible move for the game of checkers. Chinook's algorithm is considered by some not to be a true learning algorithm, since it already knows all of the possible outcomes for every move (Chang, 2007). Chinook does, however, adjust its playing style for each player's strategy; this is where its element of learning comes into play (Chang, 2007). Learning algorithms are considered closer to true intelligence than algorithms that use brute-force calculation, such as Alpha-beta. Compared to pure calculation algorithms, they play games more like humans and even show very limited aspects of creativity and self-formed strategy.
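
The sources above do not describe Chinook's internals, so the following is only a hypothetical sketch of the general idea: an agent that records win/loss statistics for every (position, move) pair it encounters and prefers moves that have historically led to wins.

```python
# A minimal sketch of experience-based move selection.
# This is an illustration of the general learning idea only,
# not a description of how Chinook actually works.

from collections import defaultdict

class ExperienceLearner:
    def __init__(self):
        # stats[(position, move)] = [wins, plays]
        # positions and moves must be hashable values
        self.stats = defaultdict(lambda: [0, 0])

    def choose_move(self, position, legal_moves):
        """Pick the legal move with the best historical win rate."""
        def win_rate(move):
            wins, plays = self.stats[(position, move)]
            return wins / plays if plays else 0.5  # neutral score for unseen moves
        return max(legal_moves, key=win_rate)

    def learn_from_game(self, history, won):
        """Update statistics from one finished game.
        `history` is a list of (position, move) pairs the agent played."""
        for position, move in history:
            record = self.stats[(position, move)]
            record[0] += 1 if won else 0
            record[1] += 1
```

Note how this sketch also shows the weakness described above: moves absorbed from a database of novice games would raise the win rate of bad strategies just as readily as good ones.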

A hybrid algorithm combines the brute-force style of the Alpha-beta algorithm with the flexibility of the learning-style algorithm. This method ensures that the full ability of the computer is used while it remains free to adapt to each player's individual game style. Chinook successfully utilized this technique to make a program that is literally unbeatable. Because of the Chinook program, the game of checkers has been solved: no matter how well an opponent plays, the best they can do is end in a draw (Chang, 2007).

Other champion programs have used just one style of algorithm to win. As a result, no particular algorithm has been measured or proven to be dominant. Game developers choose which algorithm to use based largely on personal preference, given the lack of consensus from the artificial intelligence community as to which algorithm is superior. Each algorithm does, however, have weaknesses that suggest where it will fall short. For example, the Alpha-beta algorithm does not generate all possible moves from the current condition of the game; it assumes that the opponent will make the best possible move available. If a player makes a move that is not in their best interest, the algorithm will not know how to respond, because that move's game tree has not been calculated. The opponent can trick the algorithm by making sub-par moves and forcing it to recalculate. It is also important to note that the Alpha-beta algorithm can use tremendous amounts of time when calculating more than a couple of moves ahead. The learning algorithm has its flaws, too: if it encounters an unknown strategy, it will be helpless against its opponent's moves. The most likely way to minimize these flaws is to combine the two algorithms into a hybrid, as the sketch after this paragraph illustrates. If the hybrid encounters an unknown strategy, it can use the Alpha-beta-style game tree to determine the possible moves from that point. Likewise, if the opponent uses a move not calculated by the brute-force method, it can fall back on learned strategies to defend itself. The hybrid algorithm should therefore be faster and have better winning strategies than either the Alpha-beta or the learning-style algorithm alone.
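
Combining the two earlier sketches gives one possible shape for such a hybrid: trust learned experience in familiar positions, and fall back to alpha-beta search in unfamiliar ones. The position_key method and the MIN_PLAYS threshold below are assumptions made for illustration, not part of any program described in the sources.

```python
# A minimal sketch of the hybrid idea: use experience when the
# position is familiar, otherwise search with alpha-beta.
# Builds on the alphabeta() and ExperienceLearner sketches above.

MIN_PLAYS = 20  # assumed threshold: how much experience to require

def hybrid_move(game, learner, depth=4):
    position = game.position_key()  # hypothetical hashable position id
    moves = game.legal_moves()

    # Familiar territory: trust the learned win-rate statistics.
    seen = sum(learner.stats[(position, m)][1] for m in moves)
    if seen >= MIN_PLAYS:
        return learner.choose_move(position, moves)

    # Unknown territory: fall back on a brute-force game-tree search.
    best_move, best_score = None, float("-inf")
    for move in moves:
        game.apply(move)
        score = alphabeta(game, depth - 1, float("-inf"), float("inf"), False)
        game.undo(move)
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```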

The experiment clearly demonstrated that the Alpha-beta algorithm won more games, took less time to generate a move, and took fewer moves to win. It was clearly superior to both the hybrid and learning algorithms.

This chart shows the percentage of the 9,000 games of checkers that each algorithm won. Alpha-beta scored the highest percentage of wins, the hybrid came in second, and the learning algorithm scored the lowest percentage.

This chart displays the average time it took each algorithm to generate a move. Here the lowest-scoring algorithm performed the best.

This chart represents the average number of moves it took each algorithm to win a game. As with the previous chart, the lowest-scoring algorithm performed the best.

Evidence gathered from the experiments showed that the Alpha-beta algorithm was far superior to both the hybrid and learning algorithms. This conclusion rests on three distinct factors: the percentage of wins, the average time taken to make a move, and the average number of moves needed to win a game. The Alpha-beta algorithm performed the best in every category. The hybrid performed better than the learning algorithm but worse than Alpha-beta, and the learning algorithm performed the worst.

This experiment included 9,000 trials; therefore, the experimental error was minimal. The only measured value that needed to be considered for error was the average amount of time each algorithm used to generate a move. The computer can record time precisely, but the times were rounded so that the time-keeping process would not affect the outcome of an experiment. The differences this rounding introduced were not significant, and even if the computer had recorded the results with absolute precision, the conclusion would remain unchanged. Another aspect to consider was the possibility of a recursion loop (when the algorithm gets stuck repeatedly evaluating the same positions). Although the algorithm will break out of such a loop, doing so would considerably raise the average time spent on a move for that game. The last source of error that needed to be considered was inefficiency in an algorithm's programming: if an algorithm was implemented in an inefficient way, that would obviously hurt its overall performance.

Chang, K. (2007, July 19). Computer checkers program is invincible. Retrieved from http://www.nytimes.com/2007/07/19/science/19cnd-checkers.html

Frayn, C. (2005, August 1). Computer chess programming theory. Retrieved from http://www.frayn.net/beowulf/theory.html

Friedel, F. (n.d.). A short history of computer chess. Retrieved from http://www.chessbase.com/columns/column.asp?pid=102

Lin, Y. (2003). Game trees. Retrieved from http://www.ocf.berkeley.edu/~yosenl/extras/alphabeta/alphabeta.html

For a demo of the program email connerruhl at me.com
