# Artificial Intelligence - The Minimax Method

ARTIFICIAL INTELLIGENCE PROJECT

## INTRODUCTION

- Game Playing

Playing games has always been an attraction of computers, and will probably always be so. Action games where you charge around blowing everything up are one type of game, but in this project we are interested in examining a more thoughtful game. How do you get a computer to make sensible moves when our game has 97 possible positions and a horribly large number of board configurations? Surely we can use the search techniques that we have covered. The problems we have considered so far have been of the type where the problem is well defined at the beginning and nothing changes during the search. Except for one-player games that involve no chance, such as solitaire (a game where you have to remove pegs from a board), games involve chance, another player's moves, or both. In this case we cannot simply search for a solution: the other player will try to block you, and games of chance simply will not cooperate!

However, we do not throw away everything we have just covered in Search; we only need to modify the techniques to allow for opponents.

- Generate and Test

One approach is to treat the problem as one of generating and then testing possible sets of moves. One extreme would be to generate a number of complete sets of moves, test each of them, and take the best one. The other extreme is to generate one move at a time, evaluate each move, take the best, and then generate the next move. Obviously, neither extreme makes a particularly good solution.

- The Minimax Algorithm

The search is implemented as a recursive function, MINIMAX, which alternates between a maximising ply (our move) and a minimising ply (the opponent's move):

    MINIMAX(CurPosition, Depth, Player, Ply):
        IF DEEP_ENOUGH(CurPosition, Depth)
            CUR_BEST:PATH  = CurPosition
            CUR_BEST:VALUE = EVALFUNCTION(CurPosition, Player)
            RETURN CUR_BEST

        SUCCESSORS = GenerateMoves(CurPosition, Player)
        IF SUCCESSORS is empty
            CUR_BEST:PATH  = CurPosition
            CUR_BEST:VALUE = EVALFUNCTION(CurPosition, Player)
            RETURN CUR_BEST

        IF Ply = MAX
            ; set up a current best record
            CUR_BEST:PATH  = NULL
            CUR_BEST:VALUE = -LIMIT
            FOR ALL SUCC IN SUCCESSORS
                RESULT_STATE = MINIMAX(SUCC, Depth + 1, OTHERPLAYER(Player), MIN)
                IF RESULT_STATE:VALUE > CUR_BEST:VALUE
                    THEN CUR_BEST = RESULT_STATE
            CUR_BEST:PATH = CurPosition + CUR_BEST:PATH
            RETURN CUR_BEST
        ELSE  ; Ply = MIN
            ; set up a current best record
            CUR_BEST:PATH  = NULL
            CUR_BEST:VALUE = +LIMIT
            FOR ALL SUCC IN SUCCESSORS
                RESULT_STATE = MINIMAX(SUCC, Depth + 1, OTHERPLAYER(Player), MAX)
                IF RESULT_STATE:VALUE < CUR_BEST:VALUE
                    THEN CUR_BEST = RESULT_STATE
            CUR_BEST:PATH = CurPosition + CUR_BEST:PATH
            RETURN CUR_BEST
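As a concrete illustration, the pseudocode can be translated into a short Python sketch. The tiny game tree, its leaf values, and the helper callbacks (`children_of`, `evaluate`) are invented for the example; they stand in for GenerateMoves and EVALFUNCTION.

```python
# Minimal Python sketch of the minimax pseudocode above. The "game" here is
# a hypothetical hand-built tree whose leaves carry static values.

MAX, MIN = True, False

def minimax(position, depth, max_depth, children_of, evaluate, ply):
    """Return (value, path) for the best line of play from `position`."""
    successors = children_of(position)
    if depth >= max_depth or not successors:
        return evaluate(position), [position]

    best_value = float("-inf") if ply == MAX else float("inf")
    best_path = []
    for succ in successors:
        value, path = minimax(succ, depth + 1, max_depth,
                              children_of, evaluate, not ply)
        # MAX keeps the largest value seen, MIN the smallest.
        if (ply == MAX and value > best_value) or \
           (ply == MIN and value < best_value):
            best_value, best_path = value, path
    return best_value, [position] + best_path

# Tiny example tree: MAX to move at 'root'.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

value, path = minimax("root", 0, 2,
                      lambda p: tree.get(p, []),
                      lambda p: leaf_values.get(p, 0),
                      MAX)
```

Here MIN answers "a" with "a1" (value 3) and "b" with "b1" (value 2), so MAX prefers "a".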

- Coding

Since we are trying to find not only an efficient algorithm but also an efficient implementation, there are several coding issues worth discussing.

At first, we tried to use our State class with all the information concerning a state in it. This class contained a board (15 * 9 * 8 = 1080 variables), the value of the current position, a description of the current position, and all the utility functions. The advantage of this representation is that the algorithm was straightforward, but we encountered a memory problem: in fact, this representation did not allow us to go further than depth 3.

Then, we tried to keep our State class but to remove the board from it. Our class then contained only a path and a state value (as in the algorithm above). With this new representation, we had a single board that was shared by every state.
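The shared-board idea can be sketched as follows; `Board`, `apply_move`, and `undo_move` are hypothetical names rather than the project's actual classes, and the board is reduced to a toy list of cells. Instead of copying the whole board into every state, the search mutates one shared board on the way down and restores it on the way back up.

```python
# Sketch of the memory optimisation: one shared board, mutated during the
# search with apply/undo, while each state keeps only its path and value.

class Board:
    def __init__(self, cells):
        self.cells = list(cells)

    def apply_move(self, move):
        """Play (position, new_value) and return an undo token."""
        pos, new = move
        old = self.cells[pos]
        self.cells[pos] = new
        return (pos, old)

    def undo_move(self, token):
        """Restore the cell recorded in the undo token."""
        pos, old = token
        self.cells[pos] = old

board = Board([0, 0, 0])
token = board.apply_move((1, 7))   # play a move before recursing...
# ... recurse into minimax on the shared board here ...
board.undo_move(token)             # ...then restore it on the way back up
```

Because every recursive call undoes its move before returning, memory use no longer grows with the number of states generated.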

## CONCLUSION

The optimal method to use is iterative deepening. Previous searches can be used to guide the next search, by searching the previously successful moves first and implementing alpha-beta cutoffs. This method is very useful for games that have a highly variable branching factor. But we will just try to find a relatively good and quick solution, so we have added a sorting step to our move generator. Once the array of moves is filled, we sort the moves in decreasing order of the value of the position they lead to. If the player to move is MAX we use them in that order; otherwise (MIN) we take the moves from the end of the array. This optimisation allows us to reach depth 20 without any timeout (it sometimes reaches depth 25, but in 20% of the games we had a timeout at the end of the game).
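The move-ordering step might look something like this in Python; the move names and their static scores are made up for illustration, standing in for the value of the position each move leads to.

```python
# Sketch of the move-ordering optimisation: score each generated move by the
# position it leads to, sort in decreasing order, and let MAX read the array
# from the front while MIN reads it from the back.

def order_moves(moves, evaluate_after, maximizing):
    """Return moves sorted so the searching player's best guesses come first."""
    ranked = sorted(moves, key=evaluate_after, reverse=True)
    return ranked if maximizing else list(reversed(ranked))

# Illustrative moves and the static score of the position each one produces.
scores = {"m1": -2, "m2": 4, "m3": 1}
moves = ["m1", "m2", "m3"]

for_max = order_moves(moves, scores.get, maximizing=True)   # best first
for_min = order_moves(moves, scores.get, maximizing=False)  # worst first
```

Examining likely-best moves first makes alpha-beta cutoffs fire earlier, which is what lets the search reach greater depths in the same time budget.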

In this game, we have some highly constrained sequences of moves: a player can make a sequence of moves if he uses some already-visited positions. In this situation, it is often desirable to extend the search to the outcome of this set of constrained moves, whether or not the outcome occurs within the fixed depth. This is called search to a fixed depth with extensions. To implement this, we could add detection for this case where previously we just checked the depth during move generation. But we can solve it easily by incrementing the depth only when the other player is to play. This technique produced many smart sequences of moves, but it could sometimes cause a timeout at the end of the game if the search goes too far into a sequence.
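The depth trick above can be sketched as follows, under the hypothetical encoding that each move in a line of play is flagged with whether the same player moves again. Depth is consumed only when the turn changes hands, so a constrained sequence is searched to its end even past the nominal limit.

```python
# Sketch of "search to a fixed depth with extensions": a move that keeps the
# same player on the move costs no depth, so forced sequences are followed
# to their outcome. The boolean flags are an illustrative encoding only.

def moves_visited(sequence, max_depth):
    """Count how many moves of `sequence` a depth-limited search would visit.

    Each element is True when the same player moves again (a continuing,
    constrained sequence), False when the turn passes to the opponent.
    """
    depth = 0
    visited = 0
    for same_player in sequence:
        if depth >= max_depth:
            break
        visited += 1
        if not same_player:
            depth += 1        # only a change of turn consumes a ply
    return visited
```

With `max_depth = 2`, an ordinary line of four alternating moves is cut off after two of them, while a line that starts with a two-move constrained sequence is followed through all four.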

## EVALUATION
