The actual processes that are used to search through the various operators and branches have been studied widely. Depth-first search is one of the main search processes. It starts at the root of the tree and works its way down the leftmost branch until it reaches the end. If this is not the goal state, it backs up and tries the next branch along, and this continues until the goal state is reached. The algorithm tries to get as deep as possible as fast as possible. In a finite tree it is guaranteed to find a goal, if one exists, but it does not always find the optimal path.
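The process described above can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation: the tree is assumed to be a dictionary mapping each node to its list of children, and the node names are hypothetical.

```python
def depth_first_search(tree, start, goal):
    """Depth-first search over a tree given as {node: [children]}."""
    stack = [[start]]  # each entry is the path taken so far
    while stack:
        path = stack.pop()  # take the most recently added path (LIFO)
        node = path[-1]
        if node == goal:
            return path  # the first goal found, not necessarily the shallowest
        # push children in reverse so the leftmost child is explored first
        for child in reversed(tree.get(node, [])):
            stack.append(path + [child])
    return None  # finite tree exhausted with no goal

# Hypothetical example tree
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
```

Here `depth_first_search(tree, "A", "G")` works down the left-hand side first, backing up from the dead end at D before finding G.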
Breadth-first search is another search process. It checks all of the nodes at one level, starting at the left and working towards the right, before expanding the tree one level deeper. In other words, it looks at the children of a node only when all other nodes at that level have been examined. It finds the shallowest solution rather than the first solution it happens to reach, so it is useful when we want to find a solution with the minimum number of steps from the starting point.
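The same tree representation can illustrate breadth-first search; the only structural change from the depth-first sketch is that paths are taken from the front of a queue rather than the back of a stack. Again, the tree and node names are hypothetical.

```python
from collections import deque

def breadth_first_search(tree, start, goal):
    """Breadth-first search over a tree given as {node: [children]}."""
    queue = deque([[start]])  # FIFO: shallower paths are expanded first
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path  # the shallowest solution
        for child in tree.get(node, []):
            queue.append(path + [child])
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
```

In this tree, `breadth_first_search(tree, "A", "F")` returns the two-step path through C, having examined every node at each level before moving deeper.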
Consequently, both methods have positive and negative features. Both guarantee a solution if one exists, but depth-first search can fail to terminate if the tree is infinite, and breadth-first search can fail to terminate if the tree is infinite and there is no solution. To some extent, depth-first is computationally easier than breadth-first, but breadth-first search does provide the most economical route, as I have mentioned, and this is considered to be particularly important in certain situations, for example in mathematical problems, where a shorter solution is definitely considered to be better than a longer one. Along with not necessarily delivering the best possible result, depth-first search has other disadvantages, in that sometimes a loop can be created where the next move undoes the effects of the previous move (i.e. a state has a successor that is also one of its ancestors), which, if left undetected, would lead to an infinite search. Even when there are no infinite paths, depth-first has the disadvantage that the leftmost branch may be very long while the goal sits at the end of it, or near the top of a later branch, making depth-first search very inefficient. But if we consider these searches purely in terms of computer programmes, the ease of programming and space costs become an issue, actually making depth-first more feasible. Depth-first search requires less memory, since only the nodes on the current path are stored. Sometimes, by chance, depth-first search may find a solution without examining much of the search space at all, in contrast with breadth-first search, in which all parts of the tree must be examined to a certain level before any nodes on the next level can be examined. Saying this, breadth-first search will not get trapped down a blind alley that leads nowhere, whereas depth-first search could be following a long path to nowhere for a long time.
If we consider human problem solving, it is usually a combination of depth-first and breadth-first search. The tendency is maybe even towards depth-first search of a small part of the search space, but this is after spotting the right portion of the search space to search, which to some extent is guided by heuristics. Heuristics are rules of thumb, almost like tour guides, in that they are good to the extent of pointing in the general direction, but may miss certain paths. Heuristic search uses a heuristic evaluation function, which evaluates each state. On average, heuristics improve the quality of the paths that are explored, and they are the means by which humans can perform more efficient search.
Hill climbing is one heuristic search process. Sometimes referred to as ‘difference reduction’, it involves the problem solver aiming to reduce the difference between the current state and the goal state. Human problem solvers are often governed by difference reduction or, equivalently, by its opposite, similarity. In other words, they choose operators that transform the current state into a new state that reduces a difference and resembles the goal state more closely than the current state. By reducing the difference between the goal and the current state, the problem solver takes one step closer to the goal. One advantage of hill climbing, as discussed in Finlay & Dix, is that it can be used in continuous as well as discrete domains. An example of this is driving a car: the parameters to control include both discrete values, such as selecting the gear, and continuous ones, such as depression of the accelerator and steering. One problem with hill climbing is that you might reach the top of a hill that is lower than the optimal highest point, and the searcher is unlikely to carry on and discover this higher point (the foothill problem). This problem can be combated through means-end analysis, which I will discuss further later. Another problem with the hill-climbing technique is the ‘plateau problem’: a flat area between the peaks, where no move offers any improvement and there is nowhere to turn. A less common but still real problem is the ridge problem, which arises with continuous parameters: the locally uphill direction points across the ridge rather than along it, and the need to take discrete steps makes it easy to overshoot and drop downhill on the other side, so the search zig-zags inefficiently up the ridge and may miss the top entirely.
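A minimal sketch of the difference-reduction idea, assuming a hypothetical one-dimensional landscape where the score function plays the role of (negated) difference from the goal: the searcher always moves to the best-scoring neighbour and stops at a peak, which may only be a foothill.

```python
def hill_climb(score, neighbours, start):
    """Greedy difference reduction: repeatedly move to the best-scoring
    neighbour; stop when no neighbour improves on the current state."""
    current = start
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current  # no neighbour is higher: a (possibly local) peak
        current = best

# Hypothetical landscape: a single peak at x = 3, explored in unit steps.
score = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
```

Starting from 0, the climb proceeds 0, 1, 2, 3 and halts, since both neighbours of 3 score lower; on a landscape with a second, higher peak further away, the same procedure would stop at the first summit it reached, which is exactly the foothill problem described above.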
Best-first search is another heuristic search technique, and is a way of uniting the advantages of both depth-first and breadth-first search in a single method. One way of combining the two is to follow a single path at a time, but switch paths whenever a rival path looks more promising. This is done by applying an appropriate heuristic evaluation function to the nodes generated so far. We then expand the chosen node by using the rules to generate its successors, and if one of them turns out to be the solution, we can quit. If not, all those new nodes are added to the set of nodes generated so far; once again the most promising node is selected and the loop continues. If after a while a solution has not been found, the current branch will begin to appear less promising than a branch nearer the top of the tree, and that previously unexplored branch will then be explored. The old branch is not simply forgotten: its last node remains in the set of generated but unexpanded nodes, so the search can return to it if the other branches prove not to be very promising.
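The loop just described — keep all generated but unexpanded nodes, and always expand the one the evaluation function rates most promising — can be sketched with a priority queue. This is an illustrative sketch under assumed data: the tree and the heuristic values are hypothetical, with lower values meaning more promising.

```python
import heapq

def best_first_search(tree, start, goal, h):
    """Best-first search: always expand the most promising node generated
    so far; rival branches stay on the heap so we can return to them."""
    frontier = [(h(start), [start])]  # all generated but unexpanded paths
    while frontier:
        _, path = heapq.heappop(frontier)  # globally most promising node
        node = path[-1]
        if node == goal:
            return path
        for child in tree.get(node, []):
            heapq.heappush(frontier, (h(child), path + [child]))
    return None

# Hypothetical tree and heuristic estimates (lower = closer to the goal)
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
h = lambda n: {"A": 3, "B": 2, "C": 2, "D": 9, "E": 1, "F": 9, "G": 0}[n]
```

Note that when D looks unpromising (h = 9), the search switches to E rather than pressing on, yet D remains on the frontier in case the other branches fail.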
Beam search is a further heuristic search process. It is the same as breadth-first search, except that at each level only a fixed number of the best nodes, according to the heuristic evaluation function, are kept on the list to be expanded.
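A short sketch of that pruning step, reusing the same hypothetical tree and heuristic as before; the number of nodes retained per level is usually called the beam width (a term assumed here, not taken from the text).

```python
def beam_search(tree, start, goal, h, beam_width):
    """Breadth-first search that keeps only the beam_width best paths
    (by heuristic h on their last node) at each level."""
    level = [[start]]
    while level:
        next_level = []
        for path in level:
            node = path[-1]
            if node == goal:
                return path
            for child in tree.get(node, []):
                next_level.append(path + [child])
        # prune: retain only the most promising paths for the next level
        level = sorted(next_level, key=lambda p: h(p[-1]))[:beam_width]
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
h = lambda n: {"A": 3, "B": 2, "C": 2, "D": 9, "E": 1, "F": 9, "G": 0}[n]
```

With a beam width of 1 the search still reaches G here, but only because the heuristic happens to favour the right branch at every level; a misleading heuristic could prune the branch containing the goal.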
All three heuristic search processes have both advantages and disadvantages. Neither hill climbing nor beam search guarantees a solution, even if one exists, whereas best-first search, although not absolutely guaranteeing a solution, comes closer to this than the other two processes. Best-first search often finds the optimal path, whereas hill climbing does not, and beam search may miss it altogether by pruning the branch that contains it. Saying this, the efficiency of hill climbing as a search process is very high, and best-first search and beam search have a good level of efficiency, although with beam search it depends on the beam width and the number of items to search.
Much research in AI has concerned games and game playing. Adversarial search concerns problem solving that is not totally under the control of the problem solver, but instead involves a competitor. The existence of an adversary changes the search problem fundamentally: while the problem solver strives to reach the best solution, or win, the adversary is trying to hinder them. Most games are state based, in that it is what is on the chessboard now that matters, but some are path based, for example poker. As well as game playing, adversarial search relates directly to human problem solving in the world around us. As people try to solve the problems they encounter in everyday life, certain environmental factors may stand as obstacles in the way, and these hindrances have to be overcome. Chance is also an issue to consider in adversarial search, being a prominent feature in many games, and also in real-world situations, where it is possible to judge that certain things are very unlikely to happen and therefore do not need to be considered in the search process.
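The text does not name a specific algorithm, but the standard formalisation of this adversarial setting is the minimax procedure: the problem solver picks the move that maximises the eventual score on the assumption that the adversary will always pick the move that minimises it. A minimal sketch over a hypothetical two-move game tree:

```python
def minimax(state, maximising, value, moves):
    """Score a state: the maximiser picks the best child, the adversary
    the worst; terminal states (no moves) are scored directly."""
    children = moves(state)
    if not children:
        return value(state)  # terminal position: evaluate it
    scores = [minimax(c, not maximising, value, moves) for c in children]
    return max(scores) if maximising else min(scores)

# Hypothetical game: the maximiser moves L or R, then the adversary picks
# one of the numeric payoffs at the leaves.
game = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
moves = lambda s: game.get(s, []) if isinstance(s, str) else []
value = lambda s: s  # leaves are already numeric payoffs
```

Here the adversary would hold L to 3 and R to 2, so the maximiser's best guaranteed outcome from the root is 3: the tempting payoff of 9 under R is discounted precisely because the opponent is trying to hinder the problem solver.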
One statement that describes search and problem solving in human terms is: “Human cognition is always purposeful, directed to achieving goals and to removing obstacles to those goals.” (Anderson, 1983; Newell, 1980; Tolman, 1932) All of the search processes that have been discussed and evaluated can be applied to human problem solving to varying extents. Possibly the most useful ones, which relate most closely to human problem solving, are the heuristic search processes, although some heuristic searches have disadvantages, for example that they do not always find the optimal solution. This may not be as much of an issue as it seems, though, as Simon (1981) demonstrated. The concepts of heuristic search can be applied thoroughly to human problem solving: for example, as Polya (1957) discusses in his book, if you are given a problem, look for a similar problem you have solved, and ask whether you can use the solution to that problem, or the method used to solve it, on the current one. This links to heuristic search processes, which rely on already obtained knowledge and rules of thumb.
Artificial Intelligence, particularly the study of search processes, has indeed helped us to understand human cognition. The search processes which have been studied can help us to understand human problem solving in particular, in that they help us to decipher the processes a human brain goes through in everyday problem-solving instances, and how some of these thought processes rely on stored knowledge and are guided by heuristics.
Bibliography
Anderson, J. (1990) Cognitive Psychology and its Implications, 5th ed. Worth Publishers: New York.
Finlay, J. & Dix, A. (1996) An Introduction to Artificial Intelligence. UCL Press: London.
Rich, E. & Knight, K. (1991) Artificial Intelligence, 2nd ed. McGraw-Hill: International Edition.
Sharples, M., Hogg, D., Hutchinson, C., Torrance, S. & Young, D. Computers and Thought: A Practical Introduction to Artificial Intelligence. The MIT Press: Cambridge, Massachusetts.
Word Count: 2,345
Finlay, J. & Dix, A., An Introduction to Artificial Intelligence, p. 45.
Reed and Bolstad (1991) instructed students in how to solve a problem: one group was given abstract instruction, another worked from an example, and a third had access to both the abstract instruction and the example.
Anderson, J. (1990) Cognitive Psychology and its Implications, p. 240.
Simon (1981) found evidence that humans are not optimisers when solving problems but satisficers, in that they seek any solution that satisfies some set of requirements and, when they find one, quit (Rich, E. & Knight, K., Artificial Intelligence, 2nd ed. McGraw-Hill: International Edition).