1.3 Outline of the paper
This paper is organised as follows. Section 2 briefly discusses the search algorithms and heuristics implemented in this paper. Section 3 gives a short description of the aspects of homeland security addressed in this paper. Section 4 discusses the details of the implementation in Breve, along with a few snapshots of the simulation. Section 5 presents the time and space complexities of the algorithms in tables and graphs containing the experimental data obtained through the Breve simulations. Section 6 gives the conclusions and possible future work.
2 Search Algorithms Used in Path Planning
In this study we have compared the general search algorithm to the A* algorithm and a modified A* algorithm that makes use of preset heuristics.
2.1 General Search Algorithm
In the case of a general search algorithm, the agent does not use any heuristic. The only knowledge the agent possesses is the start point and the end point. An exhaustive search of all possible routes is performed and all routes are given the same importance. The only inbuilt intelligence is to ignore circular paths. Circular paths are defined as paths that start and end at the same node.
The pseudocode used in this algorithm is as follows:

function GENERAL-SEARCH(problem, strategy) returns a solution, or failure
    initialise the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
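The pseudocode above can be rendered as runnable code. The following is a minimal Python sketch (the simulation itself was written in Steve, so the names here are illustrative), using first-in-first-out expansion as the fixed strategy and tracking whole paths so that circular paths can be ignored:

```python
from collections import deque

def general_search(start, goal, neighbours):
    """Exhaustive uninformed search: every route is given the same
    importance; the only built-in intelligence is to skip circular
    paths, i.e. paths that revisit a node."""
    frontier = deque([[start]])           # each candidate is a full path
    while frontier:                       # no candidates left -> failure
        path = frontier.popleft()         # fixed strategy: expand FIFO
        node = path[-1]
        if node == goal:                  # goal test on the chosen leaf
            return path
        for nxt in neighbours(node):
            if nxt not in path:           # ignore circular paths
                frontier.append(path + [nxt])
    return None                           # failure
```

With a 4-connected grid `neighbours` function, `general_search((0, 0), (2, 2), neighbours)` returns the first goal-reaching path found; every route is otherwise treated equally, exactly as in the pseudocode.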
2.2 A* Search algorithm
The A* algorithm is a graph search algorithm and is one of the most widely used search techniques. Unlike the general search algorithm above, the A* algorithm uses heuristic estimates to make the search more efficient. Nodes are evaluated by combining g(n), the cost to reach the node from the start, and h(n), the estimated cost to get from the node to the goal:
f(n) = g(n) + h(n) (1)
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, f(n) is the estimated cost of the cheapest solution through n.
Hence if we are trying to find the cheapest solution, a reasonable route to try first is the node with the lowest value of g(n) + h(n). Provided that the heuristic function h(n) satisfies certain conditions, A* search is both complete and optimal[1].
2.3 Heuristics
In the A* algorithm, when two or more paths end in the same node it makes sense to keep only the best of them and ignore the rest. This is the heuristic that the A* search algorithm makes use of: whenever it finds two or more paths with the same end node, it computes the f-cost of each path, selects the path with the lowest cost, and simply ignores the rest.
The A* search algorithm also sorts the paths, placing the lowest-cost paths at the end of the list, so that the nodes at the end of the list are expanded first.
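A compact Python sketch of this evaluation and pruning scheme follows (hypothetical names; the paper's implementation is in Steve). A priority queue plays the role of the sorted path list, and a per-node record of the best g-cost discards the more expensive of any two paths ending in the same node:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* sketch: each node n is ranked by f(n) = g(n) + h(n); when
    several paths reach the same node, only the cheapest is kept."""
    open_heap = [(h(start), 0, start, [start])]    # entries: (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g per node
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)  # lowest f expanded first
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):
            continue                               # superseded by a cheaper path
        for nxt, step_cost in neighbours(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):   # drop the dearer duplicate
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

Provided h never overestimates the true cost to the goal, the first goal path popped from the queue is optimal, in line with the completeness and optimality conditions noted above [1].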
In this paper an additional heuristic has been added to the A* algorithm. This heuristic is described as follows:
The Manhattan distance between the start and target locations is computed, and it is assumed that a valid path need not exceed this distance. All paths exceeding this maximum distance are therefore concluded to be invalid and are ignored. Domains in which obstacles force paths longer than this maximum distance have not been considered in this paper.
The Manhattan distance, rather than the Euclidean distance, is used because the simulated non-holonomic robot can move only forwards, backwards, left or right; it is incapable of moving diagonally.
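In code, the additional heuristic amounts to a Manhattan-distance bound on path length. A minimal sketch under the paper's assumptions (unit grid steps; the function names are illustrative):

```python
def manhattan(a, b):
    """Manhattan (city-block) distance: the metric matching a robot
    that can only move forwards, backwards, left or right."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def prune_paths(paths, start, target, step=1):
    """Additional heuristic: discard any path whose length already
    exceeds the Manhattan distance between start and target."""
    limit = manhattan(start, target)
    return [p for p in paths if (len(p) - 1) * step <= limit]
```

Any path whose accumulated length passes the bound can never be one of the paths the heuristic deems valid, so it is dropped without further expansion.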
3 Homeland Security
3.1 Sensor Networks
The application of path-finding techniques to homeland security in this paper was inspired by the work of Li and Rus [4]. They describe a versatile information system built from distributed sensor networks: hundreds of small sensors, equipped with limited memory and multiple sensing capabilities, which autonomously organise and reorganise themselves as ad hoc networks in response to task requirements and to triggers from the environment. Distributed adaptive sensor networks are reactive computing systems, well suited to tasks in extreme environments, especially when the environmental model and the task specifications are uncertain and the system has to adapt to them.

A collection of active sensor networks can follow the movement of a source to be tracked, for example a moving vehicle. It can guide the movement of an object on the ground, for example a surveillance robot. Or it can focus attention over a specific area, for example a fire, in order to localise its source and track its spread.

A sensor network consists of a collection of sensors distributed over some area that form an ad hoc network. Each sensor is equipped with limited memory and processing capability, multiple sensing modalities, and communication capabilities. The sensors are capable of detecting special events called "danger" (e.g. a temperature or biochemical reading above a particular threshold). Sensors that have triggered such events are treated as obstacles.
3.2 Sensor Network Implemented in Breve
The sensor network described in the previous section was modelled in Breve using patches; the patch class in the Steve language was utilised for this purpose. In this paper the patches were distributed evenly over the entire region, but efficiency could have been improved by placing the patches in a fixed pattern that minimises the number of patches while maximising the safety of a vehicle navigating through the area infested with "danger" (obstacles). In [6] the sensor deployment problem was considered in the context of uncertainty in sensor locations, as in airdropping situations: sensor deployment in such scenarios is inherently non-deterministic, and there is a certain degree of randomness associated with the location of a sensor in the sensor field.

In a real sensor network the sensors would sense the special events electronically. This is simulated in Breve by placing light objects over the patches, which are capable of sensing obstacles placed on them. By obtaining the location of each light object and finding the patch at that location, we can determine which patches contain obstacles ("danger") and which are safe.
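The patch-level danger detection described above can be sketched as follows. The threshold value and the dictionary-of-readings interface are assumptions made for illustration, not the Breve API:

```python
DANGER_THRESHOLD = 0.7   # illustrative threshold, not a value from the paper

def classify_patches(readings):
    """Mark a patch as an obstacle ("danger") when its sensed value
    (temperature, biochemical level, etc.) exceeds the threshold."""
    return {cell for cell, value in readings.items() if value > DANGER_THRESHOLD}
```

The set of danger cells returned here is exactly the obstacle set that the path-planning algorithms of section 2 must avoid.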
4 Implementation
4.1 Cell Decomposition
According to the principle of Cell Decomposition the entire world is decomposed into equal sized patches. The Patch and PatchGrid classes in Steve were used for this purpose. These patches provide a sense of location. That is, given a location it is possible to obtain the patch object residing in that location.
The patches were created using:

patches = (new PatchGrid init-at location (0, 0.75, 0) with-patch-size (5, 0.1, 5) with-x-count X_SIZE with-y-count Y_SIZE with-z-count 6 with-patch-class "LifePatch").
By changing the x and z values it was possible to create patches of different dimensions, e.g. 4x4 or 6x6. A 32x32 patch grid fills the entire workspace created in this simulation.
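The location-to-patch mapping that the PatchGrid provides can be sketched in Python as an integer division over the x-z plane. The default patch size below mirrors the 5-unit patches created above; the other parameters and names are illustrative assumptions:

```python
def patch_at(location, origin=(0.0, 0.0), patch_size=5.0, x_count=32, z_count=32):
    """Map a continuous (x, z) location to its grid cell, mirroring the
    sense of location that Breve's PatchGrid gives: given a location,
    return the patch index residing there."""
    ix = int((location[0] - origin[0]) // patch_size)
    iz = int((location[1] - origin[1]) // patch_size)
    if 0 <= ix < x_count and 0 <= iz < z_count:
        return (ix, iz)
    return None   # outside the decomposed world
```

For example, `patch_at((12.0, 3.0))` falls in cell (2, 0), while a location outside the 32x32 decomposition maps to no patch.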
4.2 Light Objects Modelled as Obstacles
The light objects, which are part of the sample Braitenberg class in Breve, were used as obstacles. Light objects are mobile and can be moved around during the simulation, which provides us with dynamic obstacles, i.e. obstacles that change their position over time. In this work it was possible to infer a safe path around the obstacles dynamically by checking for changes in the obstacle positions at every iteration time step. Were it not for computational limitations, this kind of replanning from scratch would be ideal, since it guarantees optimal plan generation and execution given all known information at the time it is acquired. The D* (Dynamic A*) algorithm could lessen the computational work by producing an initial plan based on known and assumed information and then incrementally updating the plan as new information is discovered. The implementation of the D* algorithm is left as future work.
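The replanning-from-scratch loop used with these dynamic obstacles can be sketched as follows. Here `sense_obstacles` and `plan` stand in for the simulation's patch sensing and search routines; both names are assumptions for illustration:

```python
def navigate(start, goal, sense_obstacles, plan, max_steps=1000):
    """Replanning-from-scratch loop: obstacle positions are re-sensed
    at every time step and, when they have changed or the current plan
    is exhausted, a fresh path is planned from the current cell."""
    position = start
    obstacles = sense_obstacles()
    path = plan(position, goal, obstacles)
    for _ in range(max_steps):
        if position == goal:
            return True
        sensed = sense_obstacles()
        if sensed != obstacles or not path:
            obstacles = sensed
            path = plan(position, goal, obstacles)   # obstacles moved: replan
        if not path or len(path) < 2:
            return False                             # no safe path found
        position = path[1]                           # take one step along the plan
        path = path[1:]
    return False
```

D* would replace the full `plan` call after each change with an incremental repair of the existing plan, which is the future work mentioned above.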
4.3 Visual Representation of the Implementation
Figure 1 and Figure 2 show two snapshots of the simulation performed in this paper. The vehicle is positioned on the start patch, the patches with protrusions contain obstacles, and the path found is shown by light colored patches.
Figure 1 below shows the optimal path found in a world containing obstacles placed in a triangular pattern.
In Figure 2 the optimal path is found in a world containing obstacles placed in a rectangular manner.
Figure 1: Path found in a world containing obstacles placed in a triangular fashion
Figure 2: Path found in a world containing obstacles placed in a rectangular fashion
5 Time and Space Complexities
Time and space complexities of the various algorithms implemented in this paper are discussed in this section. The following definitions are important for understanding the space complexity analysis.

Maximum paths: the maximum number of paths that the algorithm considers during any iteration as it progresses towards the final safe path.

Maximum nodes: the number of nodes in the longest path considered before a valid safe path is found.
5.1 Time and Space Complexity of the General Search Algorithm
Table 1 shows the time taken to compute the optimal paths for different Manhattan distance values using the general search algorithm.
Table 1: Time Complexity using the general search algorithm.
The general search algorithm fails when the distance between start patch and target patch exceeds a Manhattan distance of 55.
Table 2 shows the maximum paths and the corresponding maximum nodes for different Manhattan distances using the general search algorithm.
Table 2: Space Complexity using the general search algorithm.
The space complexity table above shows that the general search algorithm failed because of the large number of paths, and hence the large amount of memory, that could be required in a single iteration.
5.2 Time and Space Complexity of the A* Algorithm
Table 3 shows the time taken to compute the optimal paths for different Manhattan distances using the A* algorithm.
Table 3: Time Complexity using the A* algorithm.
A comparison of table 1 and table 3 shows that the A* algorithm is more efficient than the general search algorithm.
Table 4 shows the maximum paths and the corresponding maximum nodes for different Manhattan distances using the A* algorithm.
Table 4: Space Complexity using the A* algorithm.
Comparing table 2 with table 4, it can be seen that the A* algorithm expands far fewer paths for any Manhattan distance.
5.3 The General Search Algorithm with Additional Heuristics
As previously mentioned, the additional heuristic deletes paths that exceed the maximum Manhattan distance, i.e. the Manhattan distance between the start node and the target node. Any path exceeding this maximum is assumed not to be an efficient path.
Table 5 shows the time taken to compute optimal paths for different Manhattan distances after adding the additional heuristic to the general search algorithm.
Table 5: Time Complexity using general search algorithm after adding the additional heuristic.
Comparing the values in table 5 and table 1, we see that adding this heuristic has clearly improved the performance of the general search algorithm by decreasing the time taken to compute paths. Adding the same heuristic to the A* algorithm should therefore improve its performance as well.
Table 6 shows the Maximum paths and Maximum nodes to compute optimal paths for different Manhattan distance values after adding the additional heuristic to the general search algorithm.
Table 6: Space Complexity using general search algorithm after adding the additional heuristic.
Comparing the values in table 2 and table 6, it can be seen that the addition of this heuristic has decreased the space complexity of the general search algorithm. The next section shows how the addition of this heuristic improves the A* algorithm.
5.4 The A* Algorithm with Additional Heuristics
Table 7 shows the time taken to compute optimal paths for different Manhattan distances after adding the additional heuristic to the A* algorithm.
Table 7: Time Complexity using the A* algorithm after adding the additional heuristic.
Comparing table 3 and table 7 it can be concluded that adding this additional heuristic has improved the performance of the A* algorithm in terms of time taken.
Table 8 shows the maximum paths and maximum nodes to compute optimal paths for different Manhattan distances after adding the additional heuristic to the A* algorithm.
Table 8: Space Complexity using the A* algorithm after adding the additional heuristic.
Comparing table 4 and table 8, it can be seen that the additional heuristic has decreased the number of paths expanded by the A* algorithm, even though there is no improvement in the maximum number of nodes expanded.
The time and space complexities of the A* and modified A* algorithms discussed above are also shown in the graphs of Figures 3 and 4 respectively.
Figure 3: Time complexity analysis of A* versus the modified A* algorithm
Figure 4: Space complexity analysis of the A* versus the modified A* algorithm
6 Conclusion and Future Work
The experimental data showed that the additional heuristic added to the A* algorithm decreased the time taken to find optimal paths. The space complexity analysis further showed that this improvement was due to the decrease in the number of paths expanded.
A drawback of this algorithm in homeland security applications is that it computes optimal paths rather than safest paths, which may be critical in some applications. The algorithm is therefore better suited to non-critical homeland applications; for critical applications it would need to be modified to compute safe paths that keep the robot as far away from the obstacles as possible.
This research may be extended in several directions:
- A plain terrain was used in the simulations performed in this paper. Using Breve it is possible to model different types of terrain such as hills, valleys and lakes (obstacles). The robots used at the World Trade Center site were unable to withstand the rigors of the rubble. The effect of using these path-finding and navigation techniques on different terrains could be analysed.
- This project deals only with path finding. The algorithms assume that the sensor locations are known. Breve has very efficient collision handling methods. Using this it should be possible to model and implement map building techniques. The robot can be made to explore the environment and mark the safe spots and the danger spots. There is still much research in the area of map building for mobile robots. Simultaneous Localization and Mapping (SLAM) [7] is one of the most used techniques in map building. So another interesting future work would be to first implement map building techniques to build a map of the environment and then use path finding techniques using this information.
7 Acknowledgements
I would like to express my gratitude to
- Dr. Stuart Meikle for contributing his ideas on path planning.
- Mr. Jonathan Klein for rescuing me many times from issues I encountered when programming with Breve.
8 References
[1] Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern Approach (Englewood Cliffs, NJ: Pearson Education, 2003).
[2] Sven Behnke, Local Multiresolution Path Planning. In Proceedings of 7th RoboCup International Symposium, 2004, 332-343.
[3] L. Kavraki, P. Svestka, J.C. Latombe, and M. Overmars, Probabilistic roadmaps for path planning in high-dimensional configuration spaces, IEEE Transactions on Robotics and Automation, 1996, 566-580.
[4] Qun Li, Daniela Rus, Navigation protocols in sensor networks, ACM Transactions on Sensor Networks (TOSN), 2005, 3-35.
[5] J. Klein, Breve: a 3D environment for the simulation of decentralized systems and artificial life, Proceedings of Artificial Life VIII, the 8th International Conference on the Simulation and Synthesis of Living Systems, 2002.
[6] Y. Zou and K. Chakrabarty, Uncertainty-Aware Sensor Deployment Algorithms for Surveillance Applications, 2003, 261-272.
[7] Stefan B. Williams, Hugh Durrant-Whyte, Gamini Dissanayake, Constrained Initialization of the Simultaneous Localization and Mapping Algorithm, International Journal of Robotics Research, 2003, 7-8.