Specifics:
All of the files necessary (source code and documentation) are located on maximal.robotics.cs.rpi.edu as an archive at ~beevek/ira_final.tar.gz. To begin, simply unpack the archive to any directory on maximal. After this:
- $ cd <unpack_dir>/src
- $ make
This creates the following executables in <unpack_dir>/bin:
- getdata    The data retrieval program
- makeimg    The sensor data image generator
- map_edit   The polygonal map editor
- prmgen     The PRM generator
- navigate   The navigation program
Getdata:
The 'getdata' program is very simple to use; run it like so:
$ ./getdata -robot MagellanPro > <outfile>
Terminate the program by pressing ^C after collecting data.
Makeimg:
The 'makeimg' program requires X11; it can be run as follows:
$ ./makeimg <datafile>
Once the program has started, the following commands are available:
- 'q' Exit
- 'd' Dump the image to 'map.ppm'
- 'n' Process a single reading
- 's' Process the rest of the current scan
- 't' Process the next 10 scans
- 'a' Process all remaining readings
- 'e' Display 'empty' map
- 'f' Display 'full' map
- 'c' Display 'combined' map
For the quickest and best results, simply start the program and press 'a', 'c', 'd', then 'q'.
Map_edit: See README.MAPEDIT
Prmgen: See README.PRMGEN
Navigate: See README.NAVIGATE
Detailed Description:
Our program draws on the basic elements of all three projects completed over the course of the semester, and extends each of them to provide an easy-to-use interface for interacting with the Magellan Pro robot in a controlled environment. In general, the program can be used to move the robot from a known location to another chosen location using a point-and-click interface. Before this can be done, however, some preparation must take place.
First, the environment that the robot is to operate in must be mapped out. The best operating environment for the robot is a flat, level surface with smooth objects at least as high as the robot (about 1 foot) and thicker than about 4 inches. This ensures that all objects will be detected by the sonar sensors. For added accuracy, the objects should also reflect a significant amount of IR light. After the getdata program is started, the robot should traverse its environment so as to build a good map.
This map is essential for the next phase of operation, in which the robot operator must use a visual representation of the sonar map to construct the world and object boundaries. This polygonal representation of the world has two purposes. It serves as a visual representation to the user, and as a formal world description to the robot. The latter is important in the Kalman filtering stage. Hence, it is important to make the representation match the real world as closely as possible.
Next, a PRM is generated in the world map and displayed visually to the user. Since this is a probabilistic, randomly driven process, there will be instances where the generated roadmap does not adequately suit the environment that the robot must traverse. Different coverage measures can be used to approximate the usefulness of a specific PRM instance, but a quick look by a human is usually more effective. The nodes generated in this step will be used to guide the robot in the navigate program. This is an alternative to the pathfinding method explored in assignment one, in which approximate cell decomposition was used. The PRM method has several advantages for mobile robotics. First, this technique allows a roadmap to be precomputed and accessed later. Second, when computing paths, the PRM requires less memory than a quadtree approach. Local path planning can also be optimized, while quadtree cell decomposition is limited to rectilinear paths.
With the aforementioned preparation complete, the robot is ready to be driven with the navigate program. This program provides a simple interface for the user to place, and subsequently control, the robot using the generated PRM. While the robot is guided through the environment, it automatically uses Kalman filtering to keep itself on track. Successfully performing location and rotation corrections can be vital when the robot must traverse tight spaces or operate for long sessions.
Algorithms:
- Sonar Mapping
Using Dempster-Shafer theory, generate 'empty', 'full', and 'combined' sonar maps of an area given a set of sonar readings from that area, and output the sonar maps as images
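The per-cell evidence combination can be sketched as follows. This is a generic form of Dempster's rule over the frame {empty, full}, not the project's actual code, and the mass values in the usage example are illustrative only:

```python
def combine(m1, m2):
    """Dempster's rule of combination over the frame {empty, full}.
    Each mass function is a dict with keys 'E' (empty), 'F' (full),
    and 'U' (unknown, i.e. mass on the whole frame).
    Assumes the two readings are not totally conflicting (norm > 0)."""
    k = m1['E'] * m2['F'] + m1['F'] * m2['E']  # conflicting mass
    norm = 1.0 - k
    return {
        'E': (m1['E'] * m2['E'] + m1['E'] * m2['U'] + m1['U'] * m2['E']) / norm,
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / norm,
        'U': (m1['U'] * m2['U']) / norm,
    }

# A cell starts fully unknown; each sonar reading shifts mass toward
# 'empty' or 'full' depending on where the cell lies in the sonar cone.
cell = {'E': 0.0, 'F': 0.0, 'U': 1.0}
reading = {'E': 0.6, 'F': 0.0, 'U': 0.4}   # "probably empty" (made-up values)
cell = combine(cell, reading)
```

Repeated readings accumulate: combining two such "probably empty" readings leaves the cell more confidently empty than either reading alone.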
- Configuration-space Generation
Using a polygonal map (constructed by hand based on the image created during the sonar mapping process), generate the configuration space of the map by finding the convex hull of each world boundary segment and world obstacle
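A standard way to compute such convex hulls is Andrew's monotone chain algorithm; the sketch below is a generic implementation, not the project's code:

```python
def cross(o, a, b):
    """Z-component of the cross product (a - o) x (b - o);
    positive when the turn o -> a -> b is counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the convex hull of a set of 2-D points in
    counter-clockwise order (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:                       # build the lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):             # build the upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop duplicates
```

Interior vertices of an obstacle simply drop out of the hull, e.g. the center point of a square never appears in its hull.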
- Probabilistic Roadmap Generation
Until the desired roadmap complexity (number of nodes) is reached, add new nodes to the roadmap as follows:
- Choose a random point in empty space in c-space
- Add a node to the roadmap at this point
- Find the <n> nearest nodes to the new node
- Attempt to directly connect each of these nodes to the new node (via straight line)
Then, enhance difficult nodes in the roadmap (denoted by the number of failed direct-connection attempts to nearby nodes) by adding new nodes within the immediate area of the difficult nodes. Finally, remove connected components in the roadmap that have fewer than <p> percent of the nodes in the whole roadmap
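The basic construction loop might look like the following sketch, where `sample_free` and `segment_free` are hypothetical callbacks standing in for the project's c-space sampling and collision tests:

```python
import math

def prm(num_nodes, k, sample_free, segment_free):
    """Build a simple probabilistic roadmap.
    sample_free() -> a random collision-free (x, y) point in c-space;
    segment_free(a, b) -> True if the straight line a-b avoids obstacles.
    Both callbacks are assumed to be supplied by the map representation."""
    nodes = []
    edges = {}
    while len(nodes) < num_nodes:
        p = sample_free()
        edges[p] = set()
        # try to connect the new node to its k nearest existing nodes
        near = sorted(nodes, key=lambda q: math.dist(p, q))[:k]
        for q in near:
            if segment_free(p, q):
                edges[p].add(q)
                edges[q].add(p)
        nodes.append(p)
    return nodes, edges
```

The enhancement and pruning steps described above would operate on the returned `edges` adjacency structure: nodes with many failed `segment_free` calls get extra samples nearby, and small connected components are discarded.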
- Path Finding
Add nodes to the PRM representing the start and end positions of the path. Then, search the PRM graph using A* to find a path between the start and end nodes; if none exists, indicate failure
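A generic A* over the roadmap graph could be sketched like this; the `neighbors` and `h` callbacks are assumptions standing in for the PRM's adjacency structure and a straight-line-distance heuristic:

```python
import heapq
import itertools

def astar(start, goal, neighbors, h):
    """Generic A* search. neighbors(n) yields (next_node, edge_cost) pairs;
    h(n) is an admissible heuristic, e.g. straight-line distance to goal.
    Returns the node path, or None if no path exists."""
    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    best_g = {start: 0.0}
    while open_heap:
        _, _, cost, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []                 # walk parents back to the start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt, w in neighbors(node):
            g = cost + w
            if g < best_g.get(nxt, float('inf')):
                best_g[nxt] = g
                heapq.heappush(open_heap, (g + h(nxt), next(tie), g, nxt, node))
    return None  # no path: indicate failure
```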
- Path Smoothing
Starting with the start and end points as i and j:
- Intersect the line between i and j with the c-space
- If no intersection occurs, delete the nodes of the path between i and j and return
- Set k to the midpoint of the path between i and j, i.e. k = i + (j - i) / 2
- Recursively smooth the path between i and k and the path between k and j
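Interpreting i, j, and k as indices into the node path, the recursion above can be sketched as follows, with `segment_free` a hypothetical c-space intersection test:

```python
def smooth(path, segment_free):
    """Shortcut a node path: if the straight line between the endpoints is
    collision-free, drop every node in between; otherwise split at the
    midpoint and recurse on each half. segment_free(a, b) is assumed to
    test the line a-b against the configuration space."""
    if len(path) <= 2:
        return path
    if segment_free(path[0], path[-1]):
        return [path[0], path[-1]]      # delete the interior nodes
    mid = len(path) // 2
    left = smooth(path[:mid + 1], segment_free)
    right = smooth(path[mid:], segment_free)
    return left[:-1] + right            # the midpoint node is shared
```

If every segment is blocked, the recursion bottoms out at two-node paths and the original path is returned unchanged.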
- Robot Movement
To follow a path, at each node:
- Rotate toward the next node on the path
- Move to the next node on the path (in a straight line)
Keep track of the robot's odometry readings to update the estimated position of the robot. Also keep values representing movement with zero control error. When the error-free values are some predefined amount different from the odometry-based values, attempt to correct the error by moving from the current estimated position to the desired (perfect) position.
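The position bookkeeping amounts to dead reckoning; a minimal pose-update sketch (not the project's actual code) is:

```python
import math

def advance(pose, distance, turn):
    """Dead-reckoning pose update: rotate by `turn` radians, then drive
    `distance` in the new heading. pose is (x, y, theta)."""
    x, y, theta = pose
    theta += turn
    return (x + distance * math.cos(theta),
            y + distance * math.sin(theta),
            theta)
```

Applying this once to the odometry readings and once to the commanded (error-free) motion yields the two pose estimates whose divergence triggers the correction move described above.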
- Kalman Filtering
After each motion (straight-line or angular), use the robot's infrared sensors to sense the surrounding area. If these measurements are within some allowable range (e.g. less than 1.0m), use them in conjunction with the predicted measurement in that direction (found by tracing a ray from the robot's location across the internal polygonal map) to run a Kalman filter operation. Use the output from the Kalman filter to update the robot's estimated location.
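For a single range reading, this fusion step reduces to a scalar Kalman measurement update. The sketch below is generic, with assumed variance parameters `p_pred` (prediction) and `r` (IR measurement noise):

```python
def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update: fuse the predicted range x_pred
    (variance p_pred, from ray-tracing the polygonal map) with an IR
    measurement z (variance r). Returns the corrected estimate and its
    reduced variance."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # pull the prediction toward z
    p = (1.0 - k) * p_pred             # uncertainty shrinks after fusion
    return x, p
```

With equal variances the gain is 0.5, so the corrected range lands halfway between prediction and measurement; the offset between corrected and predicted range is what shifts the robot's estimated location.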
Evaluation:
Much of the content learned throughout the course was put into practice during the project. Doing so on a real robot provided an easy-to-understand feedback system for our methods, and it also made us aware of how using these methods in a real-world environment, as opposed to a simulated one, can give very unexpected results.
A major issue that arose involved the inaccuracy of the robot’s odometry and sensors; this inaccuracy was higher than expected, and in retrospect it might have been better to spend more time on Kalman filtering rather than considering it a “nice extra feature.”
Notes:
Further information is available in the <unpack_dir>/docs directory, in the following files:
- README.MAPEDIT     Detailed usage information for map_edit
- README.PRMGEN      Detailed usage information for prmgen
- README.NAVIGATE    Detailed usage information for navigate
- README.ERRATA      Description of a few known bugs/problems/issues
- PROJECT_PROPOSAL   The original project proposal
- FILE_FORMATS       Some (largely useless) information about the map_edit and prmgen file formats