- their ability to find the shortest route between the nest and a food source, which will be used to find and optimize a path in the graph;
- the simplicity of each individual ant, which will make it easy for us to model the ant colony as a Multi-Agent System;
The foraging behavior of ants, which is our main concern here, is very simple to understand. Ants use a signaling communication system based on depositing pheromone along the paths they follow, marking a trail. Pheromone is a chemical substance produced by ants that establishes a form of indirect communication among them. Basically, an isolated ant moves at random, but when it finds a pheromone trail there is a high probability that it will decide to follow the trail.
An ant foraging for food lays down pheromone along its route. When this ant finds a food source, it returns to the nest, reinforcing its trail. Other ants in the vicinity are attracted by this substance and are more likely to start following the trail, thereby laying more pheromone on it. This process works as a positive feedback loop: the higher the pheromone intensity on a trail, the higher the probability that an ant will start traveling along it.
In order to understand how this process leads the colony to optimize a route, let's take a look at the following example:
Suppose some ants were randomly searching for food when they found two different routes between the nest and the source. Since route B is shorter, the ants on this path complete the trip more often and thereby lay more pheromone over it.
As the process continues, the pheromone concentration on trail B increases at a higher rate than on trail A, and soon even the ants on route A will choose to follow trail B.
Since most ants no longer travel along route A, and because pheromone is volatile, the pheromone on trail A starts to evaporate, and soon only the shortest route remains.
The Travelling Salesman Problem
The Travelling Salesman Problem (TSP) is a popular ‘toy problem’ in the AI community since it is at once simple to understand and very difficult (NP-hard) to solve. A salesman needs to complete a tour of a certain number of cities using the most efficient path possible. The salesman can travel from any city to any other city, but must visit each city once and only once. It doesn’t sound like a difficult problem until you consider the sheer number of possible solutions, even for a small fifteen-city problem. Researchers realized that ants’ inherent ability to discover shortest paths could be put to use on the TSP and, it turned out, with some success.
The first TSP solution they developed was called Ant System and works like this: ants first make a number of random tours starting from a random city, depositing pheromone as they go. At each city an ant picks its next destination using a combination of probability and the amount of pheromone present on that route to inform its decision. Every ant must visit every city once, as per the criteria of the TSP. After a tour is completed, an ant deposits a certain amount of pheromone on each edge of the graph, depending on how far the ant travelled during its tour – shorter tours lead to more pheromone being deposited. A certain amount of pheromone will also decay, causing older solutions to fade away and be replaced by new ones.
After this process has run for some time, the pheromone present on the edges of the graph will tend to influence any ant traversing the graph towards an optimal solution. Ant System and subsequent versions of the algorithm such as Ant Colony Optimization do not always find optimal solutions, but are effective in finding good solutions in a reasonable number of iterations.
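The decision and deposit rules described above can be sketched in code. This is a minimal illustration of the standard Ant System formulas (edge attractiveness proportional to pheromone raised to alpha times inverse distance raised to beta, and a deposit inversely proportional to tour length); the function names and the constant Q are illustrative, not taken from any particular implementation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Probability of choosing each candidate edge from the current city:
//   p_j = tau_j^alpha * (1/d_j)^beta / sum over all candidates
// where tau_j is the pheromone on edge j and d_j its length.
std::vector<double> EdgeProbabilities(const std::vector<double>& tau,
                                      const std::vector<double>& dist,
                                      double alpha, double beta)
{
    std::vector<double> p(tau.size());
    double sum = 0.0;
    for (std::size_t j = 0; j < tau.size(); ++j) {
        p[j] = std::pow(tau[j], alpha) * std::pow(1.0 / dist[j], beta);
        sum += p[j];
    }
    for (double& v : p) v /= sum;  // normalize to a probability distribution
    return p;
}

// After a complete tour of length L, each edge the ant used receives
// Q / L pheromone -- shorter tours deposit more.
double PheromoneDeposit(double tourLength, double Q = 1.0)
{
    return Q / tourLength;
}
```

With equal pheromone on both edges, the shorter edge simply gets the larger probability; as pheromone accumulates on good edges, the first factor starts to dominate.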
Modeling the ant colony as a multi-agent system
The main idea of the proposed system is simply to put ants walking on a graph and observe the intensity of pheromone on its edges over time.
Think of the graph's nodes as the different places where ants can stop during a traversal, and let's call them cities. The edges of the graph will, of course, represent the routes connecting the cities. This virtual environment will be populated by agents representing individual ants.
In the AI domain, an agent can be considered an autonomous entity that interacts with an environment. Agents have only a dynamic, partial representation of their environment, which is updated through their sensing abilities. They can also perform actions based on local perceptions and on their internal representation of the environment. These actions can affect the environment, the agent itself, or even other agents.
Implementation
An easy way to develop a multi-agent simulation environment is using a turn-based system. Think of it as a game where at each turn all players make one single move. This is exactly how the simulation will work. Each agent will perform just one of the actions described above at each turn.
All the elements of the environment were modeled using C++ classes. The Ant class represents the agents, the Route represents the edges of the graph, the City represents the nodes and the Civilization represents the environment.
Since these elements are simple, the implementation is simple as well. Many properties and methods of the classes are specific to the interface or merely auxiliary. To keep things simple, only the essential aspects are covered here.
The City is the simplest entity. It basically knows its (X, Y) position in the environment. This will be necessary to calculate the distance between two cities.
The Route needs two pointers to City objects in order to identify the cities it is connecting. Another property is the length of the route. The longer the route, the more turns an agent will need to cross it. The pheromone intensity over a route is also represented in this class.
Another important aspect that must be simulated is the volatile nature of the pheromone, so the Route also needs a method to simulate its evaporation.
class Route
{
private:
    float Length;      // Length of the route
    int Pheromone;     // Pheromone intensity over the route
    City *FirstCity;   // Cities connected by this route
    City *SecondCity;
public:
    void EvaporatePheromone(); // Simulate the evaporation of the pheromone
    …
};
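One possible shape for the evaporation step is sketched below. The article stores the intensity as an int, so evaporation is modeled here as removing a fixed percentage each turn, with a floor of one unit so that unused trails eventually vanish completely. The decay rate is an assumption for illustration, not taken from the original code.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch of the logic behind Route::EvaporatePheromone().
// Removes decayPercent of the current intensity each turn; at least one
// unit is lost while any pheromone remains, and the level never goes
// below zero.
int EvaporateStep(int pheromone, int decayPercent = 10)
{
    if (pheromone <= 0) return 0;
    int loss = std::max(1, (pheromone * decayPercent) / 100);
    return std::max(0, pheromone - loss);
}
```

Calling this once per turn on every route gives the colony the "forgetting" behavior described earlier: trails that are no longer reinforced fade away.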
The most important characteristic of an ant in this context is its individual and unpredictable tendency to choose one route among the many available. Each instance of the Ant class must represent an individual agent with its own characteristics. This can be implemented using a mathematical function. As described above, the pheromone level over a route is measured as an integer. The agent will use a method that evaluates its tendency to choose a route based on the pheromone intensity. Good variability in the agents' behavior can be expressed as a sinusoidal function with at least three coefficients: alpha, beta and gamma.
The input PL is the pheromone level over a route. Alpha, beta and gamma will be properties of the Ant class, initialized as random float numbers within an interval. These properties make it possible to have different individuals in the population. They are also needed by the genetic algorithms that will be covered in the optimization section.
class Ant
{
private:
    float alpha;   // Indicates the Ant's pheromone sensibility
    float beta;
    float gamma;
    bool HaveFood; // Indicates if the Ant is carrying food
public:
    float GetTendency(int PheroLevel); // Tendency of choosing a route
    void PickFood();     // Pick food when in the food source
    void LeaveFood();    // Leave food when in the nest
    void PutPheromone(); // Increase pheromone level of route
    void Walk();         // Walk one more step
    …
};
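The article presents the tendency formula as a figure, which is not reproduced in the text, so the exact expression is unknown. The sketch below is only one plausible shape consistent with the description: a sinusoidal function of the pheromone level PL parameterized by the three per-ant coefficients alpha, beta and gamma.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of Ant::GetTendency(). The actual formula in the
// article is given as a figure; this sinusoidal form is an illustration
// only. alpha scales the response, beta stretches it over the pheromone
// axis, and gamma shifts the baseline tendency.
float GetTendency(int PL, float alpha, float beta, float gamma)
{
    return alpha * std::sin(beta * static_cast<float>(PL)) + gamma;
}
```

Because each ant draws its own alpha, beta and gamma at creation, two ants looking at the same pheromone level can reach different decisions, which is exactly the behavioral variability the article calls for.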
The Civilization is the class that controls the whole environment and the simulation process. It will also be responsible for the evolution process. A graph can be created by the user through the interface of the proposed simulation program, and two of its nodes must be specified as the nest and the food source. When the simulation begins, a random number of agents is created in the nest. At each turn the agents perform actions depending on their current position, as explained before. As the simulation runs, the most used routes will have their pheromone level increased, and after some time a solution will emerge from the collective behavior of these virtual ants.
class Civilization
{
private:
    City *FoodSourceCity; // The Civilization's food source city
    City *Nest;           // The Civilization's nest
    TList *Routes;        // All routes in the environment
    TList *Cities;        // All cities in the environment
    TList *Ants;          // All ants in the environment
    int NaturalSelection; // Turns remaining before the next natural selection
public:
    void NextTurn(); // Perform one turn of the simulation
    …
};
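The turn-based loop at the heart of the simulation can be sketched as follows. Ant and Route here are minimal stand-ins for the article's classes, kept just rich enough to show the shape of NextTurn(): every agent performs exactly one action, then every route's pheromone decays.

```cpp
#include <cassert>
#include <vector>

// Minimal stand-ins for the article's Ant and Route classes, enough to
// demonstrate the turn loop. The member names Walk() and
// EvaporatePheromone() follow the declarations shown earlier.
struct Ant   { int steps = 0; void Walk() { ++steps; } };
struct Route {
    int pheromone = 100;
    void EvaporatePheromone() { if (pheromone > 0) --pheromone; }
};

struct Civilization {
    std::vector<Ant>   ants;
    std::vector<Route> routes;
    void NextTurn() {
        for (Ant& a : ants)     a.Walk();               // one action per agent per turn
        for (Route& r : routes) r.EvaporatePheromone(); // global pheromone decay
    }
};
```

Running NextTurn() in a loop is all the "game" needs; everything interesting emerges from what the individual agents do inside their single move.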
Although this system can provide good results, a random number of agents with random characteristics may not always solve a given problem. Also, a population of agents that is able to find the shortest path in one graph may not be able to find a solution in a completely different environment. For these reasons, the ant farm can be optimized using genetic algorithms.
Optimizing the Ant colonies: an overview
As already mentioned, genetic algorithms are used to reach a satisfactorily optimized ant colony. They give answers to questions like:
- How to find out the best agents for a given problem?
- How many agents are necessary to perform a search in graphs with different complexities?
- How to balance agents with different characteristics to compose a good population?
Genetic algorithms are based on Darwin's theory of evolution, driven by the survival of the fittest. An entire population of individuals is created; each is evaluated at a given task and assigned a fitness value based on its performance. The genetic algorithm then creates new individuals based on the current population: it generally selects fitter individuals and breeds them together. This can sometimes produce a big improvement in the solution, but generally yields just a small adjustment. Performance increases greatly as generation after generation of individuals is created.
The foundations of genetic algorithms, such as selection, crossover and mutation, can be better understood by looking at the practical example that has been discussed.
Selection
Competition: Successful individuals in this environment are those that can either collect a large amount of food (workers) or find alternative routes to the food source (explorers). By collecting food, an agent increases the pheromone level over a good route and thereby influences other agents. By exploring the environment, an agent can find different and maybe shorter routes to the food source. Natural selection will allow these individuals to produce offspring and thereby propagate their characteristics. As in the natural process, evolution occurs because the new generation of individuals will sometimes be better than the parents.
Extinction: Agents that get lost within the environment cannot help at all. For example, an agent that keeps traveling back and forth between two nodes along a single edge is useless to the colony, as is one that keeps walking in circles. These individuals will be eliminated from the population.
Offspring
Crossover: New offspring are created based on the characteristics of the successful individuals. Since there are two types of individuals that are necessary to the colony, two new individuals will be created at each evolutionary cycle. One of them will be a descendant of the two most successful workers and the other a descendant of the two best explorers. Genes representing the characteristics of the parents will be combined to compose a new chromosome, from which a new individual is created. This combination is inspired by the biological process known as crossing over. Each characteristic of the new individual will come from one of the parents at random. The following figure shows two possible examples of descendants created with crossover combinations:
Mutation: After the crossover there is a low probability that a mutation occurs. This will change one of the characteristics of the individual at random. Mutation can maintain diversity within the population.
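The crossover and mutation steps over an ant's three coefficients can be sketched as below. Each gene of the child is copied from one parent at random; afterwards, a low-probability mutation replaces a gene with a fresh random value. The mutation rate and gene range are assumptions for illustration.

```cpp
#include <cassert>
#include <random>

// An ant's chromosome: the three coefficients from the Ant class.
struct Genome { float alpha, beta, gamma; };

// Uniform crossover: each gene comes from one of the two parents at random.
Genome Crossover(const Genome& a, const Genome& b, std::mt19937& rng)
{
    std::bernoulli_distribution coin(0.5);
    return { coin(rng) ? a.alpha : b.alpha,
             coin(rng) ? a.beta  : b.beta,
             coin(rng) ? a.gamma : b.gamma };
}

// With probability `rate` per gene, replace it with a new random value
// drawn from an assumed [0, 1) interval.
void Mutate(Genome& g, std::mt19937& rng, double rate = 0.05)
{
    std::bernoulli_distribution hit(rate);
    std::uniform_real_distribution<float> gene(0.0f, 1.0f);
    if (hit(rng)) g.alpha = gene(rng);
    if (hit(rng)) g.beta  = gene(rng);
    if (hit(rng)) g.gamma = gene(rng);
}
```

Applying Crossover to the two best workers, and again to the two best explorers, produces the two new individuals of each evolutionary cycle described above.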
Migration
Migration will introduce a completely random new individual into the population. The effect is similar to that of mutation, because it increases the diversity within the environment.
Ants, Phones and Pheromones
The foraging metaphor has been put to work successfully in solving the problem of telecommunications routing. Ruud Schoonderwoerd of Hewlett-Packard labs collaborated with a group of scientists to create the world's first ant-based routing system. As anyone who has used the Internet will know, communications networks are characteristically unpredictable. Sudden interest in a particular web site or a local crisis will lead to surges of network activity which must somehow be routed efficiently, minimizing both delays and congestion. The network must therefore dynamically route calls/requests through quieter sections of the network.
Congestion in a particular section of the network can be seen as analogous to depletion of a food source near an ant colony, causing the ants to search for new routes, dynamically updating the virtual pheromone trail between nodes. In the system developed by Schoonderwoerd et al., antlike agents are sent randomly between nodes from time to time, updating each node's routing table as they go with information regarding how long the journey from their origin took, and which nodes they have used on the way. The routing table contains a list of the node's immediate neighbors, and probabilities associated with using that neighbor as the next step on the journey to each target node on the network. The fastest ants will have a positive effect on the probability scores of the nodes they have used, while slow ants will have a negative effect.
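The probabilistic routing-table update described above can be sketched as follows. This is a simplified illustration, not Schoonderwoerd et al.'s exact update rule: the entry for the neighbor a fast ant arrived through is boosted, and renormalization then shrinks every other entry, so reinforcing one route implicitly penalizes the rest.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// probs holds one probability per immediate neighbor, summing to 1:
// the chance of using that neighbor as the next hop toward some target.
// delta is the reinforcement earned by an ant (larger for faster ants);
// its value here is an assumption for illustration.
void Reinforce(std::vector<double>& probs, std::size_t neighbor, double delta)
{
    probs[neighbor] += delta;          // reward the neighbor the ant used
    double sum = 0.0;
    for (double p : probs) sum += p;
    for (double& p : probs) p /= sum;  // renormalize: all other entries shrink
}
```

A slow ant can simply contribute a smaller delta than a fast one, so over many ants the table drifts toward the currently fastest paths.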
A more recent algorithm for Internet routing designed by researchers (based on antlike agents) has, in simulation, outperformed all other routing methods, including the current standard routing protocol of the Internet, Open Shortest Path First.
Swarm Robotics
Although the multi-agent approach to robotics is not a new idea, social insect studies have demonstrated how to make such an approach worthwhile. The advantages to roboticists are many:
- Rather than develop big, expensive and complicated robots, many cheap and simple ones can be used to achieve the same goal.
- Many robot environments are precarious. If one robot is incapacitated in the traditional approach, you've lost your robot and can't go on. One individual in a swarm, however, is relatively expendable.
- If your robot is faced with too big a task (e.g. an object too heavy to lift), ordinarily it would be back to the drawing board, but where a swarm is involved solutions are often inherently scalable.
Conclusion
In an age of booming complexity, where governments, economies and societies alike have become impossible for any individual to comprehend, where software engineers wrestle with intractable computer systems and, as we've seen, computer systems wrestle with intractable problems, the study of social insects can do more than just pave the way to more powerful algorithms and robotics.
The promises and successes of collective intelligence and the swarm paradigm are a powerful demonstration of complex and useful systems arising from easily comprehended rules and habits. The units of computation are as dumb as ever, but the overall result we unhesitatingly term 'intelligence'.