Ant Colony

Ant Colony is a project inspired by a program called AntFarm that I read about in the Alife II proceedings; I decided to try to replicate its results. Setting up the program consists of selecting a size for the world, sprinkling the world with food, and letting a colony of ants spend some predetermined amount of time foraging. Foraging consists of an individual ant wandering away from the nest in the middle of the map, finding a piece of food, picking it up, heading back to the nest, and finally dropping the food on the nest. This behavior is complex and requires coordinating several steps in chronological order.
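To make the setup concrete, here is a minimal sketch of how such a world might be initialized. The function name, cell labels, and grid layout are my illustration, not the actual program's internals:

```python
import random

def make_world(size, food_count, seed=None):
    """Build a size x size grid with the nest in the center and food
    sprinkled on random empty cells. Illustrative only."""
    rng = random.Random(seed)
    world = [["empty"] * size for _ in range(size)]
    nest = (size // 2, size // 2)
    world[nest[1]][nest[0]] = "nest"
    placed = 0
    while placed < food_count:
        x, y = rng.randrange(size), rng.randrange(size)
        if world[y][x] == "empty":
            world[y][x] = "food"
            placed += 1
    return world, nest

world, nest = make_world(32, 50, seed=1)
```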

The next image shows a time-lapse of a well-evolved colony

  • Black: nest (always in center although not always drawn, oops)
  • Green: food
  • Red: ant
  • Blue: ant holding food
  • Yellow: pheromone

All ants in a given colony have an identical genome which is stored as a bitstring. This genome hardcodes a neural network that determines how the ants will behave, in the form of excitatory and inhibitory synapses connecting input (sensory) neurons and output (motor) neurons.
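One simple way to picture the bitstring-to-network mapping: carve the genome into fixed-width chunks, one per synapse, with a sign bit (excitatory vs. inhibitory) and a magnitude. This encoding scheme is my guess for illustration, not the one the program actually used:

```python
def decode_genome(bits, n_neurons, bits_per_synapse=4):
    """Decode a bitstring into an n x n synaptic weight matrix.
    Illustrative scheme: each chunk is one sign bit plus a magnitude;
    positive weights are excitatory, negative are inhibitory."""
    weights = [[0.0] * n_neurons for _ in range(n_neurons)]
    idx = 0
    for i in range(n_neurons):
        for j in range(n_neurons):
            chunk = bits[idx:idx + bits_per_synapse]
            idx += bits_per_synapse
            sign = -1.0 if chunk[0] == "1" else 1.0
            magnitude = int(chunk[1:], 2) / (2 ** (bits_per_synapse - 1) - 1)
            weights[i][j] = sign * magnitude
    return weights
```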

A simulation consists of running populations of colonies, originally spawned from random bitstrings, one generation at a time, with each colony running for the same period of time. Each colony runs alone, in a world without other colonies. So while the colonies compete against each other for the right to reproduce into the next generation, they never share the map and thus never interact or compete directly within a given run. Those colonies with the highest fitness are bred with one another (through bitstring crossover) and mutated to create the colonies for the next generation.
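The breeding step can be sketched roughly as follows. The selection scheme here (top-half truncation) and the mutation rate are assumptions on my part; only the single-point crossover and per-bit mutation match what the text describes:

```python
import random

def crossover(a, b, rng):
    """Single-point crossover of two parent bitstrings."""
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return "".join("1" if b == "0" else "0" if rng.random() < rate else b
                   for b in bits)

def next_generation(scored, rng, rate=0.01):
    """Breed a new population from (genome, fitness) pairs.
    Top-half truncation selection is an illustrative guess."""
    scored = sorted(scored, key=lambda gf: gf[1], reverse=True)
    parents = [g for g, _ in scored[: max(2, len(scored) // 2)]]
    return [mutate(crossover(rng.choice(parents), rng.choice(parents), rng),
                   rate, rng)
            for _ in scored]
```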

The fitness is a conglomeration of several factors: essentially a score of points accumulated during a run. Since the desired behavior consists of chronological steps, I was forced to create a fitness function that would reward the step-by-step evolution of this process even though a partial success really amounts to a total failure. So, for example, points are accumulated for the total area of the map that is covered by the colony. This was used to initially motivate the ants to move away from their starting positions on the nest and to explore the environment. Points are also accumulated for successfully picking up an item of food. Subsequently, points are accumulated for walking back toward the nest while carrying food, and finally, points are accumulated for dropping food directly on the nest.
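Put as code, the incremental fitness function looks something like this. The reward terms mirror the chronological steps described above, but the specific weights are invented for illustration; the original values are unknown:

```python
def score_run(stats):
    """Accumulate incremental fitness from a run's statistics.
    Weights are illustrative guesses, ordered to match the
    chronological steps of foraging."""
    score = 0.0
    score += 1.0 * stats["cells_visited"]      # exploration: area covered
    score += 50.0 * stats["food_pickups"]      # picking up an item of food
    score += 5.0 * stats["steps_toward_nest"]  # homing while carrying food
    score += 500.0 * stats["food_delivered"]   # dropping food on the nest
    return score
```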

The next image shows an evolved neural network

  • Neurons are drawn as dots around the perimeter and synapses are the lines connecting the dots.
  • Blue neurons receive input only from the labeled senses, not synapses.
  • Yellow neurons send output only to motors, not synapses.
  • Black neurons are available for hidden layers in the neural network.
  • Green synapses are excitatory with a strength proportional to the greenness.
  • Red synapses are inhibitory with a strength proportional to the redness.
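Those neuron roles suggest a network update along these lines: input neurons are clamped to sensor values and receive no synaptic input, output neurons have no outgoing synapses to other neurons, and everything else is hidden. The layout (inputs first, outputs last) and the clamped activation function are my assumptions:

```python
def step_network(weights, activations, sensor_values, n_inputs, n_outputs):
    """One synchronous network update (illustrative). Assumes neurons are
    ordered inputs first, outputs last. Positive weights excite, negative
    weights inhibit."""
    n = len(weights)
    new = [0.0] * n
    for j in range(n):
        if j < n_inputs:
            new[j] = sensor_values[j]  # inputs come only from the senses
            continue
        # Output neurons feed only motors, so they are excluded as sources.
        total = sum(weights[i][j] * activations[i]
                    for i in range(n - n_outputs))
        new[j] = max(0.0, min(1.0, total))  # clamp activation to [0, 1]
    return new
```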

I consider this project to be a partial success. Given a very long period of evolution (thousands of generations) I can evolve fairly impressive foraging behavior. However, the foraging is not perfect. Even after 7000 generations, the colonies still have biases in their behavior. They might only forage on the north side of the nest, for example, leaving half the map mostly untouched. I also allowed for the behaviors of dropping pheromone trails and of sensing (and hopefully following) those trails back to food sources. This level of coordination only half evolved. There is a clear evolution of the ability to drop pheromones when holding food items, but the ability to follow the trail back to the food source never evolved. One might wonder how the trail dropping evolved if it didn't serve any purpose. Well, it was another attempt at incremental fitness. In other words, I was directly rewarding the dropping of pheromone when holding food, so this behavior evolved for obvious reasons. The ability to follow the trail had to evolve on its own, and this behavior just couldn't get going.

I am most disappointed by the incremental fitness. The handholding in the evolutionary process makes this program very dull in my opinion. I was unsure at the time how else to approach the problem however. Oh well. If I ever tried again, I would have a very good base of knowledge and experience to work off of and could probably do a much more interesting job with more successful results.

I don't have a downloadable version of Ant Colony because, while I consider the project to be finished, the program is not. It just lacks the polished interface and thorough documentation that a good program should have and I can't release it with my name on it in its present form. Sorry.