ICFP 2004


Another year and another missed competition. This time by a week.

See icfp contest for more info.

I decided to write a program to the contest spec within the given time span, as an exercise to see how far I would get.


The task is to write a control program for an ant colony. The system is a simple state machine and the ants understand a set of simple instructions. The result of each instruction determines the next instruction to execute for that ant.

The goal is to get food and bring it back to the ant nest. There are two colonies, and the task is to write a better control system than the opposing colony's.
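The state-machine flavour of the task can be sketched in Python (a toy illustration: the instruction names echo the contest's, but this subset and syntax are my own, not the spec's). Every instruction names the state(s) to execute next, so the whole brain is a flat transition table:

```python
import random

# A toy four-state ant brain. Each entry is (opcode, args...); the args
# include the next state(s), so the brain is just a transition table.
brain = {
    0: ("Flip", 2, 1, 2),     # coin toss: 1-in-2 -> state 1, else -> state 2
    1: ("Turn", "Left", 0),   # turn left, then back to state 0
    2: ("Move", 0, 3),        # try to move: success -> state 0, blocked -> state 3
    3: ("Turn", "Right", 0),  # turn right, then back to state 0
}

def step(state, blocked=False, rng=random):
    """Execute one instruction for one ant; return its next state."""
    op = brain[state]
    if op[0] == "Flip":
        n, st_zero, st_other = op[1], op[2], op[3]
        return st_zero if rng.randrange(n) == 0 else st_other
    if op[0] == "Turn":
        return op[2]
    if op[0] == "Move":
        st_ok, st_fail = op[1], op[2]
        return st_fail if blocked else st_ok
```

The result of each instruction (the coin toss, whether the move succeeded) picks which state runs next, which is all the "programming" the ants get.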


This programming challenge does not stress the implementation language as much as others.
It is in the same vein as Battling Robot challenges. We have a virtual machine that controls the actions of the ants, and we have to write the brain of the ants in a fixed language. The only distinguishing feature of the implementation will be how slowly it simulates the execution of the ant brain.


As I had missed the submission date, I did not follow the implementation plan exactly. The random number generator for the Flip instruction does generate the same numbers as the sample of 100 given in the spec. We use a different numbering convention for the cells, so in theory an ant moves in the same direction as in the spec but will use different cell numbers. During simulation the Move rest period can be set to zero for faster simulation times.
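From my recollection of the spec, the Flip generator is a simple linear congruential sequence seeded with 12345. A sketch follows; the multiplier, the shift, and the absence of any warm-up offset are assumptions that must be checked against the spec's published sample of 100 values:

```python
# Sketch of the contest's Flip random number generator, from memory of the
# spec: x0 = 12345, x_{n+1} = x_n * 22695477 + 1 (mod 2^32), with each
# drawn value being (x_n div 65536) mod 16384. Constants are assumptions
# to verify against the spec's sample list of 100 values.
def rng_values(count, seed=12345):
    x = seed
    out = []
    for _ in range(count):
        x = (x * 22695477 + 1) % 2**32
        out.append((x // 65536) % 16384)
    return out
```

A Flip n instruction then reduces the next drawn value mod n; matching the sample sequence exactly is what lets a home-grown simulator reproduce the official runs.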


As usual, my language of choice is TCL. The language is unsuited to this type of task, which adds considerably to the challenge.

As every ant executes the same control loop, we must rely on random choices to separate the ants into the groups we want.
I have decided that there should be two groups of ants:

  1. Defenders
    That stay in the nest and guard the food.

  2. Foragers
    That go out to search for food and bring it back to the nest.

We are limited to 10000 instructions in the whole control program, so each type must be implemented as simply as possible. Given the random nature of the choices, we must accept that each ant may not operate optimally.
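Since every ant starts in the same state, the role split has to come from a Flip at the top of the program. A hedged Python sketch of the idea (the 1-in-4 ratio and colony size are illustrative choices, not from the spec):

```python
import random

# Every ant runs the same first instruction, so a Flip with n=4 sends
# roughly a quarter of the colony to the Defender states and the rest to
# the Forager states. The ratio is an illustrative choice, not the spec's.
def assign_role(rng=random):
    return "defender" if rng.randrange(4) == 0 else "forager"

roles = [assign_role() for _ in range(100)]  # illustrative colony size
```

In the real brain this is just the first Flip instruction branching to two different blocks of states; the split is only approximate, which is part of accepting sub-optimal behaviour per ant.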

Day 2: Another type of ant could be a nest blocker, which sticks to the foe's nest and surrounds it. This prevents any food from entering the nest. Once sufficient ants have stuck to the outside they can't be killed.


While the main simulator executes the virtual instruction set for the ants, we will write the control code at a higher level.
This will allow the use of named states. It will also allow us to define the states in a nested structure.

The control system will convert the input format into the linear list of instructions as required by any contest submission.

I will rely on TCL's use of nested lists to create the source structure.
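The assembler pass described above can be sketched in Python (the real version was TCL, and the names here are hypothetical): first assign each named state an index, then rewrite label operands into state numbers to get the flat instruction list:

```python
def assemble(states):
    """states: ordered (label, opcode, operands) tuples, where any operand
    matching a label is a symbolic jump target."""
    # Pass 1: map each label to its position in the flat program.
    index = {label: i for i, (label, _, _) in enumerate(states)}
    # Pass 2: replace label operands with state numbers.
    program = []
    for _, opcode, operands in states:
        program.append((opcode, *[index.get(a, a) for a in operands]))
    return program

# A tiny named-state source, using made-up labels:
source = [
    ("start",    "Flip", (2, "turnleft", "walk")),
    ("turnleft", "Turn", ("Left", "start")),
    ("walk",     "Move", ("start", "turnleft")),
]
flat = assemble(source)
```

Nesting in the TCL source would simply flatten into this ordered list before pass 1, so labels rather than numbers are the only thing the brain author has to keep straight.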


I plan to implement a system of disparate control states depending on what the ant is trying to do.
I envision that I need to subdivide the ant types into smaller sub-types and give each ant a goal.
First Cut:

  1. Search for Food
  2. Return Home
Sounds simple enough; unfortunately, in the Ant world there are obstacles to overcome, and a goal may have to be suspended to avoid these obstacles and then be resumed.


Thankfully the ants can drop scent markers as they move, which will help us overcome the problem of not having stored data with each ant.
I propose to create the following Markers:

  1. dot dash dot/dash
    These allow us to determine the direction of the trail, so we can either head for the food or head for the nest.

  2. some more plenty
    This allows us to tag the fact that food has been taken along this trail.

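A single repeating marker cannot give direction (the trail reads the same both ways), which is why the trail uses a cycle of three: walking away from the nest the markers go dot, dash, dot/dash, dot, and so on. An ant can then compare the marker under it with a neighbouring trail cell's marker to decide which way the trail runs. A Python sketch of that decoding (the marker numbers 0, 1, 2 standing for dot, dash, dot/dash are my own assignment):

```python
# Trail direction from a three-marker cycle laid down in order while
# walking away from the nest (0 = dot, 1 = dash, 2 = dot/dash, my own
# numbering). Compare the marker here with an adjacent trail cell's.
def trail_direction(here, neighbour):
    if neighbour == (here + 1) % 3:
        return "away-from-nest"
    if neighbour == (here - 1) % 3:
        return "toward-nest"
    return "not-this-trail"  # same marker twice: broken or unrelated trail
```

This is the sort of test the real brain performs with Sense instructions on the Marker conditions, one branch per marker value.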

I have defined some basic rules that the ants should follow:

  1. Ants will drop a trail while searching for food.
  2. An ant may not cross another trail unless the marker is in sync with the other trail.
  3. An ant will abandon its search if it finds a trail marked with food.


If we arrange some ants in the nest so as to surround a cell with five ants, we create an impenetrable fortress. Any intruder ant that enters the circle will be killed and converted into food.
Some ideas for how the ants operate.

  1. Avoid the outer nest cells.
  2. Hunt for food deposited by forager ants, then surround the food parcel, leaving a space for other ants to enter and deposit more food (or die!).


The forager is the more complex of the two types; we basically need the following modes.


We start off from the nest and drop a scent trail, alternating between dot, dash, and dot/dash as we walk. When we come to obstacles we will either simply turn left or implement some more complicated algorithm.

If along the way we bump into a scent trail that is tagged with food we will enter FollowTrail mode.
In this mode we head away from the ant nest. If we bump into any of our own ants along the way we must give way and allow them to get back to the nest.
If the other ant is an enemy then we will sit on the trail and block their path.

Once we have found food at the end of the trail, we will enter ReturnHome mode and backtrack along the trail to the nest; once we have entered the nest we switch to DropFood mode.

DropFood mode makes us head for the centre of the nest and either add food to an existing pile or create a new pile, in the hope that there are spare defenders to swarm around it. There are likely to be many other sub-modes within each major mode, to be created as the code develops.
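The mode changes described above amount to a small transition table. A hedged Python sketch (the mode names mirror the text; the event names are my own shorthand):

```python
# Forager mode transitions as described in the text: Search -> FollowTrail
# on finding a food-tagged trail, FollowTrail -> ReturnHome on picking up
# food, ReturnHome -> DropFood on entering the nest, and DropFood back to
# Search once the food is dropped. Event names are illustrative shorthand.
TRANSITIONS = {
    ("Search",      "found-tagged-trail"): "FollowTrail",
    ("FollowTrail", "picked-up-food"):     "ReturnHome",
    ("ReturnHome",  "entered-nest"):       "DropFood",
    ("DropFood",    "dropped-food"):       "Search",
}

def next_mode(mode, event):
    # Any event not listed leaves the forager in its current mode.
    return TRANSITIONS.get((mode, event), mode)
```

In the actual brain each mode is a block of numbered states and each event is a Sense, PickUp, or Drop outcome branching between blocks, so this table is the high-level map of what those blocks must wire together.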


Day 1

Time approx. 6 hrs.

  1. Draw Hexagon playing world.
  2. Read sample world file and generate ant display.
  3. Random movement of ants.
  4. Animate ant movement and Cell state changes.
  5. 25% of Instructions coded.
  6. 75% of runtime to process state machine.
  7. 75% execution of sample ant brain file.
  8. Basis of this document file.

Day 2

Time approx. 4 hrs.

  1. Finished Simulator.
  2. Sample test brains.
  3. Assembler 100%
  4. Translator Assembly - Instructions
  5. Gui - Load / Save
  6. Gui - run / stop
  7. Brain - Mark dot-dash 100%
  8. Brain - Search for food 75%
  9. Brain - Return home 50%

Day 3

Time approx. 10 hrs.

  1. Ant brain
  2. Optimisation of simulator
  3. Trial Runs.


Due to the time it takes for TCL to run the simulator, there was not much development on the ant brain. The end result implemented only two of the three ant types, and not all of the ant logic for following a trail with food.
The poor speed meant that the longest run was only about ten thousand steps, which was not enough to gauge the suitability of the ant logic. The time spent simulating also reduced the time available for refining the logic.

The use of an assembler to allow the use of labels was a good move, but the lack of any higher-order logic terms (such as conditionals, conjunctions, and disjunctions) meant that code had to be block-copied into multiple states.
It was also easy to make mistakes in jump labels, due to the use of relative branches as a replacement for conditional statements.


  1. Once again, TCL performs poorly when asked to iterate over large loops.
    Several attempts to optimise the code did not produce any reduction in processing time. The time to simulate one step of all ants was approximately one second.
  2. The GUI and visualiser were easy to implement in TK. However, the use of string data for manipulating the visual items does impact performance. When there are large numbers of items on the canvas, significant time is also lost to hash recomputations as well as hash searches for tags.
  3. The ability to dynamically change any ant's state during run time significantly reduced design time for the ant brain but not enough to offset the slow simulation time.
  4. The lack of efficient data structures impacts the way the logic is coded. "Everything is a string" is not a good match for representing either the Cell data or an Ant's state data.


Overall I rate this effort 7 out of 10 for coding, and 4 out of 10 for TCL's suitability to the problem.

Further work

  1. Storing data in a list should be faster to access than an associative array. Need to look at why a list lookup and an expression term take so much processing time.
  2. Look at coding a higher-level language for the ant-brain type of state machine with no data.