This page is both an experiment with and a demonstration of genetic algorithms, using an easy-to-understand online game as the setting. In a moment of unusual boredom I discovered the online Flash game Light-Bot (which at the time of this writing can be found easily on various websites by googling), and after playing for a few minutes I realized it could serve as an interesting (and very easy to understand) application of genetic algorithms.

I ended up implementing the genetic algorithm twice: once as a fast, low level Erlang program that runs distributed over a cluster of computers, and again as a visual, interactive JavaScript program that runs in the browser. Correspondingly, this site has two main parts: a gallery of results from the faster background program, and an interactive applet where you can literally watch the genetic algorithm progress (though since it's running in the browser, it's comparatively slower).

For those who don't know what genetic algorithms are, I recommend playing Light-Bot for several minutes to become familiar with how the game works (again, googling is probably the easiest way to find it), and then reading the brief introduction to genetic algorithms at the bottom of this page. After that, the interactive applet should help glue everything together, and then the gallery of results might be interesting as well.

I think it's an interesting way to visualize the sorts of "intelligent" things that can arise from random processes, especially in some of the more difficult levels I made, which I doubt I could ever solve on my own.

In all of the harder levels, the tricky part is coming up with some kind of highly reusable set of instructions. For many of these levels there is no obvious answer, and I doubt there is even a mildly subtle one. However, the genetic algorithm is often able to come up with a solution that baffles me, reusing instructions in the most bizarre ways. I enjoy watching these apparently deep, unfathomable solutions spring out of mindless, random iterations.

- Click here to go directly to the interactive applet, loaded with Level 8.
- Or, click here to select a board first.

If you're interested in trying to solve any of the levels yourself (i.e., the levels I've created that aren't part of the original game), that can also be done in the interactive applet.

View the full listing here.

For any board, the algorithm may have been run several times, with different algorithm parameters each time. For any particular run, you can view the final result (i.e., watch the robot execute the commands), as well as every intermediate result that was an improvement over all previous organisms.

If you have an idea for a board that you'd like to try out in the interactive applet, there's a board editor provided.

Click here to create a new board from scratch, or alternatively you can select a board from the boards page and load it into the editor.

A genetic algorithm is a computational method for generating solutions to problems by treating potential solutions somewhat like biological organisms that compete for survival. A random population of organisms is created initially, and from that initial population new organisms are created in two ways: first by mixing parts of two organisms to create a third (i.e., sexual reproduction), and second by random mutation. As the population grows, each organism is evaluated for its quality as a potential solution to the problem, and the worst organisms are destroyed (i.e., natural selection). Using only these techniques, the hope is that eventually a good enough solution will evolve naturally, such that we have solved the problem automatically without having to design a direct method for computing it (which might be considerably more difficult).
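As a rough illustration of that loop (a hypothetical sketch, not the actual Erlang or JavaScript implementation; the organism representation, fitness function, and all parameters are supplied by the caller):

```python
import random

def evolve(random_organism, fitness, crossover, mutate,
           pop_size=100, generations=1000):
    """Minimal genetic-algorithm loop: breed, mutate, cull the worst."""
    population = [random_organism() for _ in range(pop_size)]
    for _ in range(generations):
        # Sexual reproduction: mix parts of two parents into a child.
        parent_a, parent_b = random.sample(population, 2)
        population.append(crossover(parent_a, parent_b))
        # Random mutation (of a copy here, so the original survives).
        population.append(mutate(random.choice(population)))
        # Natural selection: rank everyone, destroy the worst.
        population.sort(key=fitness, reverse=True)
        del population[pop_size:]
    return population[0]  # best organism found
```

Nothing in this loop knows anything about the problem being solved; all of the problem-specific knowledge lives in the four functions passed in.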

In the case of Light-Bot, this means that for some given board, we're trying to find a sequence of 28 commands that will solve the board (i.e., when the robot executes the commands, it lights up all of the marked tiles). So we start by generating a bunch of random sequences of 28 commands. We evaluate these "organisms" by simply simulating the robot executing the commands on the board and counting the number of tiles lit at the end. For recombination (sexual reproduction), we take two randomly selected organisms, divide them up into several segments, and then take some segments from one and some segments from the other to form the offspring. For mutation, we randomly select an organism and randomly change several of its commands. Each time we create a new organism/program, we run the evaluation routine to figure out how it compares to the rest of the organisms. There are two ways for the algorithm to finish: if we eventually produce a program that solves the board, we can stop right there; alternatively, the algorithm can keep running for some specified length of time after the first solution is found, in case it can find a solution that uses fewer commands.
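A sketch of those three pieces, assuming programs are stored as a flat list of 28 command strings (the command names are my guesses at the game's vocabulary, and `simulate` is a placeholder for the game simulation; the recursion restrictions discussed later are ignored here):

```python
import random

COMMANDS = ['forward', 'left', 'right', 'jump', 'light', 'f1', 'f2']
PROGRAM_LENGTH = 28

def random_program():
    return [random.choice(COMMANDS) for _ in range(PROGRAM_LENGTH)]

def evaluate(program, board, simulate):
    # simulate() runs the robot on the board and returns the set of lit tiles.
    return len(simulate(program, board))

def recombine(a, b, segments=4):
    # Cut both parents at the same random points, then build the child
    # from alternating segments of each parent.
    cuts = sorted(random.sample(range(1, PROGRAM_LENGTH), segments - 1))
    child, prev, take_from_a = [], 0, True
    for cut in cuts + [PROGRAM_LENGTH]:
        child.extend((a if take_from_a else b)[prev:cut])
        prev, take_from_a = cut, not take_from_a
    return child

def mutate(program, n_mutations=3):
    # Replace a few randomly chosen commands with random ones.
    program = program[:]
    for i in random.sample(range(PROGRAM_LENGTH), n_mutations):
        program[i] = random.choice(COMMANDS)
    return program
```

Because the cut points are the same in both parents, a command taken from either parent always lands in the same slot it occupied there.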

Light-Bot also provides good examples of when genetic algorithms fail. Genetic algorithms cannot work if there isn't a smooth scale for evaluating possible solutions. If a solution is either right or wrong, genetic algorithms can't help, because there is no possibility for gradual improvement. In Light-Bot, this happens on a level with only one target. Since candidate programs are evaluated purely based on how many targets they light up, a one-target board only provides two possible evaluations: 0 and 1. While the algorithm searches for a solution, all of the candidates will evaluate equal to each other - all 0 (if any were 1, the algorithm would already be done). Since we can't distinguish between a program that almost solves the board and a program that doesn't come close, we have no better chance of solving the level than if we simply generated random programs until one of them worked.

Because of the nature of genetic algorithms, many of the solutions found initially tend to be quite random and circuitous, so I became interested in having the algorithm search for short solutions as well. That can be done by simply making solution length a part of the evaluation algorithm, so that winning solutions are given a higher value when they are also short. Then the program just needs to be set to continue running after the first solution is found, and it will start looking for shorter and shorter solutions.
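One way to fold length into the evaluation (a hypothetical scoring function, not necessarily the exact one used here) is to rank non-winners by tiles lit as before, while giving every winner a score above every non-winner, with shorter winners scoring higher:

```python
def fitness(tiles_lit, total_targets, length):
    """Score a candidate program for selection purposes."""
    if tiles_lit < total_targets:
        return tiles_lit  # still searching: count lit tiles
    # Brevity bonus in (0, 1]: every winner beats every non-winner,
    # and among winners, shorter solutions score strictly higher.
    return total_targets + 1.0 / (1 + length)
```

Since the bonus never exceeds 1, a winning program can never be out-scored by a near-miss, no matter how long the winner is.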

The tricky part here is that there's no way to tell when the algorithm has found the shortest solution, since the length of the shortest solution is not known, at least not for any mildly complex level. Another tricky part is how to define the length of a solution. I think the main choice to make is whether or not to reward the use of functions: you could count the length of a function as simply the number of commands in it, or you could multiply that by the number of times the function is actually called. I chose the latter option, because it reflects more of what the robot is actually doing. So the length computation counts the number of commands in f2; then it counts the number of regular commands in f1, plus the number of calls to f2 times the number of commands in f2 (f2 is not allowed to call f1 - see recursion below); and then it makes a similar computation for the main method, multiplying each function call by the number of commands in that function. If the main method never calls a function, the length of that function won't affect the result.
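That length computation can be written down directly (a sketch assuming commands are strings, with 'f1'/'f2' marking function calls; every other string counts as one regular command):

```python
def solution_length(main, f1, f2):
    """Length of a solution, counting each function call as the full
    cost of the function called (f1 may call f2; f2 calls nothing)."""
    len_f2 = len(f2)
    # Each call to f2 inside f1 costs the whole of f2.
    len_f1 = sum(len_f2 if cmd == 'f2' else 1 for cmd in f1)
    # Main may call either function, at each function's full cost.
    cost = {'f1': len_f1, 'f2': len_f2}
    return sum(cost.get(cmd, 1) for cmd in main)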
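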

I solved the problem of deciding when to halt by just choosing beforehand how long the algorithm would run looking for a minimal solution, proportional to the time it ran before it found the initial solution.

Aside from the 12 boards taken from the original game (most of which are not very interesting from a genetic algorithms standpoint), I tried to create levels that would be as interesting as possible for a non-intelligent algorithm to solve. Many of them are far too difficult for me, but I needed to be careful not to make them too hard, or even impossible. I think what makes a level interesting is when there is not a clear way to break the problem up into reusable components (as is necessary to make use of functions). Many times I would create a level and run the algorithm just to see how easy it was - if the algorithm solved it right away, I would create a new, similar level, with a few tweaks to make it more difficult.

Level 10

I don't know anything about 3D Flash animation, so I had to come up with an acceptable 2D HTML-based representation of the game that would still contain enough detail to convey the topology of the levels. I eventually settled on a basic table using shaded borders to indicate height differences. The main issue is that this doesn't naturally convey exactly what the height difference is, which is important because the robot can only jump up one level. So I indicate height differences of more than one level by changing the border color to red. A red border is a border that cannot be jumped over (from the red side only - jumping down from large heights is allowed), while the gray borders can be jumped over.

The issue of what a generation consists of is rather hairy - the question of how many recombinations to do, how many mutations to do, and when a generation has passed has lots of possible answers, and allowing all of them (by making the relevant choices algorithm parameters) would decrease the significance of the number of generations that a particular instance of the algorithm has been running. Since that's the primary indicator of how much time has been spent on a problem, I decided it would be best to choose one definition and stay consistent.

Both in the interactive applet and in my back-end implementation, a generation is exactly one mutation and one recombination. This definition fixes the ratio of mutations to recombinations, which ideally would itself be an algorithm parameter, but introducing it would lower the significance of generations, so for now I'm not planning on implementing that.

A choice to be made regarding mutation is whether the mutation is destructive - that is, whether it changes the mutated organism or makes a copy and mutates the copy. Mutating the original seems more natural to me, but the danger there is that the leading organism may be mutated detrimentally, which could undo the progress made in producing that organism.

Whether or not mutations are destructive would be a good algorithm parameter, but I haven't implemented that yet (it's on the to-do list), so the quick-fix to the destructive-leader problem was to introduce an algorithm parameter that protects the leader from mutation. So a mutation can randomly strike any organism except for the highest-ranked organism.
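The leader-protection parameter amounts to something like this (illustrative Python; the population representation and `fitness` are placeholders):

```python
import random

def pick_mutation_target(population, fitness):
    """Choose a random organism to mutate, excluding the current leader.

    The leader is excluded by identity, so organisms that merely tie
    the leader's score remain fair game for mutation.
    """
    leader = max(population, key=fitness)
    return random.choice([org for org in population if org is not leader])
```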

Allowing a function to call itself, or allowing f1 and f2 to mutually call each other, represents a challenge for evaluating a program. Any such recursive program will necessarily run forever (in contrast to recursive functions in a normal programming language, which can have a "base case" and eventually terminate). The first question should be whether the original Light-Bot game allows recursion, and the answer depends on which version you play. There are at least two different versions of the game floating around that differ on this issue: in the version I initially played, a recursive program would run forever, even if all the targets had been lit. However, I read about another version that would halt as soon as all of the targets were lit. I chose to design my algorithm around the former.

Evaluating recursive programs could potentially take much longer, and even use an unacceptably high amount of memory, because the state of the game must be saved at each step in order to compare it with future states to detect if the program has entered a repeating cycle. I can't imagine that there's any reasonable upper bound on how long these cycles can be, so there's a very real danger of the computer running out of memory during simulation. Any solution to this problem will probably produce compromised results.
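State-based cycle detection can be sketched as follows (a simplified model where `step` advances the simulation by one command and returns `None` on normal termination; the `max_steps` cap is exactly the kind of compromise described above):

```python
def runs_forever(step, initial_state, max_steps=100_000):
    """Detect non-termination by remembering every game state seen.

    A repeated state means the program has entered a cycle and will
    run forever.  States must be hashable, and memory grows with the
    number of distinct states, hence the max_steps compromise.
    """
    seen = {initial_state}
    state = initial_state
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return False          # terminated normally
        if state in seen:
            return True           # cycle detected: infinite loop
        seen.add(state)
    return True                   # gave up; treat as non-terminating
```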

Another problem that comes up is how to keep randomly-generated programs from recursing. A naive solution is to check every new program for recursion and apply some fix (e.g., replace each offending command with some other random command). This is undesirable at the very least because it takes so much time. A simpler solution (the one I use) is to never generate a recursive program in the first place. This is done by limiting the available commands that can be randomly selected, based on which part of the program is being generated. When picking commands for f1, we never pick f1, and likewise for f2. Mutual recursion seems more complex at first, but it turns out we can simply disallow f2 from calling f1, and only allow f1 to call f2. This works because every non-recursive program we're ruling out (one where f2 calls f1 and f1 does not call f2) corresponds to an equivalent program with f1 and f2 swapped. So we're not really losing anything.
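The pre-filtered command sets can be expressed directly (illustrative Python; the basic command names are assumptions about the game's vocabulary):

```python
import random

BASIC = ['forward', 'left', 'right', 'jump', 'light']

# What each section of a program may contain: main may call f1 and f2,
# f1 may call only f2, and f2 may call nothing - so no generated program
# can ever recurse, directly or mutually, by construction.
ALLOWED = {
    'main': BASIC + ['f1', 'f2'],
    'f1':   BASIC + ['f2'],
    'f2':   BASIC,
}

def random_commands(section, length):
    """Generate one section of a program from its pre-filtered command set."""
    return [random.choice(ALLOWED[section]) for _ in range(length)]
```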

The last thing to think about is how to ensure that no recursive programs are generated during mutation and recombination. These problems are relatively easy. For mutation we apply the same filter that we did when generating new programs - if we're mutating a command in f1, we remove the f1 command from the set we're choosing from, and so on. For recombination, since the commands never shift left or right, we don't have to worry about it at all: if both of the parent programs are non-recursive, the child will be as well.

- Algorithm Parameters to add:
- Destructive mutation on/off
- Define mutations per recombination (maybe; this distorts the meaning of generation, as discussed above)