“What the heck Sophie?!” You say, “Why are you writing a debrief for a project you did 3 years ago???”
Hahaha, please bear with me dear reader. If I don’t, I’ll forget about the cool thing I did as a freshman in college still stumblin’ through Python. And how cool and easy genetic algorithms are. 😀
Anyhow, carrying on….
For our software design final project, we wrote a simulation program that would allow you to draw a track, then generate cars that would eventually learn how to drive around said track. It was a pretty fun project, and we got a pretty cool result.
What’s happening is that the cars start with random parameters controlling each wheel based on the car’s distance from the wall. As the cars progress through the generations, these parameters are selected by our fitness criteria, such that the cars that make it around the track are the ones that survive.
The matrix control slide shows that our car is modeled with two wheels and three sensors. The speed of each wheel depends on the sensor readings. Each sensor outputs the “distance” from the car to the wall in the direction the sensor is pointing. Each sensor reading is weighted by some constant, which is calculated by our genetic algorithm.
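The sensor-to-wheel mapping is just a little matrix multiply. Here’s a hypothetical sketch of the idea (the variable names and the 3×2 shape are my illustration; the real project used a 6×2 matrix, covered below):

```python
import numpy as np

# Three wall-distance sensor readings (made-up units).
sensor_readings = np.array([0.8, 1.5, 0.7])

# One column of weights per wheel; the genetic algorithm is what
# eventually finds good values for these constants.
weights = np.random.uniform(-1.0, 1.0, size=(3, 2))

# Each wheel's speed is a weighted sum of the sensor readings.
wheel_speeds = sensor_readings @ weights  # shape (2,): [left, right]
```

So steering falls out for free: if the weights make the left wheel spin faster when the wall is close on the left, the car turns away from the wall.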
This slide shows a 2×2 matrix, but we actually use 6×2 matrices, because six coefficients are needed to define each wheel’s speed as a function of the car’s position relative to the wall. For the starting generation, we randomly create a set of 15 6×2 matrices for 15 cars that drive randomly around the track. Once a generation is complete, we select the two “best” matrices in that generation, mutate them using the methods described in the slide, and create another set of 15 6×2 matrices. As each generation passes and we build the new matrices from the best two of the last generation, we gradually get cars that drive around the track better and better.
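The generation loop above can be sketched in a few lines. This is a simplified stand-in, not our actual project code: the `fitness` function here is a dummy placeholder (the real one scored how far a car got around the track), and the Gaussian-noise mutation is just one plausible choice:

```python
import numpy as np

POP_SIZE = 15
rng = np.random.default_rng(0)

def fitness(matrix):
    # Placeholder score. In the real project this came from running
    # the car around the track with this matrix as its controller.
    return -np.abs(matrix - 0.5).sum()

def next_generation(population):
    # Rank the matrices, keep the two best as parents...
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:2]
    # ...then refill the population with mutated copies of them.
    return [
        parents[i % 2] + rng.normal(0.0, 0.1, size=parents[0].shape)
        for i in range(POP_SIZE)
    ]

# Generation zero: 15 random 6x2 controller matrices.
population = [rng.uniform(-1.0, 1.0, size=(6, 2)) for _ in range(POP_SIZE)]
for _ in range(20):
    population = next_generation(population)
```

Everything except the two parents is thrown away each generation, which is exactly the “the rest kinda die” part described below.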
Genetic algorithms are actually pretty conceptually easy. You have a starting set of individuals with randomly generated parameters. You have a set of conditions that dictate how well an individual is doing, so that you can score every individual in the starting set against those conditions. The best two individuals get to “reproduce” the next set, while the rest… well, kinda die. :0
As time passes and the best two always serve as the template for the next generation, you can see the generations get more and more successful.
An issue with genetic algorithms is their propensity to get stuck at local maxima. If your starting set is badly generated, then you may never find the answer. That’s why we mutate. 😉
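One hypothetical way to write that mutation step (the rate and noise scale here are made-up knobs, not our project’s actual values):

```python
import numpy as np

rng = np.random.default_rng()

def mutate(matrix, rate=0.1, scale=0.5):
    # Perturb each coefficient with probability `rate`. The occasional
    # random jump is what lets offspring wander off a local maximum
    # instead of copying their parents forever.
    mask = rng.random(matrix.shape) < rate
    noise = rng.normal(0.0, scale, size=matrix.shape)
    return matrix + mask * noise
```

Crank `rate` up and the search gets more exploratory but noisier; crank it down and the population converges faster but can get stuck. Tuning that trade-off is half the fun.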