Teaching cars to drive with genetic algorithms

“What the heck, Sophie?!” you say. “Why are you writing a debrief for a project you did 3 years ago???”

Hahaha, please bear with me, dear reader. If I don’t write this up, I’ll forget about the cool thing I did as a freshman in college still stumblin’ through Python. And how cool and easy genetic algorithms are. 😀

Anyhow, carrying on….


For our software design final project, we wrote a simulation program that would allow you to draw a track, then generate cars that would eventually learn how to drive around said track. It was a pretty fun project, and we got a pretty cool result.

Here’s our website!
Here’s the code!

Demo video:

What’s happening is that the cars start with random parameters relating their wheel speeds to their distance from the wall. As the cars progress through the generations, these parameters are selected based on how well each car performs, so that only the cars that make it around the track survive.

Model of Genetic Car

The matrix control slide shows that our car is modeled by two wheels and three sensors. The speed of each wheel depends on the sensor readings. Each sensor outputs the distance from the car to the wall along the direction the sensor points. Each sensor reading is weighted by some constant, and those constants are what our genetic algorithm calculates.
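
Here’s roughly what that control model looks like in code. This is a sketch from memory, not the actual project code: the function name and the use of NumPy are my own, and I’m feeding in just the three raw sensor distances even though (as described below) the real car uses six coefficients per wheel.

```python
import numpy as np

def wheel_speeds(sensor_distances, weights):
    """sensor_distances: distances from the car to the wall along each sensor.
    weights: (n_inputs, 2) matrix, one column per wheel -- this is the car's "genome".
    Returns (left_speed, right_speed)."""
    left, right = np.asarray(sensor_distances) @ np.asarray(weights)
    return left, right

# Example: a car whose left sensor is close to the wall, with random
# coefficients like a generation-0 car would have.
distances = [0.2, 1.0, 2.5]                     # left, center, right readings
genome = np.random.uniform(-1, 1, size=(3, 2))  # random 3x2 weights, for illustration only
print(wheel_speeds(distances, genome))
```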

Matrix Evolution

This slide shows a 2×2 matrix, but we actually use 6×2 matrices, because six coefficients are needed to define each wheel’s speed as a function of the car’s position relative to the wall. For the starting generation, we randomly create a set of 15 6×2 matrices for 15 cars that drive randomly around the track. Once a generation is complete, we select the two “best” matrices in that generation, mutate them using the methods described in the slide, and recreate another set of 15 6×2 matrices. As each generation passes and the new matrices are created from the best two of the last, we gradually get cars that drive around the track better and better.
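
In code, that generation step looks roughly like this. Again, a sketch rather than the real thing: I’m imagining fitness as “how far did the car get before crashing,” and the Gaussian-noise mutation here is just a stand-in for the methods on the slide.

```python
import numpy as np

POP_SIZE = 15          # 15 cars per generation
MATRIX_SHAPE = (6, 2)  # six coefficients per wheel, two wheels

def next_generation(matrices, fitnesses, noise=0.1, rng=None):
    """matrices: list of 15 (6, 2) arrays, one per car.
    fitnesses: one score per car (e.g. distance traveled before crashing).
    Returns 15 new matrices bred from the two best of this generation."""
    if rng is None:
        rng = np.random.default_rng()
    best_two = [matrices[i] for i in np.argsort(fitnesses)[-2:]]  # keep the winners
    children = []
    while len(children) < POP_SIZE:
        parent = best_two[len(children) % 2]
        # Mutate: perturb every coefficient with a little Gaussian noise.
        children.append(parent + rng.normal(0, noise, size=MATRIX_SHAPE))
    return children

# Generation 0: purely random matrices driving randomly around the track.
rng = np.random.default_rng(0)
population = [rng.uniform(-1, 1, size=MATRIX_SHAPE) for _ in range(POP_SIZE)]
fake_scores = rng.random(POP_SIZE)  # placeholder for real track performance
population = next_generation(population, fake_scores, rng=rng)
```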

Genetic algorithms are actually pretty easy conceptually. You have a starting set of individuals whose parameters are randomly generated. You also have a set of conditions that dictate how well an individual is doing, so you can score every element of the starting set against those conditions. The best two individuals get to “reproduce” the next set, while the rest… well, kinda die. :0

As time passes, with the best two always serving as the template for the next generation, you can see the generations get more and more successful.
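
Stripped of all the car stuff, a toy genetic algorithm really is just a few lines. Everything in this example (the fitness function, the population size, the mutation size) is made up purely for illustration:

```python
import random

def fitness(x):
    return -(x - 3) ** 2          # a made-up hill that peaks at x = 3

population = [random.uniform(-10, 10) for _ in range(15)]   # generation 0: random guesses

for generation in range(30):
    # Score everyone, keep the best two...
    best_two = sorted(population, key=fitness, reverse=True)[:2]
    # ...and let them "reproduce" the next set (the rest kinda die).
    population = [random.choice(best_two) + random.gauss(0, 0.5) for _ in range(15)]

print(max(population, key=fitness))   # should land somewhere near 3
```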

An issue with genetic algorithms is their propensity to get stuck at local maxima. If your starting set is badly generated, then you may never find the answer. That’s why we mutate. 😉


You might see in the slide that our matrices mutate to generate the next set of coefficients. This is a measure to offset the likelihood that we’ll get stuck at a local maximum, since mutation lets us jump around the solution space and maybe find the global maximum if we haven’t already.
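
One way to get that jumping around (my own sketch, not necessarily the exact scheme from our slides) is to mix small nudges with the occasional big random jump, so most children stay near a good parent while a few wander off and explore:

```python
import numpy as np

def mutate(matrix, rng=None, small=0.05, big=1.0, p_big=0.1):
    """Mostly nudge every coefficient a little; occasionally overwrite one
    coefficient with a big random value so a child can escape a local maximum."""
    if rng is None:
        rng = np.random.default_rng()
    child = matrix + rng.normal(0, small, size=matrix.shape)
    if rng.random() < p_big:
        i = rng.integers(matrix.shape[0])
        j = rng.integers(matrix.shape[1])
        child[i, j] = rng.uniform(-big, big)
    return child

rng = np.random.default_rng(1)
parent = rng.uniform(-1, 1, size=(6, 2))   # a parent matrix like one of our 6x2 winners
print(mutate(parent, rng))
```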

-Sophie
