

Here’s Mario Kart, as played by a neural network


Watch and learn

Programmer SethBling has built and trained a neural network to play the original Super Mario Kart. After showing the program 15 hours of gameplay footage and refining some of its behavior, he got it to win gold, by itself, in the 50cc Mushroom Cup.

SethBling’s goal wasn’t necessarily to build the perfect driving machine. Two years ago he created MarI/O, another neural network that evolved on its own to play Super Mario World. You can see it below:

MariFlow, as the Mario Kart network is called, takes its cues from recorded player input; SethBling tried to make the robot play like his father, for example.

How MariFlow pulls this off is explained in detail in the six-minute video at top. Broadly speaking, the program runs what it sees through about four layers of computation and arrives at a set of predictions for the buttons it thinks SethBling would be likely to press at that point on the track.
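To make that idea concrete, here is a minimal sketch of "a few layers of computation ending in button predictions." This is not SethBling's actual architecture: the layer sizes, input size, and random weights are all placeholder assumptions, and a real network would learn its weights from recorded gameplay.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a tanh nonlinearity (random placeholder weights)."""
    w = rng.standard_normal((x.size, n_out)) * 0.1
    return np.tanh(x @ w)

# A downscaled frame of the game screen as input (the size is an assumption).
frame = rng.random(64)

# Roughly four layers of computation, narrowing toward the output.
h = frame
for size in (32, 16, 8):
    h = layer(h, size)

# Output layer: one probability per SNES controller button.
buttons = ["A", "B", "X", "Y", "L", "R", "Left", "Right"]
w_out = rng.standard_normal((h.size, len(buttons))) * 0.1
probs = 1 / (1 + np.exp(-(h @ w_out)))  # sigmoid -> independent press probabilities

# The network "presses" any button whose predicted probability clears a threshold.
pressed = {b: bool(p > 0.5) for b, p in zip(buttons, probs)}
```

The key design point is the output: rather than picking one action, the network emits a separate probability for each button, since a player can hold several at once (accelerate while steering, say).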

The biggest difference from MarI/O is that MariFlow is a recurrent neural network, which means the machine is capable of remembering information. Yet even memory isn't an automatic process: a human had to supply and weight that experience so MariFlow knew which information was important. Left on its own, the network would drive itself into dead ends and be unable to recover. So SethBling recorded 15 hours of gameplay, taking the wheel at particularly tricky moments to teach the network what it should be doing.
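The "remembering" part of a recurrent network can be sketched in a few lines: a hidden state vector is carried from one frame to the next, so each new decision can depend on what came before. Again, sizes and weights below are illustrative assumptions, not MariFlow's real parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

HIDDEN = 8   # size of the memory vector (an assumption)
INPUTS = 16  # size of each per-frame input (an assumption)

# Random placeholder weights; a real network would learn these from gameplay.
w_in = rng.standard_normal((INPUTS, HIDDEN)) * 0.1
w_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1

def step(x, h):
    """One recurrent step: the new state mixes the current frame with the previous state."""
    return np.tanh(x @ w_in + h @ w_rec)

h = np.zeros(HIDDEN)          # empty memory before the race starts
for _ in range(10):           # ten frames of (random stand-in) input
    h = step(rng.random(INPUTS), h)
```

Because `h` feeds back into every step, information from earlier frames can persist, which is exactly what a plain feed-forward network (one that sees only the current frame) cannot do.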

MariFlow has gotten gold medals in the Mushroom Cup (below) and Flower Cup, but just silver in the Star Cup so far. On one hand, I am impressed by a machine’s ability to learn and stick to a winning racing line — even if the AI it’s outwitting is primitive by modern standards.

On the other, having flung myself against bot racers in all kinds of driving titles, nearly all of which have perfect braking and acceleration, I’m not sure games really need the extra help?

Or maybe now the machine is training me to be a better driver.