When computers dream of Dark Souls

We tasked an advanced neural network with making art based on games. The results are astonishing.

Over the past year or so, engineers have come up with novel ways for computers to create art on their own. Here at Polygon, we decided to put an AI to work creating mashups of key game art.

One of the most powerful systems for computer-generated art is Google's DeepDream, which uses so-called neural networks to search for a kind of meaning inside patterns and shapes. You might recognize some of its output, which Google showcased on its research blog. In the same way that you or I could lie down in a grassy field, look up at the clouds, and see unicorns and rocket ships, DeepDream sees pagodas, pugs and pufferfish.
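The core trick is surprisingly compact: instead of training the network, DeepDream runs gradient ascent on the input image itself, nudging the pixels to amplify whatever patterns a chosen layer already responds to. Here's a minimal sketch of that loop in Python, assuming PyTorch and torchvision are installed; the VGG-16 model and layer cutoff are illustrative stand-ins, not Google's exact setup (the original used an Inception network).

```python
import torch
from torchvision import models

# Pretrained ConvNet, truncated at an intermediate layer (the cutoff is an
# illustrative choice; DeepDream's original model was an Inception network).
features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer = features[:16]

def dream_step(img, lr=0.05):
    img = img.clone().detach().requires_grad_(True)
    activations = layer(img)
    # Gradient *ascent* on the image: make the layer's activations stronger,
    # which exaggerates whatever patterns it already "sees" in the pixels.
    activations.norm().backward()
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

img = torch.rand(1, 3, 224, 224)  # placeholder; normally a real photo, normalized
for _ in range(20):
    img = dream_step(img)
```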

Since then, many amateurs and professionals alike have taken these kinds of computer programs and refined them. Our principal full-stack engineer David Zhou was browsing some of the results when he got the brilliant idea to apply the same kind of software to artwork from modern video games. We asked him to explain how it all works.

Google's DeepDream is one of the more well-known examples of a machine learning method called convolutional neural networks (ConvNets). More recently, Google's DeepMind famously used AlphaGo to beat some of the world's best players in the game of Go. Like DeepDream, AlphaGo had components that used ConvNets to help it learn and play.
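If the jargon is opaque: a ConvNet is a stack of layers that scan an image with small learned filters, pooling the results so deeper layers respond to increasingly abstract patterns. Here's a toy example in Python using PyTorch; the architecture is purely illustrative, not DeepDream's or AlphaGo's.

```python
import torch.nn as nn

# A toy ConvNet: conv layers learn local visual patterns, pooling shrinks
# the image so deeper layers see larger structures, and a final linear
# layer maps the result to a prediction. Sizes assume 224x224 RGB input.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 224 -> 112
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 112 -> 56
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),           # 10 output classes, arbitrary
)
```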

A paper published in September described an algorithm that uses ConvNets to "transfer style": for example, taking the artistic style of Vincent van Gogh's The Starry Night and applying it to artwork from the game Firewatch. Since the paper's publication, many open-source implementations of the algorithm have appeared.
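The paper's key insight is that a ConvNet's feature maps separate what is in an image from how it's rendered: content is captured by the raw features, style by the correlations between feature channels (Gram matrices). The algorithm optimizes a new image to match one image's content features and another's style correlations. A simplified Python sketch of that loss is below; the tensor shapes and weightings are illustrative, and real implementations pull features from several layers of a pretrained network like VGG.

```python
import torch

def gram(feat):
    # feat: (channels, height*width). The Gram matrix records which channels
    # fire together, which turns out to capture texture and brushwork: style.
    c, n = feat.shape
    return feat @ feat.t() / (c * n)

def total_loss(gen_content, content, gen_styles, styles, alpha=1.0, beta=1e3):
    # gen_content / content: one layer's features for the generated image
    # and the content image (say, the Firewatch shot).
    # gen_styles / styles: lists of per-layer features for the generated
    # image and the style image (say, The Starry Night).
    content_loss = torch.mean((gen_content - content) ** 2)
    style_loss = sum(torch.mean((gram(g) - gram(s)) ** 2)
                     for g, s in zip(gen_styles, styles))
    return alpha * content_loss + beta * style_loss
```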

For the images in this article, I used GitHub user jcjohnson's "neural-style" repo running on an Amazon EC2 GPU instance (g2.2xlarge).
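In practice, the repo is driven from the command line. Here's a hedged sketch of wrapping that call from Python; the flag names follow the repo's README at the time, so check the current repo before copying, and the image paths are placeholders.

```python
import subprocess

# Invoke the Torch-based neural-style script (flags per the repo's README
# at the time; verify against the current repo). Paths are placeholders.
subprocess.run([
    "th", "neural_style.lua",
    "-content_image", "firewatch.jpg",   # the game artwork
    "-style_image", "starry_night.jpg",  # the painting whose look we borrow
    "-output_image", "mashup.png",
    "-image_size", "800",                # roughly the width used here
    "-gpu", "0",                         # run on the first GPU
], check=True)
```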

Due to the memory requirements of ConvNets and the limited GPU RAM available on the virtual machine, most of the resulting images are around 800 pixels wide.
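A rough back-of-envelope calculation shows why: even a single early feature map at that resolution is large, and a full run keeps many of them (plus gradients) resident on the GPU at once. The numbers below are illustrative, not a measurement of the actual run.

```python
# Rough estimate: one 64-channel feature map over an 800x500 image in
# float32. A real run stores activations for many layers, plus gradients,
# so a 4 GB card (like the g2.2xlarge's GRID K520) fills up quickly.
width, height, channels, bytes_per_float = 800, 500, 64, 4
mb = width * height * channels * bytes_per_float / 1e6
print(f"~{mb:.0f} MB for a single feature map")  # ~102 MB
```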

Zhou was quick to add that these programs run on a graphics processing unit and require lots of video memory. The software is open source, though, so feel free to give it a try yourself. Be sure to share your results in the comments below.