Over the past year or so, engineers have come up with novel ways for computers to create art on their own. Here at Polygon, we decided to put an AI to work creating mashups of key game art.
One of the most powerful systems for computer-generated art is Google's DeepDream, which uses so-called neural networks to search for a kind of meaning inside patterns and shapes. You might recognize some of its output, shown in a blog post here. In the same way that you or I could lie down in a grassy field, look up at the clouds, and see unicorns and rocketships, DeepDream sees pagodas, pugs and pufferfish.
Since then, amateurs and professionals alike have taken these kinds of computer programs and refined them. Our principal full-stack engineer, David Zhou, was browsing some of the results when he had the bright idea of applying the same kind of software to artwork from modern video games. We asked him to explain how it all works.
Google's DeepDream is one of the better-known examples of a machine learning method called convolutional neural networks (ConvNets). More recently, Google famously used AlphaGo to beat some of the world's best players in the game of Go. Like DeepDream, AlphaGo had components that used ConvNets to help it learn and play.
A paper published in September described an algorithm that uses ConvNets to "transfer style": taking the artistic style of Vincent van Gogh's Starry Night, for example, and applying it to artwork from the game Firewatch. Since the paper's publication, many open-source implementations of the algorithm have appeared.
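The core trick in that paper is summarizing a painting's style as the correlations between a ConvNet layer's feature channels, captured in a Gram matrix. Here is a minimal sketch of that one computation, with a random array standing in for real ConvNet activations (the function name and shapes are illustrative, not from any particular implementation):

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: (channels, height, width) activations from one ConvNet layer.
    Returns a (channels, channels) matrix of channel inner products.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # normalize by spatial size

# Stand-in for real activations from a layer with 8 channels
feats = np.random.default_rng(0).random((8, 16, 16))
style = gram_matrix(feats)
print(style.shape)  # (8, 8)
```

Because the Gram matrix throws away *where* things are in the image and keeps only *which textures co-occur*, matching it reproduces the look of a painting without copying its layout.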
Due to the memory requirements of ConvNets and the amount of GPU RAM available on my virtual machine, most of the resulting images are around 800 pixels wide.
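That width cap comes down to shrinking inputs until they fit in video memory. A rough sketch of the kind of aspect-preserving downscale you might apply before feeding an image in, assuming memory scales with pixel count (the 800-pixel figure is from the text above; the function itself is made up for illustration):

```python
def fit_to_width(width, height, max_width=800):
    """Scale dimensions down so the width fits a GPU memory budget,
    preserving aspect ratio; images already narrow enough pass through."""
    if width <= max_width:
        return width, height
    scale = max_width / width
    return max_width, round(height * scale)

print(fit_to_width(1920, 1080))  # (800, 450)
print(fit_to_width(640, 480))    # (640, 480)
```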
- The software starts with a source image, say, some key art from BioShock Infinite.
- Then, Zhou feeds the software another image to use as a style guide. For his first few examples he chose Vincent van Gogh's 1888 masterpiece Starry Night Over the Rhone. The software then goes to work applying that painting's color palette and overall style to the source image.
- The end results are mesmerizing.
- In fact, van Gogh's work applied well to other games like Firewatch ...
- ... and Journey. Each time he runs the software, the picture is a little bit different. This particular repo has a bunch of sliders and adjustments, but he more or less left them alone for this experiment.
- Eventually, he tried more and different mashups, like Metal Gear Solid 5: The Phantom Pain and Leonid Afremov's Misty Mood.
- The Elder Scrolls 5: Skyrim and Monet's Impression, Sunrise.
- Just Cause 3 and Hokusai's The Great Wave off Kanagawa.
- That's when things got interesting. Eventually, Zhou started combining art styles from different games. Here's Dark Souls 3 in the style of The Legend of Zelda: Wind Waker.
- Bloodborne and The Legend of Zelda: Wind Waker.
- Final Fantasy 7 and Yoshitaka Amano's Final Fantasy X artwork.
- Limbo and Firewatch ... and Totoro.
- Like most experiments, things got weird after a while. Here's what a Team Fortress 2 Heavy looks like as a bunch of potatoes.
- And of course, Dark Souls and pizza.
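Every one of these mashups, pizza included, comes from the same loop: start from an image, measure how far its feature statistics are from the style image's, and nudge it closer, over and over. A stripped-down toy of that idea, using random matrices in place of real ConvNet features and plain gradient descent (everything here is illustrative, not the actual software):

```python
import numpy as np

rng = np.random.default_rng(0)
channels, pixels = 4, 64

# "Style" features and their Gram matrix: the target statistics
style_feats = rng.random((channels, pixels))
target = style_feats @ style_feats.T / pixels

# Start from random "image" features and descend on the style loss
x = rng.random((channels, pixels))
lr = 0.05
initial_loss = np.sum((x @ x.T / pixels - target) ** 2)
for _ in range(1000):
    gram = x @ x.T / pixels
    grad = 4.0 / pixels * (gram - target) @ x  # gradient of squared Gram distance
    x -= lr * grad
final_loss = np.sum((x @ x.T / pixels - target) ** 2)
print(final_loss < initial_loss)  # the statistics drift toward the style's
```

The real algorithm does the same thing, except the "features" come from a pretrained ConvNet and a second loss term keeps the output recognizable as the original game art.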
Zhou was quick to add that these programs run on a graphics processing unit and require lots of video memory. The software is open-source, though, so feel free to give it a try yourself. Be sure to share the results in the comments below.