True artificial intelligence coming to Space Engineers thanks to new research company

Open-world construction games Space Engineers and Medieval Engineers have spawned a new research company whose work will feed back into the games.

CEO and founder of Keen Software House, Marek Rosa, the man behind open-world construction games Space Engineers and Medieval Engineers, has spent the past year and a half quietly designing and building a "human-level" artificial intelligence. Now, with $10 million of his own money, he's going all-in on the dream of true machine-based artificial general intelligence, also known as an AGI.

In just a few months, he said, that AGI could show up in-game, as a servant or partner for players of the Engineers series.

"I’ve always wanted to do this," Rosa told Polygon. "Since I was 15 years old. But I decided to start with the games business first because I liked programming. I wanted to make games and I knew that in gaming, the commercialization would be faster.


"I planned to make a game that, if I got lucky, would sell well, and then I could fund my AI research." Space Engineers, Rosa said, has been quite successful, selling more than 1.5 million copies in just a few years. The time is right, therefore, to spool up his new company. Currently unnamed, it opened in January 2014 and has 15 employees, about half the size of the Engineers team.

"I’ve always wanted to do this. Since I was 15 years old."

Rosa said that an AGI is very different from what the game industry commonly refers to as AI.

"The standard AI in games isn’t really AI. It’s mostly scripted behavior, meaning there’s no adaptability. Also, the range of actions these AIs are doing in games is always limited and determined by the programmer who writes the AI. In our case, this general AI will not be only for games. This is a project that could develop AI that could be used in any business or industry application."

Much of what his team of researchers and engineers has been doing is creating various AGIs in the lab, introducing them to stimuli and trying to prompt them to learn simple tasks. The challenges have been immense, Rosa said, but some progress is being made.

"The road map we chose is kind of copying child development," Rosa said. "The first thing children need to do is to start to understand their environment visually. Then they start moving their hands and legs. It’s kind of random, but after some time, children will find patterns. They’ll find out they can either be screaming if they’re angry, or they can say something to their mother like 'I’m hungry.' This will fix their discomfort, fix their reward and punishment motivation, much faster."

Some of the details of his team's research can be found on his personal blog.

"First, we started with visual recognition and some basic reward/punishment games," Rosa said. "Our AI was able to learn how to play a Pong-type game just by observing the unstructured pixels of the game and receiving these reward/punishment signals from us.

"A reward came when the AI was able to bounce the ball up. When the ball dropped down, it got a punishment. It didn’t know the rules of the environment or the game. It was just presented with raw pixels and these reward/punishment signals. After some trial and error, the AI came up with a solution to play the game successfully, to obtain as much reward as possible and not obtain punishment."
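Rosa doesn't say which learning algorithm his team used, but the setup he describes (raw pixels in, reward and punishment signals as the only feedback) is the classic reinforcement-learning loop. As an illustration only, not the team's actual code, here is a minimal tabular Q-learning sketch that learns a tiny Pong-like "catch" game purely from raw pixels; the environment, names and numbers are all invented for the example:

```python
import random
from collections import defaultdict

W, H = 5, 5  # a tiny 5x5 "screen"

def reset():
    # Ball starts at the top in a random column; paddle starts centred at the bottom.
    return (random.randrange(W), 0, W // 2)

def pixels(state):
    """Render the state as a flat tuple of raw pixel values (all the agent sees)."""
    ball_x, ball_y, paddle_x = state
    grid = [0] * (W * H)
    grid[ball_y * W + ball_x] = 1        # ball pixel
    grid[(H - 1) * W + paddle_x] = 2     # paddle pixel
    return tuple(grid)

def step(state, action):
    """Move the paddle (-1, 0 or +1) and drop the ball one row; terminal on the last row."""
    ball_x, ball_y, paddle_x = state
    paddle_x = min(W - 1, max(0, paddle_x + action))
    ball_y += 1
    if ball_y == H - 1:
        # Reward for catching the ball, punishment for dropping it.
        return (ball_x, ball_y, paddle_x), (1 if ball_x == paddle_x else -1), True
    return (ball_x, ball_y, paddle_x), 0, False

ACTIONS = (-1, 0, 1)
Q = defaultdict(float)                   # Q[(observation, action)] -> estimated value
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(obs, greedy=False):
    if not greedy and random.random() < eps:
        return random.choice(ACTIONS)    # occasional random exploration
    return max(ACTIONS, key=lambda a: Q[(obs, a)])

for _ in range(5000):                    # trial and error
    state, done = reset(), False
    while not done:
        obs = pixels(state)
        action = choose(obs)
        state, reward, done = step(state, action)
        best_next = 0 if done else max(Q[(pixels(state), a)] for a in ACTIONS)
        Q[(obs, action)] += alpha * (reward + gamma * best_next - Q[(obs, action)])

# Evaluate the learned (greedy) policy over 200 fresh episodes.
wins = 0
for _ in range(200):
    state, done = reset(), False
    while not done:
        state, reward, done = step(state, choose(pixels(state), greedy=True))
    wins += reward == 1
```

Each distinct pixel grid here is a separate table entry, which only works because the screen is tiny; a real pixel-based agent needs a function approximator such as a neural network in place of the table.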

After the Pong experiment, Rosa said, his team was able to teach an AI to navigate a maze filled with locked doors that had to be opened via switches.

"Again, at the beginning, the AI didn’t have any information about rules or the environment," Rosa said. "It started to do some random actions, and then it was rewarding itself for changing the environment — little rewards."
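An agent giving itself "little rewards" for changing its environment is what the reinforcement-learning literature calls intrinsic motivation. The toy sketch below, invented for illustration and not the team's actual experiment, adds a small self-generated reward for flipping a switch that opens a door, on top of a large reward for escaping a one-corridor "maze":

```python
import random
from collections import defaultdict

# A one-corridor maze: start(0) .. switch(1) .. 2 .. door(3) .. exit(4)
SWITCH, DOOR, EXIT = 1, 3, 4
ACTIONS = ('left', 'right', 'toggle')

def step(state, action):
    pos, door_open = state
    reward = 0.0
    if action == 'toggle':
        if pos == SWITCH:
            door_open = not door_open
            reward += 0.01          # intrinsic reward: the agent changed its environment
    else:
        new_pos = pos + (1 if action == 'right' else -1)
        blocked = new_pos == DOOR and not door_open
        if 0 <= new_pos <= EXIT and not blocked:
            pos = new_pos
    done = pos == EXIT
    if done:
        reward += 1.0               # extrinsic reward for escaping the maze
    return (pos, door_open), reward, done

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.9, 0.2

def choose(state, greedy=False):
    if not greedy and random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(2000):               # start with random flailing, learn from the rewards
    state, done, steps = (0, False), False, 0
    while not done and steps < 50:
        action = choose(state)
        nxt, reward, done = step(state, action)
        best_next = 0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state, steps = nxt, steps + 1

# Greedy rollout: the agent should detour to the switch, open the door, then exit.
state, done, steps = (0, False), False, 0
while not done and steps < 20:
    state, reward, done = step(state, choose(state, greedy=True))
    steps += 1
reached_exit = state[0] == EXIT
```

The intrinsic reward is deliberately tiny, so endlessly fiddling with the switch never outvalues actually escaping; tuning that balance is a genuine open problem in intrinsic-motivation research.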

"Children like to play games, right? They reward themselves by playing games. But actually, what the games are good for is learning new skills – movement, social skills, all these things."

The end goal is to create AIs that can be embedded into commercial or industrial applications.

"If you look really far into the future, the applications are everywhere," Rosa said. "If you imagine you could take this AI brain and put it, for example, into a car and train it correctly, then the brain would operate the car in a way that you would want it to. The brain would figure out what’s important and what isn’t, what pedestrians are, what is and isn’t road, things like that. Then you can keep adding it to other industries. In the end, I think we’ll have AI programmers, AI scientists, AI journalists, AI financiers.

"In the end, I think we’ll have AI programmers, AI scientists, AI journalists, AI financiers."

"Theoretically, AI can do anything we’re doing right now, and do it better. That’s the long term. In the short term, that’s a bit of a problem, because while we’re developing the AI and it’s taking these child steps (now it’s able to play Pong or other games), there’s not really much business use for things like that. We are trying to find ways to put it to real use, even before this super-long-term ideal goal."

One short-term application for an AGI could be in gaming. Rosa said he hopes to take a version of his AI team's research and put it into Space and Medieval Engineers, giving players access to an in-game resource while at the same time providing his research team with valuable feedback.

"We’ll also release our tool that we’re using for designing these brains — we call it Brain Simulator. It’s a kind of visual designer. In this designer, people can put these AI modules — visual recognition and prediction and so on — and connect them together, connect them to an environment, and then simulate the brain.

"We’ll make an integration for Space and Medieval Engineers so people will be able to train their peasants in Medieval Engineers to do what they want to reward and not do what they wish to punish. It doesn’t necessarily have to be positive, and you can reward your peasants for doing some nasty things. We’ll see where all this goes."


Part of the future set of challenges Rosa and his team will encounter has to do with the ethics of creating and teaching an AI. Already, groups like Human Rights Watch, which led the campaign to end the use of landmines around the world, are taking steps to create international agreements on the use of autonomous weapons systems. An AGI such as the one Rosa is developing could raise the stakes considerably if it is implemented without regard to human safety.

"Regarding military drones," Rosa said, "the question was, if some government organization came to us and asked us, 'Guys, we want to use your AI in these drones’ ... For example, they might be using them to kill terrorists, but also causing casualties in the process — what you might call collateral damage. Other people who aren’t terrorists, but who are just in the wrong place.

"We need to be super careful when the AI becomes as smart as a human."

"My answer was that I would prefer to not design military applications like that. I’d prefer to design a robot that goes to the war zone and takes risks for itself, taking risk away from human beings, even if they might be possible terrorists or 100-percent-for-certain terrorists. I wouldn’t nuke some house in a village somewhere, I’d send a robot to go there and scout. Maybe it would get killed or destroyed, but it doesn’t care. It’s a robot. Then the second or third robot might be able to capture the bad guy. But I’d try to limit, to minimize human casualties as much as possible, even at the expense of the robot.

"Another thing is the possibility of true autonomy. Right now we’re debating this stuff and thinking about it and reading about it, but it’s still a work in progress, at the early stage."

Rosa said it's incredibly important to start considering ethical and safety issues now, in the early years of AI development. The industry is likely to make massive leaps in the future, perhaps creating entities that actively work to outwit the humans controlling them.

"We need to be super careful when the AI becomes as smart as a human," Rosa said. "We need to start preparing for that moment even now. Even if it’s just on a theoretical level, it’s better to do it now.

"One day we’ll have this buggy AI that doesn’t work well and crashes all the time. The next day somebody will fix it, and the day after that it’ll just be working and starting to learn. If we don’t make sure that it’s in a kind of protective-custody environment, it could do a lot of damage. We need to prepare in advance."