They rest in a darkened shed with a concrete floor and wooden walls, baking in decades of California heat: a trio of time machines nearly lost to history.
These aren't the sort of time machines that can transport your body to different times and places; they're something altogether different.
In theory, the three machines, the only three in existence, can transport your mind, your imagination, to any time or place known to man, but also to places only imagined or even not yet imagined.
Conceived in the late 1950s and hand-built by a cinematographer, former U.S. soldier and dreamer in 1960, the Sensorama was the first working realization of what would one day become known as virtual reality.
Their creator, Mort Heilig, viewed by many in the field as the father of virtual reality, died 20 years ago. His widow, now 80, spends her days working two jobs struggling to pay off his debt, and her time off fighting to keep his legacy alive.
"Courtesy of Mort Heilig, who I love very much, I'm not able to retire and just spend time working on his legacy," she says over the phone one late evening in September. "I'm desperately trying to sell his machines so I can free myself of the enormous debt [Mort] built up during the inventor phase of his life."
Imagine it’s 1964 and you stroll into a penny arcade near Times Square. The jangle of mechanical entertainment almost overcomes you. You might see duck hunting games with guns resting on steel arms, a few pinball machines like Riverboat, perhaps a Big Champ boxing game. Right next to the door to the arcade, you're confronted by a massive contraption. It looks like a big vending machine for sodas or cigarettes, but there's a chair mounted in front of it and a way to slide the seat up to a viewing area. The word "Sensorama" slowly spins above the machine.
It takes just a quarter to bring the machine to life. Soon, you're cycling your way through Manhattan, the chair beneath you rumbling over the road, wind from a fan blowing in your face. A wide screen shows the view of the city from the front of the bike in full color, and occasionally you can even smell what it’s like to be there.
"He started building the prototype in 1958," Marianne Heilig says. "In 1960 he had the machine almost done and by 1964 he had it fully functioning. He had already shown it to the press and there were articles about it."
There were four movies in the machine, including a ride through Manhattan and a close-up performance by a belly dancer. Despite the potential for the Sensorama, the best that Heilig could do was arrange for it to sit alongside arcade machines in theme parks or, for a time, in a penny arcade in New York City.
"It saw some theoretical success, but it was viewed at most as a curiosity," says Marianne.
While many saw the machine as a revelation at the time, Mort Heilig was never quite able to turn the Sensorama into a commercial success.
The idea for creating a machine that could transport its users to different places came to Mort when he was studying film on a Fulbright scholarship in Europe.
Inspired by what he read of Cinerama's massive, curved movie screen and the impact it had on viewers, Mort wrote a paper in 1955 describing what he saw as the current state of the film industry, a pandemonium fueled by wild innovation. In the paper he described Cinerama's expansive view as "one small step forward and one big one backward." The gain in visual field brought with it a drop in clarity, he wrote. The "cinema of the future," he explained, would not only deliver perfect clarity, but the ability to address all of a person's senses. It would even surpass the sense of touch delivered by the movie "feelies" of Aldous Huxley's Brave New World.
"It will be a great new power, surpassing conventional art forms like a Rocket Ship outspeeds the horse and whose ability to destroy or build men's souls will depend purely on the people behind it," Heilig wrote.
Despite capturing the imagination of many readers, the paper, initially published in an architectural magazine in Mexico, never managed to find the support it needed to turn Heilig's ideas into a reality. His big dream was to build a movie theater that could deliver to a crowd what the Sensorama aimed to provide to the individual.
His concept seemed doomed. Doomed by ill fate (plane crashes killed off investors on two separate occasions before they could back the project), money (he greatly underestimated the cost of building prototypes) and poor business acumen (patents ran out and were snapped up by competitors).
But when Marianne first met her future husband in 1964, he was still passionate about his vision for the future of cinema.
"He was a filmmaker who was in love with futuristic things," she says. "He considered himself somewhat like a Leonardo Da Vinci.
"I was a young journalist from Budapest who loved to travel."
Marianne's relatives set her up on a blind date with Mort in New York City while she was in the U.S. visiting a well-off uncle who lived outside of the city. She was to take a bus into the city and meet the man at the Port Authority Bus Terminal. But when she arrived she was so overcome by the sheer number of people bustling about that she wandered off to an area where she could stare down at all of the men and women riding up an escalator.
"It was a big sight," she says. "I was leaning on one of the rails looking down at these escalators and then someone walked up to me and said sweetly, 'Are you Marianne?'" It was Mort, and after a short conversation they went on their date.
"We went to a movie, went to dinner and then he showed me his Sensorama machine," she said. "It was like this invitation to come see his stamp collection in his apartment."
Despite the clumsy attempt at using the world's first VR machine as a pickup line, it worked. Months later, the two married in a small chapel in Westminster, just outside of London.
Marianne says her first impressions of the man and his machine weren’t exactly favorable. Mort, dressed in a seersucker suit, seemed almost clownish to the young European woman. The importance of the machine, equally oversized and clownish at first glance, was lost on her.
"It was like a big vending machine, heavy and complicated," she says. "It used wind and sound and motion. I was impressed, but I didn't think it would be historically significant."
Marianne came to love Mort and his inventions. And, she thinks, he would have been delighted to see what his ideas and his creation led to in the world of virtual reality.
"He would have been thrilled that it came to fruition, what he thought was a valid idea," she says. "He wasn't jealous or envious of people. He was disappointed he couldn't get it done."
And now, 20 years after his death, Marianne fears that despite the Sensorama being historically significant, it and Mort still won't be recognized properly.
Marianne spends what free time she has now trying to put together Mort's thoughts (cataloged in more than 50 books of handwritten notes that detail his inventions, thoughts and life) and find a suitable resting place for the only three Sensorama machines ever created, along with some of his other inventions, like the world's first prototype for a head-mounted display.
Her hope is to sell the entire collection, wiping out Mort's mountain of debt and securing his place in history in one deal.
But, just as during Mort's life, there haven't been any takers.
She's tried film museums and collectors, but the deals haven't come together.
"At this point I think I might need to junk them and sell it as metal," she says. "Or maybe when I die I'll donate one of them to the Smithsonian with all of his papers and destroy the other two.
"I would like to pay off these damn debts so I can devote time to Mort's legacy. I just don't have the time now. I don't have the energy. I just can't do it."
Scott Fisher's life straddles all three eras of virtual reality: its growth from science fiction to NASA project to something you can achieve with a phone strapped to your face.
But it all started at a "magical place," he says: MIT's Architecture Machine Group, a collection of wide-eyed researchers tucked away on the fifth floor of Building Nine and led by tech visionary Nicholas Negroponte. It was a place that gathered up all of the latest technology and greatest thinkers, and granted undergrads full access to it all.
Years later, Arch Mach would become the MIT Media Lab, birthplace of the concepts behind creations as diverse as Lego Mindstorms toys, Google Street View, airbag sensors and the One Laptop per Child movement. But back in 1978 when Fisher joined the group, it was still Arch Mach.
"As a teen I got very obsessed with stereoscopic imagery," he says. "I ended up doing that through my undergraduate work, exploring other ways to represent binocular imagery through painting and other media.
"Nicholas was incredibly supportive of all of the people working in that small lab. Arch Mach was amazing. Nicholas had this incredible talent for having the newest coolest toys around, and we pretty much had free rein, playing with this stuff."
During his time there, Fisher worked on a virtual 3D drawing experiment that used magnetic trackers to follow hand movements and allowed the user to essentially paint and draw in virtual reality, in a 10-foot-by-10-foot 3D space. He also spent time working on the predecessor to Google Maps' Street View: the Aspen Movie Map.
As Fisher’s work on 3D art continued to evolve, he began to wonder what it would be like to be inside the virtual world that this new art inhabited.
"3D was fine, but not that immersive," he says.
So Fisher started working on a system that would track a user's head in a 3D space as they looked at a 3D image. The result essentially turned a monitor into a window into a virtual 3D space that showed scenes from an island off the coast of Maine.
One day, renowned computer researcher Alan Kay showed up at Arch Mach with an unlikely entourage.
"Alan Kay brought the CEO of Atari to Arch Mach one day to show it off, and they basically tried to buy the lab," Fisher says. "Nicholas had to explain to them that you can't buy an academic research center."
Not to be thwarted, Atari hired Kay and got him to try and poach much of Arch Mach’s staff.
"The offers were hard to pass up," Fisher says.
Atari had managed to pluck Kay away from famed research center Xerox PARC, promising him near-complete autonomy and the ability to build his own team. Fisher and several others jumped at the opportunity.
"Atari research was set up to work on long-term projects as a group," Fisher says. "And I was getting more interested in arcades and coin-ops because they seemed like a great way to get more immersive experiences out to the public."
In a short time, the Atari lab became the epicenter for research in virtual and augmented reality.
Fisher even tried to bring on Mort Heilig.
"He was a total hero to me," Fisher says. "I had met him when I was at MIT. I had known about the Sensorama. As a stereopath this was my dream, having this multi-sensory display environment, and he built it in the ’50s.
"He was so far ahead of his time and he was so unhappy about it. He tried to sell it to all of the studios, especially Disney, but none of them had the foresight to know that this was coming.
"When I came out to work at Atari, his work fueled what I had hoped to do in coin-op, to make a digital version of Sensorama."
Fisher says the group was in the process of hiring Heilig, and the paperwork was in motion, when the company suddenly pulled the plug on the lab in 1984.
It was, Fisher thinks, the victim of perhaps too much deep thinking and not enough action.
"Alan kept encouraging us to think 10, at least 20 years down the road," he says. "The Atari 800 or 1200 or whatever became Atari's console was meant to run the home of the future. It was going to go beyond entertainment and be the center of the smart home."
But then it all came to that abrupt stop.
"We were literally out in a park in Sunnyvale with Alan on a retreat talking about the future," Fisher says. "He was saying, 'You guys need to be thinking 20 years out.' Then we came back to the office and I was told I had 15 minutes to get out of my office."
When Atari Lab fractured, it sent its researchers to a number of different places, including HP, Apple's Multimedia Lab and Xerox PARC.
In 1985, Fisher found himself at NASA as a research scientist, something that he calls totally crazy given that he came up through the world of art and architecture, not hard science.
He filled a job created for him as part of the human factors division. Soon, he helped found NASA's Virtual Interface Environment Workstation (VIEW) project at the Ames Research Center.
The goal of the project was to create a system that would allow astronauts in space to control anthropomorphic robots located outside the space station through telepresence as they inspected and repaired the station.
"Most important of all for us was if something like this worked well enough, we would have to fly it on a shuttle and do a shake-and-bake on it to see if it could withstand launch and zero-g activities," he says. "Which clearly meant one of us would have to go up with it."
During his time at NASA, Fisher brought on several of the people he worked with at Atari. He also hired some of the companies formed by former Atari Lab folks to help create devices needed for VIEW.
While work on the project went well, incorporating LEEP wide-angle optics, head tracking and gloves used for controls, the system never got off the ground and into space.
That was in part because of a space shuttle disaster in 1986, but also, Fisher says, because despite all of the glamour of working at NASA, the job was hampered by internal politics and a constant struggle to receive funding.
"I felt strongly about leaving NASA," Fisher says. "We spent all of this time building hardware, but we wanted to build the content and we just couldn't get to that point."
Fisher left NASA in 1990, forming a startup with Brenda Laurel, an early innovator in VR narrative, that focused on first-person media, virtual environments and remote presence. In 1994, Fisher moved to Japan to return to academia and work on augmented reality. He had no plans to come back to the U.S., but then he got a call from the University of Southern California.
"I got an offer about a new position at USC to start a division of interactive media," he says. "I didn't know anyone at USC, but I knew the film school was world class, so it seemed interesting.
"I thought it would be a good opportunity to create a Media Lab, but on the West Coast and have it focus more on content and less on hardware."
In 2001, Fisher took the job, helping to found the Interactive Media Division inside the university's School of Cinematic Arts.
"I was able to build it from scratch and hire faculty," he says. "We got an $8 million gift from EA to build out a gaming program and it kind of morphed into a very strong focus in games, which has been great."
Culturally, gaming is absolutely vital to virtual reality, Fisher says.
As Fisher sees it, the first wave of virtual reality was all about real-time simulations and flight sims, the sort of work being conducted by Ivan Sutherland and his team at the University of Utah.
"That influenced the next generation to do that first wave of more personal simulators," he says. "Then the technology started seeing real-time graphics, data gloves, sound stuff. That was the next wave.
"I feel like that morphed into games and higher-powered very special graphics machines, building virtual worlds as gameplay platforms. And that led the way to where we are now."
Watching virtual reality seemingly gain widespread cultural relevance across the general population in such short order is a bit strange for Fisher.
"It’s curious to me that it took that interim. We worked so hard in the ’80s and ’90s to get this out there, but I think it took a while for the general population to catch up," he says.
Fisher thinks that the fact that virtual reality is coming to the masses not just through computers and head-mounted displays, but also through cellphones and PlayStation 4, is fantastic.
"The PlayStation, the power the console has to get this kind of stuff out to a broader market is huge," he says. "Having tools like Unity, which I don't think would exist without the game community, is important too."
While this latest take on virtual reality may seem like the tech's first success to some, Fisher says that he thinks the technology has been successful in a lot of ways for decades. And Fisher is already deeply involved in what he sees as coming next: augmented reality and the merging of all of these different sorts of technology.
"I find the AR stuff more exciting," he says. VR and AR are "absolutely going to merge."
"It’s a pretty exciting time."
Imagine a retelling of Ray Bradbury's Something Wicked This Way Comes with you as the central character. Now imagine that you can do anything you want in this computer-driven virtual reality experience. It puts you at the center of the story and turns your room into locations pulled from the pages of the book. You can smell the awaiting evil of that midwestern traveling show, feel the pricking of your thumbs. The walls around you shift to show first the view from your upstairs bedroom, then, after you jump out of the bedroom window, the blur of motion and then the view from your yard below.
Anything is possible here, even not visiting that fateful dark carnival.
Unfortunately, Atari Lab's Interactive Fantasy System was never quite realized. Instead it remains a series of "fanciful scenarios," as described by Brenda Laurel in the fall of 1983.
The concept was one of many virtual reality ideas that were driven by an exploration of the use of narrative by Laurel, one of the pioneering developers of the medium in the ’80s and ’90s.
"I guess my entry point was sort of before virtual reality," Laurel says. "I worked on a PhD thesis in 1981 and it was about using AI to do interactive fantasy. By the time I became aware of virtual reality in the mid ’80s I knew that it was the medium I should be working in for interactive fantasy."
She says Scott Fisher, whom she worked with at the Atari research lab, was a big part of her move into VR at the time.
"He was obsessed and he taught me to be obsessed," she says.
But her interest in computers was initially piqued in the ’70s when she saw computers and computer graphics for the first time.
"I thought, ‘Whatever this is, I want a piece of it,’" Laurel says.
A company called CyberVision brought Laurel on in 1977.
"They asked me to do some interactive fairy tales," she says. "Because we were loading them from cassette and had so little RAM, you couldn’t do anything specific."
As the technology advanced, Laurel says she became fascinated with the idea of an artificial intelligence storyteller.
"I ended up designing an AI-based system that used the heuristics from Aristotle’s Poetics to generate interesting next events based on a person’s actions," she says. "But it was all theory. I couldn’t build it yet."
While CyberVision managed to sell upward of 10,000 home computers through Montgomery Ward, it seemed apparent that the company wasn’t going anywhere, so people started to slip away to join other companies.
Laurel and a few others went to Atari to work in the company’s home computer division in 1980.
"My job was to port from console to PC, which made no sense," she says. "I went to the head of the division and said, ‘There are so many things we could do with this. We can do word processing. We can teach stuff, do personal care, finance.’"
The president of home computers liked what he heard and promoted Laurel to head up software strategy. Two years later, when Laurel heard about the company bringing on Alan Kay to start up a research lab, she moved over as quickly as she could. She was one of only two people at the lab who didn’t come from MIT’s Arch Mach, she says.
But the lab didn’t survive and, after a two-year stint at Activision, Laurel found herself working again with Scott Fisher, this time as the co-founder and president of a company they started together: Telepresence Research. It was 1990, and Fisher, tired of the politics at NASA, had just left the organization.
"The moment when we started the company, it started to get really, really real," she says. "Of course it turned out we were stupid about business.
"We were like crash dummies. It was way too expensive to do [VR] commercially. The hardware was way too expensive. You couldn’t do decent throughput."
In 1992, after leaving the company, Laurel teamed up with Rachel Strickland to start work on a two-person, three-world VR project funded by the Banff Centre for the Performing Arts.
"It was wildly ambitious for the day," Laurel says.
The team got the very first RealityEngine off the assembly line from SGI to help power the concept. Ultimately, the project, called Placeholder, ran on 13 computers, the RealityEngine, a Macintosh and "lots of duct tape," Laurel says.
The end result was a shared virtual reality space that allowed people to co-create a narrative by taking on the form of animals. The space was made up of three virtual locations connected by portals, and two people could participate at the same time.
The experience of creating Placeholder led to a number of major advancements in both hardware technology and VR design, Laurel says.
The team learned that instant travel between virtual portals could cause VR sickness, and that having two controllable hands was key to increasing a sense of being there. With the help of Mike Naimark, the group created natural landscapes. With the help of Scott Foster, it created an acoustical environment. It made costumes that looked like different bodies to help remove the sense of disembodiment.
"We wanted to make a design statement that VR had uses beyond training, that it could be wild, wonderful, playful," Laurel says. "We wanted to work on the question of embodiment."
Laurel calls the project "bare-bones, but just amazing. It was like exploring the North Pole."
In her time after helping to create Placeholder, Laurel founded a transmedia company devoted to preteen girls, chaired a media design program, helmed an experience design initiative at Sun Labs and chaired the graduate program in design at California College of the Arts. But her first love has always remained virtual reality, something she’s returned to more actively recently.
"I’m utterly focused on VR and AR and mixed reality," she says. "I think in the future, we’re going to be in media situations where we are trying to figure out how to allow people to move smoothly through those three realities and whatever else shows up."
While Laurel is a big fan of the work going on now in virtual reality — specifically, the ability to get affordable hardware to the masses through the HTC Vive, Oculus Rift, Oculus Gear VR and PlayStation VR — she hopes it will remain focused on what she considers to be true virtual reality experiences, and that eventually the technology will be used, in part, to work on less trivial things.
"I believe technology is an extrusion of the human spirit," she says. "I don’t see it as other; I see it as us.
"If we do trivial things with it and train people to want trivial things, we have kind of wasted this great opportunity. I feel very strongly that there is a way to develop these technologies in a way that keeps us in harmony with one another and the world."
Having parents who owned a video game store when he was in high school helped. Watching a lot of science fiction helped. But it was his years at MIT and visiting his friends in the university’s fabled Media Lab that first enticed Richard Marks with the possibilities of virtual reality.
Then in graduate school at Stanford, while working in the aerospace robotics lab with the likes of NASA and Boeing, Marks again started to see the value of virtual reality.
"The technology overlapped quite a bit with robotics," he says. "I worked quite a bit with user interfaces and we were looking at VR for interface."
It was Marks’ work with computer vision for robots that landed him his first job at a startup straight out of school in 1995. But Autodesk eventually purchased the company and then lost interest in computer vision, and Marks was out of work.
Marks says he saw a talk during the 1999 Game Developers Conference that convinced him to apply for a job at Sony. He was intrigued with the power of the PlayStation 2 and what it might be able to do outside the traditional video game space.
"I put in my resume and then nothing happened," he says. "But then later they called me and asked me to come by and talk to some of their engineers about some ideas they had."
Marks showed up in a T-shirt and jeans, ready to chat about technology, unaware that Sony might actually be feeling him out for a possible job.
"I talked to them for about an hour and a half, and then the first group of engineers swapped out with another group and they talked about all of the same stuff with me," he says. "It seemed really weird. I left thinking the whole thing was a weird situation; I didn’t realize it was an interview.
"Then I got an offer letter."
Marks went to work for PlayStation Research and Development, where his group worked on two big ideas.
One was exploring what could be done with a traditional webcam plugged into a PlayStation 2. The second was seeing what the team could do if it plugged Sony’s Aibo robotic dog into the console.
"The Aibo didn’t do a lot in the real world, but if you plugged him into the PS2 and used that to teach him tricks in a virtual environment, you could then load that back into him," Marks says. "But we ended up not pursuing that."
Instead, Marks started digging into what a camera connected with a PlayStation could do. The result was 2003’s EyeToy.
"I was very involved with that," he says. "I did a lot of early research stuff."
Once the project took off and PlayStation knew it was going to be commercially released, Marks moved to Europe for several months to work with Phil Harrison — once the head of R&D, but then the head of Sony Computer Entertainment Europe — and his studios on games that would work with the device.
"I spent three months there prototyping games," he says. "I learned a lot about game development."
The little accessory went on to sell more than 10.5 million units.
With the EyeToy, Sony used a nearly unmodified off-the-shelf camera. When the company decided to bring the device to the PlayStation 3, it built a new camera from scratch.
"We wanted a wider field of view," Marks says. "We wanted high-quality uncompressed video. It worked out very well as a general input camera. A lot of researchers still use it on their PCs for research."
The PlayStation Eye hit in 2007, setting the stage for Sony’s next move in computer vision and gesture technology.
"We had already done a lot of work with color tracking and motion tracking," Marks says. "Around that time the Wii was making hand controllers with motion pop. We already had that technology through the camera, so it was easy for us to do."
He says the group decided to focus more on precision interactions and less on the broader swinging movements seen in most Wii games. It also focused on more two-handed interactions.
In 2009, Sony announced PlayStation Move, which used two handheld motion controllers that look a bit like lit-up ice cream cones, and combined them with the PlayStation Eye to deliver motion gaming to the PS3. The company shipped 15 million of the Move controllers in about three years.
"I think the Move was a little bit ahead of its time," Marks says. "It asked users to move objects in 3D space while looking at a 2D screen."
Fortunately, the controller also turned out to be an excellent solution for one of virtual reality’s two big problems: motion tracking. And Sony had entire divisions dedicated to working with and evolving displays, virtual reality’s other big issue.
While Marks and team were busy messing around with perfecting the tracking solution, Sony teams in other countries were busy using the motion controllers for their own purposes.
"There was a fair number of groups excited about VR on their own on the display side," Marks says. "They were all bolting them onto their display and trying it out.
"At first we were supporting them, and then we were trying it out on our own and seeing how natural it was to have that display and tracking together.
"So they sort of evolved together."
Initially, Sony’s research on VR started with deconstructed Move controllers attached to headsets, and Marks and team did the initial support. But once interest ramped up internally, Sony brought in designers and electrical engineers to help redesign the tracking and the headset.
While Marks said he had occasionally thought about VR and his trackers, he didn’t think he could convince Sony to commit to the costly development of a VR headset.
"That was beyond my scope of belief," he says. "But there were others that had so much belief and comfort with displays, they were able to convince Sony."
Marks is quick to point out that the PlayStation VR was the result of a lot of people putting in a lot of effort. He minimizes his own impact, especially on the optics, which he says he had nothing to do with.
"I get a lot of credit for PlayStation VR, but we have a huge number of people who are really key to this," he says. "I’ve been pushing the interaction side, but if you talk about the display, Sony has a long history of making head-mounted displays and a lot of experience in doing optics for cameras and displays. We have a lot of experience and knowledge inside Sony for that."
That experience is one of the reasons that Sony felt confident it could forgo Fresnel lenses — used in both the Oculus Rift and HTC Vive — for a different sort.
"We were aware of some of the trade-offs, and made the decisions high up not to use them," he says.
One of the chief reasons companies use Fresnel lenses is because of how light they are, Marks says. But Sony was convinced it was more important to balance the headset than it was to lighten it.
"Ours might weigh a bit more because of it not using Fresnel lenses, but there were a lot of benefits," he says.
And according to reviews, most critics feel that the PlayStation VR, despite its slightly higher weight, is the most comfortable of the current headsets. Marks says that’s because it doesn’t press on your face, it allows people to wear glasses and it incorporates a counterbalance to avoid neck strain.
For Sony, PlayStation VR seems to be an almost inevitable evolution that started in 1999 and came to a head during what Marks calls the "perfect storm of graphic display and sensory technology all being affordable."
"If we had tried to do this 10 years ago, we wouldn’t have had the graphics horsepower, and the displays were too expensive," he says. "Now we can have a high-refresh display at 120 frames per second that is bright and vivid."
What the PlayStation VR brings to the table that other headsets might not, Marks says, is the ease of use.
"It is something you can just plug in and turn on and get it to work, which makes it more accessible," he says. "And the comfort really is awesome."
Others well-versed in the ups and downs of virtual reality’s long life agree. VR luminary Scott Fisher thinks a console-supporting VR headset will have a huge impact on the industry. So does Nonny de la Peña. If nothing else, they argue, it gets the technology out to a much broader audience.
Although the PlayStation VR has launched, Marks doesn’t think he’ll be moving on from virtual reality anytime soon.
"There is still a lot we can tap out of VR," he says. "A lot of experiences haven’t been fully explored yet."
Among those ideas is the concept of co-presence, which allows two people in VR headsets to share an area and feel like they are in the same room.
"Last year we did some stuff with multiplayer interactions," he says. "Co-presence and social presence really enhances the experience. When someone else says, ‘Can you grab that?’ and you do and then hand it off to them, it feels like you’re in the same room."
Marks says he hopes that such experiences will come to PlayStation VR one day. He’s also interested in augmented reality.
"I’ve always been excited about that," he says. "I was working on it, not thinking we could make an augmented reality display, but now I’m much more optimistic."
Nonny de la Peña
A man drops to the ground in a diabetic coma as he waits in a Los Angeles food line. Another is forced to sit hunched over in a stress position as his muscles scream in pain and eventually fail. Two sisters struggle to protect their younger sister from a violent husband. A peaceful corner in Syria, children at play, erupts into chaos when a rocket hits nearby.
Journalists strive to convey the facts, the scene, in their stories; to become witnesses to the events that shape the world and report back to the reader. Nonny de la Peña has figured out how to do one better, removing the reporter from the mix and pushing society directly into the fray.
With the help of virtual reality, de la Peña’s immersive journalism has managed to tear away the thin gray line of newsprint that separates harrowing, tragic, befuddling facts and trends from the mundanity of a newspaper column inch.
Her work breathes life into a data-rich Freedom of Information Act request about torture methodology, places a face on the statistics of homelessness and hunger, makes real the everyday plight and terror of domestic violence and reminds everyone of the child victims of war.
It all started, she says, with Second Life.
De la Peña was deep in her journalism career, which included nearly a decade freelancing for the New York Times and Newsweek, when she came across a grant from the MacArthur Foundation that sought to fund a project that would take a documentary and bring it into a tech-driven social arena. She called up her friend and digital media artist Peggy Weil to see if they could come up with a concept.
"Palmer crashed in my hotel room and made these crazy duct tape goggles."
"I told her I wanted to do something with my film on Guantanamo prison," de la Peña says. "She said, ‘Maybe we should do something in Second Life,’ and I said, ‘What’s that?’"
De la Peña dove into the online massively multiplayer gathering space, and set to work with Weil turning converted digital land into a recreation of the prison with embedded video.
"It was an interesting repository of information," de la Peña says. "I remember after that, sitting in my backyard and thinking, ‘Wow, VR could be used with all kinds of journalism,’ and that’s when I came up with the term immersive journalism."
Shortly thereafter, her work on the Second Life piece got her in contact with Mel Slater, who was running the EVENT Lab in Barcelona. He was interested in creating a more immersive piece driven by reporting, and when he heard about all of the data de la Peña had gathered on Guantanamo Bay through Freedom of Information Act requests, he said he wanted to tell that story.
IPSRESS (Induction of Psycho-Somatic Distress in Virtual Reality) was created based on the interrogation logs of detainee 063 from 2002 to 2003. Users put on a head-mounted display, sat in a chair and held their hands behind their back. A strap placed on their chest measured their breathing and fed that information into the program, so that the breathing of the avatar — through whose eyes the user viewed the experience — matched their own.
Then the room was darkened and the program started.
The experience starts with a third-person view of the detainee, but that changes to a first-person perspective. This gives the user a chance to see that they are being forced to stand in a semi-squat position on a box. Looking around, the user can see themselves in an in-game mirror, standing in that position. The entire time, actors read out the actual log entries of what was being asked of the detainee and his responses. Despite sitting relaxed in a chair in a comfortable viewing room, many who went through the experience felt like they were tensed up in a painful position.
"That began my career," she says.
Next, de la Peña, who was then a research fellow at USC’s Annenberg School of Journalism, began working on a project meant to examine hunger in California through the lens of a VR headset. She brought on an intern who was recording audio at a food bank when the intern witnessed a man waiting in line for food so long that he collapsed and fell into a diabetic coma. De la Peña recreated the experience in a game engine with virtual reality, and used the actual sound recorded when the event happened. The seven-minute experience places the user in the line along with the man who eventually collapses. Hunger in Los Angeles went on to be showcased at the 2012 Sundance Festival, leaving many of the participants shaken, some crying, de la Peña says.
The process of creating the experience involved working with a group at the MxR Lab, a place that was home to a wide range of VR innovators and creators like Mark Bolas and Palmer Luckey.
After she got the build to work and was invited to Sundance, Bolas told de la Peña that she couldn’t take the lab’s expensive headset on the road.
So a group, including Luckey, pieced together a working headset with wires, lenses and duct tape.
"Palmer crashed in my hotel room and made these crazy duct tape goggles," de la Peña says. "He had this very weird status of being a part-time employee at the lab. So when it came to Sundance, he sort of became my intern for the project.
"He was, like, the kid that made sure things were working."
In some ways it was Hunger that became the turning point for de la Peña. It was that experience, and the people who were transformed by it at Sundance or later, that helped open the door to VR and the people working in the realm.
"But I also have to give Bolas credit," she says. "He just opened up the lab to all of us. He let us come together and play. He let us come together and do the work. That’s really to his credit."
After the success of Hunger, which was self-funded and nearly drove her family into bankruptcy, de la Peña began getting jobs, she says.
In 2014, the World Economic Forum commissioned her to create an immersive journalism experience to try to showcase the plight of children fleeing Syria. And groups like Al-Jazeera America, Standard Chartered Bank and the AP Google Fund have asked her to create experiences examining things like the use of force by police officers, domestic violence and even the shooting of Trayvon Martin by George Zimmerman.
De la Peña creates the experiences now through her company, Emblematic Group, and remains deeply committed to the idea that virtual reality can better communicate, at times, the ideas some might struggle to impart through the written word, images or even flat video.
She talks about a guestbook at a museum showing off Project Syria that filled up with 55 pages of comments in five days, and the police officer who cried while experiencing Kiya, which drops users into the mayhem of police responding to domestic violence.
"My assessment is that this works," she says.
And it’s only going to get bigger.
Asked if her project on hunger was the turning point for her, she says yes, but that the "super turning point" for her and many people at the time was when Luckey sold his company, Oculus VR, to Facebook for $2 billion.
That proved to so many people, she says, that virtual reality wasn’t just viable as a tool for communication and experiences, but that it could be profitable too.
Now she gets called by major media companies to examine the reporting they are doing and find ways to turn those written works into accompanying virtual reality pieces.
Video games, she says, will also be huge for virtual reality because they will help create the audience that a budding form of expression needs to thrive.
Illustrator: Chris Kindred