During my demo with the latest version of Valve and HTC's virtual reality hardware, now called the Vive Pre, I was told that the individual setting up the technology was going to help configure the controllers in my hands. I double-clicked the new button under the thumb pads, and a blue, ghostly image of my real-world surroundings came into view through the clean lines of the virtual holding program I was in.
It was a bit shocking, like being able to see the world under the world, as if I could peek under the veil of the holodeck and see the grid patterns on the walls.
I was able to see the person walking towards me, and I let him fuss with the controllers. The virtual version of the controller seemed to float slightly within the blue video image I could see through the camera. I could read the text on the banner on the wall of the room, and see the people talking and drinking outside of my demo area. The camera didn't just work well; it showed me a limited representation of the entire room.
What's fascinating about this is that it didn't take me out of the experience as much as it made me feel more comfortable. I knew where the walls were, and where the people were around me. I could click twice, take a look at where I was, and then click again to go back into the game. Even better, the blue version of my real-world surroundings would slowly fade into view as a sort of warning when I stepped too close to a wall.
This, of course, is not as easy as simply hooking a camera to the development kit.
"This is extremely hard, because it's just one camera," Shen Ye, VR coordinator for HTC, told Polygon. They had to limit the resources being used by the PC while making sure the image was clear enough that you could use it to guide your real-world movements. It was a balancing act.
"Our goal for the camera was that we're going to open it up with an SDK so anyone can use it, and the second goal was to stay in VR and interact ... so we had someone who wanted to check their phone during the demo, so they double-clicked [the controller] and checked their phone. We want to give you the option of interacting with the real world without taking the headset off."
This isn't a Kinect-style technology where IR dots are being thrown off and then sensed as they bounce back to get a sort of echolocation image of the room. "It's really clever camera processing," I was told. The camera is actually presenting a 2D picture that the hardware and software are able to fake into a sort of 3D representation of what's around you. How this happens is a detail that no one seems willing to really discuss until closer to launch, and for now it feels like a bit of a magic trick.
What's important is that it works, and it fills a large hole in the current strategy for room-scale VR. Someone was trying to take my picture as I played with the hardware, and I was able to click through to turn and see them, and then click again to become immersed in the game. It was seamless, and it never required me to remove the headset. The SDK will also allow developers to play with the camera and experiment with games and experiences that could potentially mix the real and virtual.
Room-scale VR is a huge selling point of the Vive, as long as you have the space, and the improved fit and finish of the hardware was nice. But the front-facing camera, and the ability to move in and out of VR so easily, is a massive improvement over the original version of the development kit. If this is what caused the delay, it may have been worth it.
And of course I had to ask about price, but the answer was about what I was expecting. "We will talk about price closer to launch in April," I was told.