A Disney Research Zurich project developed an algorithm for creating 3D models from static photographs, which could prove useful in recreating real-world objects in video games.
The algorithm analyzes photographs, infers depth from the light information they capture and reconstructs the scene in a virtual environment, as described in a paper (PDF link) called "Scene Reconstruction from High Spatio-Angular Resolution Light Fields." By using multiple photographs, the method can "easily capture the scene from different viewing positions" to fill in the occluded areas you'd find in a single photo, as demonstrated in the video above.
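The core intuition behind recovering depth from multiple viewpoints is that a nearby object shifts more between adjacent photographs than a distant one. The sketch below is a toy illustration of that standard disparity-to-depth relation, not code from the Disney Research paper; the function names and parameter values are assumptions for demonstration.

```python
# Toy illustration of multi-view depth recovery: an object's apparent
# shift (disparity) between two views taken a known distance apart is
# inversely proportional to its depth. This is the generic pinhole-camera
# relation, not the paper's actual reconstruction algorithm.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Pinhole relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

def disparity_from_depth(focal_length_px, baseline_m, depth_m):
    """Inverse relation, handy for sanity-checking the round trip."""
    return focal_length_px * baseline_m / depth_m

# With a hypothetical 1000 px focal length and 1 cm camera spacing,
# a large 20 px shift implies a close object, a small 2 px shift a far one:
near = depth_from_disparity(1000.0, 0.01, 20.0)  # -> 0.5 m
far = depth_from_disparity(1000.0, 0.01, 2.0)    # -> 5.0 m
```

Capturing many closely spaced viewpoints, as a densely sampled light field does, gives the algorithm many such measurements per scene point, which is what lets it estimate depth precisely and see around occlusions.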
Disney Research's technology could streamline the modeling process in several industries, its creators say. The paper contrasts it with the current method of laser scanning, a comparatively inaccurate process that requires significant manual cleanup by modelers. The technology was also designed to run on a "standard graphics processing unit (GPU)."
"Densely sampled light fields ... allow us to capture the real world in unparalleled detail"
"Scene reconstruction in the form of depth maps, 3D point clouds or meshes has become increasingly important for digitizing, visualizing, and archiving the real world, in the movie and game industry as well as in architecture, archaeology, arts, and many other areas," the paper reads.
Disney Research was founded in 2008 as "an informal network of research labs" at the Walt Disney Company and partners with universities such as Carnegie Mellon University and the Swiss Federal Institute of Technology Zurich.
The technology for using photographs to recreate objects in the digital world is already making its way to the consumer market. Last month, for example, developer Dekko launched DekkoScan, an iOS app that allows users to photograph real-world objects and import them into Minecraft.