New camera designed by Stanford researchers could improve robot vision and virtual reality – Stanford University News

A new camera that builds on technology first described by Stanford researchers more than 20 years ago could generate the kind of information-rich images that robots need to navigate the world. This camera, which generates a four-dimensional image, can also capture nearly 140 degrees of information.

"We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not," said Donald Dansereau, a postdoctoral fellow in electrical engineering.

Assistant Professor Gordon Wetzstein, left, and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. (Image credit: L.A. Cicero)

With robotics in mind, Dansereau and Gordon Wetzstein, assistant professor of electrical engineering, along with colleagues from the University of California, San Diego, have created the first-ever single-lens, wide-field-of-view light field camera, which they are presenting at the computer vision conference CVPR 2017 on July 23.

As technology stands now, robots have to move around, gathering different perspectives, if they want to understand certain aspects of their environment, such as movement and material composition of different objects. This camera could allow them to gather much the same information in a single image. The researchers also see this being used in autonomous vehicles and augmented and virtual reality technologies.

"It's at the core of our field of computational photography," said Wetzstein. "It's a convergence of algorithms and optics that's facilitating unprecedented imaging systems."

The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," Dansereau said. "Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."

That additional information comes from a type of photography called light field photography, first described in 1996 by Stanford professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what's known as a 4D image. A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction. Robots might use this to see through rain and other things that could obscure their vision.
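The post-capture refocusing described above is commonly done with a "shift-and-sum" operation: each sub-aperture view of the 4D light field is shifted in proportion to its angular position, then the views are averaged, which brings a chosen depth plane into focus. The sketch below is a minimal illustration of that standard technique, not the Stanford team's implementation; the array layout and the `alpha` focus parameter are assumptions for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-sum.

    light_field: array of shape (U, V, H, W) -- a U x V grid of
        sub-aperture views, each an H x W grayscale image.
    alpha: focus parameter (hypothetical name); each view is shifted
        in proportion to its angular offset from the aperture center,
        so different alpha values bring different depths into focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # aperture center
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer pixel shift toward the chosen focal plane.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)  # average the shifted views
```

With `alpha = 0` the views are simply averaged, which focuses at the plane the views were captured on; sweeping `alpha` refocuses nearer or farther, all from a single exposure.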

