4-D camera could improve robot vision, virtual reality and self-driving cars – Phys.Org

August 7, 2017

Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). Credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego

Engineers at Stanford University and the University of California San Diego have developed a camera that generates four-dimensional images and can capture 138 degrees of information. The new camera, the first-ever single-lens, wide field of view, light field camera, could generate information-rich images and video frames that will enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture.

The researchers also see this technology being used in autonomous vehicles and augmented and virtual reality technologies. Researchers presented their new technology at the computer vision conference CVPR 2017 in July.

"We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not," said Donald Dansereau, a postdoctoral fellow in electrical engineering at Stanford and the first author of the paper.

The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.

UC San Diego researchers designed a spherical lens that provides the camera with an extremely wide field of view, encompassing nearly a third of the circle around the camera. Ford's group had previously developed the spherical lenses under the DARPA "SCENICC" (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, providing high performance but at high cost.

The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration hardware expertise of Ford's lab and the signal processing and algorithmic expertise of Wetzstein's lab resulted in a digital solution that not only leads to the creation of these extra-wide images but enhances them.

The new camera also relies on a technology developed at Stanford called light field photography, which is what adds a fourth dimension to this camera: it captures the two-axis direction of the light hitting the lens and combines that information with the 2-D image. Another noteworthy feature of light field photography is that it allows users to refocus images after they are taken, because the images include information about the light's position and direction. Robots could use this technology to see through rain and other things that could obscure their vision.

"One of the things you realize when you work with an omnidirectional camera is that it's impossible to focus in every direction at once; something is always close to the camera, while other things are far away," Ford said. "Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics."
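The refocusing Ford describes can be illustrated with the classic "shift-and-add" technique for light fields. The sketch below is not the researchers' actual pipeline; it is a minimal illustration assuming a light field stored as a (U, V, H, W) grid of sub-aperture views, where shifting each view in proportion to its angular offset and averaging brings a chosen depth plane into focus:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add.

    lightfield: array of shape (U, V, H, W) -- a U x V grid of
        sub-aperture views, each an H x W grayscale image.
    alpha: refocus parameter; each view is shifted in proportion
        to its offset from the central view before averaging, so
        scene points at the matching depth align and appear sharp.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # central view coordinates
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # integer-pixel shift proportional to the view's angular offset
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Objects at other depths remain misaligned across the shifted views and blur out, which is also why comparing sharpness across many values of alpha yields the single-aperture depth map mentioned in the quote.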

"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they're made of," Wetzstein said. "This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."

And while this camera can work like a conventional camera at far distances, it is also designed to improve close-up images. Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones and self-driving cars. As part of an augmented or virtual reality system, its depth information could result in more seamless renderings of real scenes and support better integration between those scenes and virtual components.

The camera is currently at the proof-of-concept stage and the team is planning to create a compact prototype to test on a robot.


More information: Technical paper: http://www.computationalimaging.org/w 04/LFMonocentric.pdf

