A team of researchers at Stanford has designed a 4D camera that could improve machine vision for applications in robotics and in virtual and augmented reality technologies.
The new vision technique could also be used in autonomous vehicles, add the researchers, who include Donald Dansereau, a postdoctoral fellow in electrical engineering, and Gordon Wetzstein, an assistant professor of electrical engineering.
The camera Stanford has developed is a proof of concept, and the university plans to start building a smaller prototype suitable for commercialisation in the coming months.
The technique builds on discoveries made 20 years ago, according to the Stanford news website, and the device is technically a light field camera.
Basically, it gathers much more information from a single image than conventional cameras do, as Stanford explains.
Currently, robots and other machines with vision need to capture images from different perspectives to understand their environment.
But the Stanford 4D camera uses more powerful software to simulate those different perspectives from a single picture taken from one angle.
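To give a rough sense of how one capture can yield many perspectives, a light field is commonly modelled as a 4D array indexed by an angular viewpoint position and a pixel position. The sketch below is a minimal illustration of that idea in NumPy; the array names, shapes and the `view_from` helper are assumptions for the example, not Stanford's actual data format or software.

```python
import numpy as np

# Illustrative 4D light field L[u, v, s, t]:
# (u, v) index the viewpoint (angular position across the aperture),
# (s, t) index the pixel within the image seen from that viewpoint.
U, V, S, T = 5, 5, 64, 64
light_field = np.random.rand(U, V, S, T)  # placeholder capture data

def view_from(light_field, u, v):
    """Extract the sub-aperture image seen from viewpoint (u, v)."""
    return light_field[u, v]

# A single 4D capture contains many slightly shifted perspectives:
centre_view = view_from(light_field, 2, 2)  # shape (64, 64)
left_view = view_from(light_field, 0, 2)    # shape (64, 64)
```

The small shifts between such views are what software can exploit to recover depth without moving the camera.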
Dansereau says: “It’s at the core of our field of computational photography. It’s a convergence of algorithms and optics that’s facilitating unprecedented imaging systems.”
Perhaps a simpler way of putting it is that the camera measures not only how much light arrives but also the direction it arrives from. That directional information lets software infer, for example, how far the objects in the picture are from the lens.
This is different from a conventional two-dimensional photograph, in which all of that light is flattened onto a single plane.
As Dansereau suggests, looking through a light field camera is like looking through a window into a room.
Dansereau says: “A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering.
“Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.”
The camera’s applications are many and varied, but perhaps the fields that will be most advanced through its use include robotics, autonomous vehicles and augmented and virtual reality, as well as wearables.
Wetzstein says: “It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of.
“This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”