A Trilobite-Inspired Camera Lens Can Focus Up Close And Far Away At The Same Time
The large depth of field helps recover distance information from a single image.
The now extinct trilobite Dalmanitina socialis had a superior version of bifocals 400 million years before Benjamin Franklin invented them (SN: 2/2/74). The marine animal could not only see things close up and far away, it could also keep both distances in focus at the same time, a skill that most eyes and cameras lack.
A new kind of camera now sees the world through the eyes of this trilobite. Researchers write in Nature Communications on April 19 that the camera, inspired by D. socialis’ eyes, can simultaneously focus on two points anywhere between three millimeters and nearly two kilometers away.
“In optics, there was a problem,” says Amit Agrawal, a physicist at the National Institute of Standards and Technology in Gaithersburg, Md. If you wanted a single lens to focus on two different points at once, you simply could not do it, he says.
Agrawal reasoned that if a camera could see like a trilobite, it could record high-quality photographs with greater depths of field. The relatively new technique of light-field photography, which uses many tiny lenses to make 3-D photographs, requires a large depth of field – the distance between the nearest and farthest points that a camera can bring into focus.
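For a conventional lens, depth of field can be estimated with the standard thin-lens formulas. The sketch below is purely illustrative (it is not the researchers' method, and all the numbers are hypothetical): it shows how a smaller aperture widens the band of distances that stay acceptably sharp.

```python
# Illustrative thin-lens depth-of-field estimate (hypothetical parameters,
# not from the paper): find the nearest and farthest distances that remain
# acceptably sharp when a conventional lens focuses on one subject.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near_mm, far_mm) limits of acceptable focus.

    focal_mm   -- lens focal length
    f_number   -- aperture f-number
    subject_mm -- distance to the focused subject
    coc_mm     -- circle-of-confusion diameter (blur tolerance)
    """
    # Hyperfocal distance: focusing here keeps everything from roughly
    # half this distance out to infinity acceptably sharp.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        far = float("inf")
    else:
        far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

near, far = depth_of_field(focal_mm=50, f_number=2.8, subject_mm=2000)
print(f"In focus from {near:.0f} mm to {far:.0f} mm")
```

Running this with a wider aperture (smaller f-number) shrinks the in-focus band, which is why a single conventional lens struggles to hold very near and very far objects sharp at once.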
The scientists created a metalens, a flat lens made up of millions of differently sized rectangular nanopillars arrayed like a cityscape — if skyscrapers were one-hundredth the width of a human hair. Because of their shapes, sizes and arrangement, the nanopillars act as obstacles that bend light in different ways. The researchers tuned the pillars so that some of the incoming light was directed through one part of the lens and the rest through another, producing two distinct focal points.
To use the device in a light-field camera, the team then built an array of identical metalenses that could capture thousands of tiny images. Combined, those images yield a picture that’s in focus close up and far away, but blurry in between. The blurry middle ground is then sharpened with a machine learning program.
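The merging idea can be shown with a toy focus-stacking routine: for each pixel, keep the value from whichever input image is locally sharper. This is a minimal sketch in the same spirit as combining near- and far-focus captures; it is not the neural-network approach the researchers actually use.

```python
# Toy focus merge (a simplification; the paper uses machine learning):
# given two views of the same scene, one focused near and one focused far,
# keep each pixel from whichever image is locally sharper.
import numpy as np

def laplacian_energy(img):
    """Absolute Laplacian response: large where the image is sharp."""
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
           - 4 * img)
    return np.abs(lap)

def merge_by_sharpness(near_img, far_img):
    """Per-pixel pick of the sharper of the two source images."""
    mask = laplacian_energy(near_img) >= laplacian_energy(far_img)
    return np.where(mask, near_img, far_img)

# Demo: a sharp checkerboard vs. a featureless gray frame; the merge
# should keep the checkerboard everywhere, since it is sharper.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 1.0
flat = np.full((8, 8), 0.5)
merged = merge_by_sharpness(sharp, flat)
print(merged)
```

A real pipeline would also deblur the mid-range region rather than just choosing between sources, which is where the learned sharpening comes in.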
Achieving a large depth of field can help the program recover depth information, says Ivo Ihrke, a computational imaging scientist at the University of Siegen in Germany who was not involved with this research. Standard images don’t contain information about the distances to objects in the photo, but 3-D images do. So the more depth information that can be captured, the better.
The trilobite approach isn’t the only way to boost the range of visual acuity. Other cameras using different methods have achieved a similar depth of field, Ihrke says. For instance, a light-field camera made by the company Raytrix contains an array of tiny glass lenses of three different types that work in concert, with each type tailored to focus light from a particular distance. The trilobite approach also uses an array of lenses, but they are all identical, each one capable of doing the full depth-of-focus work on its own, which helps achieve a slightly higher resolution than mixing lens types.
Regardless of how it’s done, all the recent advances in capturing depth with light-field cameras will improve imaging techniques that depend on that depth, Agrawal says. These techniques could someday help self-driving cars to track distances to other vehicles, for example, or Mars rovers to gauge distances to and sizes of landmarks in their vicinity.