Like the billions installed in the smartphones in use today, conventional cameras record the intensity and color of light. Built on standard off-the-shelf CMOS sensor technology, these cameras get smaller and more capable every year and now offer resolutions of several dozen megapixels. However, they see only in two dimensions, producing a flat, picture-like image.
Scientists at Stanford University have developed a way for ordinary image sensors to capture three-dimensional information about a scene. In other words, it may soon be possible to measure the distance to objects with these conventional cameras.
This opens up a host of engineering possibilities. To date, sensing distance with light has required high-tech, expensive LiDAR systems. If you have ever seen a self-driving car on the road, you will recognize it immediately by the stack of equipment mounted on its roof. That is the LiDAR anti-collision system, which uses lasers to measure the distance to surrounding objects.
A LiDAR system works much like radar, except that it uses light instead of radio waves. By pointing a laser at objects and measuring the reflected light, the system can determine how far away a thing is, how fast it is moving, whether it is approaching or receding, and, most importantly, whether the paths of two moving objects will intersect at some point in the future.
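The arithmetic behind this is simple. As a rough sketch (not the Stanford system itself): a pulsed LiDAR times how long a light pulse takes to bounce back, and range follows from the speed of light; comparing two successive range readings gives the closing speed. The function names and the 400 ns example are illustrative.

```python
# Time-of-flight range and closing-speed estimation, the basic idea behind
# pulsed LiDAR. This is an illustrative sketch, not the published design.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the target: the light covers the path out and back."""
    return C * t_seconds / 2.0

def closing_speed(r1_m: float, r2_m: float, dt_s: float) -> float:
    """Positive when the target approaches, negative when it recedes."""
    return (r1_m - r2_m) / dt_s

# A pulse returning after 400 ns corresponds to a target roughly 60 m away.
r = range_from_round_trip(400e-9)

# Two readings 0.1 s apart: the target closed 2 m, so it approaches at 20 m/s.
v = closing_speed(100.0, 98.0, 0.1)
```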
For engineers, this advance opens up two intriguing possibilities. The first is building LiDAR with megapixel resolution, a threshold unattainable today. With higher resolution, LiDAR could detect targets at greater distances. An autonomous car, for example, could distinguish a cyclist from a pedestrian farther away - that is, much earlier - giving the vehicle time to avoid an accident. The second is that virtually any sensor available today could produce high-quality 3D images with minimal hardware modification.
Changing the way cars see
One way to produce 3D images with traditional sensors is to add a light source (easy to do) and a modulating device (not so easy to do) that switches the light on and off millions of times per second. By detecting these oscillations in the reflected light, developers can compute the range. Modulators capable of this already exist, but they consume so much electricity that they are impractical for everyday use.
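To make the "compute the range from the oscillations" step concrete, here is a hedged simulation under assumed parameters (a 10 MHz modulation frequency and an idealized noiseless signal, neither taken from the article): the reflected modulation lags the emitted one by a phase proportional to the round-trip time, and demodulating recovers that phase, hence the distance.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 10e6        # assumed modulation rate: 10 million cycles per second

def range_from_phase(phase_rad: float) -> float:
    """Round-trip delay is phase/(2*pi*f); one-way range is c*delay/2."""
    return C * phase_rad / (4 * np.pi * F_MOD)

# Simulate a target 5 m away: the received modulation is a delayed copy.
true_range = 5.0
delay = 2 * true_range / C
t = np.arange(10_000) * 1e-9            # 10 us sampled at 1 GHz
received = np.cos(2 * np.pi * F_MOD * (t - delay))

# Demodulate: correlate against in-phase and quadrature references
# to extract the phase lag of the received oscillation.
i = np.mean(received * np.cos(2 * np.pi * F_MOD * t))
q = np.mean(received * np.sin(2 * np.pi * F_MOD * t))
phase = np.arctan2(q, i)

estimated_range = range_from_phase(phase)  # recovers ~5 m
```

Note that this scheme is ambiguous beyond half a modulation wavelength (about 15 m at 10 MHz); practical systems resolve that with multiple frequencies.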
The Stanford team instead built a simple acoustic modulator from a thin layer of lithium niobate, a crystal with valuable optical, electronic, and acoustic properties, coated with two transparent electrodes.
The most crucial feature of lithium niobate is its piezoelectricity. Driven electrically, the crystal supports an acoustic wave that rotates the polarization of light passing through it in a desired, tunable, and usable manner. This is the key technical feature behind the team's success. A polarizing filter is then carefully placed behind the modulator, converting this rotation into intensity modulation - making the light brighter and darker - and switching the light on and off millions of times per second.
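The rotation-to-brightness step described above is Malus's law: a polarizer passes a fraction cos²(θ) of light whose polarization is rotated by θ. A minimal sketch, with the 10 MHz drive frequency and sinusoidal rotation profile assumed for illustration:

```python
import numpy as np

def transmitted_intensity(theta_rad, i0=1.0):
    """Malus's law: fraction of light a polarizer passes after a
    polarization rotation of theta."""
    return i0 * np.cos(theta_rad) ** 2

F_DRIVE = 10e6                        # assumed acoustic drive frequency
t = np.linspace(0, 2e-7, 201)         # two drive periods, 1 ns steps
# Acoustic wave rotates the polarization back and forth by up to 90 degrees.
theta = (np.pi / 2) * np.sin(2 * np.pi * F_DRIVE * t)
intensity = transmitted_intensity(theta)

# When the rotation swings between 0 and 90 degrees, the output swings
# between fully on (cos^2(0) = 1) and fully off (cos^2(90deg) = 0):
# the polarizer turns polarization rotation into an on/off light switch.
```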
Best of all, the modulator is simple and integrates into the proposed setup, which uses off-the-shelf cameras. The researchers say it can supply the missing third dimension to virtually any image sensor.
The team constructed a laboratory prototype LiDAR system that used a readily available digital camera as the receiver.