When Google unveiled its first self-driving car in 2010, the roof-mounted rotating cylinder really stood out. It housed the vehicle's Light Detection and Ranging (LiDAR) system, which works much like a light-based radar. LiDAR works alongside cameras and radar to map the environment, helping the car avoid obstacles and drive safely.
Since then, cheap, chip-based cameras and radar systems have become mainstream for collision avoidance and highway autonomous driving. However, LiDAR navigation systems are still bulky mechanical devices that cost thousands of dollars.
That may be about to change, thanks to a new high-resolution lidar chip developed by a team led by Ming Wu, professor of electrical engineering and computer science at the University of California, Berkeley, and co-director of the Berkeley Center for Sensors and Actuators. The new design was published Wednesday, March 9, in the journal Nature.
Engineers at the University of California, Berkeley, have used microelectromechanical system (MEMS) switches to significantly improve the resolution of chip-based LiDAR sensors. In this LiDAR chip schematic, laser light is emitted from an optical antenna connected to a tiny switch, and the reflected light is captured by the same antenna. 3D images are obtained by turning on the switches in the array one at a time. Credit: Zhang Xiaosheng, University of California, Berkeley.
Wu's lidar is based on a focal-plane switch array (FPSA), a semiconductor-based matrix of antennas that gathers light much as the sensor in a digital camera does. Its 16,384-pixel resolution may not sound impressive next to the millions of pixels in a smartphone camera, but it dwarfs the 512 pixels or fewer found on FPSAs to date, Wu said.
Equally important, the design uses the same complementary metal-oxide-semiconductor (CMOS) technology used to produce computer processors, and it can be scaled up to megapixel sizes. This could lead to a new generation of powerful, low-cost 3D sensors for self-driving cars, drones, robots, and even smartphones.
Lidar works by capturing the reflection of light from its lasers. By measuring the time it takes for light to return, or changes in the frequency of the beam, lidar can map the environment and record how fast objects around it are moving.
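The time-of-flight arithmetic behind this principle is simple: distance is the speed of light times the round-trip time, halved. A minimal sketch (the echo time below is an illustrative value, not a measurement from the article):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# The 667 ns echo time is an illustrative value, not a measured one.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target from the round-trip travel time of a laser pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after about 667 nanoseconds has traveled to a target
# roughly 100 meters away and back.
print(round(tof_distance_m(667e-9)))  # → 100
```

The frequency-based variant mentioned above (frequency-modulated continuous wave) additionally recovers velocity from the Doppler shift, but the distance arithmetic is the same idea.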
Mechanical lidar systems have powerful lasers that can visualize objects hundreds of yards away, even in the dark. They also generate 3D maps with enough resolution for the vehicle's AI to distinguish between vehicles, bicycles, pedestrians, and other hazards.
However, putting these capabilities on a chip has stymied researchers for more than a decade. The most stubborn obstacle is the laser.
"We want to illuminate a very large area," Wu said. "But if we tried to do that, the light would be too weak to get far enough. So, as a design trade-off to maintain light intensity, we reduced the area we illuminated with the laser."
That's where the FPSA comes in. It consists of a matrix of tiny optical transmitters, or antennas, and switches that rapidly turn them on and off. This lets it channel all of the available laser power through a single antenna at a time.
Scanning electron micrograph of a LiDAR chip showing a raster antenna. Image credit: Kyungmok Kwon, University of California, Berkeley.
Switching, however, poses problems of its own. Almost all silicon-based lidar systems use thermo-optic switches, which rely on large changes in temperature to produce tiny changes in refractive index, bending and redirecting laser light from one waveguide to another.

Thermo-optic switches are also large and power-hungry. Pack too many onto a chip and they generate too much heat to operate properly. This is why existing FPSAs have been limited to 512 pixels or fewer.
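The power problem follows from the physics: silicon's refractive index shifts only slightly per degree, so flipping a switch takes a large temperature swing. A back-of-the-envelope sketch, using typical textbook values (the wavelength, coefficient, and arm length are illustrative assumptions, not figures from the article):

```python
# Why thermo-optic switches run hot: a pi phase shift (enough to flip a
# Mach-Zehnder-style switch) needs a sizeable temperature swing, because
# silicon's refractive index changes only slightly with temperature.
# All numbers below are illustrative, not taken from the article.

WAVELENGTH_M = 1.55e-6   # telecom-band wavelength (assumed)
DN_DT = 1.8e-4           # approx. thermo-optic coefficient of silicon, 1/K
ARM_LENGTH_M = 100e-6    # heated waveguide arm length (assumed)

# A pi phase shift requires an optical path change of half a wavelength:
# delta_n * L = wavelength / 2  =>  delta_T = wavelength / (2 * L * dn/dT)
delta_t = WAVELENGTH_M / (2 * ARM_LENGTH_M * DN_DT)
print(f"Temperature swing for a pi shift: {delta_t:.0f} K")  # ~43 K
```

Tens of degrees per switch, multiplied across hundreds of pixels, quickly becomes an unmanageable heat load on a single chip.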
Wu's solution replaces them with microelectromechanical system (MEMS) switches that physically move waveguides from one location to another.
"The building is very similar to a motorway exchange," he said. "So imagine you are a beam of light from east to west. We can mechanically lower a ramp so that you suddenly turn 90 degrees and get you from north to south."
MEMS switches are a well-established technology for routing light in communication networks, but this is the first time they have been used in lidar. Compared with thermo-optic switches, they are smaller, consume far less power, switch faster, and have very low optical losses.
That is how Wu was able to cram 16,384 pixels onto a 1-centimeter-square chip. When a switch turns on a pixel, the pixel emits a laser beam and captures the reflected light. Each pixel covers 0.6 degrees of the array's 70-degree field of view. By cycling rapidly through the array, Wu's FPSA builds a 3D picture of the world around it. Mounting several of them in a circular configuration would produce a 360-degree view around a vehicle.
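The scanning scheme described above can be sketched in a few lines. Here `fire_and_measure` is a hypothetical stand-in for the hardware (faking a flat wall 10 m away so the sketch runs), and the 128 × 128 layout is an assumption inferred from the 16,384-pixel figure:

```python
import math

C = 299_792_458.0          # speed of light, m/s
GRID = 128                 # 128 x 128 = 16,384 pixels (assumed layout)
FOV_DEG = 70.0             # field of view from the article
STEP_DEG = FOV_DEG / GRID  # ~0.55 deg per pixel (the article rounds to 0.6)

def fire_and_measure(row: int, col: int) -> float:
    """Hypothetical hardware call: switch on pixel (row, col), fire the
    laser, and return the round-trip time of the echo in seconds.
    Here we fake a target 10 m away so the sketch is runnable."""
    return 2 * 10.0 / C

def scan_frame() -> list[tuple[float, float, float]]:
    """Cycle through every pixel, converting each echo into a 3D point."""
    points = []
    for row in range(GRID):
        for col in range(GRID):
            r = C * fire_and_measure(row, col) / 2.0   # range in meters
            # Pixel indices map to angles across the field of view.
            az = math.radians((col - GRID / 2) * STEP_DEG)
            el = math.radians((row - GRID / 2) * STEP_DEG)
            points.append((r * math.cos(el) * math.sin(az),
                           r * math.sin(el),
                           r * math.cos(el) * math.cos(az)))
    return points

frame = scan_frame()
print(len(frame))  # one 3D point per pixel: 16384
```

One full loop through the switch matrix yields one frame of the point cloud; frame rate is then set by how fast the MEMS switches can cycle.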
The FPSA's resolution and range still need improvement before the system is ready for commercialization. While the optical antennas are hard to shrink further, the switches remain the largest components, and the researchers believe they can be made considerably smaller.
If they succeed, conventional CMOS production technology promises to make cheap, chip-scale lidar part of our future.
"Look at how we use cameras," Wu says. "They're embedded in vehicles, robots, vacuum cleaners, surveillance equipment, biometrics, doors. Once we shrink lidar to the size of a smartphone camera, there will be many more potential applications."