When a team of engineers went to work in 2015 looking for a new technique to boost the cost-effectiveness of solar cells, they didn’t realize they’d end up with a bonus – a way to help improve the collision avoidance systems of self-driving cars.
The twin discoveries started, they say, when they began looking for a solution to a well-known problem in the world of solar cells. Solar cells capture photons from sunlight in order to convert them into electricity.
The thicker the layer of silicon in the cell, the more light it can absorb, and the more electricity it can ultimately produce. But the sheer expense of silicon has become a barrier to solar cost-effectiveness.
So the Stanford engineers figured out how to create a very thin layer of silicon that could absorb as many photons as a much thicker layer of the costly material. Specifically, rather than laying the silicon flat, they nanotextured the surface of the silicon in a way that created more opportunities for light particles to be absorbed.
Their technique increased photon absorption rates for the nanotextured solar cells compared to traditional thin silicon cells, making more cost-effective use of the material.
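As a rough illustration of why that works (the absorption coefficient and path-length factor below are assumptions for the sketch, not figures from the paper), the fraction of light a silicon layer absorbs grows with the distance the light travels inside it. Texturing does not make the silicon thicker; it scatters light so that it travels a longer effective path inside the thin layer:

```python
import math

# Illustrative Beer-Lambert sketch: fraction absorbed = 1 - exp(-alpha * L),
# where L is the optical path length inside the silicon. Nanotexturing keeps
# the layer thin but stretches the path the light travels within it.

ALPHA = 1.0e4  # assumed absorption coefficient (1/m) for near-infrared light in silicon

def absorbed_fraction(path_length_m: float) -> float:
    """Fraction of incident photons absorbed over a given optical path."""
    return 1.0 - math.exp(-ALPHA * path_length_m)

flat_thick = absorbed_fraction(300e-6)         # flat 300-micron layer
flat_thin = absorbed_fraction(10e-6)           # flat 10-micron layer
textured_thin = absorbed_fraction(30 * 10e-6)  # same 10 microns, with an assumed
                                               # 30x longer light path from texturing

print(f"flat 300 um layer:     {flat_thick:.0%}")    # ~95%
print(f"flat 10 um layer:      {flat_thin:.0%}")     # ~10%
print(f"textured 10 um layer:  {textured_thin:.0%}") # ~95%
```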
Then came the surprise. After the researchers shared these efficiency figures, engineers working on autonomous vehicles began asking whether this texturing technique could help them get more accurate results from a collision-avoidance technology called LIDAR, which is conceptually like sonar except that it uses light rather than sound waves to detect objects in the car’s travel path.
LIDAR works by sending out laser pulses and calculating the time it takes for the photons to bounce back. The autonomous car engineers understood that current photon detectors use thick layers of silicon to make sure they capture enough photons to accurately map the terrain ahead. They wondered whether texturing a thin layer of silicon, much as on the solar cells, would lead to more accurate maps than current detectors can provide.
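For readers who want the arithmetic behind that, here is a minimal sketch of the time-of-flight calculation at the heart of any LIDAR system (the 200-nanosecond return time is just an example, not a number from the paper):

```python
# Time-of-flight sketch: a laser pulse goes out, a few photons bounce back,
# and the round-trip time tells you how far away the reflecting object is.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the pulse covers that distance twice."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return detected 200 nanoseconds after the pulse left is roughly 30 meters out.
print(distance_from_round_trip(200e-9))  # ~29.98 meters
```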
Indeed, in their new paper, the Stanford engineers report that their textured silicon can capture three to six times more of the returning photons than today’s LIDAR receivers. They believe this will enable self-driving car engineers to design high-performance, next-generation LIDAR systems that would continuously send out laser pulses in all directions at once. The reflected photons would be captured by an array of textured silicon detectors, creating moment-to-moment maps of pedestrian-filled city crosswalks.
Harris said the texturing technology could also help to solve two other LIDAR snags unique to self-driving cars – potential distortions caused by heat and the machine equivalent of peripheral vision.
The heat problem occurs because the LIDAR laser apparatus can heat up during extended use, causing the wavelengths of the emitted photons to shift slightly. Such shifts could cause the returning light to bounce off traditional silicon detectors, which are made to absorb specific wavelengths. But the Stanford nanotexturing technology can absorb photons across a broad spectrum, eliminating this heat-shift issue.
With respect to the machine equivalent of peripheral vision, Harris and Zang believe it may be possible to make a flexible version of their nanotextured silicon receptor. Flexibility would allow them to curve the receptor. Combined with the light-trapping advantage of the nanotextured surface, a curved receptor could enlarge the angle over which a LIDAR system accepts photons, helping it identify potential obstacles more completely.
Harris said he always thought Zang’s texturing technique was a good way to improve solar cells. “But the huge ramp up in autonomous vehicles and LIDAR suddenly made this 100 times more important,” he said.