Driving in dense fog claims hundreds of lives and causes thousands of car accidents every year in the United States. But while human drivers are limited in how well they can see in fog, autonomous vehicles may soon be able to ‘see’ through it far better than humans can, thanks to new technology coming out of MIT.

Last month, a group of researchers announced a new system that they say enables autonomous vehicles to see through fog far better than the human eye can. Ensuring that AVs can drive through mist, fog, and other low-visibility conditions is essential to keeping people safe once AVs roam the streets. The system is described in a paper slated to be presented at the International Conference on Computational Photography in Pittsburgh in May of this year.

The technology, a depth-imaging system, was able to resolve images up to 57 centimeters (22 inches) away through dense fog, whereas the naked eye could see only up to 36 centimeters (14 inches). The researchers tested the system in a small tank of water with a humidifier’s vibrating motor immersed in it to simulate dense fog.

While 57 centimeters might seem too minuscule to celebrate, the testing conditions were reportedly far harsher than actual weather: in realistic fog, visibility can reach 30 to 50 meters (98 to 164 feet). Guy Satat, the graduate student who led the research, told MIT News that fog is naturally dense and dynamic, which makes it difficult to work with. “It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios.”

The new technology uses what are called time-of-flight cameras, which fire short bursts of laser light and measure how long the light takes to bounce back. In clear air, that round-trip time directly reflects the distance to whatever object the light met in its path. But when fog enters the equation, the light ‘scatters’ randomly: much of what returns to the sensor has bounced off the water droplets suspended in the fog rather than off the objects behind it, so the measurement is no longer accurate. If an AV depended solely on these scattered readings, the results could be catastrophic.
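In clear conditions, turning a round-trip time into a distance is simple arithmetic. Here is a minimal sketch of that conversion; the function name and example value are illustrative, not taken from the paper:

```python
# Convert a time-of-flight measurement into a distance in clear air.
# The pulse travels to the object and back, so the distance is half
# the round-trip path length.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, assuming no scattering."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 4 nanoseconds implies an object about
# 0.6 meters away, roughly the 57 cm range reported in the tests.
print(distance_from_round_trip(4e-9))  # ~0.5996 m
```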

To address this challenge, the new system relies on a piece of basic statistics: the arrival times of light scattered by fog follow a pattern known as a gamma distribution, regardless of how thick the fog is. Gamma distributions can be asymmetrical and take on a number of different shapes, but each is fully described by just two parameters. The researchers estimate those parameters on the fly and use them to strip away the fog’s contribution to the light returning to the time-of-flight sensor.
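To make the idea concrete, the sketch below shows how a two-parameter gamma family can describe photon arrival times, and how those parameters can be recovered from observed samples. The shape and scale values are hypothetical placeholders, not figures from the paper:

```python
# Illustrative sketch: one gamma family, two parameters (shape and
# scale), can describe photon arrival times under different fogs.
import numpy as np
from scipy.stats import gamma

times_ps = np.linspace(1, 500, 500)  # arrival times in picoseconds

# Hypothetical shape/scale pairs standing in for thin and thick fog.
thin_fog_pdf = gamma.pdf(times_ps, a=2.0, scale=40.0)
thick_fog_pdf = gamma.pdf(times_ps, a=5.0, scale=60.0)

# Fitting recovers the two parameters from observed arrival times,
# which is the step the system performs on the fly.
samples = gamma.rvs(a=2.0, scale=40.0, size=10_000, random_state=0)
shape_hat, _, scale_hat = gamma.fit(samples, floc=0)
print(shape_hat, scale_hat)  # close to the true 2.0 and 40.0
```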

Unlike other attempts, the new system computes a separate gamma distribution for each of the 1,024 pixels in the camera. Laborious as it is, this per-pixel process let the researchers account for how much fog each individual pixel was looking through.

After this statistical groundwork, the camera counts the photons arriving in each interval of one trillionth of a second, producing a kind of bar graph: a histogram of arrival times. The system then finds the gamma distribution that best fits the histogram and subtracts that fitted fog component from the total counts. What remains correlates with the obstacles shrouded by the fog.
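Put together, the per-pixel procedure might look roughly like the following sketch. This is a simplified illustration under assumed names, bin widths, and a pure-fog fit; the paper’s actual estimator is more involved:

```python
# Simplified per-pixel sketch of the defogging idea: fit a gamma
# distribution to a pixel's photon-arrival histogram, subtract the
# fitted fog component, and keep the residual as the object signal.
import numpy as np
from scipy.stats import gamma

BIN_WIDTH_PS = 1.0  # one-trillionth-of-a-second bins, per the article

def defog_pixel(arrival_times_ps: np.ndarray, num_bins: int = 512) -> np.ndarray:
    """Return the residual histogram after removing the fitted fog."""
    counts, edges = np.histogram(
        arrival_times_ps, bins=num_bins, range=(0, num_bins * BIN_WIDTH_PS)
    )
    # Estimate the fog's two gamma parameters from the arrival times.
    shape, loc, scale = gamma.fit(arrival_times_ps, floc=0)
    centers = (edges[:-1] + edges[1:]) / 2
    # Expected fog counts per bin under the fitted distribution.
    fog_counts = (
        gamma.pdf(centers, shape, loc=loc, scale=scale)
        * arrival_times_ps.size
        * BIN_WIDTH_PS
    )
    # Counts the fog model cannot explain are attributed to an object.
    return np.clip(counts - fog_counts, 0.0, None)

# The system runs one such fit independently for each of the
# camera's 1,024 pixels.
```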

“If you look at the computation and the method, it’s surprisingly not complex,” Satat told MIT News. “We also don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions.”

This technology comes in the wake of the death of 49-year-old Elaine Herzberg, who was struck by an autonomous Uber SUV in Tempe, Arizona last month, making her the first pedestrian killed by an autonomous vehicle.
