Recovering images of objects hidden from direct view or concealed around corners, a problem known as non-line-of-sight (NLoS) imaging, can be tackled in several ways, most of which involve extracting information from highly scattered or diffuse optical signals.
Techniques that use time-of-flight measurements to retrace a signal’s journey have been studied for some time, joined more recently by methods that exploit the spatial correlations inherent in scattered light to reconstruct an image. These correlation-based approaches falter in photon-starved scenarios, however, partly because the noise characteristics of the measured correlations are poorly understood.
A project at Stanford University, Princeton University, Southern Methodist University and Rice University has now taken a significant step forward, developing a better noise model for recovering occluded objects from indirect reflections and training a deep neural network to crunch the numbers. The work was published in Optica.
“Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds,” said team leader Christopher Metzler. “These attributes enable applications that wouldn’t otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner.”