Spying around corners just got easier


It’s simply a matter of making a wall act like a mirror. Natalie Parletta reports.


Reconstructions from the research. Column A shows a hidden scene of interest; B the still photograph the researchers observed; C and D reconstructions produced without knowing the position of the occluding object; E reconstructions produced with knowledge of the position of the occluding object (only slightly better than in C and D, the researchers suggest).

Springer Nature / Saunders C et al

Objects that are out of sight could now be captured with an ordinary digital camera, according to new research published in the journal Nature.

Previously, this feat could only be accomplished with expensive, specialised optical equipment.

The computational periscopy technique developed by Vivek Goyal, an associate professor at the Boston University College of Engineering, US, and his team vastly simplifies the process: it needs only a single two-dimensional image taken with the camera, plus a computer to process it.

The new method, which Goyal developed with colleagues Charles Saunders and John Murray-Bruce, is based on different principles from previous approaches.

Reconstructing a hidden object involves calculations that essentially enable a wall to act like a mirror. The camera is pointed at the wall, and the light captured in the photograph is processed computationally to reconstruct the hidden scene facing the wall.

“The way I like to think about it is ‘what is the difference between a mirror and a wall, and how do you counteract the deficiencies of using a wall as if it were a mirror’?” says Goyal.

Each incoming ray of light hitting a mirror at one angle goes out at a single angle, he explains, whereas light hitting a matte surface (like a wall) scatters in all directions. So you need to separate the light that has travelled separate paths – somehow.

Previous methods all separated the light paths based on their lengths, measuring those lengths by time of flight (the time taken for light to traverse a distance), Goyal explains. These require costly pulsed lasers and super-fast light detectors.

Instead, his team used an occluding object that partially obscures an image on a hidden screen.

The key insight that enables this to work is that an opaque object creates a spatial selection of light rays.

The object throws a partial shadow (penumbra) on a reflective wall. The digital camera captures light from the wall, a mix of light emanating from the hidden screen and the penumbra cast by the occluding object.

“We have a photograph of the wall that has a lot of pixels,” Goyal says. “And the pixels have different combinations of light from the hidden scene, at different points on the wall.

“So I have a bunch of different combinations of light, then it’s like having a bunch of equations that I can solve to form the image of the hidden scene.”

Thus, computer algorithms process the photograph to produce a two-dimensional colour image of the hidden scene.
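Goyal's "bunch of equations" can be made concrete in a toy one-dimensional simulation. The sketch below uses invented geometry, not the paper's actual forward model: each wall pixel sees the hidden scene beyond a shadow edge that sweeps across it, producing a visibility matrix, and the photograph becomes a linear system solved by least squares.

```python
import numpy as np

# Toy 1D sketch: an opaque occluder means each point on the wall sees a
# different subset of the hidden scene. The photograph of the wall is then a
# system of linear equations in the unknown scene brightnesses. All geometry
# here is invented for illustration; it is not the paper's actual model.

n_scene, n_wall = 20, 40
rng = np.random.default_rng(0)
scene = rng.random(n_scene)            # unknown hidden scene (ground truth)

# Visibility matrix A: A[i, j] = 1 if wall pixel i receives light from scene
# pixel j, 0 if the occluder's shadow blocks that path. The shadow edge
# shifts as we move along the wall, which is what the penumbra encodes.
A = np.zeros((n_wall, n_scene))
for i in range(n_wall):
    edge = (i * n_scene) // n_wall     # shadow edge position for this pixel
    A[i, edge:] = 1.0

wall_photo = A @ scene                 # what the camera records (noise-free)

# "A bunch of equations that I can solve": recover the scene by least squares.
recovered, *_ = np.linalg.lstsq(A, wall_photo, rcond=None)
print(np.max(np.abs(recovered - scene)))  # essentially zero in this noise-free case
```

Without the occluder, every row of the matrix would be identical (each wall pixel sees the whole scene) and the system would be unsolvable, which is exactly why the opaque object's "spatial selection of light rays" matters.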

Reconstructed images presented in the paper include cartoon faces, letters and striped patterns. Smaller features, such as eyes, remain visible in the reconstructions, though with less precision.

Reconstruction accuracy was similar whether or not the researchers knew the position of the occluding object.

The main findings were achieved in a relatively dark room, where most of the light was coming from the scene of interest, Goyal adds. In supplementary material, additional experiments are presented at different levels of ambient light.

“The performance gradually gets worse as the amount of ambient light increases. And I think that’s probably the most important limiting factor, that the penumbra cannot be completely washed away by the ambient light.”

The images were reconstructed from a two-dimensional screen, but Goyal says the same method can be used to reconstruct three-dimensional objects.

As the technique is developed further, Goyal believes it could have many different potential applications, such as surveillance.

Another could be search and rescue, “where it might be dangerous to walk around an entire area, so that you want to form images outside the line of sight to find survivors or dangerous objects”.

An application of special interest to Goyal is extending environmental awareness of automated cars. “It would be interesting to sense if there’s a child on the other side of a parked car, or to see vehicles around a corner on city streets.

“If there’s even a little bit of extension beyond the line of sight that could be very valuable.”

Natalie Parletta is a freelance science writer based in Adelaide and an adjunct senior research fellow with the University of South Australia.
  1. https://doi.org/10.1038/s41586-018-0868-6