If the general public has a fairly dystopian and muddled view of computer vision and the machine-learning neural networks needed to make it work, it is largely the fault of science fiction authors.
That’s the argument advanced by computer scientist and roboticist Robin Murphy from Texas A&M University, US, writing in the journal Science Robotics.
The mechanics of robot vision in SF stories, she notes, are never explained. Robots simply see things. On the other hand, the implications of robo-sight, notably the negative ones, have been quickly picked up by authors, and robot vision is generally portrayed as a tool of oppression and surveillance.
Surveillance, indeed, was the subject of the earliest piece of fiction featuring seeing-eye robots that Murphy cites: a 1931 short story called The Doom from Planet 4, written by Jack Williamson.
The yarn essentially features a couple of camera-eyed alien bots tasked with tracking the movements of an unfeasibly pretty woman and an unreasonably handsome man, the only two inhabitants of an island. The exercise goes tits-up when the comms link between the robots and their home planet is severed.
This, Murphy notes, echoes – presumably unintentionally – some modern robotic machines with processing gear so complex and large that it has to be housed externally, in the cloud, rather than onboard.
More to the point, though, she adds, Williamson paid no attention at all to telling his readers how the camera-eyes actually worked.
“Although the story correctly described vision, active perception, and centralised control, it neglected the mechanics converting the signal into a symbol to enable semantic understanding,” she writes.
Isaac Asimov, a decade or so later, had a crack at filling in the blanks.
The neural architecture of the human visual cortex was at the time still unknown, so Asimov concocted something called the “positronic brain” to characterise intelligent robots.
In his long and illustrious career, he never got around to explaining just how positronic brains worked. However, so pervasive was his influence that the positronic brain became the standard device for describing vision-enabled robots for decades – right up to, and including, the second iteration of the Star Trek franchise, Star Trek: The Next Generation, in 1987.
In the series, the robot character called Data had a positronic brain, although one that contained, Murphy notes, “neural networks as a learning mechanism”.
In terms of explanations of exactly how robotic vision worked in the world of science fiction, Murphy cites a Fritz Leiber story from 1953, A Bad Day for Sales, and Michael Crichton’s 1973 smash hit, Westworld, as exemplars.
In both stories the robots actually “saw” only thermal shapes, and applied a method known as “blob analysis” to decide whether those shapes represented adults or children, and males or females. This device, Murphy writes, emphasises that “the robot is not really seeing in an intelligent way”.
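The “blob analysis” Murphy describes can be sketched in a few lines of code: threshold a thermal image so only hot pixels remain, group them into connected blobs, and apply a crude size rule to each blob. The grid values, the threshold, and the adult/child height cut-off below are all invented for illustration – a toy of the stories’ premise, not any real classifier.

```python
def find_blobs(image, threshold=30):
    """Label 4-connected regions of pixels hotter than `threshold`."""
    rows, cols = len(image), len(image[0])
    seen = set()
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and (r, c) not in seen:
                # Flood-fill one connected component of hot pixels.
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

def classify(blob):
    """Invented heuristic: a taller blob reads as an adult."""
    height = max(y for y, _ in blob) - min(y for y, _ in blob) + 1
    return "adult" if height >= 4 else "child"

# A 5x6 toy "thermal image" with one tall warm shape and one short one.
thermal = [
    [0,  0, 0, 0,  0, 0],
    [0, 90, 0, 0,  0, 0],
    [0, 90, 0, 0, 80, 0],
    [0, 90, 0, 0, 80, 0],
    [0, 90, 0, 0,  0, 0],
]
for blob in find_blobs(thermal):
    print(classify(blob))  # prints "adult" then "child"
```

The point of the device, as Murphy notes, is exactly how little this sees: the robot never knows what a person is, only that a warm shape of a certain size is present.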
More recent science fiction authors, she notes, have started “replacing blob analysis with high-performance computing and neural networks”.
The tendency towards dystopian settings, however, remains. Murphy cites Person of Interest (2011), Kill Decision by Daniel Suarez (2012), Autonomous by Annalee Newitz (2017), and The Robots of Gotham by Todd McAulty (2018) as examples of SF works wherein machine-learning-enabled computer vision becomes a tool of surveillance and enforcement for government, police and the military.
This, it could be argued, is only to be expected. Novelists in general, and science fiction writers in particular, have long found more entertainment value in bad-news stories than utopian paeans.
And that, Murphy concedes, is entirely reasonable, although unrealistically limiting.
“While science fiction’s unsettling predictions are rapidly becoming reality, it misses the benefits of computer vision for the less thrilling, but profoundly valuable, fields of robot surgery, agile manufacturing, autonomous cars, rescue robotics, and eldercare robots,” she writes.
She ends her essay with a hopeful statement – albeit one that is unlikely to result in a bestseller: “Perhaps in a near future, scientific progress will cause science fiction writers to pen a future where computer vision and machine learning are unequivocal forces for good.”
Andrew Masterson is a former editor of Cosmos.