Researchers from the Massachusetts Institute of Technology, in the US, have linked advances in artificial intelligence (AI) with breakthroughs in optical technology to enable machines that could revolutionise robot workforces in jobs such as warehousing, manufacturing and even housekeeping.
Peter Florence, Lucas Manuelli and Russ Tedrake say they have achieved breakthroughs in computer vision that enable robots to inspect objects they have not previously seen, and then understand them well enough to use them to accomplish specific tasks.
In a new paper lodged on the preprint server arXiv, the scientists, working in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), say they’ve made a key development with a system they call “dense object nets” (DON), in which objects are perceived as collections of points that serve as “visual roadmaps” of sorts.
In this way, robots can better understand and manipulate items, and, most importantly, pick up a specific object among a clutter of similar ones – a valuable skill for warehouse-based machines that companies use to sort through collections of products.
For example, a robot might be required to grab onto a specific spot on an object – say, the tongue of a shoe. After scanning a shoe and recognising it as a series of points, it can then look at another shoe, one it has not previously seen, and successfully grab its tongue.
The DON system records any object as a series of points, which are then mapped together to visualise its 3-D shape. The process is similar to compiling a single panoramic image by stitching together multiple smaller shots.
After a robot has been trained with DON, a human operator can specify a point on an object. The robot will then compile its map, identify and match the appropriate points, and pick up the object at the place specified.
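The matching step described above can be sketched in code. In a DON-style system, a network assigns every pixel a descriptor vector, and a point chosen in one image is located in a new image by finding the pixel with the nearest descriptor. The sketch below is a minimal illustration of that nearest-descriptor lookup, assuming the per-pixel descriptors have already been computed; the function name and array shapes are hypothetical, not taken from the MIT paper.

```python
import numpy as np

def match_point(ref_descriptors, target_descriptors, ref_pixel):
    """Find the pixel in a target image whose descriptor best matches
    the descriptor at ref_pixel in the reference image.

    ref_descriptors, target_descriptors: (H, W, D) arrays of per-pixel
    descriptor vectors, as a DON-style network would output.
    ref_pixel: (row, col) chosen by the operator in the reference image.
    """
    d_ref = ref_descriptors[ref_pixel[0], ref_pixel[1]]           # (D,)
    # Squared Euclidean distance from d_ref to every target descriptor.
    dists = np.sum((target_descriptors - d_ref) ** 2, axis=-1)    # (H, W)
    # The best match is the pixel with the smallest distance.
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return (int(row), int(col))

# Toy example: 4x4 "images" with 3-dimensional descriptors.
rng = np.random.default_rng(0)
ref = rng.normal(size=(4, 4, 3))
target = rng.normal(size=(4, 4, 3))
target[2, 3] = ref[1, 1]                  # plant a perfect match
print(match_point(ref, target, (1, 1)))   # → (2, 3)
```

Because the lookup is purely descriptor-based, the target image can show a different object of the same class, in a different pose, which is what lets the robot find the tongue of a shoe it has never seen.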
Florence and colleagues compare the learning curve involved to the developing intelligence of young children. Ask a toddler to pick up a specific toy, they say, and he or she will likely grab lots of them until happening upon the one required. In contrast, a four-year-old can be instructed to “go grab your truck by the red end of it” and comply without difficulty.
In one set of tests done on a soft toy, a robotic arm powered by DON could grasp the toy’s right ear from a range of different configurations. This showed that, among other things, the system has the ability to distinguish left from right on symmetrical objects.
When tested on a bin of different baseball caps, DON could pick out a specific target hat, despite all of them having similar designs – and despite never having seen pictures of the hats in its training data.
Manuelli explains that in factories, robots often need complex mechanical aids in order to work reliably. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly,” he says.
In the future, the MIT team hopes to improve the system so that it can perform specific tasks with a deeper understanding of the corresponding objects. For example, it could learn how to grasp an object and move it in a specified way with the ultimate goal of, say, cleaning a desk.
The paper will be presented at the Conference on Robot Learning, organised by the International Foundation for Robotics Research (IFRR), in Zurich, Switzerland, in October.
Originally published by Cosmos as AI and optics combine to produce picky robots
Jeff Glorfeld is a former senior editor of The Age newspaper in Australia, and is now a freelance journalist based in California, US.