Australian Tango victory energises orbital dirty dancing

The challenge: using a single snapshot to establish the position and orientation of a satellite. The winner? The Adelaide-based Australian Institute for Machine Learning (AIML).

An artificial intelligence competition organised by the European Space Agency pitted global research teams against each other to help solve a growing problem in space navigation. The Satellite Pose Estimation Challenge provided computer-generated images of a satellite as a machine learning dataset. Contenders then had to code a system that could accurately assess photographs of a physical scale model.

Sounds easy? It’s not for a robot. 

The scale-model photographs the machine learning algorithms were tested against. Credit: ESA

Computer-generated images of the Tango satellite used to train machine learning algorithms. Credit: ESA

The current state of artificial intelligence isn’t all that good at extrapolation, and this challenge is an example of a practical perception hurdle, known as the domain gap, that machine intelligence must overcome: a model trained on one kind of imagery often falters when shown another.

In this instance, a Swedish satellite named Tango acted as the experimental template.

The microwave-sized spacecraft was shut down in 2013, but it remains in orbit.

The competition’s AIs were trained on 60,000 computer-generated images based on Tango’s construction plans. However, the 9531 scale-model photographs they were tested against emulated the realistic conditions of space photography.
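To give a sense of how researchers typically narrow that gap between pristine renders and grainy real photographs, here is a minimal, purely illustrative sketch (not the winning team’s actual pipeline) of the kind of photometric augmentation often applied to synthetic training images. The library calls are standard PyTorch/torchvision; the parameter values are assumptions chosen only for illustration.

```python
# Illustrative sketch: "roughing up" synthetic renders so a model trained on
# them copes better with real photographs. Not the competition method.
import torch
from torchvision import transforms

# Hypothetical augmentation pipeline applied to each rendered training image.
augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),             # match a monochrome camera
    transforms.ColorJitter(brightness=0.6, contrast=0.6),    # mimic extreme light/dark contrast
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),# mimic glare-induced blur
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0, 1)),  # sensor noise
])

# Usage (rendered_image would be a PIL image of a synthetic Tango render):
# augmented = augment(rendered_image)
```

The idea is simply that if the training renders are degraded in ways that resemble harsh space lighting, the model has less of a shock when it finally sees a real photograph.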

It’s a real-life scenario.

The ability to grapple dead or malfunctioning satellites to refuel, repair, recycle or de-orbit them has become a pressing issue. The Near-Earth Orbit (NEO) space lanes are rapidly becoming overcrowded. And it takes decades for these objects to tumble back into the atmosphere by themselves.

Service robots need to know the target satellite’s orientation, movement, and condition before a rendezvous is possible. To do that, they need to recognise what they’re “seeing”.

“In the vacuum of space, for instance, the contrast between light and dark is always very high, as with shaded craters on the Moon,” says event organiser Professor Simone D’Amico. “Unfiltered sunlight can create intense reflection on satellite surfaces, blurring views, while at the same time the reflected glow of Earth gives rise to diffuse lighting.”

In essence, they won’t look as clean and crisp as a computer-generated model.

“The challenge is to perform accurate estimates using simply raw pixels from a single monochrome camera, representative of smaller low-cost missions lacking extra hardware such as radar, lidar or stereoscopic imagers,” adds ESA software engineer Marcus Martens.
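For readers curious what estimating a pose from raw pixels can look like in practice, the sketch below shows one common two-stage approach, offered purely as an illustration rather than as the entrants’ actual method: a detector first locates known landmark points of the satellite in the image, and a classic Perspective-n-Point solver then recovers the rotation and translation. All landmark coordinates, pixel locations and camera parameters here are made-up placeholder values.

```python
# Illustrative sketch of monocular pose estimation via Perspective-n-Point.
# Placeholder numbers throughout; not the competition pipeline.
import cv2
import numpy as np

# Four coplanar landmarks on one face of the satellite, from its CAD model (metres).
model_points = np.array([[-0.25, -0.25, 0.0],
                         [ 0.25, -0.25, 0.0],
                         [ 0.25,  0.25, 0.0],
                         [-0.25,  0.25, 0.0]], dtype=np.float64)

# Where a (hypothetical) keypoint detector found those landmarks in the image (pixels).
image_points = np.array([[260.0, 180.0],
                         [380.0, 185.0],
                         [375.0, 300.0],
                         [255.0, 295.0]], dtype=np.float64)

# Assumed pinhole intrinsics for the monochrome camera; no lens distortion modelled.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation (metres):", tvec.ravel())
```

In the real challenge, of course, the hard part is producing reliable landmark detections in the first place, under exactly the harsh lighting D’Amico and Martens describe.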

The Tango spacecraft was photographed by its accompanying Mango spacecraft while in orbit. This demonstrates the perception challenge artificial intelligence faces. Credit: OHB Sweden

Only then can successful approach manoeuvres be achieved.

The AIML-based Sentient Satellites Lab joined forces with European space startup Blackswan Space to compete against 35 other teams.

“And it’s not just satellites,” adds University of Adelaide research student Mohsi Jawaid. “It could be any space bodies, space junk, asteroids. You want to get accurate vision of it so you can approach it safely.”

Jawaid’s team finished first and third in the competition’s two categories.

“Pose estimation is of high interest to machine vision researchers in general – for instance in terms of robotic hands trying to safely pick up packages, self-driving cars or drones – but is especially crucial for space,” concludes Martens.
