Where are we? Are we there yet? Such questions may be annoying when coming from a kid. But when it’s a space probe orbiting the Moon, they’re a serious problem with potentially catastrophic implications.
“If we’ve taken sensor readings and decided that there is water located at this particular spot, but we’re 10km off on where we actually are … that could completely ruin where we chose to set up a future base camp on the Moon,” says University of Adelaide doctoral candidate Sofia McLeod. “It could be 10km away from its water source. That would be terrible.”
McLeod works at the Australian Institute for Machine Learning, where she focuses on the research and application of computer vision for spacecraft guidance and navigation.
McLeod is in Japan working on a way to accurately position future geological survey data of the Moon’s surface. These surveys could locate crucial resources such as minerals and water.
The global positioning system (GPS) used to pinpoint our mobile phones within a few metres on the Earth’s surface doesn’t work for the Moon.
GPS satellites orbit the Earth at an altitude of about 20,000km. The Moon is 384,400km away. That means their super-accurate timing signals point the wrong way, can be eclipsed by the Earth, and are so weak and distorted by the time they reach the lunar surface as to be useless.
Most satellites and space probes rely on inertial navigation systems. But these use a form of “dead reckoning” – where speed and course are plotted to generate an estimated position. That only works if the spacecraft knows exactly how it’s moving.
“However, it does shift. It does vibrate. It does change its orientation,” McLeod told Cosmos.
The causes range from buffeting by the solar wind to the unexpected side effects of activating onboard gyroscopes.
“They could be 10 kilometres off their true position,” she adds. “That’s still not good enough, unfortunately.”
The most viable alternative, McLeod says, is to have a look around: “What if we take a picture of what’s down below? What does that give us?”
One proposed geological survey sensor will orbit the Moon at a height of 100km. That’s too high for fine surface detail. But it’s also too low for prominent landmarks to remain in view.
But the Moon’s surface has already been extensively mapped.
Now, it’s a matter of telling one desolate, crater-pocked moonscape photo from another.
“There’s a lot of craters on the Moon. And if we know the location, shape and size of those craters, we can create a crater map,” says McLeod. “Maybe this isn’t like a two-dimensional map we can roll out on the table. Instead, it’s like a list of all known craters and their positions on the Moon. And that’s what we’re using as our equivalent map.”
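What might that “equivalent map” look like? A minimal sketch, assuming a simple catalogue structure – the identifiers and fields below are illustrative, not the team’s actual data format:

```python
from dataclasses import dataclass

# Illustrative only: a "map" that is a list of known craters rather than an
# image. Real lunar crater catalogues (e.g. Robbins, 2019) list more than a
# million craters in roughly this fashion.
@dataclass(frozen=True)
class Crater:
    crater_id: str      # hypothetical identifier
    lat_deg: float      # selenographic latitude, degrees
    lon_deg: float      # selenographic longitude, degrees
    diameter_km: float  # rim-to-rim diameter

CRATER_MAP = [
    Crater("C-000001", -43.3, -11.4, 85.0),   # a Tycho-sized entry
    Crater("C-000002",   9.6, -20.1, 93.0),   # a Copernicus-sized entry
    Crater("C-000003", -20.0,  30.5,  2.4),
]

def craters_in_view(lat: float, lon: float, half_width_deg: float) -> list[Crater]:
    """Catalogue craters inside a crude lat/lon box around a guessed position."""
    return [c for c in CRATER_MAP
            if abs(c.lat_deg - lat) <= half_width_deg
            and abs(c.lon_deg - lon) <= half_width_deg]
```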
From an orbit of 100km, the goal is to align the resource-sensing images with the visible surface features to within 300m.
From that height, craters bigger than 2km present enough detail for their individual characteristics to be recognised. And more than a million of those are scattered across the lunar surface.
“So if we can see maybe three or more of these craters on an image, we can try and identify them in our database,” McLeod says.
And that produces the reference points needed to triangulate the satellite’s position.
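One classical way to make those identifications – sketched below under simplifying assumptions (a near-flat patch of surface and a near-nadir view), and not necessarily the team’s own method – is to compare scale- and rotation-invariant patterns of crater triples against the catalogue, much as star trackers match star patterns:

```python
import itertools
import math

def triangle_signature(p1, p2, p3) -> tuple[float, float]:
    """Sorted side-length ratios of the triangle formed by three crater
    centres. Ratios are invariant to translation, rotation and scale, so a
    triangle of detections can be compared with a triangle of map craters."""
    sides = sorted([math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)])
    return (sides[0] / sides[2], sides[1] / sides[2])

def candidate_matches(detected_xy, catalogue_xy, tol=0.01):
    """Pair image crater triples with catalogue triples of similar signature.
    A real system would prune this brute-force search and verify candidates
    geometrically before trusting them."""
    matches = []
    for img in itertools.combinations(range(len(detected_xy)), 3):
        sig_img = triangle_signature(*(detected_xy[i] for i in img))
        for cat in itertools.combinations(range(len(catalogue_xy)), 3):
            sig_cat = triangle_signature(*(catalogue_xy[j] for j in cat))
            if all(abs(a - b) < tol for a, b in zip(sig_img, sig_cat)):
                matches.append((img, cat))
    return matches
```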
The University of Adelaide team, led by Professor Tat-Jun Chin, SmartSat CRC Professorial Chair of Sentient Satellites, is developing specialised crater-detection algorithms for the mission to do just that.
“There are three things we must do to estimate our position,” says McLeod. “One of them is to detect the craters that we see in the image and identify them as craters. The second step is we have to match these detected craters to those craters in that known database. And then the third step is to estimate our position based on those crater matches.”
Machine learning processes are being used to detect craters in the space probe’s photographs. But McLeod says the need for accuracy means the matching and positioning must be done via classical computer vision techniques.
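That final positioning step is a textbook computer-vision problem: given matched 2D detections and their known 3D catalogue positions, solve for the camera’s pose. A minimal sketch using OpenCV’s perspective-n-point (PnP) solver – the correspondences and camera parameters here are invented, and this is not the team’s actual solver:

```python
import numpy as np
import cv2  # OpenCV: pip install opencv-python

# Known 3D crater centres in a Moon-fixed frame (metres); z = 0 is the local
# surface under a flat-patch simplification. All numbers are invented.
craters_3d = np.array([
    [ 10_000.0,  -5_000.0, 0.0],
    [-20_000.0,  15_000.0, 0.0],
    [ 30_000.0,  25_000.0, 0.0],
    [  5_000.0, -30_000.0, 0.0],
])

# Assumed pinhole camera intrinsics (focal length and principal point, pixels).
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])

# Simulate detections from a "true" pose: nadir-pointing, 100km from the surface.
rvec_true = np.zeros(3)
tvec_true = np.array([0.0, 0.0, 100_000.0])
craters_2d, _ = cv2.projectPoints(craters_3d, rvec_true, tvec_true, K, None)

# The positioning step: recover the pose from the 2D-3D crater matches.
ok, rvec, tvec = cv2.solvePnP(craters_3d, craters_2d, K, None)
R, _ = cv2.Rodrigues(rvec)
position = -R.T @ tvec  # camera centre in the Moon-fixed frame
print("estimated position (m):", position.ravel())  # ~100km from the surface plane
```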
“That’s my job. Now we know which craters in the images match the craters in the database, where are we? My step is basically telling everyone – okay, I have decided that we are here!”
And that’s a daunting task when the width of a single pixel could throw the map’s accuracy out by some 3km.
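To put that sensitivity into numbers, a back-of-envelope illustration using the rough 3km-per-pixel figure above – the real error propagation through the pose estimate is more complicated than this:

```python
# Illustrative arithmetic only: if one image pixel spans roughly 3km of
# surface, each pixel of error in a detected crater centre shifts that
# reference point by the same amount before the position solve even begins.
METRES_PER_PIXEL = 3_000.0  # the ~3km-per-pixel figure quoted above

for pixel_error in (0.5, 1.0, 2.0):
    shift_km = pixel_error * METRES_PER_PIXEL / 1000.0
    print(f"{pixel_error:.1f} px of detection error ≈ {shift_km:.1f} km on the surface")
```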
“So that’s something that gets fed back through to me. Was the crater recognition algorithm a little out? What if we have incorrectly matched the craters in the database? I have to account for those issues as part of solving the overall problem of ‘how do we figure out where we are?’.
“It’s terrifying – we have to be so accurate in our initial stages for this to really work.”
The science behind accurately positioning the spacecraft has broader applications back on Earth.
“A lot of this can be applied anywhere,” says McLeod. “You can land a drone the same way you land a spacecraft. You can drive a car the same way you would navigate a rover on the Moon. There are so many crossovers.”
That’s because each example involves teaching machines how to “see”.
“I personally find computer vision something that’s quite intuitive,” says McLeod. “There’s something nice about having an image and then getting a computer to ‘see’ it how a human would see it and identify what’s there. I think that’s quite intriguing.”