Don’t open ET communications, researchers say

Should astronomers ever receive an encrypted message from an extraterrestrial source, the only sensible response will be to junk the thing before reading it and wipe any hard drive that came into contact with it.

That’s the startling advice from Michael Hippke of Germany’s Sonneberg Observatory and physicist John Learned of the University of Hawaii, US.

In a paper posted on the preprint server arXiv, Hippke and Learned demonstrate that both logic and available technology make it impossible to know whether any message received from ET is contaminated. Opening – or decoding – a communication from an extraterrestrial intelligence (ETI), therefore, is a task suffused with potentially world-ending risk.

The chance that some form of alien would want to destroy humanity on very first contact is, they concede, probably rather low, but can’t be altogether discounted. Therefore, any communication should be treated with extreme caution. 

“After all,” they write, “it is cheaper for ETI to send a malicious message to eradicate humans compared to sending battleships.” 

Such messages, they suggest, may arrive along a number of avenues, including being beamed through electromagnetic radiation, and thus picked up by a radio telescope, or blipped out by an alien probe. Short messages could conceivably be copied down at the receiving end onto a piece of paper; longer ones, especially those that appear to need decrypting, will have to be fed into a computer.

Either route, say Hippke and Learned, presents major problems. In analysing the possibilities, the researchers note that dangers can arise either from the translated text itself or from the code ostensibly needed to handle the message.

What happens, they ask, if the translated plaintext is very short, but says “We will make your sun go supernova tomorrow”? 

“True or not,” they write, “it could cause widespread panic.”

It’s perhaps more likely, however, that received texts will be longer and more involved, in which case the danger becomes “a demoralising cultural influence”, with the possibility that “a cult could form” in response to the content.

More likely still, the authors suggest, is that any textual meaning would be irrelevant. The real issue would be the code used to encrypt it and, just as importantly, the data compression used to send humans a very large body of work in a comparatively small package. 

Either the encrypted material itself or the decompression instructions could contain malicious, network-destroying code that would be inadvertently activated by attempts to read the communication.
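The compression risk at least has a mundane earthly cousin: the “decompression bomb”, a tiny payload that inflates into output vast enough to cripple the receiving machine. As a minimal sketch – assuming the payload arrived as ordinary zlib-compressed bytes, which the paper does not specify, and with a purely illustrative function name – the standard guard is to cap the inflated size. Note how little this buys: if the decompression instructions themselves are the hostile program, no size cap helps.

```python
import zlib

MAX_OUTPUT = 10 * 1024 * 1024  # refuse to inflate beyond 10 MB


def cautious_decompress(payload: bytes) -> bytes:
    """Inflate untrusted compressed data with a hard cap on output size.

    A 'decompression bomb' expands a tiny payload into enormous output;
    capping the inflated size is the standard guard against it.
    """
    decompressor = zlib.decompressobj()
    # decompress() stops once MAX_OUTPUT bytes have been produced.
    output = decompressor.decompress(payload, MAX_OUTPUT)
    if decompressor.unconsumed_tail:
        # Compressed input remains unconsumed: the payload inflates past the cap.
        raise ValueError("payload exceeds the decompression cap; refusing to continue")
    return output


# Demonstration: ~45 MB of zeros compresses to a few tens of kilobytes.
bomb = zlib.compress(b"\x00" * (45 * 1024 * 1024))
try:
    cautious_decompress(bomb)
except ValueError as err:
    print(err)
```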

Hippke and Learned are not the first to think of this. In 2006, physicist Richard Carrigan of the Fermi National Accelerator Laboratory, in the US, raised the question of whether extraterrestrial intelligence signals needed to be decontaminated. 

He acknowledged an argument that suggests human-derived computers would be immune to infection from ET sources because “code is idiosyncratic and constitutes an impenetrable firewall”, but remained unconvinced. 

One possibility, he suggested, was to create a “prison”, by analysing alien communications using only a single, isolated, quarantined machine. 
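The nearest everyday software analogue of that prison is a sandbox: run the untrusted decoder in a child process with hard resource ceilings. The Python sketch below is only a toy under that assumption – the script name and the limits are illustrative, and operating-system limits bound runaway computation, not a determined escape, which is precisely why Carrigan wanted a physically isolated machine rather than a software cage.

```python
import resource
import subprocess


def run_quarantined(decoder_script: str, timeout_s: int = 60) -> subprocess.CompletedProcess:
    """Run an untrusted decoder in a child process with hard ceilings.

    A toy software analogue of Carrigan's 'prison': OS resource limits
    bound runaway CPU and memory use, but they are no substitute for
    physical isolation on an air-gapped machine.
    """
    def cage():
        # Hard limits inherited by the child process (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))  # CPU seconds
        resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))           # 1 GiB of memory

    return subprocess.run(
        ["python3", decoder_script],
        preexec_fn=cage,       # apply the limits just before the child starts
        capture_output=True,   # keep the child's output contained
        timeout=timeout_s,     # kill the child outright if it stalls
    )
```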

Hippke and Learned take the suggestion to extreme lengths, and posit a computer inside a box placed on the Moon, next to “remote-controlled fusion bombs to terminate the experiment at any time”. 

Even given such safeguards, they argue, no prison is escape-proof. What if the decrypted message turns out to contain code for a super-smart AI that says it can answer all questions and provide the cure for cancer? Should the people in charge believe the claim and activate the program, or refuse to do so on the grounds that it might be a cruel world-ending trick?

But even getting that far down the track – even building the box on the Moon and ringing it with bombs – is edging too close to destruction, the writers suggest. Perhaps hard-hearted professionals could remain unmoved by an alien AI making promises or pleading for release, but what about the guard whose daughter is dying, the daughter the AI says it can cure in return for some small favour?

“We can never exclude human error and emotions,” the authors note. 

And what then, if somehow the ET’s clever software escapes its prison – or is perhaps simply downloaded from source by a well-meaning amateur stargazer?

“The worst possible result would be human extinction or some other unrecoverable global catastrophe,” the pair suggest. 

“The main argument is that the human species currently dominates planet Earth because of our intelligence. If ETI-AI is superior, it might (or might not) become more powerful and consider us as irrelevant monkeys (or maybe not).” 

Oddly, though, at the conclusion of their densely argued paper, Hippke and Learned demonstrate one of their own assertions – namely, that you can never rely on human beings to be completely logical.

Having established that the risk of malicious or catastrophic consequences arising from opening an extraterrestrial communication is “not zero”, they execute a remarkable volte-face, suggesting that the potential benefits outweigh the risks and that they therefore “strongly encourage” reading any incoming message.
