AI will never conquer humanity

Artificial Intelligence (AI) has progressed enormously in recent years, to the point that people now seriously wonder whether machines will ever take over from humankind. In this article I will explain why we should not worry.

The reason has to do with the fact that computers, as we know them today, operate on a finite set of discrete states. In the simplest case a state can assume only two possible values, 0 or 1, and is called a bit (short for binary digit); computation built on such two-valued states is known as binary logic.

Binary logic is the ideal representation of a simple concept: a switch. A switch can either be on or off, but cannot truly be half-on or half-off: that would be nonsensical (except among practitioners of something called fuzzy logic). But does the world work as an ensemble of wisely orchestrated switches? Or is a switch an idealisation of a deeper, fuzzier reality, where half-bits can exist?

Let’s take a peculiar kind of switch – and professionals in the field may forgive the stretch here – the transistor, the tiniest physical element able to realise the binary logic concept.

All modern computers contain microchips, in which billions of transistors can be found. The way a transistor works is rather straightforward: there’s an input, an output and a switch. The switch is a piece of semiconductor that either allows the input to reach the output or blocks it, depending on the voltage applied to it.

If the voltage is below a threshold, the switch stays open; that is, there is no output. Once the voltage crosses the threshold, the switch closes and the input signal can pass through, becoming the output. But wait: what does it mean to cross a threshold? Is it as immediate as it sounds? Absolutely not!
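To make this concrete, here is a minimal sketch in Python of the idealised switch just described; the 0.7-volt threshold is an arbitrary illustrative value, not a real device parameter.

```python
# Idealised transistor: a pure threshold switch.
# The 0.7 V threshold is an arbitrary illustrative value, not a datasheet figure.

THRESHOLD_V = 0.7

def ideal_switch(input_signal: float, control_voltage: float) -> float:
    """Pass the input through only if the control voltage reaches the threshold."""
    if control_voltage >= THRESHOLD_V:
        return input_signal  # switch closed: the input becomes the output
    return 0.0               # switch open: no output

print(ideal_switch(1.0, 0.5))  # 0.0 -> reads as binary 0
print(ideal_switch(1.0, 0.9))  # 1.0 -> reads as binary 1
```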

The voltage controlling the switch is not isolated, so a certain amount of “noise” is always present. This means we can answer the question “does the voltage cross the threshold?” only statistically. Without this statistical treatment, noise would dominate the output, and we could not use the physical device to realise binary logic.
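A toy simulation shows why the answer can only be statistical; the Gaussian noise level and the voltages below are assumptions chosen purely for illustration.

```python
import random

THRESHOLD_V = 0.7   # same illustrative threshold as before
NOISE_SIGMA = 0.05  # assumed noise level, for illustration only

def crosses_threshold(control_voltage: float) -> bool:
    """Add Gaussian noise to the control voltage, then compare to the threshold."""
    noisy_voltage = control_voltage + random.gauss(0.0, NOISE_SIGMA)
    return noisy_voltage >= THRESHOLD_V

# Apply the same nominal voltage many times and count the outcomes.
trials = 100_000
nominal_v = 0.85  # comfortably above the threshold
hits = sum(crosses_threshold(nominal_v) for _ in range(trials))
print(f"Crossed the threshold in {hits / trials:.2%} of trials")
# Roughly 99.9%, but never exactly 100%: the answer is only ever statistical.
```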

One peculiar kind of noise is cosmic radiation, which originates beyond Earth, from the Sun and from sources far outside the solar system. When it strikes the Earth’s atmosphere it produces showers of particles that are responsible for thousands of computing errors and hardware failures every day. But how does this help with determining whether AI can or cannot take over humanity?
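It helps to picture what one such particle strike can do: flip a single stored bit. A hypothetical sketch, with an arbitrary value and bit position chosen only to show the scale of the damage:

```python
# A single flipped bit can change a stored value dramatically.
# The value and the bit position are arbitrary, chosen for illustration.

value = 1_000_000            # some number sitting in memory
flipped = value ^ (1 << 20)  # XOR flips bit 20, as a particle strike might
print(value, "->", flipped)  # 1000000 -> 2048576: more than doubled by one bit
```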

AI is a bunch of mathematical models that need to be realised in some physical medium, such as programs that can be stored and run on a computer. No wizards, no magic. The moment we implement AI models as computer programs, we sacrifice something, because we must reduce reality to a bunch of finite bits that a computer can crunch on.
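That sacrifice is easy to demonstrate: even a simple decimal fraction cannot be stored exactly in a finite number of binary bits, as this standard example shows.

```python
# Finite binary bits cannot represent most decimal fractions exactly.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004, not 0.3
print(a == 0.3)  # False: something was lost in the reduction to bits
```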

Because of that, we can easily reformulate our initial question as: can a bunch of bits represent reality exactly, in a way that can be controlled and predicted indefinitely?

The answer is no, because nature is inherently chaotic, while a bunch of bits representing a program can never be, by definition: if the bits of a program changed chaotically, storing it in memory would be meaningless. For the same reason, people are chaotic by virtue of existing in what we define as the real world, while computer programs are not, because they can only exist in a rational world of bits.
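Chaotic here means extremely sensitive to initial conditions, in a way that finite-precision bits cannot track forever. The logistic map is the textbook illustration; the parameter and starting values below are standard demonstration choices, nothing specific to AI.

```python
# Logistic map x -> r * x * (1 - x); at r = 4 it behaves chaotically.
r = 4.0
x, y = 0.2, 0.2 + 1e-12  # two starting points differing by one part in a trillion

for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

# After a few dozen steps the two trajectories bear no resemblance to each other.
print(f"x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.6f}")
```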

We could now argue that, if determinism is true, any real system (a chair, a human, or a computer, for instance) is a program of possibly uncountably infinite length, and therefore written in a logic based on irrational numbers – an irrational logic, to be precise.

From this it follows that irrational logic cannot be validated by rational logic, meaning that while irrational humans can clearly build and operate rational, logical computer programs, the reverse is not true.

So, half-bits do in fact exist, but no computer can rationally operate with them, and neither can humans. This conclusion may suggest that we might be living in a giant simulation, but that would be impossible to assess scientifically.

Another interesting observation is that no computer program can generate more information than it encodes. Even if this analysis makes it seem unlikely that machines can ever take over humanity, we can play devil’s advocate and imagine that, just as humans can only operate rationally, so do computers.

Moreover, computers are as real as humans, although humans do not function in bits. Could it then be that environmental noise is leading computers running AI programs to become creative? Is the noise inherent in the functioning of the world responsible for our human creativity?

Are our mathematical models just a desperate, failed attempt to de-noise an otherwise very confusing, extremely blurred reality?
