It’s easy to take clocks for granted – fewer people wear wristwatches these days, preferring to check a smartphone or laptop for the time. But truly accurate timekeeping is a problem that was only cracked in the 1940s.
The basic principles of how a clock works haven’t really changed much in more than 350 years. The most important part of any timekeeping device is the “frequency reference”, which ensures each second it counts out is the same length as the last.
Take, for instance, a pendulum. A small force taps it to make sure it takes a second to complete a swing, known in the time business as an oscillation. The problem, though, is that a slight jostle, or even a temperature fluctuation, can change how long each swing takes.
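To see why small disturbances matter: the period of a small swing depends only on the pendulum’s length and gravity (T = 2π√(L/g)). The sketch below is purely illustrative, assuming a roughly one-metre “seconds pendulum” hung on a brass rod, and shows how a few degrees of warming, by slightly lengthening the rod, adds up to seconds of drift per day.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
L = 0.994         # length of a "seconds pendulum" in metres (two-second full swing)

def period(length_m: float) -> float:
    """Small-swing period of an ideal pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / G)

# Brass expands by roughly 19 parts per million per degree Celsius,
# so a 5-degree warm-up stretches the rod by about 0.01%.
warm_length = L * (1 + 19e-6 * 5)

t_cold = period(L)
t_warm = period(warm_length)

drift_per_day = (t_warm - t_cold) / t_cold * 86_400   # seconds lost per day
print(f"cold period: {t_cold:.6f} s, warm period: {t_warm:.6f} s")
print(f"clock drifts by about {drift_per_day:.2f} seconds per day")
```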
The quartz clock was invented in 1927, replacing the pendulum with a quartz crystal oscillator as the frequency reference. Quartz is piezoelectric: a crystal of it accumulates electrical charge when it is flexed and, conversely, flexes when a voltage is applied across it – so driving it with an oscillating voltage makes it vibrate at a very steady rate.
A small power source – a watch battery, for instance – drives a microchip circuit, which makes the crystal oscillate at a particular frequency: 32,768 times each second. The circuit then counts these vibrations and converts them into electric pulses – one every second – which feed a miniature motor that keeps the second hand sweeping along.
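Why 32,768? It is 2 to the power of 15, so a chain of 15 divide-by-two stages turns the crystal’s vibrations into exactly one pulse per second. Here is a minimal software sketch of that counting idea; it models the divider in plain Python rather than the actual watch circuitry.

```python
CRYSTAL_HZ = 32_768          # quartz tuning-fork frequency: 2**15 vibrations per second

def run_watch(crystal_ticks: int) -> int:
    """Count crystal vibrations and emit one pulse per second.

    A real watch does this with a 15-stage binary divider; here we simply
    keep a counter and fire a pulse every 32,768 ticks.
    """
    counter = 0
    second_pulses = 0
    for _ in range(crystal_ticks):
        counter += 1
        if counter == CRYSTAL_HZ:      # 2**15 ticks have elapsed
            second_pulses += 1         # advance the second hand by one step
            counter = 0
    return second_pulses

# Ten seconds' worth of vibrations should advance the hand ten times.
print(run_watch(10 * CRYSTAL_HZ))      # -> 10
```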
Today, many clocks continue to use quartz because it’s easy to maintain and keeps time with reasonable accuracy. But, as with a pendulum, the oscillation of a quartz crystal can change with temperature and pressure. And while these tiny changes don’t affect everyday life, advances in technology and highly accurate experiments require much more reliable timekeeping.
In 1949, scientists at the US National Bureau of Standards (now the National Institute of Standards and Technology) developed the first atomic clock. Its frequency reference was an atom’s “resonant frequency” – the natural frequency of radiation, fixed by the atom’s internal structure, at which it will absorb energy and jump into a high-energy state.
For instance, a caesium-133 atom has a resonant frequency of 9,192,631,770 hertz. This means its outermost electron will most likely “jump” to a higher energy level – producing a high-energy version of the atom – in the presence of microwave radiation at exactly that frequency.
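For a sense of scale, the energy of one microwave photon at that frequency follows from Planck’s relation E = hf; the constants below are standard values, not figures from the article.

```python
PLANCK = 6.626_070_15e-34      # Planck constant, joule-seconds (exact SI value)
F_CS = 9_192_631_770           # caesium-133 resonant frequency, hertz

photon_energy = PLANCK * F_CS  # E = h * f, in joules
print(f"{photon_energy:.3e} J")   # roughly 6.1e-24 J: a tiny, microwave-scale energy gap
```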
Caesium is commonly used in atomic clocks because in 1967 the second was redefined in terms of this transition: the oscillation frequency of the radiation emitted or absorbed when a caesium-133 atom jumps between these two energy levels.
In a standard atomic clock, caesium – a soft metal that melts just above room temperature – is placed in an oven and heated into a gas, which boosts some atoms to a high-energy state. The gas is funnelled past a magnet, which filters those high-energy atoms out.
The remaining stream of low-energy atoms then passes through a microwave transmitter, which bombards them with radiation at 9,192,631,770 hertz. If the frequency is exactly right, all the caesium-133 atoms should flick into a high-energy state.
The atoms then pass through another magnet. But this time, only the high-energy ones are allowed to pass and hit a detector.
If the detector senses gaps between impacts, it knows that not all the atoms were boosted – and therefore that the microwave transmitter isn’t generating exactly the right frequency.
An electrical signal is then sent back to the transmitter, nudging its frequency, until a steady stream of caesium-133 atoms hits the detector. Once the frequency is locked, its oscillations are counted: every 9,192,631,770 of them mark off exactly one second.
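The feedback loop can be caricatured in a few lines of code. Everything below is a toy model (the resonance width, step size and detector response are made up for illustration); it simply shows the lock-then-count idea: steer the transmitter until the detector signal peaks, then count oscillations of the locked signal to mark off seconds.

```python
CS_HZ = 9_192_631_770.0        # caesium-133 resonant frequency, hertz

def detector_signal(freq_hz: float) -> float:
    """Toy model: fraction of atoms excited, peaking when freq_hz hits resonance.

    A real clock measures this by counting high-energy atoms at the detector;
    the Lorentzian shape and width used here are purely illustrative.
    """
    width = 500.0                                  # hypothetical resonance width, Hz
    offset = freq_hz - CS_HZ
    return 1.0 / (1.0 + (offset / width) ** 2)

def lock_to_resonance(freq_hz: float, steps: int = 2000) -> float:
    """Nudge the transmitter frequency until the detector signal is maximised."""
    step = 100.0                                   # hypothetical correction step, Hz
    for _ in range(steps):
        # Probe slightly above and below the current frequency and move uphill.
        if detector_signal(freq_hz + step) > detector_signal(freq_hz - step):
            freq_hz += step
        else:
            freq_hz -= step
    return freq_hz

locked = lock_to_resonance(CS_HZ + 40_000)         # start 40 kHz off resonance
print(f"locked frequency: {locked:,.0f} Hz")

# Once locked, counting oscillations turns frequency into time:
oscillations = 9_192_631_770
print(f"{oscillations / locked:.9f} seconds")       # ~1 second per 9,192,631,770 cycles
```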
This self-correcting ability is what makes atomic clocks such accurate devices. If left alone, the best of today’s clocks – which use elements such as strontium – would drift by less than a second over more than a billion years.
Such accuracy might seem like overkill outside the laboratory, but these clocks are an incredibly important part of the Global Positioning System (GPS). A GPS receiver works out its position from the travel time of radio signals, so a timing error as small as a microsecond could make your GPS think you are hundreds of metres away from where you actually are.
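The arithmetic behind that figure is simple: GPS signals travel at the speed of light, so a timing error translates into a position error of roughly the error multiplied by c.

```python
SPEED_OF_LIGHT = 299_792_458      # metres per second
timing_error = 1e-6               # one microsecond, in seconds

position_error = SPEED_OF_LIGHT * timing_error
print(f"{position_error:.0f} metres")   # roughly 300 metres of position error
```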