The measure of a metre

On 7 April 1795, the metre was born, but the search for a precise, universal standard of measurement was just beginning.

A copy of the standard metre installed on a wall in Paris in 1796 or 1797 for public use.
Ken Eckert

What makes a metre? Today we take for granted this standard unit of length, recognised in countries around the world (with the exception of the odd holdout like the USA), but it’s a relatively recent invention. We can date its birth quite precisely: 7 April 1795, when the French government of the First Republic legislated it into being.

In the late 17th century, natural philosophers such as John Wilkins and Tito Livio Burattini began thinking about a “universal measure” that could be based on natural phenomena and help standardise scientific measurements conducted in different countries. While the idea had its adherents, it was a century before any of them had the power to implement such a scheme.

In the aftermath of the 1789 French Revolution, throwing off the legacy of the Ancien Regime also meant throwing off the traditional units of measure, which varied hugely from place to place and were mathematically untidy. In their place was introduced the metric system, based on natural quantities, which is the forerunner of the SI (Système international) units we use today.

The metre was initially defined as one ten-millionth of the distance from the equator to the North Pole. (It had earlier been proposed that it be defined as the length of a pendulum that would complete one swing per second, but careful measurements showed that this length would change slightly at different places on Earth due to small variations in the local strength of the gravitational field.)
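The arithmetic of that original definition is easy to check. The sketch below also compares it against the roughly 10,001,965 m that modern measurements give for the equator-to-pole distance (a figure not in the article, included here as an approximate reference value); the discrepancy explains why the original survey-based metre turned out to be about 0.2 mm short.

```python
# The 1795 definition: one metre = 1/10,000,000 of the
# distance from the equator to the North Pole.
quarter_meridian_m = 10_000_000                 # metres, by the 1795 definition
metre_1795 = quarter_meridian_m / 10_000_000    # exactly 1.0 m

# Modern measurements put that distance at roughly 10,001,965 m
# (approximate value), so the original metre came out about
# 0.2 mm shorter than 1/10,000,000 of the true distance.
modern_quarter_meridian_m = 10_001_965
error_mm = (modern_quarter_meridian_m / 10_000_000 - 1) * 1000
print(f"{error_mm:.2f} mm")  # ~0.20 mm
```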

That’s a nice round number, but in practice not much use to a tailor who wants to measure off a metre of cloth. Once a very precise measurement of the equator-pole distance had been made (the surveying expedition took several years), authorities in Paris constructed a platinum bar with a length of exactly one metre to serve as an official reference point.

That reference bar was good enough for 90 years, but an increasing desire for precision led to the introduction of a new one in 1889 – a platinum-iridium bar – and the stipulation that its length be measured at the melting temperature of ice, to avoid heat-related expansion and contraction.

The next big advance in precision came in 1960, when the metre was redefined as precisely 1,650,763.73 wavelengths (in vacuum) of the orange-red light emitted by an atom of krypton-86.
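Dividing the definition through shows what wavelength that fixes: a quick check, using only the figure quoted above.

```python
# The 1960 definition: 1 m = 1,650,763.73 wavelengths of the
# krypton-86 emission line. Inverting gives the wavelength itself.
n_wavelengths = 1_650_763.73
wavelength_m = 1 / n_wavelengths            # wavelength implied by the definition
print(f"{wavelength_m * 1e9:.2f} nm")       # ~605.78 nm, in the orange-red
```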

And finally (so far), in 1983 the metre was again redefined, this time as “the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second”.
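The elegance of the 1983 definition is that the two numbers cancel exactly: the speed of light becomes a fixed constant, and measuring a length reduces to measuring a time. A minimal check, using exact rational arithmetic to avoid floating-point rounding:

```python
from fractions import Fraction

# The 1983 definition fixes the speed of light exactly.
c = 299_792_458                   # m/s, exact by definition
t = Fraction(1, 299_792_458)      # the time interval in the definition, in seconds
metre = c * t                     # distance light travels in that interval
print(metre)                      # 1 (exactly one metre)
```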

At each stage, the definition has become more precise and – just as importantly – includes a set of specifications for measuring a metre in such a way that, with the right equipment, it will always produce the same result.

Alongside a handful of other base units such as the second (which is defined in terms of the frequency of radiation from a caesium atom) and the kilogram (which was long defined by comparison to a platinum-iridium cylinder kept in a vault near Paris, until it was redefined in 2019 in terms of the Planck constant), the metre forms the backbone of the system we use to measure everything else.
