Some people have bad memories of high school algebra, with all those *x’s* and *y’s* that seem to come from nowhere, as if they’ve been conjured solely to torment students with meaningless “exercises”.

Maths geeks, on the other hand, tend to see the fun of it: once you see the symbolic pattern in the *form* of an equation, you can see that an algorithm for solving one equation will do for all equations of the same form.

Either way, though, there’s much more to symbolic mathematics than economical ways of solving equations. Those arcane symbols have a fascinating history, and they have enabled mathematicians and physicists to think in entirely new ways.

You can appreciate the sophistication of symbolic thinking when you realise that it took thousands of years for mathematicians to stop laboriously writing out equations in words and develop the symbols we take for granted today.

The first fully symbolic, recognisably modern algebra textbook (Thomas Harriot’s *Artis Analyticae Praxis*) only appeared in 1631. In 1637, René Descartes’ *La Géométrie* took symbolic algebra even further, and you can blame him – or better still, praise him – for the *x’s* and *y’s* we use today (although Harriot had similarly used *a’s*, *b’s*, and so on).

The beauty of symbolic equations is that it’s much easier to see general patterns and methods when you can see a problem at a glance. Compare this –

*“Take the square of the unknown number, then add the unknown number to itself and take the sum away from the square; now let the total be eight”*

– with this:

*“x^{2} – 2x = 8.”*

If you hated algebra at school, imagine having to do it *without* symbols! Even the +, –, = and × signs only came into widespread use in the seventeenth century.

What’s more, if you rewrite *x*^{2} *– 2x = 8* as *x*^{2} *– 2x – 8 = 0*, it’s easier to see that the second form is equivalent to *(x – 4)(x + 2) = 0*, and so the unknown can be either 4 or –2. Harriot was the first to introduce this “trick” for solving polynomial equations, although Descartes often gets the credit, because he discovered it too.
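The factoring argument is easy to check mechanically. Here is a minimal Python sketch (the function names are my own, purely illustrative), confirming that the factored form matches the original polynomial and that 4 and –2 are indeed its roots:

```python
def polynomial(x):
    """The rearranged equation: x^2 - 2x - 8 (i.e. x^2 - 2x = 8 moved to one side)."""
    return x**2 - 2*x - 8

def factored(x):
    """Harriot-style factored form: (x - 4)(x + 2)."""
    return (x - 4) * (x + 2)

# The two forms agree at every integer we try, so the factorisation is plausible,
# and substituting the claimed roots gives zero.
assert all(polynomial(x) == factored(x) for x in range(-20, 21))
assert polynomial(4) == 0 and polynomial(-2) == 0
```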

My choice of the word “trick” belies the skill needed for this discovery – the skill of actually *thinking *symbolically – but it does highlight the nub of a current maths-in-STEM debate about how much mathematical rigour students need if they’re going to *apply* mathematics rather than *do it*. Even the simplest of algebra problems is easier to solve using symbols than words – but there’s more to science, even to physics, than knowing lots of neat methods of solving equations.

In a recent paper, Mark Eichenlaub and Edward Redish put it this way: “For physicists, equations are about more than computing physical quantities or constructing formal models; they are also about understanding [physical concepts].”

On the other hand, Michel Roland argues that physics and mathematics educators need to work more closely together if each is going to understand the conceptual import of the other’s symbols.

He also points out that in the seventeenth and eighteenth centuries, Isaac Newton and his peers were inventing mathematical methods at the same time as they were applying them to physics.

In other words, in the early days of modern science, there *was* a close connection in the development of mathematical and physical symbols and concepts.

The mathematics wasn’t always rigorous, though, and this is the rub for today’s STEM educators. Let me explain with some more history, beginning with calculus, the great enabler for so much of STEM.

Calculus is the mathematics of change, which has been key to understanding so many natural processes: the movement of “solid bodies” such as planets, planes and people; the flow of fluids; the spread of electromagnetic and gravitational fields; the work done by a changing force; and much, much more.

It’s common knowledge that in the 1660s and 1670s Newton and Gottfried Leibniz independently generalised the algorithms of calculus, and that they (and especially their supporters) quarrelled over which of them did it first. It’s less well known, perhaps, that they each left a problematic legacy over their choice of symbols for expressing their wonderful new algorithms.

To measure a change such as speed – the rate of change of distance with respect to time – you compare the moving object’s position at one instant and the next; but what do you mean by an “instant”? Obviously, it’s a very small quantity, but how small?

That is a question of mathematical rigour, and it wouldn’t be satisfactorily resolved until the nineteenth century, when the notion of a “limit” was made precise. (Or as precise as a mathematician can make such an abstract concept.)

Meantime, Newton and Leibniz were concerned with procedures for naming and manipulating these poorly defined “instants” of time and other “infinitesimally small” quantities. Newton called them “moments” and denoted them by the symbol *ο* (a “not quite zero” represented by the Greek letter omicron), while Leibniz called them “differentials”, and denoted them as *dt*, *dx*, and so on.

If you did high school calculus you can see whose symbols won out! Here’s why.

Leibniz’s symbol for the gradient, or rate of change, of a function *y = f(x)* was *dy ⁄ dx*, while Newton’s was *ẏ*. (Actually, Leibniz mostly wrote *dy:dx*; the Bernoulli brothers popularised the fractional notation in the 1690s.) A century later, when the idea of a mathematical function was being formalised, Joseph Louis Lagrange introduced the notation *f′(x)* as a more rigorous version, showing that the form of the derivative depends on that of the function *f* as well as the variable *x*.

In today’s high school mathematics classes, all three notations may be used (Newton’s *ẏ* being used when the variable is time), but Leibniz’s *dy ⁄ dx* is the most common – especially when it comes to introducing calculus concepts to students, because it’s so intuitive and practical.

In particular, students learn that you can treat *dy ⁄ dx* as if it really were a fraction, even though it’s actually an operation, a “derivative” – an operation that involves taking the limit of the ratio of very small changes in *y* and *x*.
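You can watch that limit emerge numerically. In this short Python sketch (my own illustrative example, using *f(x) = x*^{2}, whose derivative at *x* = 3 is 6), the ratio of a small change in *y* to a small change in *x* settles ever closer to the true gradient as the change shrinks:

```python
def f(x):
    return x**2  # an illustrative function; its derivative is 2x

x = 3.0
# Shrink the "very small change" h and watch the ratio of changes
# (f(x + h) - f(x)) / h home in on the derivative's value, 6.
for h in (0.1, 0.01, 0.001, 0.0001):
    ratio = (f(x + h) - f(x)) / h
    print(f"h = {h}: ratio = {ratio}")
```

Each halving-and-more of *h* brings the ratio closer to 6, which is what “taking the limit” formalises.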

You can see the pedagogical problem – we’re back where we started, wondering if it’s OK to gloss over limit theory and go on using *dy ⁄ dx* in this way because it *works*, brilliantly.
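To see why the fraction-like manipulation is so tempting, here is the standard “separation of variables” method sketched in LaTeX (my own illustrative example, using the assumed growth equation *dy ⁄ dx = ky*), in which the *dy* and *dx* really are shuffled around as if *dy ⁄ dx* were a fraction:

```latex
\frac{dy}{dx} = ky
\;\Longrightarrow\;
\frac{dy}{y} = k\,dx
\;\Longrightarrow\;
\int \frac{dy}{y} = \int k\,dx
\;\Longrightarrow\;
\ln y = kx + C
\;\Longrightarrow\;
y = Ae^{kx}.
```

The manipulation can be justified rigorously via the chain rule, but the symbols let students get the right answer without ever invoking it.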

At least, it works for simple cases: once you have to solve equations with second and higher order derivatives, it no longer makes sense to think of the symbols in this simplistic way.

That’s because Leibniz’s form for the *n*^{th} derivative is *d*^{n}*y ⁄ dx*^{n}, but the index notation has nothing to do with the similar notation denoting algebraic powers, and the expression *cannot* be treated as a fraction.

So you need different methods to solve these higher order “differential equations”, and this was one of the bones of contention between mathematicians and physicists in Roland’s study of interdisciplinary communication: the mathematicians couldn’t understand that physics majors hadn’t studied these methods in their dynamics classes.

Still, there’d be no modern physics if Newton had allowed himself to worry over mathematical rigour at the expense of applying new ideas to the study of motion and gravity. Actually, both he and Leibniz were aware of the lack of rigour in their treatment of calculus, and some scholars suggest this is one reason that Newton chose to present most of the proofs in his famous *Principia* using geometry instead of calculus.

Nevertheless, his conception of calculus was remarkably close to the modern one. But it was Leibniz’s user-friendly symbolism that Newton’s late-eighteenth-century Continental disciples – notably Lagrange and Leonhard Euler – used to rewrite and develop his laws in terms of the powerful calculus we use today.

And it was a self-taught woman, Mary Somerville, who wrote the textbook [*Mechanism of the Heavens*] that brought these new Continental symbols and methods to English-speaking students.

Perhaps the last word should go to James Clerk Maxwell, though. He was born the same year that Somerville published her book, and his use of symbolic patterns to predict the existence of wireless electromagnetic waves is legendary. In describing the power of symbols, however, he advised caution.

He said that in their search for mathematical rigour, Lagrange and his peers had sought to think *solely* in terms of mathematical symbols, whereas physicists need to constantly keep in mind the physical meaning associated with the symbols.

This is just what Eichenlaub and Redish are arguing! So one moral of this story is the importance of history – and history shows that while rigour clearly matters, so does the ability to apply mathematics imaginatively.

Which suggests that a simple way to sum up the maths-in-STEM debate, and the power and pitfalls of symbolism, is that it’s fine to use *dy ⁄ dx* as a fraction as long as you know it’s not!