To people and cultures unfamiliar with it, the game of cricket can appear baffling, baroque and, well, boring.
But that is merely the outsiders’ view. When you start to drill down deeply into the fine-grained rules of the sport – one that approaches the reverence of religion in participating nations, and prompts billions of dollars in financial exchange – matters very quickly become even more baffling, baroque and boring, with an extra dollop of exasperation added for good measure.
Cricket games tend to be long, with Test matches lasting up to five days and each of the various shorter forms occupying the best part of a whole day.
It is also critically weather dependent (indoor professional cricket not being a thing), which means, inevitably, every year a certain number of games are cut short, or interrupted, by rain.
This is where the confusion starts to creep in.
In rain-affected games, actual scores often need to be adjusted to calculate what the eventual score would have been had a downpour not sent everybody running for shelter. Such adjustments can be, and often are, the difference between winning and losing.
With reputation, national pride and a huge amount of gambled money riding on the result, it’s hardly surprising that the methods used to calculate how many runs would have been gained and wickets taken had the weather not turned nasty are the subject of intense scrutiny and debate.
Now, however, mathematicians Robin de Regt and Ravinder Kumar from Coventry University in the UK are hoping to change all that, by formulating a new and transparent way of calculating adjusted results.
In a paper lodged on the pre-print platform arXiv, the pair propose “a simple approach to calculate target scores in interrupted games by considering the area under a run rate curve”.
There is currently no agreement among cricketing regulators regarding the best existing approach to use. The two most popular are called the Duckworth-Lewis (DL) method and the VJD method. The latter, by the way, is simply the initials of the man who devised it in 2002, Mr V. Jayadevan.
The DL method is built on the observation that cricket teams have two main resources – wickets and overs – with which to make their runs. As the game progresses, the availability of both tends to decline.
When rain stops play, cricket regulators simply calculate the difference between the maximum resources available at the start of the match and the proportion remaining when the heavens opened. The result can be used to readjust scores to account for the period of play lost.
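The resource-ratio idea can be sketched in a few lines of code. This is a simplified illustration only: the real DL method relies on published resource tables, and the resource fractions and the 225-runs-per-full-innings figure used below are placeholder assumptions, not official DL values.

```python
# Illustrative sketch of the resource-ratio idea behind Duckworth-Lewis.
# NOTE: the real method uses published DL resource tables; the fractions
# and the 225-run full-innings figure here are made-up placeholders.

def dl_style_target(first_innings_score, resources_team1, resources_team2):
    """Scale the chasing side's target by the ratio of resources available.

    Resources are fractions in [0, 1] representing each side's share of
    the combined overs-and-wickets 'resource' for its innings.
    """
    if resources_team2 <= resources_team1:
        # Chasing side lost resources to rain: par score is scaled down.
        par = first_innings_score * resources_team2 / resources_team1
    else:
        # Chasing side has more resources (rare): runs are added instead,
        # proportional to an assumed average full-innings score of 225.
        par = first_innings_score + 225 * (resources_team2 - resources_team1)
    return int(par) + 1  # the target is one run more than the par score

# Example: team 1 used 100% of its resources to score 250; rain leaves
# team 2 with only 70% of its resources.
print(dl_style_target(250, 1.0, 0.70))  # -> 176
```

Scaling down when resources shrink, and adding runs rather than scaling up when they grow, mirrors the general shape of the DL calculation, though the official editions differ in detail.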
The VJD method takes a different, and more complex approach. First it derives a scoring curve based on breaking down the first innings into seven different phases. A second curve – known as the target curve – is then derived for the second innings (in which the batting team is chasing a known total).
Adjusted scores for truncated matches can be calculated by reordering the phases of the first innings in line with the number of overs lost.
Confused? Don’t worry. Even people passionate about the game find the determination of adjusted scores a mysterious and perplexing process.
de Regt and Kumar hope to end the uncertainty by proposing a new method that balances the mathematical relationship between overs, wickets and time with the empirical finding that batsmen throw caution to the wind towards the end of an innings and try to hit every ball as far as possible.
This second fact – which the mathematicians demonstrate by plotting run rates across several forms of cricket, including One Day Internationals, games played in the Indian Premier League, and the faster-moving T20 format – confounds the DL and VJD methods, both of which assume a constant rate of scoring throughout an innings.
The new method operates by basing calculations on the “area under the curve” of plotted run rates to adjust scores to account for stoppages. Any runs added following a truncated game are calculated according to where in the innings the stoppage occurred.
This approach, the mathematicians suggest, can be worked out using nothing more powerful than a pocket calculator, and should result in a better approximation of what would have happened through the day if it hadn’t rained.
de Regt and Kumar add that there is still room for improvement in their method, through the incorporation of “more sophisticated data averaging techniques”.