Modelling historical population densities around the globe between 21,000 and 4,000 years ago helps explain the origins of agriculture – although important questions remain unanswered.
The emergence of agriculture – independently, thousands of years apart, in different regions – is widely agreed to be one of the most important developments in history.
The domestication of animal species and the cultivation of plants sustained, for the first time, sedentary communities larger than any that had come before. The production of reliable agricultural surplus meant, in time, that some members of communities did not have to participate fully in securing food supplies – leaving them free to specialise in other roles.
Looking back from the perspective of the modern day, that much seems obvious. However, distinguishing between cause and effect in determining how the process began has to date been challenging – and consensus among researchers has failed to emerge.
Now, however, a new modelling study led by Patrick Kavanagh of Colorado State University in the US and published in the journal Nature Human Behaviour aims to put the argument to rest. Much of it, anyway.
There are currently three hypotheses advanced to explain the advent of agriculture. The first, called the “surplus hypothesis”, suggests domestication of plants and animals arose because of improving environmental conditions combined with increasing population densities.
The second, known as the “necessity hypothesis”, holds the opposite to be true: domestication became essential because deteriorating environmental conditions put the squeeze on foraging societies. The third hypothesis has a bet each way and contends that variations on surplus or necessity prompted agricultural development in different regions at different times.
To try to carve a straight furrow across the rocky soil of divergent opinion – all of it based on a widely admitted paucity of information – Kavanagh and his colleagues compiled all available data and then “hindcast” past population densities. They created a statistical model that included environmental, geographic and cultural variables, which captured, they say, “77% of the variation in population density among 220 foraging societies worldwide”.
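For readers curious what “capturing 77% of the variation” means in practice, the sketch below shows the general shape of such an exercise: an ordinary least-squares regression relating population density to a handful of predictors, with the share of variance explained (R²) computed from the fit. This is a minimal illustration only – the predictor names, coefficients and data are invented, and the study’s actual model is not reproduced here.

```python
# Illustrative sketch only (not the authors' actual model): fit an
# ordinary least-squares regression predicting log population density
# from hypothetical environmental, geographic and cultural predictors,
# then report the share of variance explained (R^2).
import numpy as np

rng = np.random.default_rng(0)
n = 220  # number of foraging societies, per the study

# Hypothetical predictors, e.g. plant productivity, temperature,
# distance to coast, and a cultural index such as residential mobility.
X = rng.normal(size=(n, 4))
X = np.column_stack([np.ones(n), X])  # add an intercept column

# Synthetic "observed" log population densities, with noise.
true_beta = np.array([0.5, 0.8, -0.3, -0.2, 0.4])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Ordinary least-squares fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

# R^2: the fraction of variance in density the model explains
# (the paper reports 77% for its own, more sophisticated model).
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Variance explained (R^2): {r2:.2f}")
```

“Hindcasting” then amounts to feeding reconstructed past values of the same predictors into the fitted model to estimate densities at each point in time.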
The results were surprisingly consistent.
“Despite the timing of domestication varying by thousands of years, we show that improving environmental conditions favoured higher local population densities during periods when domestication arose in every known agricultural origin centre,” they conclude.
The surplus hypothesis, in other words, wins hands down. The researchers go as far as to call it a “common, global factor” in “one of humanity’s most significant innovations”.
The picture arising from the analysis, however, remains opaque in at least one crucial aspect – a situation the researchers readily admit.
The modelling cannot determine whether agriculture arose within settled, comfortable communities in which food was relatively abundant, or in outlier groups that found themselves pushed into less fertile areas as a result of population pressures.
The latter scenario posits a foundation community happily foraging an abundant, generally coastal, food supply. Good environmental conditions mean nutrition is reliable, and so the human population increases.
As a result, the community grows too large to occupy the coastal fringe alone, meaning part of it has to settle further inland, where natural supplies are more limited – thus forcing the domestication of plants and animals in order to secure enough to eat.
“Our results cannot support, or refute, the possible influence the outflow of people from hospitable locations to less ideal environments may have played,” the researchers conclude.
This leaves open the possibility that surplus may have been the mother of necessity.