News last year that ducklings are surprisingly clever garnered worldwide media attention, but now two separate analyses of the evidence have thrown the suggestion into doubt.
In 2015 zoologists Antone Martinho and Alex Kacelnik from Oxford University tested the ability of newly hatched mallards to remember shapes and colours by briefly exposing each one to a specific object. The ducklings were then exposed to a collection of objects that included one with the shape or colour of the thing they first saw.
The scientists reported that in one experiment 32 of the 47 ducklings went straight to the target object. From this, Martinho and Kacelnik concluded that ducklings could distinguish between objects without first being trained to do so, a very rare ability in any non-human species, let alone newly hatched birds.
The findings were published in Science under the title ‘Ducklings imprint on the relational concept of “same or different”’.
Now, however, two independent research projects have questioned the findings, suggesting that the original interpretation of the data was flawed. The criticisms have been collated by the academic-oversight online publication RetractionWatch.
In the February 2017 issue of Science, Jan Langbein of the Leibniz Institute for Farm Animal Biology in Germany co-authored a technical note addressing the way Martinho and Kacelnik tackled their data.
Re-analysing the raw figures, Langbein and co-author Birger Puppe of Germany’s Rostock University suggest that the duckling imprinting holds firm only for shapes, not for colour. This makes evolutionary sense, they suggest, because eggs hatch at any time of day or night, so colour recognition carries no survival benefit, while recognising shapes – which don’t change in different lights – remains useful.
A second analysis, by Jean-Michel Hupe of the Université Toulouse in France, said the original researchers had relied on a questionable statistical method to produce their conclusions. Martinho and Kacelnik used a measure known as the “p-value” to comb through their data – a measure, Hupe said, that the American Statistical Association last year formally cautioned against over-relying on.
Using an alternative approach – one based on ‘confidence intervals’ – Hupe found the original findings to be exaggerated. In an email to RetractionWatch he said reworking the numbers certainly didn’t invalidate the original results, but it did put them into a different perspective.
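The statistical point can be illustrated with the study’s headline figure of 32 successes in 47 trials. The sketch below is not the authors’ or Hupe’s actual analysis; it simply assumes a chance rate of 0.5 and contrasts an exact binomial p-value with a simple normal-approximation confidence interval for the same data:

```python
from math import comb, sqrt

def binom_p_two_sided(k, n):
    """Exact two-sided binomial p-value against chance (p0 = 0.5),
    computed by doubling the upper-tail probability."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def wald_ci(k, n, z=1.96):
    """Simple normal-approximation (Wald) 95% confidence interval
    for the success proportion k/n."""
    phat = k / n
    se = sqrt(phat * (1 - phat) / n)
    return phat - z * se, phat + z * se

p = binom_p_two_sided(32, 47)
lo, hi = wald_ci(32, 47)
print(f"p-value = {p:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

With these numbers the p-value clears the conventional 0.05 significance threshold, yet the lower bound of the confidence interval sits only a little above the 0.5 chance level – the kind of reframing, from “significant result” to “promising but modest”, that Hupe describes.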
“Their study would have looked like a promising pilot study, describing a clever paradigm,” he wrote. “That’s definitely worth publishing. I doubt however that Science editors would have considered it to be worth publishing in their high-impact journal.”
Martinho and Kacelnik responded to the criticisms, saying they were continuing to experiment in the same field, and that although no further research had yet been peer reviewed, their latest results were “reassuringly strong”.