It’s hard to talk about modern education without the phrase “digital natives” popping up. But how real is this phenomenon? Are younger people really inherently better at using technology?
Associate Professor Edward Palmer, from the University of Adelaide’s School of Education, is not convinced.
“The literature on digital natives is [some] of the most highly cited literature there is – but the literature that debunks that is almost equally as powerful,” Palmer told Cosmos at a Science City forum.
According to Palmer, the idea of the digital native “makes a whole bunch of assumptions that I don’t think are actually validated by evidence”.
The term “digital native” was coined in 2001 by American education writer Marc Prensky. It sits in opposition to “digital immigrants”: older generations who did not grow up with the internet or other forms of communications technology.
According to Prensky, younger people who have always had access to the internet and other forms of information technology “think and process information fundamentally differently” from older people.
“There’s a nice conceptual idea that can be made about changes in generations dependent on the impact of technology,” says Palmer. “The problem is that people have taken that to mean that [those] who’ve grown up with technology are somehow masters of that technology.”
In reality, younger people are not necessarily any better at using tech.
“Yes, they know how to use a remote control, they know how to turn on a machine, they know how to get onto social media and engage with their friends.
“[But] a lot of them definitely don’t know how to use it for learning.”
Is this a problem? Yes, if students aren’t being taught how to use technology in school.
“People think that you don’t have to teach people this stuff, and you do have to teach people this stuff,” says Palmer.
“They can’t become digital experts in their own learning journeys if they actually haven’t been taught that. That’s not how learning works – it has to be scaffolded to some level.”
Everything from knowing what a hyperlink is (not intuitive for young children raised on iPads) to thinking critically about the information churned out by AI tools like ChatGPT needs to be taught.
“With artificial intelligence rising, we really need to make sure that [students] have got the capability to engage with it in an intellectual way. That’s beyond asking them what the best movie to go see is on the weekend,” says Palmer.
So how do we do this?
That’s the tricky part, according to Palmer. “How much space is there in the curriculum to do all the things that everyone says teachers need to do?”
His solution is to build digital teaching into other parts of schooling.
“For me, it’s integration. I think if you’re trying to do all this stuff individually – if you’re trying to look at ethical understandings of how to learn, diversity of learners, Indigenous perspectives, you can’t just have a session on that.”
Palmer also believes that students need a thorough base of knowledge to work with AI.
“I think our role as educators is to make sure that [students] have got that fundamental knowledge and those critical skills when they’re using AI,” he says.
“For me, it’s the most challenging thing – when you’ve got all of these tools that give you all the answers instantly, how do you encourage people to take the time to learn? Because learning’s hard, grabbing the answer from a tool is easy.
“Once again, I think our teachers who are trained in the primary and secondary sector are well suited to do this. They can do this if we can free up enough time for them to be able to do it.
“For higher education, it’s more of a challenge, but it’s also doable.”