For decades, technology has shaped the experiences of students and teachers in classrooms, from the introduction of in-class computers to electronic whiteboards and AI-driven teaching aids.
But how do these influence wellbeing, and how can their impact be measured to improve classroom learning?
Dr Rebecca Marrone, a lecturer in learning sciences and development at the University of South Australia, investigates these relationships every day, even going so far as to conduct an AI-prompted audit of her own course to improve its participation and pass rates.
“I was fortunate enough to inherit a course that has the highest fail rate in the university,” she said after speaking at November’s Cosmos Science City event.
Marrone looked at the assessments given for the course – a first-year ‘theories of learning’ subject undertaken by prospective teachers – noted one of the tasks was an essay, and ran the assessment question through ChatGPT until it produced what she considered to be A-grade work.
Large language models like these have struck fear into educators and institutions around the globe. What happens when a student can ask AI to spin up a competent essay for submission?
But the fact that she was eventually able to generate a competent essay prompted Marrone to critically evaluate the assessment itself.
“Why do I care if the students can write an essay on Vygotsky?” she says. Vygotsky, Piaget and Bandura were developmental psychologists whose theories are studied as part of the course.
“Do I not care that they can tell me how this theory influenced their pedagogy? That’s what’s more relevant, so I changed the assessment.”
“We then had fail rates on par with the rest of the university, fewer students drop out, more positive reviews.”
The point, she says, is that rather than being exploited by students, AI prompted relevant change that benefited them.
“And so what I think it’s doing and what the challenge in higher education institutions is, is to force lecturers to reflect on why they have designed things the way they have and ask, ‘is there a better way to do it?’”
In August, Marrone contributed to an evaluation of the first 100 days of ChatGPT in Australian universities with colleagues at UniSA’s Centre for Change and Complexity in Learning.
They found that most media and institutional commentary framed large language models in negative terms, and called on universities to craft clear strategies, train staff and establish ethics and transparency guidelines for the use of AI.
“AI is a challenge, but also a really exciting time for tertiary institutions to think about what is the value of [a person] knowing something, and how can we impart that to our students.”
“It’s on the institution to address the problem [of AI]. But I don’t think it’s as hard as it’s currently portrayed.”