AI in the workplace is a phrase that tends to stir up anxious sentiments. Will we be replaced by machines? Will automation make us redundant? How will the fourth industrial revolution affect job prospects for young people training in fields ripe for automation?
But our tendency to revel in dystopian rhetoric overlooks something – some people dream of a utopia instead.
Some of those dreamers collaborate on CSIRO Data61’s $12 million Collaborative Intelligence (CINTEL) Future Science Platform, which aims to shift the focus of AI in the workplace towards improving work rather than replacing workers.
“A lot of the attention in artificial intelligence is about how we can automate things and how we can have machines do things faster than humans, or even replace humans,” says Professor Jon Whittle, an expert in software engineering and human-computer interaction, and director of CSIRO Data61.
“But we feel that there’s actually more to be gained from having machines and humans work together because they have relative strengths.
“Machines tend to be good at crunching large datasets or doing things very fast and very efficiently. But they lack that creativity that we as human beings have.”
CINTEL positively reimagines workflow models, pushing towards the concept of human and AI as co-workers.
“I want the human at the helm,” explains Dr Cecile Paris, chief research scientist at CSIRO Data61 and leader of CINTEL. “I want the AI in the loop. I want the AI there, but I really want the human to be a very integral part of whatever is to be done.
“Humans have intelligence, skills and experience that are extremely hard to duplicate.
“So, let’s capitalise on the human expertise.”
The thing is, we already have AI partners. Between spell check, transcription software and the traffic apps that help us get to work in the first place, AI already makes our working lives easier. Mostly these tools operate without taking away the important skills humans contribute – creativity and adaptability.
Rather than automating the whole working experience and ending up with an inferior product, humans are more likely to create a superior one when they can draw on an AI’s strengths.
“For example, AI can write stories or poetry by itself, but generally the results tend not to be that good,” says Whittle.
“But if they’ve got access to a large database of metaphors or previous stories, they can search it very, very quickly, in a way that a human can’t.
“So, the human can think about the overall structure of the story, the overall narrative, while the AI can suggest particular sentences or particular ways of phrasing as they go along.”
Instead of either the human or the AI writing the whole story alone, they could act as companions: the human employs their creativity while the AI handles the drudgery. Not a replacement, but a partner.
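For the programmatically inclined, that division of labour might look something like the toy sketch below – the human supplies the outline, and the machine does nothing but fast lookup against a corpus. Everything here (the function names, the three-line “database” of metaphors) is a hypothetical illustration, not anything built by CINTEL.

```python
from difflib import SequenceMatcher

# Stand-in for the "large database of metaphors or previous stories"
# Whittle mentions. A real corpus would hold millions of entries.
METAPHOR_CORPUS = [
    "time is a river that carries us forward",
    "grief is a house with too many rooms",
    "hope is the thing with feathers",
]

def suggest_phrasings(draft_sentence: str, top_k: int = 2) -> list[str]:
    """Rank corpus entries by rough string similarity to the human's draft."""
    scored = sorted(
        METAPHOR_CORPUS,
        key=lambda m: SequenceMatcher(None, draft_sentence, m).ratio(),
        reverse=True,
    )
    return scored[:top_k]

# The human writes the narrative structure; the machine only proposes candidates.
outline = ["She waited for news that never came", "Years passed"]
for sentence in outline:
    print(sentence, "->", suggest_phrasings(sentence))
```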
That means, just as in any workplace interaction, three important factors determine whether co-workers will work well together: collaboration, communication, and trust.
Collaboration is incredibly dependent on the latter two pillars – how can you successfully collaborate with somebody if you neither trust nor understand them?
Short answer: you can’t.
And communication is key, because an AI won’t necessarily produce understandable outputs that are easily communicated to its human partner.
“When I talk to a doctor and all they give me is medical jargon, I have no idea what they’re talking about, right?” says Paris. “I don’t know whether that helped me.
“What we need to understand is how the human and the machine should communicate and what’s the appropriate thing to do.
“We’re so used to communicating, and we’re pretty good at communicating, so for humans that doesn’t look difficult. But it’s actually quite complex.”
Even if the machine makes decisions and displays those outputs to a user in language they can understand, that doesn’t guarantee functional communication at the level where the human and machine can have a conversation.
A better approach may be for the machine to offer hypotheses or suggestions; the human then selects the one they think is appropriate and instructs the AI to go back into the database and search for related information.
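Again purely as an illustration – none of this is a CINTEL interface – that loop might be sketched like so: the machine proposes, the human chooses, and the machine returns to the data for supporting detail. Every record and function name below is a hypothetical placeholder.

```python
# Toy "database" of evidence keyed by candidate hypothesis.
RECORDS = {
    "sensor drift": ["reading 42 flagged as anomalous", "calibration overdue"],
    "network fault": ["packet loss spiking on node 7"],
}

def propose_hypotheses() -> list[str]:
    # In a real system these would come from a model; here they are canned.
    return list(RECORDS)

def search_related(hypothesis: str) -> list[str]:
    # "Go back into the database" for evidence tied to the chosen hypothesis.
    return RECORDS.get(hypothesis, [])

hypotheses = propose_hypotheses()
print("Machine proposes:", hypotheses)

chosen = hypotheses[0]  # stand-in for the human's judgement call
print("Human selects:", chosen)
print("Machine retrieves:", search_related(chosen))
```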
But even if this is possible, will humans willingly and usefully engage?
We don’t know how humans will react to these scenarios, which is why social scientists are integral to the work of building platforms through which humans and machines can talk to each other.
“Computer scientists, including myself, we’re very technical,” Paris says, with a laugh. “We don’t always understand the human aspect of things. I think we need to look a bit broader than the technical side of the world and look at the social aspects of the world.
“To reimagine the workflow, you need somebody who can see things from the human perspective; what is it people are trying to do? What are they going to enjoy doing? To me, that’s a very important aspect because otherwise we may be too technology oriented. That isn’t good, especially for CINTEL, which is about collaborative intelligence.
“We are still just people, studying problems assisted by a machine. So, we really need those social scientists.”
And good communication breeds trust. But therein lies the problem: how do you truly trust an AI? How can you trust that an AI’s outputs – no matter how well communicated – are reliable and relevant?
Trust depends on context. People working with databases want to trust that the AI accurately reads and analyses the correct data, and that the data won’t be accidentally leaked to the wrong person.
A person working with huge robots wants to trust that their mechanical companion won’t harm them.
“I think one of the key challenges from a software engineering perspective is how we can take values that we have as human beings – things like equity and fairness and social responsibility – and encode those in the software so that the AI does things that are ethical and moral,” says Whittle.
“[Ideally to] build up that partnership between the human and the machine, the human needs to know that it can trust decisions coming out of the AI.”
Because once we establish trust and communication, we can give our AI co-workers all the boring bits of the job and focus more on our extraordinary repertoire of human skills.
“I hope it’s going to make more interesting workplaces,” Paris says. “Hopefully, people will have more time to be creative, more critical and to think more deeply because the [boring] things will be done by an AI.”
Whittle is similarly upbeat when he riffs about the ideal outcomes of the project: “In my wildest dreams, I imagine a kind of utopia where we’re using these technologies, but it’s actually having a positive benefit on society, rather than a negative benefit on society.”
We might not yet know how to build this communication or trust – but learning how to do it is the whole point.
“It’s more than a project,” says Paris. “It’s a set of projects. It’s a whole research program. And the goal is to be able to combine human intelligence with machine intelligence.
“To me, developing the science to help people and machines work better together is part of designing and developing responsible artificial intelligence.”
The result? Hopefully a workplace where human and machine are complementary, with sustained, meaningful relationships that empower us to push the limits of creativity and critical thought.
When it really comes down to it, developing and implementing AI partners in the workplace sounds incredibly, well, human.