A new paper has been released that outlines a kind of ‘roadmap’ for biocomputers – computers that draw their memory and processing power from human neurons, or brain cells.
The crux of the new work is a term called ‘organoid intelligence’ – the idea that a small group of human neurons could begin to understand its environment, learn and remember.
But to understand this, we first have to look at what an organoid is and how it is made.
What is an organoid?
They were first created in the early 2010s, after Japanese researchers discovered how to turn mature cells back into stem cells. These stem cells can then be programmed to become any cell in the human body.
But organoids are not just a collection of stem cells turned neurons. Organoids must be scaffolded into a 3D structure, and they must contain both neurons and helper cells, called glial cells, which allow the creation of long-term memory.
Plus, millions of neurons need to be connected and nutrients have to get into the inner part of the structure.
“We are talking at the moment really about the very basics,” says Professor Thomas Hartung, a Johns Hopkins researcher focusing on organoids and one of the authors of the new paper.
The researchers suggest that the first goal is to get the organoids to replace animal testing, and to help us understand more about human brains without having to use actual live humans.
What is organoid intelligence?
The researchers in this new paper say they are hoping to push the frontiers of what these tiny organoids can do. They suggest that an organoid with 10 million neurons (that’s the size of a zebrafish brain) could be combined with hardware to make a biocomputer. Then the decision-making power of neurons could be used for a type of artificial intelligence.
“A community of top scientists has gathered to develop this technology, which we believe will launch a new era of fast, powerful, and efficient biocomputing,” says Hartung.
“We call this new interdisciplinary field ‘organoid intelligence’.”
Although work on organoids, and on smaller proof-of-principle “brain-on-a-chip” systems, has been under way for years, the idea of “organoid intelligence” was jumpstarted by a paper published last year by a startup called Cortical Labs.
They used a 2D brain-on-a-chip design, with human neurons, that was able to learn how to play a game of Pong.
This meant that the neurons were able to receive stimulus, and then respond to that stimulus.
However, the memory gained was only short-term, and ‘Dishbrain’ quickly forgot how to play the game.
Hartung is working on organoids rather than the brain-on-a-chip system, meaning his ‘brains’ can have significantly more neurons and connections, although they will still be smaller than a pinkie toenail.
This in itself is one of the draws of biocomputers and other forms of organoid intelligence. The human brain runs at a speed comparable to a supercomputer – around 1 exaFLOPS – while its memory and storage capacity are even greater.
But brains are only a kilogram or so, compared to the tonnes of metal and silicon for a supercomputer. Plus, they take only a fraction of the energy to run.
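To put that energy claim in rough numbers, here is a back-of-the-envelope sketch. The 1 exaFLOPS figure comes from the article; the ~20 watt brain power draw and ~20 megawatt supercomputer power draw are commonly cited ballpark figures, used here purely for illustration, not values from the paper.

```python
# Rough energy-efficiency comparison: human brain vs an exascale supercomputer.
EXA = 1e18  # operations per second in one exaFLOPS

brain_flops = 1 * EXA     # article's estimate of the brain's "speed"
brain_watts = 20          # assumed: ballpark power draw of a human brain

computer_flops = 1 * EXA  # a comparable exascale machine
computer_watts = 20e6     # assumed: ~20 MW for an exascale system

brain_efficiency = brain_flops / brain_watts        # operations per joule
computer_efficiency = computer_flops / computer_watts

print(f"Brain:         {brain_efficiency:.1e} ops/joule")
print(f"Supercomputer: {computer_efficiency:.1e} ops/joule")
print(f"Brain is roughly {brain_efficiency / computer_efficiency:,.0f}x more efficient")
```

Under these assumed figures, the brain comes out around a million times more energy efficient per operation, which is why a kilogram of neurons is such an attractive computing substrate.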
What is intelligence, sentience, and consciousness?
With organoids that can potentially think, remember, and learn on the horizon, it’s worth understanding what the different terms the researchers use actually mean.
The team define consciousness as ‘the hypothetical organoid’s state of being responsive to and “aware of” the environment’. Sentience on the other hand is defined as a ‘lesser’ cognition, with the team describing it as ‘basic responsiveness to sensory input, e.g., light, heat etc’.
Intelligence, as defined by the researchers, is only a human phenomenon, although they note that ‘intelligence-in-a-dish’ gives the organoid the ability to function in a similar way to a computer or AI.
However, these definitions aren’t agreed upon by everybody.
“Intelligence can be quite separate from something like consciousness or sentience,” says Dr Julian Koplin, a bioethicist at Monash University, who wasn’t involved in the new paper.
“You can have computers capable of performing all kinds of things, but there aren’t any really plausible concerns about them actually being conscious.
“There’s this distinction sometimes drawn in my field between consciousness and sentience. You might describe consciousness as something happening, some experience happening.
“And then sentience is where that experience could be good or bad … there could be suffering, or it could be enjoyment.”
This is not the same way that the researchers have defined sentience; however, Hartung suggests these questions are better out in the open than behind closed lab doors.
“Do we want to create new terms, which are clean? For which where there’s no resonance with what a general public would understand? Avoiding a term like ‘intelligence’ and calling it ‘electrophysiological response patterns’ or whatever,” he says.
“But then we are taking away also a lot of the momentum, of the inspiration of colleagues and the general public who learn about it.”
Still, “using the same ill-defined, probably completely appropriate word for a 1.4-kilogram brain and the snowflake of a brain organoid which you can hardly see? Yeah, I have some mixed emotions.”
Is this ethical or legal?
The researchers are talking with ethicists in the early stages of the project to see if they can work out some of these ethical questions. However, using human neurons that can potentially think, feel or experience is going to raise some qualms.
“We’re starting to push the boundaries and designing the kinds of things where I think there’s really serious reason to worry that it could be experiencing something. Then there’s a question about what could it experience? And how bad would it be if it could experience that stuff?” says Koplin.
Koplin suggests that although we need to be wary, these projects could be useful to limit animal testing and provide new and improved opportunities for humanity.
“We do accept that we can harm sentient beings for various purposes that we think are important enough – that’s why we have animal research,” he says.
“And I think if we accept that when it comes to a mouse or a monkey, why shouldn’t we accept that when it comes to a brain organoid – regardless of whether the cells are human or not?”
Legally, there are very few restrictions in place when it comes to organoids; however, this might change as the field of research grows.
“And this is fantastic for ethicists and for us because we are scratching on something which is really fundamental,” says Hartung.
“Long before you think of a suffering brain you have something where you say, ‘Oh, I cannot stop the experiment anymore because this is something which has learned about its environment and I feel guilty if I forget to feed it on Sunday’.”
Is this really the future?
Almost exactly 20 years ago, rat neurons grown in a lab were connected to a system that allowed them to control a robot, which the researchers called ‘Hybrot’, or hybrid robot. The research was done at Georgia Tech and was incredibly high profile at the time – Through The Wormhole, a show hosted by Morgan Freeman, even included the Hybrot in an episode about consciousness.
However, the technology never got past being a novelty, and it eventually faded into obscurity.
There is a risk that, despite the excitement, Dishbrain and organoid intelligence might meet the same fate.
However, Hartung believes that the time is now.
“Only in April last year, for the first time, a drug went to human trials just based on findings in microphysiological systems, no animal involved,” he said.
“I’m excited that we were able to bring together a community which is taking these challenges and is trying to explore. We’re learning on the road.
“We have the components, and they’re all coming together at the same time.”