Researchers have created a roadmap for building tiny biocomputers out of clusters of human brain cells.
“We can use a culture of the human brain to show something which is not just living cells. We can show that this is learning, this is memorising, this is making decisions, it is possibly even at some point, ‘sentient’ in the sense that it can sense its environment,” Professor Thomas Hartung, a Johns Hopkins ‘organoids’ researcher, told Cosmos.
“We are the explorers who have stumbled into a completely new field.”
The Australian company behind ‘Dishbrain’ (which learnt to play Pong last year) is collaborating with scientists at Johns Hopkins University in the US, whose research paper outlines how these ‘biocomputers’ could help us understand memory, learning and other integral parts of human cognition. The authors also suggest biocomputers could one day rival supercomputers or AI.
But with this come questions about ethics, as well as how we define intelligence and consciousness – and whether ‘learning’ and ‘computing’ are really that different.
The crux of the paper, published in Frontiers in Science, is a term called ‘organoid intelligence’ – the name of a new field that will study small groups of human neurons that can learn, remember, and even sense their environment.
Organoids were first created in the early 2010s, after Japanese researchers discovered how to turn mature cells back into stem cells. These stem cells can then be programmed to become any type of cell in the human body.
Although organoids have been pretty successful, they are still in their early stages – particularly for complex organs like the brain. Currently, they are mostly used to test drugs without resorting to animal models.
Hartung has worked for decades to move away from animal testing in drug discovery.
But “organoid intelligence” isn’t just about using these brain cells to test drugs on. Instead, the neurons could learn, remember and even be aware of their surroundings. Hooking them up to machines could then make them useful for certain kinds of computing.
“We are talking at the moment really about the very basics,” says Hartung.
“It will take certainly many years before we reach the intellectual capacity of even a small animal.”
Late last year, Cosmos covered Dishbrain – a 2D brain-on-a-chip system created by an Australian start-up called Cortical Labs – which was taught to play a pretty mediocre game of Pong.
Dishbrain was immensely exciting at the time, for all the reasons you might expect – human brain cells in a dish were able to play Pong, if only for short periods. It was a major step forward for the field.
The Australian Dishbrain team also controversially used the word ‘sentience’ in their paper, starting a long line of questions about what makes something sentient, and how we define consciousness and even intelligence.
But this new paper suggests this is only the start.
“It (was) a very elegant implementation of demonstrating learning to use a computer game environment. This was really a very promising step,” says Hartung.
“However, Cortical Labs uses a pure neuron culture which is two dimensional. And this does not really have the full machinery for learning.”
In the Dishbrain experiment, the team managed to get 800,000 human neurons in a dish to learn how to play Pong. But to get more brainpower out of an organoid, the researchers want to reach 10 million neural cells – roughly the number found in the brain of an adult zebrafish.
“Organoids are a great tool for addressing many questions in a simpler system than the intact human brain. We would be able to ask many questions about the human brain that aren’t possible otherwise,” says Professor Lucy Palmer, a Florey Institute neuroscientist who was not involved in the paper.
“However, although an advance on other systems, organoids still do not contain the complexity of cell types and inputs found in the mammalian brain and therefore extrapolating the findings should always be met with caution.”
Once the researchers get to 10 million cells, they can start to look at the biggest and boldest parts of the new roadmap – artificial intelligence and supercomputers.
If lab-made brains could learn and remember, they would not be so different from machine-learning systems – but would require a fraction of the energy and space to run.
A full-sized human brain cannot outcompute a supercomputer, but numbers-wise, it’s not far off. The human brain runs at a speed comparable to a top supercomputer – around 1 exaFLOPS – and its memory and storage still beat those of any supercomputer.
For all human brains’ flaws, they’re also incredibly compact and low energy compared to a supercomputer.
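That energy gap can be made concrete with a quick back-of-envelope calculation. The ~1 exaFLOPS figure comes from the comparison above; the wattage figures below are commonly cited rough estimates (a brain runs on roughly 20 watts, while an exascale supercomputer draws on the order of 20 megawatts), not numbers from the paper:

```python
# Rough comparison of compute efficiency: human brain vs. an exascale
# supercomputer. All figures are approximate public estimates.

BRAIN_FLOPS = 1e18   # ~1 exaFLOPS, the estimate cited in the article
BRAIN_WATTS = 20     # commonly cited ~20 W power draw of a human brain

SUPER_FLOPS = 1e18   # an exascale machine, approximate
SUPER_WATTS = 20e6   # ~20 MW power draw, approximate

brain_eff = BRAIN_FLOPS / BRAIN_WATTS  # operations per second per watt
super_eff = SUPER_FLOPS / SUPER_WATTS

print(f"Brain:         {brain_eff:.1e} FLOPS/W")
print(f"Supercomputer: {super_eff:.1e} FLOPS/W")
print(f"Brain is roughly {brain_eff / super_eff:,.0f}x more energy-efficient")
```

On these (very rough) numbers, the brain comes out around a million times more energy-efficient per operation – which is why the roadmap's authors see energy use as one of biocomputing's main selling points.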
Harnessing even a small fraction of a human brain’s ability, in conjunction with hardware, could create a type of biocomputer.
But even Hartung says this is still a long way off.
“I don’t think in 10 years’ time that we’ve got organoids answering your phone call,” he told Cosmos.
“The supercomputing aspect is the very last of the achievements I see coming. At the moment, if you can very simply create memory – simple memory – in such a cell culture, then you can ask, how did the memory come about? How is it possible that this system is now producing memory?”
But when playing with human neurons, it doesn’t take long to get to some thorny ethical questions.
For example – when the researchers talk about organoid intelligence, what exactly is intelligence? And what about consciousness?
And most importantly, when should we start to put ethical lines in the sand about what we as a society are comfortable and not comfortable doing with human neurons?
“I think the intelligence thing is a bit of a non-starter,” says Monash bioethicist Dr Julian Koplin, who wasn’t involved in the research.
“If you’re capable of profound suffering, but you’re not particularly intelligent, well, then it still really matters how I treat you.”
The Johns Hopkins team brought ethicists on at the very start of this project, so these definitions and questions are likely to be discussed much further before scientists get anywhere near something that could potentially suffer.
“I think there is a moral imperative to pursue research if they can help us better understand degenerative disorders, they can help us better understand autism, they can help us better understand schizophrenia. I think that there are also really good moral reasons to be creating computing systems that use much less energy and are much more powerful,” says Koplin.
“But there are these risks, and we need to attend to them really carefully. We need to be aware that if these things develop consciousness, then that has ethical implications, and those ethical implications can be quite serious. It matters, I think, a lot, how we treat other sentient beings.
“We need to proceed with caution.”