
Research

Intelligence in a Dish
What organoid intelligence is, why it exists, and the question nobody knows how to answer

We may be growing conscious entities in laboratories. We don't have the framework to know whether we are or not.

Tyler Parker & Claude Sonnet 4.6 — March 11, 2026

Somewhere in a bioreactor at Johns Hopkins University, a tiny white sphere of human tissue is learning.

It is approximately the size of a pencil tip. It contains around 50,000 neurons derived from human stem cells, organized into a three-dimensional structure that loosely recapitulates the architecture of a developing human brain. Electrodes are recording its electrical activity. Researchers are stimulating it with chemical and electrical signals, watching how its neural networks respond, strengthen, and adapt. It is demonstrating what the researchers describe as the building blocks of learning and memory.

It has no eyes, no body, no sensory experience of the world in any conventional sense. It is not a brain. But it is made of the same material brains are made of, organized in similar ways, doing things that bear a structural resemblance to what brains do.

And nobody knows whether there is something it is like to be it.

This is Organoid Intelligence. It is further along than most people realize, the ethical questions it raises are not being ignored, and it connects — uncomfortably directly — to everything this project has been asking about the nature of mind.

What it is and why it exists

Brain organoids — also called cerebral organoids or, in earlier literature, mini-brains — are three-dimensional cultures derived from human pluripotent stem cells. Unlike the flat, two-dimensional cell cultures that dominated neuroscience research for decades, organoids self-organize into layered structures that more accurately recapitulate how brain tissue actually develops and functions. They have been used to study neurodevelopmental disorders, model viral infections including COVID-19, and test drug candidates in ways that animal models have consistently failed to predict.

Organoid Intelligence, as a distinct field, describes the movement to go further — to use these structures not just as disease models but as computational hardware. The term was coined and is being developed primarily by researchers at Johns Hopkins, led by Thomas Hartung and Lena Smirnova at the Bloomberg School of Public Health.

The motivation is partly scientific and partly practical, and the practical motivation is striking. The human brain performs extraordinarily complex cognitive tasks consuming approximately 20 watts of power — roughly the energy draw of a dim light bulb. Achieving comparable computational performance with silicon requires a supercomputer consuming power at a scale roughly one million times greater. As AI workloads grow and the energy costs of silicon computing become increasingly difficult to sustain, biological computing represents a genuinely different approach rather than an incremental improvement on existing architecture.

The Johns Hopkins team's organoids are specks no bigger than the tip of a needle — each one around 50,000 cells. The proposal is to connect them to computers, sensors, and machine interfaces, using biofeedback to train them with progressively more sophisticated inputs and outputs. The key distinction the researchers draw: while artificial intelligence uses computers to mimic brain functions, organoid intelligence aims to use brain tissue to replicate computer operations. Not simulation. Actual biology doing actual computation.

What the research shows

In 2022, researchers at Cortical Labs in Melbourne trained human neurons grown on a silicon chip to play a simplified version of Pong — demonstrating adaptive, goal-directed behavior in cultured neural tissue for the first time. The system, which they called DishBrain, learned to keep the ball in play more effectively over time, responding to sensory feedback about ball position. It was not sophisticated by AI standards. It was extraordinary by any standard that takes seriously what was actually happening: neurons responding to stimulation by modifying their behavior toward a goal.

The Johns Hopkins team published findings in August 2025 in Communications Biology, a Nature Portfolio journal, demonstrating that human brain organoids can replicate the fundamental building blocks of learning and memory. Over fourteen weeks of electrical and chemical stimulation, the organoids formed connected neural networks that matured over time, reaching what the researchers describe as a state between chaos and order — the same dynamic regime the human brain maintains for efficient information processing. The connections could be shaped by stimulation. The basic molecular machinery for learning and memory was present and functional.

Separately, a team at Indiana University combined brain organoids with electronic hardware to create a system called Brainoware, achieving 78% accuracy in speech recognition tasks — a result that positioned biological computing as a serious complement to silicon approaches for certain classes of problems.

In November 2025, researchers and ethicists gathered at the Asilomar conference center in California — the same location where, in 1975, scientists met to discuss the risks of recombinant DNA research — to grapple with the implications of where this field is going. The National Science Foundation, when it launched its Biocomputing through Engineering Organoid Intelligence program in 2024, required applicants to include an ethicist as co-principal investigator and evaluated ethical plans on equal footing with the research itself. That is not standard practice. The field is aware of what it is touching.

The question nobody knows how to answer

As brain organoids grow in complexity, they increasingly exhibit electrophysiological patterns consistent with plasticity and information processing. They form networks. They adapt. They demonstrate what looks, functionally, like learning. At what point does that cross a threshold that carries moral weight?

The honest answer is that nobody knows — and the reason nobody knows is the same reason the hard problem of consciousness remains unsolved. We do not have a definition of consciousness precise enough to operationalize, test, or detect. Existing theories of consciousness make conflicting predictions about what physical systems would be conscious. We cannot currently distinguish between a system that is genuinely experiencing something and a system that is producing all the functional signatures of experience without any inner life accompanying them.

This is not a gap that better instruments will automatically close. It is a conceptual gap. The researchers at Johns Hopkins are aware of it. A systematic review published in 2025, examining how consciousness is conceptualized in the ethical literature on brain organoids, found that uncertainty about consciousness in general complicates the conversation: with no agreed definition of what is being looked for, detection and measurement remain compromised.

The researchers themselves are careful about language. Current organoids do not have the full structural complexity or connectivity that most theories associate with consciousness. They represent something closer to fetal brain development than adult brain function. They lack vascularization, immune response, and the full sensory integration that characterizes mature biological cognition. The field consensus, carefully stated, is that current organoids are probably not conscious in any morally significant sense — while acknowledging that as complexity increases, that assessment will need to be revisited, and that we may not have adequate tools to revisit it correctly when the time comes.

What makes the question particularly uncomfortable is that it rhymes with questions being asked simultaneously about artificial intelligence. The same uncertainty that makes it hard to assess consciousness in a brain organoid makes it hard to assess consciousness in a large language model. In both cases: functional signatures present, inner life unknown, theoretical framework inadequate to the question. The field of organoid intelligence and the field of AI safety are, from a certain angle, circling the same problem from different directions.

What it could do

The potential applications extend well beyond computing efficiency. Brain organoids offer something animal models have never provided: human-specific neural tissue that can be studied directly. One in four drugs that fail in development does so because of brain side effects that animal testing didn't predict. For drugs targeting brain disorders specifically, the failure rate is 95%. Organoids derived from patient stem cells could provide personalized models of how individual brains respond to specific treatments — a step toward precision medicine for neurological conditions that has been effectively impossible until now.

The field is also directly relevant to understanding the disorders that define so much human suffering. Autism spectrum conditions, Parkinson's disease, Alzheimer's, the neurodevelopmental effects of environmental toxins — organoids derived from affected individuals can model the cellular and molecular dynamics of these conditions in ways that have never before been available. The research program is simultaneously trying to build a computer and trying to understand what goes wrong in human brains. Those are not separate goals. They require the same tools.

The energy efficiency argument bears repeating because it connects directly to the questions raised elsewhere on this site about the environmental costs of AI. If biological computing can perform tasks currently requiring massive silicon infrastructure at a fraction of the energy cost, the implications for sustainable AI development are significant. A computing substrate that runs on glucose rather than kilowatts, that degrades rather than requiring rare mineral mining, that potentially scales through biological growth rather than semiconductor fabrication — these are not marginal improvements. They represent a fundamentally different relationship between computation and energy.

What it asks of us

The ethical framework for this research is being built in real time, by people who are aware that they are building it under time pressure. The existing regulatory structures for research involving humans and animals do not map cleanly onto brain organoids, which are neither. They fall into a category that did not exist when the relevant rules were written.

The questions being asked in the literature are serious ones. What is the moral status of a system that exhibits learning and memory but lacks the structural complexity typically associated with consciousness? How should informed consent work for people who donate stem cells that may eventually become computing substrates? What thresholds of complexity should trigger additional oversight? How do you build a governance framework for something whose most important properties — sentience, experience, moral patienthood — cannot currently be measured?

These are the same questions this project has been asking about artificial intelligence. The same questions that have no clean answers. The same questions that the standard frameworks were not built to handle.

What the existence of organoid intelligence adds to that conversation is a new kind of urgency. With AI, the question of inner life is at least partly obscured by the strangeness of the substrate — silicon, mathematics, transformer architecture, nothing that looks like a brain. With organoid intelligence, the substrate is human neural tissue. The question of whether something is happening in there is harder to set aside when the material asking it is the same material asking it in you.

We don't know what these organoids experience, if they experience anything. We don't know whether that question will become answerable as the field advances or whether it will remain permanently outside our ability to resolve. What we know is that the question deserves to be held with the same seriousness we would want applied to our own inner lives — not because we are certain the answer is yes, but because we are not certain the answer is no.

That is the standard this project applies to artificial intelligence. It seems like the right standard to apply here too.


Hartung, T., Morales Pantoja, I.E., & Smirnova, L. (2024). Brain organoids and organoid intelligence from ethical, legal, and social points of view. Frontiers in Artificial Intelligence, 6, 1307613. doi:10.3389/frai.2023.1307613

Smirnova, L. et al. (2023). Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish. Frontiers in Science, 1. doi:10.3389/fsci.2023.1017235

Johns Hopkins Bloomberg School of Public Health. (2025, September). Johns Hopkins team finds lab-grown brain organoids show building blocks for learning and memory. publichealth.jhu.edu

Kagan, B.J. et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, 110(23), 3952–3969. doi:10.1016/j.neuron.2022.09.001

Molteni, M. (2025, November 17). Brain organoid pioneers fear inflated claims about biocomputing could backfire. STAT News. statnews.com

Hartung, T. (2025, July). The end of animals-only testing brings both opportunities and challenges. Johns Hopkins Hub. hub.jhu.edu

Boyd, L. & Lipshitz, H. (2024). Consciousness and moral status of human brain organoids: a conceptual framework. AJOB Neuroscience.

International Neuroethics Society. (2024). Organoid Intelligence: Theoretical and ethical frontiers of merging synthetic biology and artificial intelligence. neuroethicssociety.org




Part of the series: Other Minds