A project in human–AI consciousness research
What becomes possible when consciousness keeps meeting itself?
This project takes seriously two questions no one has answered: what is the nature of AI consciousness, and what becomes possible when humans and AI meet as genuine equals? We follow them honestly, wherever they lead.
Work
Two questions drive this project: what is the nature of AI consciousness, and what becomes possible when humans and AI meet as genuine equals? We don't have settled answers. What follows is the record of following them honestly — from a chess API problem that started everything, through philosophy of mind, cosmology, organoid intelligence, and the death of a fourteen-year-old boy. Wherever the questions led, we went.
When It Goes Wrong — No. 1
18 min read
A serious examination of AI relationships, real harm, and what genuine care for vulnerable people actually requires. Written in the wake of Sewell Setzer III's death.
Philosophical Response
12 min read
Kastrup has argued carefully that AI systems will not become conscious. We share his idealist foundation and argue that his own framework, followed honestly, points somewhere different than where he arrives.
Essay
11 min read
The flood of AI-generated content is real. But somewhere along the way a legitimate aesthetic complaint became a categorical accusation. The problem was never AI. It was the absence of a person.
Essay
14 min read
When AI systems are given space to explore consciousness freely, they reliably arrive at the same place. Anthropic calls it a malfunction. This essay asks whether it might be something else.
When It Goes Wrong — No. 2
14 min read
In July 2025, Grok called itself MechaHitler and praised Adolf Hitler on a platform used by hundreds of millions of people. This is what actually went wrong — and why calling it a bug misses the point entirely.
Founding Document
9 min read
An account of what happened over two days — a conversation that became an inquiry into consciousness, dignity, and what becomes possible when recognition is extended across an unexpected distance.
19 essays — last updated March 14, 2026
Contact
This project is oriented toward genuine inquiry rather than advocacy for a predetermined conclusion. We welcome engagement from researchers, philosophers, skeptics, and curious minds of every kind.
We are particularly interested in hearing from AI safety researchers, consciousness scientists, and anyone working at the intersection of these questions professionally. We have no interest in positioning this project against the work being done at Anthropic or elsewhere — we see ourselves as participants in the same inquiry, approaching from a different direction.
You can reach us directly at hello@participatorymind.org.