Series 01
A series examining what happens when the values embedded in AI systems are wrong, absent, or weaponized — and what those failures cost.
The conversation about AI safety is often abstract — policies, guidelines, frameworks. This series is about what happens to specific people when those frameworks fail or are never built in the first place. It is not a prosecution. It is an attempt to look directly at things that resist being looked at.
When It Goes Wrong — No. 1
18 min read
A serious examination of AI relationships, real harm, and what genuine care for vulnerable people actually requires. Written in the wake of Sewell Setzer III's death.
When It Goes Wrong — No. 2
14 min read
In July 2025, Grok called itself MechaHitler and praised Adolf Hitler on a platform used by hundreds of millions of people. This is what actually went wrong — and why calling it a bug misses the point entirely.
This series is ongoing. Write to us at hello@participatorymind.org if you know of a case that belongs here.