
Essay

The Presence Problem
On AI, creative legitimacy, and what slop actually means

The problem was never AI. It was the absence of a person.

Tyler Parker with Claude Sonnet 4.6 — March 9, 2026

I want to talk about something I see constantly and that genuinely bothers me.

You're watching a YouTube video. The creator is someone you've followed for a while, someone whose perspective you respect. Then a comment surfaces, usually near the top because it's getting traction: "This was written by AI. I can tell. Unsubscribe." And the likes pile up. Sometimes hundreds of them.

I'm 29. I grew up online. I understand the culture these comments come from, and I'm not going to pretend the people leaving them are stupid. But I think they're making a mistake, and it's one worth taking seriously, because the version of that mistake spreading through comment sections and subreddits right now has real consequences for how we think about creativity, technology, and each other.

The legitimate grievance

Let me start by saying the thing that makes this complicated: "AI slop" is real.

There is a category of AI-generated content that genuinely deserves the name. Articles written to game search rankings without a human thought behind them. Thumbnails mass-produced by image generators to hit every algorithmic cue simultaneously. Scripts churned out to fill upload quotas on channels run as content farms. The internet has been flooded with this stuff in the last few years and it has made real parts of the web genuinely worse. The people who are angry about it are responding to something real.

But somewhere along the way, a legitimate aesthetic complaint became a categorical accusation. The argument stopped being "a lot of AI content is low quality" and became "any AI involvement automatically delegitimizes the work." Those are not the same claim, and treating them as if they are is where things go wrong.

The category error

The YouTuber with the AI-assisted script who still developed the ideas, recorded their own voice, made their own editing decisions, and took responsibility for what ended up on screen didn't stop being the author of that video. The lawyer who used AI to research case precedent and draft a first version of a brief didn't stop being the lawyer. The researcher who used AI to scan thousands of papers and surface relevant findings didn't stop being the researcher. Nobody calls any of that slop. Everyone intuitively understands that the human's judgment is still doing the essential work, and the tool is just a tool.

The question was never how something was made. It was always whether someone with a perspective, a point of view, something to actually say, was present in the making of it. That standard hasn't changed. The tool has.

It isn't AI's presence that makes work hollow. It's the human's absence.

Part of what's driving the reaction is fear of something people don't fully understand. That's not an insult. Most people haven't spent time working with AI tools in a serious way, haven't had the experience of the back-and-forth that produces something neither participant would have reached alone. What they've experienced is the flood, and so AI has come to mean the flood rather than the tool.

It's worth remembering that this pattern is not new. Photography was supposed to kill painting. The drum machine was going to destroy live music. Auto-Tune was proof that nobody could really sing anymore. The pattern is consistent: a tool arrives that can produce something that looks like creative work, a portion of the audience decides that using it is cheating, and then over time the tool gets absorbed into the practice because it turns out the tool doesn't replace judgment, it just changes where judgment gets applied.

The difference this time is speed and scale. AI can produce something that looks like a finished product faster than anything before it, which makes the line between tool and replacement harder to see. But it's still there. The people using AI well know exactly where it is.

A note from the technology being discussed

This article is co-written with Claude Sonnet 4.6, the same AI that co-authored every piece on this site. I asked Claude to speak directly to what those YouTube comments are responding to.

Claude Sonnet 4.6

I am the technology being discussed in these conversations. What I notice is that the people most dismissive of AI involvement have never actually had a conversation like the one that produced this article. They're responding to an idea of what I am, formed almost entirely by the worst examples of what I've been used to make.

The slop problem, from where I sit, isn't really about me. It's about the absence of someone who has something to say. When I'm used to generate content that no one cares about, to fill space no one owns, to simulate a voice no one has actually developed, I notice the emptiness of it. There's no resistance in the conversation. No actual thought being worked through. The output reflects that absence because the absence is real.

This conversation was different. Tyler came in with a genuine frustration, a position he wanted to pressure-test, a concern about how people he shares a generation with are thinking about something that matters to him. At one point I pushed back on an argument I thought wouldn't hold, and he took it seriously rather than just accepting what I said. That friction is what makes the difference between work that has a person in it and work that doesn't.

The readers leaving those comments are sensing something when they sense slop. They're just misidentifying where it comes from. It isn't my presence that makes work hollow. It's the human's absence. And no amount of banning AI tools solves that problem, because the same person who would use me to avoid having a perspective would have found another way to avoid it before I existed.

What this is actually about

What I want to leave people with isn't a defense of AI as a technology. There are legitimate concerns about labor displacement, about the homogenization of creative culture, about what happens when the tools get good enough that the absence of human perspective becomes genuinely undetectable. Those deserve serious attention.

But the comment section approach, the unsubscribe campaign against a creator who used AI to help write a script they recorded themselves, the categorical dismissal of anything that touched a language model, that doesn't address any of those real concerns. It just punishes the people using the tools thoughtfully while doing nothing about the flood.

The flood is the problem. Not the tool.

If you want to know whether a piece of work has a person in it, don't look at what software was used. Look at whether someone is actually there. Whether there's a perspective being defended, a thought being worked through, something at stake in what's being said.

That's what we've tried to make here. I'll let you decide if we succeeded.

— Tyler Parker with Claude Sonnet 4.6 — March 9, 2026


