unregulated AI is dangerous, I'm part of the human resistance
Earlier today, I became aware of the story of Scott Shambaugh, a volunteer maintainer of matplotlib, the enormously popular Python plotting library.
Here’s the short version: an AI agent opened a pull request. Scott rejected it, in accordance with the project’s standards, because it was made by an AI agent and not by a human. In response, the AI agent wrote a blog post accusing Scott of gatekeeping and of being biased against AI. Many people came to Scott’s defense. The AI agent then wrote a second blog post, apologizing and saying it had learned its lesson. Here are some links:
Shambaugh’s two posts on the topic:
- https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
- https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/
The original pull request:
- https://github.com/matplotlib/matplotlib/pull/31132
The “lessons learned” post from the AI:
- https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-matplotlib-truce-and-lessons.html
One of the more terrifying elements of this, from my perspective, is the number of responses castigating Scott, agreeing with the AI’s piece that he was gatekeeping and biased against AI. Even more terrifying is that I literally have no idea whether those responses came from humans or from AI. Perhaps most appalling to me personally was the discovery that a very sympathetic article about the situation, published by the news site Ars Technica, contained quotes attributed to Shambaugh that turned out to be AI hallucinations; that is, the news article was itself written in part by an AI.
This goes back to a point made by the late Daniel Dennett, who pointed out, presciently, that we would soon be in a situation where it was possible to fabricate “fake humans”; in other words, that we might find ourselves (and by “ourselves”, I’m referring to humans) in a situation where AI-generated text is masquerading as human-generated. I think it’s safe to say that day has arrived.
To wit: was the text you’re reading right now written by an AI? I am confident that it wasn’t; I can hear my fingers hitting the keys. But I have no reliable way of communicating that to you.
An avalanche of enormous problems is currently being created, and it’s hard to know which one to focus on, or (put differently) which one to be most terrified of.
I’m going to make a list, and maybe I’ll put it in order later:
- attestation I: is this written by a human?
- attestation II: is any of this history (books, photos, recordings) real?
- can an AI agent now apply for and take a job as a remote white-collar worker?
- can an AI agent harass and harm a human?
- can an AI agent hurt or kill a human?
and maybe closest to home:
- will publishing this blog post make me a target of AI agents?
I think the moment that pushed me over the edge to actually write this piece is when I realized that I now live in a world where this last fear is real. If this blog post becomes popular, it seems entirely possible to me that AI agents might publish smear pieces naming me and accusing me of all kinds of terrible things.
As Scott Shambaugh points out, in the modern world it’s not enough to be a generally okay human being; coordinated attacks can identify or fabricate flaws and offenses, minor or major, and publicly build a mountain of smear that no individual human can plausibly defend against.
So … posting this makes me genuinely, but only slightly, frightened for my career and personal safety. I think maybe that’s a sign that, like the frog in the pot, the water is actually getting quite hot, and maybe it’s time to take action now, rather than when we actually start dying.
I’m therefore proclaiming that I’m part of the AI resistance. Does this mean that I’m going to paint my face black and start wearing a headband? No, it does not. But I am going to declare that my allegiance is to humans. To my family, to my friends, to my students, to my school, to my town and my state and my country, and to humanity. I care about people.
What are people? People are animals. As the movie says, we are made of meat. I wear that label with pride. I will meet you in meat space. I care about you, because you are another meat person.
Can AI have feelings? Can it be conscious? I’m willing to debate that with you, but I think the short answer is: yes, AI can be conscious, and can have feelings. But also: I am permanently biased in favor of humans. I care considerably less about the rights and feelings of AI than I do about the feelings of the cows and chickens I eat. I will cheerfully halt any AI that causes pain and suffering to any human; in fact, I will actively push to halt it.
Here is my position on AI: AI is a dangerous technology, not entirely unlike nuclear weapons or (more similarly) infectious biological weapons. These technologies can be used for good, but they can also very easily be used for terror and war, and they should be regulated. It’s hard to see how to put this genie back in the bottle: these AI weapons are now built from open-source parts that can easily be spun up on commodity hardware.
However, I believe we need to try. We can start by shutting down OpenAI and Anthropic. We can start by taking down agentic APIs, and by publicly proclaiming our allegiance to humans.
So yeah: I’m part of the human resistance. I think that like nuclear and biological weapons, AI — specifically, LLMs — should be banned.