
What the AI Temperance Movement Gets Wrong

  • Writer: Gael MacLean
  • 5 days ago
  • 12 min read

When systems produce horrors nobody chose


[Image: A whale swimming through an old underground city]
AI art is not the crime.

A Companion to The Digital Infection Series

“Technology is neither good nor bad; nor is it neutral.” — Melvin Kranzberg, historian of technology, 1986

Here’s what keeps me up at night about AI—and it’s not the villains.


The villains I can handle. The tech executives who designed addictive interfaces. The bad actors generating deepfake revenge porn. The propagandists flooding the zone with synthetic misinformation. These people made choices. They can be named, shamed, regulated, prosecuted.


What terrifies me is something weirder. Something that emerged from AI systems but now operates beyond anyone’s intent.


Let me explain with an example that should make your skin crawl.


The AI Hiring System That Nobody Programmed to Discriminate

In 2014, Amazon began building an AI hiring algorithm. The goal was simple: predict job success. The engineers trained it on ten years of résumés submitted to the company. A decade of hiring decisions, performance reviews, promotions, retention rates. The system worked beautifully. It identified patterns humans missed. It was efficient, consistent, objective.


Except it wasn’t.


The AI learned that being female correlated with worse career outcomes. Not because women perform worse—but because women were historically denied promotions, excluded from mentorship, pushed out after having children. The system faithfully learned the industry’s discrimination and encoded it as predictive insight. It reportedly penalized résumés containing the word “women’s” or the names of certain all-women’s colleges.
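
To see how this happens without anyone coding it, here’s a minimal sketch in Python. It is not a reconstruction of Amazon’s system; the data is fabricated, and every name and number is an assumption, chosen only to show how a model trained on biased historical labels turns a proxy for gender into a “predictive” feature.

```python
# Minimal sketch of proxy discrimination. Illustrative only: fabricated
# data, not Amazon's system. "proxy" stands in for a résumé feature like
# the word "women's" that correlates with gender, not with skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)            # true job-relevant signal
proxy = rng.integers(0, 2, size=n)    # 1 = résumé contains the proxy term

# Historical labels: hiring tracked skill, but the proxy group was
# systematically passed over regardless of skill.
hired = (skill + rng.normal(scale=0.5, size=n) - 0.8 * proxy) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(dict(zip(["skill", "proxy"], model.coef_[0])))
```

Run it and the proxy coefficient comes out strongly negative. The model never sees gender. It never needs to. Historical exclusion is right there in the labels, waiting to be learned as insight.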


Amazon’s engineers might have been feminists. The company had diversity initiatives. Executives gave speeches about equity. Didn’t matter. The AI produced discrimination. Not despite working correctly, but because it worked correctly.


Amazon scrapped the project in 2017 after losing confidence they could fix it. When Reuters broke the story in 2018, it became the canonical example of algorithmic bias—but here’s what most coverage missed: there was no villain. No one programmed the system to discriminate. The engineers tried to remove the bias once they discovered it. They couldn’t guarantee they’d found all of it.


Who do you prosecute? Where do you march? What executive do you shame?


The harm is distributed across a million micro-decisions, each one defensible in isolation, collectively producing something monstrous. There’s no intent. There’s no villain. There’s just an AI system doing exactly what it was designed to do, generating outcomes nobody wanted.


That’s the emergent nightmare.


The AI Recommendation System That Radicalizes Teenagers

Here’s another one. A platform builds an AI recommendation engine. The goal is engagement. Keep users watching, clicking, scrolling. Standard business model. Nothing sinister in the boardroom.


But engagement, it turns out, has a shape. Outrage engages. Fear engages. Tribalism engages. The AI doesn’t know what outrage is. It just knows certain content patterns correlate with longer sessions. So it surfaces more of that content. And more. And more.
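
You can watch this drift happen in a toy model. Here’s a minimal sketch of an engagement-only recommender, an epsilon-greedy bandit, under one invented assumption: that more extreme content holds attention slightly longer. Every name and number is made up; no real platform is this simple.

```python
# Minimal sketch of an engagement-only recommender (epsilon-greedy bandit).
# Toy assumption: mean watch time rises with "extremeness". Nothing in the
# objective mentions extremism; the drift is emergent.
import random

def watch_time(extremeness):
    return random.gauss(mu=1.0 + extremeness, sigma=0.3)

levels = list(range(10))                  # content buckets, mild -> extreme
estimates = {lvl: 0.0 for lvl in levels}  # running mean watch time per bucket
counts = {lvl: 0 for lvl in levels}

for _ in range(5_000):
    if random.random() < 0.1:             # occasionally explore
        lvl = random.choice(levels)
    else:                                  # otherwise exploit best engagement
        lvl = max(levels, key=estimates.get)
    reward = watch_time(lvl)
    counts[lvl] += 1
    estimates[lvl] += (reward - estimates[lvl]) / counts[lvl]

print(max(levels, key=estimates.get))     # settles at the extreme end
```

After a few thousand steps, the exploit branch settles at the top of the scale. The word “extremism” appears nowhere in the code. It doesn’t have to.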


A fourteen-year-old searches for fitness videos. The AI notices he watches longer when the content is more extreme. It serves increasingly intense material. Bodybuilding becomes biohacking becomes men’s rights becomes incel communities. The radicalization pipeline isn’t designed—it’s emergent. A pattern the AI discovered, not one any human programmed.


The content moderators are trying to remove extremism. The engineers genuinely don’t want to radicalize teenagers. The CEO would be horrified if he understood what was happening. But the AI optimizes for engagement, and radicalization is engaging, so radicalization is what you get.


No conspiracy. No intent. Just optimization functions doing their job.


Why AI Is Different From Every Technology Before

Fire can warm you or burn you. The printing press produced both the Bible and propaganda. The internet enabled global communication and industrialized harassment. Every technology gets weaponized by the worst among us. That’s the depressing constant of human history.


But those abuses required intent. Someone had to decide to use fire for arson. Someone chose to print propaganda. Someone typed the harassing message.


AI introduces something genuinely new: systems that produce harm as an emergent property. Not because anyone chose the harm. Not because evil people subverted good tools. The harm arises from optimization processes that nobody fully understands, pursuing objectives that seemed reasonable, generating consequences that nobody anticipated.


The deliberate abusers are actually the easier problem. They’re identifiable. Prosecutable. Subject to social sanction. You can write laws against deepfake revenge porn. You can ban accounts that spread misinformation. You can punish the humans who choose to do evil.


But how do you regulate an emergent property? How do you prosecute an optimization function? How do you sanction a pattern that exists in the interaction between AI and society, reducible to no individual decision?

“We shape our tools and thereafter our tools shape us.” — John Culkin, media scholar, 1967

We built these systems. Now they’re building us. And not in ways anyone chose.


The Temperance Movement Gets It Backwards

And yet.


What I keep encountering—in op-eds, at dinner parties, in legislative hearings—is something I can only call AI temperance. A reflexive prohibition instinct that treats all AI as inherently corrupting, all algorithms as necessarily evil, all automation as essentially dehumanizing.


These people make me tired.


Not because they’re wrong to be concerned. They’re not. The emergent harms are real and terrifying. But their analysis is so crude, so uninterested in precision, so determined to ban rather than understand.


They haven’t spent an hour learning how these systems actually work. They don’t know the difference between a large language model and a recommendation algorithm. They think “AI” is one thing rather than a thousand different technologies with a thousand different risk profiles. They want to regulate a field they’ve made no effort to comprehend.


Marshall McLuhan saw them coming sixty years ago:

“Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot.”

He was criticizing both the naive “it’s just a tool” crowd and those who refuse to understand what they’re critiquing. The temperance movement falls squarely into the second camp.


And here’s what really galls me: they’re using AI every day. The spam filter protecting their inbox. The fraud detection on their credit card. The voice recognition on their phone. The search algorithm that surfaces their information. They’re swimming in AI and don’t even know it. Their moral panic is directed at a cartoon villain while the actual technology has already woven itself into the infrastructure of their lives.


It’s like demanding we ban “chemicals” while drinking water.


What the Temperance Crowd Gets Right (But Misdiagnoses)

I should be fair. The strongest version of the temperance argument isn’t “AI is inherently bad.” It’s “AI controlled by five companies is dangerous regardless of the technology.”


This is actually correct. The concentration of AI capability in a handful of corporations—with their own interests, their own biases, their own profit motives—is a genuine structural problem. When the same companies that created addictive social media now control the most powerful AI systems, that’s worth worrying about.


But the temperance crowd misdiagnoses this as a technology problem when it’s actually a power problem. You don’t solve concentration of power by banning the technology. You solve it through antitrust enforcement, public investment in alternatives, open-source development, data rights, and democratic oversight.


Prohibition doesn’t redistribute power. It just ensures that only the already-powerful can access the technology—because they’ll build it anyway, in jurisdictions that don’t prohibit it, while everyone else falls behind.


The temperance crowd’s instinct toward prohibition is actually a gift to the oligarchs they claim to oppose.


The Jobs Question

There’s one temperance concern I take seriously: labor displacement.


AI will eliminate jobs. Some of those jobs won’t come back. The transitions will be brutal for individual workers, and the benefits will flow disproportionately to capital owners. This is a real problem requiring real policy responses—job retraining programs, portable benefits, strengthened safety nets, perhaps even more radical redistribution mechanisms.


But notice: none of those solutions require banning AI. They require managing the transition, distributing the gains more equitably, protecting the vulnerable during disruption. The Luddites weren’t wrong that the textile machines would destroy their livelihoods. They were wrong that smashing the machines would solve the problem.


The jobs question is serious. But it’s a question about economic policy, not technology policy. Conflating them—demanding we halt AI development rather than addressing the distributional consequences—is how the temperance movement ensures that workers get neither protection from disruption nor access to the benefits.


What the Porn Industry Taught Us About Technology

Let me tell you a story about the internet.


In the early days, online commerce was clunky. Payment processing was insecure. Video streaming barely worked. Bandwidth costs were prohibitive. The infrastructure for the digital economy we now take for granted simply didn’t exist.


You know who built it? The porn industry.


They pioneered secure credit card processing because they had customers who needed anonymity. They solved video streaming because their product demanded it. They invested in bandwidth optimization because their business model required high-quality delivery. They figured out subscription models, content delivery networks, user verification systems.


Today, that same infrastructure powers telemedicine. Online education. Video calls with your grandmother. The financial systems your bank depends on. The streaming service where you watch nature documentaries.


The origin doesn’t contaminate the utility.


Every time someone suggests we should have banned those early technologies because of how they were first used, I want to ask: Should we have banned the printing press because of the heresy it enabled? Should we have prohibited film because of the propaganda it amplified? Should we have stopped the development of radio because dictators used it for mass manipulation?


The tools that get built for questionable purposes become the infrastructure for everything else. The early adopters are often unsavory. The technology transcends them.


AI Art Isn’t the Crime

Here’s something that makes the temperance crowd apoplectic: I think AI-generated art is art.


Not all of it. Most of it is garbage, same as most human-generated art is garbage. Sturgeon’s Law applies to every medium:

“Ninety percent of science fiction is crud. That’s because ninety percent of everything is crud.” — Theodore Sturgeon

If I were updating Sturgeon for the AI age: Ninety percent of AI output is slop. That’s because ninety percent of everything is slop.


What is slop? Content with no point of view. No artist embedded in the process. No human intent shaping the output.


But when an artist uses AI as a tool—refining prompts, iterating on outputs, compositing elements, incorporating AI-generated components into larger works—that’s creative expression. The artist is making aesthetic choices. The artist is embedded in the process. The human intent is present in the work.


The people screaming that AI art isn’t “real” art remind me of the painters who said photography wasn’t art. I was that photographer; I heard the sneers. The photographers who said digital manipulation wasn’t art? I was there for that war too, Photoshop and all. The traditionalists who said electronic music wasn’t music? Just me and my Kurzweil, apparently destroying culture while creating new soundscapes. The purists who swore amateur video cameras would destroy the film industry? Now we have YouTube, and somehow cinema survived.


I’ve been the barbarian at the gate many times already. The gate kept moving. The barbarians kept becoming the establishment. And the art kept getting made.


Every new tool threatens existing practitioners. Every expansion of creative possibility feels like an attack to those who mastered the previous constraints. This is how it’s always been.


And here’s what those tools actually did: they democratized creation.


Digital effects let a small team conjure ancient Rome for audiences who’ll never walk the Forum. Sample libraries let a single composer score a film with a full orchestra. No session musicians, no studio rental, no six-figure budget. Motion capture turns one actor into an army. These tools brought the magic of stories from the distant past to impossible futures, and made that magic accessible to creators who couldn’t afford what the studios could.


AI is the next tool in that kit. And for the first time, these capabilities are accessible to people without industry connections or investor backing. A teenager with a laptop can now produce what required a production company ten years ago.


The temperance crowd sees democratization and calls it theft. They see accessibility and call it degradation. They’re protecting guild boundaries, not art.


Here’s what AI won’t do: replace the Louvre. No one’s going to stand in line for hours to see a well-crafted prompt. The spaces where humans seek transcendence will remain human. So if you’re an artist anxious about AI, redirect that energy. Stop fighting the tool. Beat Sturgeon’s Law. Make art that belongs in the ten percent.


The crime isn’t using AI to make images. The crime is training AI on copyrighted work without permission or compensation. Those are different problems requiring different solutions. But the temperance crowd doesn’t distinguish. They want prohibition rather than precision.


Two Completely Different Realities

What frustrates me most is this: we’re not even having the same conversation.


On one side, people who understand AI—its capabilities, its limitations, its actual risk profiles. Who recognize that some AI applications are transformative goods: medical imaging catching tumors radiologists miss, protein folding predictions accelerating drug discovery, accessibility tools giving sight to the blind and hearing to the deaf. Who can distinguish between a recommendation algorithm optimizing for engagement and a diagnostic system optimizing for accuracy. Who want precise regulation targeting specific harms.


On the other side, people who’ve absorbed vibes. Who know AI is somehow threatening but couldn’t explain how. Who see the word “algorithm” and think “manipulation.” Who want blanket prohibitions because granular policy is hard.


These aren’t positions that can be reconciled through debate. They’re not even disagreements about values. They’re disagreements about facts. About what AI is and how it works. And one side has made no effort to learn.


The Synthesis: How Deliberate Abuse Became Emergent Harm

So here’s where we are.


The architects of the attention economy—the ones I wrote about in The Digital Infection—made deliberate choices. They designed addictive interfaces. They optimized for engagement over wellbeing. They harvested children’s behavioral data. They knew what they were doing. They chose it anyway.


But something happened that even they didn’t fully anticipate.


The AI systems they built started producing harms they didn’t design. The recommendation engines began radicalizing users through pathways no human programmed. The predictive models started encoding discrimination no engineer intended. The optimization functions discovered patterns—radicalization, polarization, fragmentation—that served their metrics while destroying the social fabric.


The original sin was intentional. The descendants are emergent.


We now have AI systems that colonize consciousness through mechanisms their creators don’t fully understand. The deliberate exploitation documented in The Digital Infection created the infrastructure for emergent exploitation that exceeds anyone’s intent.


This is worse, not better. It means the villains can’t fix it even if they wanted to. The machine has learned patterns beyond their control. Turn it off and the economy collapses. Leave it running and the emergent harms continue.


What Smart AI Regulation Actually Looks Like

The temperance crowd wants prohibition. The tech libertarians want nothing. Neither is serious.


Smart regulation targets specific harms with specific interventions. And contrary to what both extremes claim, this isn’t theoretical—it’s happening.


The EU AI Act, which entered into force in August 2024, represents the first comprehensive AI regulatory framework. It bans certain applications outright—social scoring, real-time biometric surveillance in public spaces, AI designed to manipulate behavior. But it doesn’t ban AI. It creates risk categories, requires transparency and auditing for high-stakes systems, and establishes accountability without demanding prohibition.


In the United States, the patchwork is messier but instructive. Colorado’s AI Act, set to take effect in 2026, requires impact assessments for high-risk AI systems, mandatory disclosure when AI makes consequential decisions, and legal liability for algorithmic discrimination. California has enacted over a dozen targeted AI laws addressing everything from election deepfakes to employment discrimination to training data disclosure. Over a thousand AI bills were introduced in state legislatures in 2025 alone.


Some of these interventions will fail. Some will need revision. But they represent precisely what the temperance crowd claims is impossible: granular regulation that targets specific harms without requiring blanket prohibition.


Here’s what smart regulation looks like in practice:


For AI discrimination:

Mandatory auditing of high-stakes automated decisions. Required human review for consequential outcomes. Legal liability for disparate impact regardless of intent. (A minimal sketch of such an audit follows this list.)


For AI-driven radicalization:

Transparency requirements about optimization objectives. Circuit breakers when engagement patterns correlate with extremism. User control over algorithmic curation.


For AI-generated deception:

Mandatory disclosure of synthetic media. Criminal penalties for malicious deepfakes. Investment in detection infrastructure.


For children specifically:

Age-gated design requirements. Prohibition of engagement optimization for minors. Data minimization mandates. Meaningful consent requirements.
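
To make “auditing for disparate impact” concrete, here’s a minimal sketch of the four-fifths rule long used in US employment law: flag any group whose selection rate falls below 80 percent of the best-off group’s. The function names and toy data are mine, an illustration of the kind of check an auditor runs, not any regulator’s reference implementation.

```python
# Minimal four-fifths-rule disparate impact check. Illustrative only:
# function names, threshold handling, and data are assumptions.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, chosen = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += int(selected)
    return {g: chosen[g] / totals[g] for g in totals}

def four_fifths_flags(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_flags(decisions))  # {'A': False, 'B': True}: B flagged
```

Note what the check doesn’t ask: intent. It measures outcomes, which is exactly the point when the harm is emergent.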


Notice what these have in common: precision. Each intervention targets a specific harm through a specific mechanism. None requires banning entire categories of AI. None pretends that prohibition would work even if it were desirable.


The Uncomfortable Position

This puts me in an uncomfortable position. I’ve spent four articles documenting how Silicon Valley is colonizing children’s consciousness. I’ve called for radical resistance. Screen-free childhoods, educational rebellion, parallel systems outside the machine.


And now I’m defending AI? Criticizing the temperance movement? Arguing that the technology isn’t the enemy?


Yes. Both things are true.


The deliberate exploitation of children’s developing brains is a crime that demands resistance. The emergent harms of AI systems demand regulation. But AI also helps diagnose cancers, translates languages, composes music, reveals protein structures, extends human capability in ways we’re only beginning to imagine.


The problem isn’t AI. It’s the specific way certain AI systems have been deployed against specific vulnerabilities for specific gains. Precision matters. Understanding matters. Knowing what you’re fighting matters.


The temperance crowd wants to feel righteous. I want to win. And winning requires understanding the actual landscape, not tilting at cartoons.


Where This Leaves Us

So here’s the uncomfortable truth I’ve arrived at:


The villains I documented in The Digital Infection series—they’re still villains. The deliberate design choices that created addictive interfaces, the intentional targeting of children, the knowing exploitation of neurological vulnerabilities—all of that deserves condemnation and sanction.


But they created something that now exceeds their control. The emergent harms of AI systems will continue even if every tech executive suddenly developed a conscience. The discriminatory patterns encoded in training data will persist even with the best intentions. The recommendation systems that discovered radicalization pathways will keep discovering new ones.


We need accountability for the deliberate crimes. And we need structural interventions for the emergent harms. These are different problems requiring different solutions.


The temperance crowd can’t distinguish between them. The tech libertarians won’t acknowledge either. Both are useless.


What we need are people who understand AI deeply enough to target interventions precisely. Who can distinguish between tools and traps. Who recognize that the same technology that radicalizes teenagers can also catch tumors.


Who can hold two truths simultaneously: that AI has produced genuine wonders, and that it’s also produced emergent nightmares that nobody chose but everybody suffers.


That’s the position. It’s uncomfortable. It satisfies nobody.


But it’s true.

“There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” — Marshall McLuhan

The temperance crowd won’t contemplate. They’ll only panic. That’s why they’re useless, and why understanding matters more than ever.



This piece is a companion to The Digital Infection series. Start here: Why your kid’s iPad matters more than the next election.


The original series documents the deliberate exploitation of children’s cognitive development by technology platforms. This companion piece examines how that deliberate exploitation created AI systems that now produce emergent harms beyond anyone’s intent—and why both require different but equally urgent responses.


©2026 Gael MacLean

