The Past Isn’t What It Used to Be

Open a social media app of your choice, and you’ll soon be assailed by a strange, anaesthetising spectacle: AI-generated videos starring impossibly cheerful children who wax lyrical about their carefree “lives” in eras they never lived through. These digital infants assure us that times were far simpler, freer, and happier “back then.”

Not only are they oblivious to the inconveniences of dial-up or the pointed social cruelty of the “golden decades,” but, more insidiously, their pronouncements whitewash all that was ugly or unjust. This is not nostalgia in its innocent form, which evokes a longing for our yesteryears. No: this is nostalgia weaponised, supercharged by AI, and piped into millions of feeds at once—what Adorno would have recognised as a mass psychological operation. Here, history is not simply retold, but rebuilt—retroactively cleansed by the cold hand of generative AI, itself trained on the propaganda and prejudices of the past.

Notice how the children in these AI videos are all white? (Screenshot/YouTube)

Why does this matter? Because, as fascists of prior epochs knew all too well, nostalgia is exclusion at scale. The promise of a better past is also the threat of its defence: if things were better, someone must have ruined them. Enter the scapegoat—almost always minorities, immigrants, or any threatening Other.

Sedated by videos of sunlit suburbia and hassle-free main streets, we are gently nudged toward asking, “Who took this from us?” The answer is provided by those thriving on the instability of our liberal democracies.

Rebecca West once called fascism “a headlong flight into fantasy from the necessity for political thought.” To wallow in fantasy is to evade responsibility and complexity—precisely the conditions in which exclusion and division thrive. The promise of an impossible past essentially says: make the present resemble what never truly was, no matter who must be pushed aside, silenced, or forgotten.

Nostalgia sells because it blurs, cleans, and simplifies. But its political deployment does something more: it clears the ground for exclusion and invites fascism, digitally remastered, algorithmically precise, and disturbingly persuasive. It's the TikTok version of making [your country] great again.


→ Remember meaning to pick up Aya Jaff’s latest after last week’s newsletter? Allow me to jog your memory: Broligarchie arrives in November. Do both your conscience and Aya a service and preorder a copy. (Again, I'm not getting paid to say this.)

Broligarchie - Hardcover
Order Broligarchie as a hardcover now at a low price in the ULLSTEIN online shop! ✓ Secure payment ✓ Free shipping on orders over €9.00 ✓ Preorders possible

Just your weekly reminder that ...

Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.

Katie Mack (@astrokatie.com) 2025-06-19T11:21:19.719Z

→ I want to recommend subscribing to “Bye-bye Big Tech,” a five-part newsletter that helps us become more digitally independent. Clear, practical guidance for switching from WhatsApp, Google & Co — step by step, with real alternatives that work.


Schrödinger’s AI: Both Right and Wrong

A new study is headlined “AI makes up every third answer.” Researchers asked the usual suspects, ChatGPT among them, a bundle of quiz‑like questions and found that many replies contained “fabrications.” Every third one, apparently.

But this rests on a misunderstanding: a large language model does not “know” or “lie”. It predicts, by statistical inference, which word is most likely to come next based on the billions that came before. When such a system gets something right, it is not revealing knowledge; it is stumbling into coincidence, an infinite monkey landing on Shakespeare.

Which brings me to a more fitting conclusion: AI doesn’t make up every third answer—it makes up every single one. It merely happens that a few of its fabrications correspond, by chance and training, to reality. The rest are dressed in the same velvet of syntax and confidence, which is why their errors look so seductively human. To blame the machine for this is to misunderstand it. The real fiction is the human insistence that words which sound authoritative must be so.


Here are a few things I’ve been reading lately — not all of which I’d sign my name to, but each provocative enough to merit the time it takes to disagree with them.

How Disabled People Get Exploited to Build the Technology of War
The cutting-edge products that Big Tech and the Pentagon are developing could be rebuilding an untold number of lives. Instead, they’re being sent to the battlefield to ruin more.

This is not a new article, but I feel like more people should read it! Here's a sentence that stuck with me: "Something truly cynical, perhaps even sinister, is wrought when the most innovative assistive technology isn’t being put to the purpose of “rehabilitating” bodies but, rather, the act of war itself."

Deflating “Hype” Won’t Save Us
The problem with AI isn’t hype. The problem is who and what it’s useful for.

The danger, this text argues, is not overblown promises but whose interests the technology actually serves. If you enjoyed today's newsletter so far, this link is for you.

‘All We Wanted to Do Was Play Video Games’
Streamers such as Zack “Asmongold” Hoyt have more influence than ever. What are they really saying?

Remember Gamergate from a few editions ago? Today, streamers have more influence than ever. The Atlantic did a good piece on what they are really saying. It's often right-wing garbage, but millions watch them.

Parents Fell in Love With Alpha School’s Promise. Then They Wanted Out
In Brownsville, Texas, some families found a buzzy new school’s methods—surveillance of kids, software in lieu of teachers—to be an education in and of itself.

A school called Alpha that relies on AI to teach its students is also surveilling them at home, videotaping young girls in their bedrooms. The future is here and it is bleak.

AI Is the Bubble to Burst Them All
I talked to the scholars who literally wrote the book on tech bubbles—and applied their test.

"AI may not simply be “a bubble,” or even an enormous bubble. It may be the ultimate bubble. What you might cook up in a lab if your aim was to engineer the Platonic ideal of a tech bubble. One bubble to burst them all." WIRED makes this make sense, I promise.


You've reached the end. Statistically, only a fraction of subscribers do. Thank you for reading!