The Machines Can Write; They Just Can’t Mean It
There was a time when my words came out stiffly, as if they feared my readers would laugh if they dared to take up space. I remember practising my writing style with newspaper clippings and books when I started out as a philosophy student, and three semesters later, when I changed majors to political science. It began, I suppose, with confrontation. I wanted my arguments to hold, to stand taller than the arguments I attempted to contradict, and so in my early twenties, I read Christopher Hitchens—first for the music of his polemic, then for his depth. Here was a man who wrote as he spoke: every line sharpened from years of disputation, each phrase built to provoke. I started mimicking his cadence—reading drafts aloud, listening for dullness, learning that every phrase had to justify its existence or be mercilessly struck out.
It didn’t stop with Hitchens, of course. In 2016, I joined the founding team of Business Insider in Germany. I became a journalist, and now my words had to carry meaning and wit. Overnight, my writing had, depending on the topic, hundreds of thousands of readers. I learned that clarity was king—and that it took courage to let go of cleverness for its own sake. Language is a tool, not a trophy; that was the lesson. With time, this discipline rooted itself deeper; substance became inseparable from style.
Somewhere between deadline and daydream, conversation and revision, my own voice slipped in. But believe me, that took time. It brought the rough edges of argument, the relish for a well-turned aphorism, and an abiding impatience with anything unearned. So when I write now, it is with the knowledge that style is not just what decorates the argument—it is what carries it through the storm, lets it breathe, and makes it worth returning to, when the mist clears again.
That is why the arrival of large language models that can counterfeit voice unsettled me more than I expected or, initially, cared to admit. Today, anyone with a cursor and a prompt can make a sentence behave. The screen will fill with paragraphs that imitate rhythm and borrow authority. What, until large language models dropped, felt like a hard‑won craft to me now looks, at first glance, like a commodity: everyone is a writer, or at least everyone can summon something that passes for writing on demand. The old reassurance—“most people can’t write”—no longer holds when a machine can produce a decent op‑ed while you make coffee.
But “decent” is the point, and the problem. Artificial intelligence is good at arranging words into the shape of meaning, while remaining indifferent to whether there is any meaning there at all. Literature, more than journalism, has always demanded something harsher and more human. Franz Kafka insisted that a book must be an axe for the frozen sea within us. What he meant was that the books worth keeping are the ones that wound us awake. LLMs can stir the surface, remixing every sentence they have been fed; they cannot know what it costs to swing that axe, or what part of oneself gets shattered in the process.
The better readers already know this, even if they would not put it in so many words. Joan Didion wrote that we tell ourselves stories in order to live. Her point was diagnostic, not inspirational: stories are how we metabolise terror and contingency, not merely how we pass an evening. LLMs can help with the latter; they can stitch together anecdotes and arguments into a perfectly serviceable narrative. What they cannot do is decide what should be lived for, or what must be refused, or which private humiliation is worth dragging into daylight so that someone else might feel less alone.
So if the age of artificial intelligence has made everyone look like a writer, it has, in my experience, also made the real work more visible by contrast. The unique selling point is no longer the ability to produce tidy prose on command; the machine has closed that gap. It is the willingness to say something that could not have been written by anyone—or anything—else, because it is anchored in a particular life, with its particular scars, its unrepeatable pattern of compromises and refusals. Style, in that sense, is not just a way of sounding good in public. It is the record of what a person has come to believe, sentence by sentence, in full view of the storm. And this is exactly what writing my "Odds & Sods" has been for me over the past eleven months. This marks the 28th edition of my newsletter.
I have nothing to add.
Among many other things, so-called «generative AI» is a paranoia machine: It quite literally produces plausibility out of hallucinations
— Roland Meyer (@bildoperationen.bsky.social) 2025-11-19T15:22:29.622Z

Here are a few things I’ve been reading lately — not all of which I’d sign my name to, but each provocative enough to merit the time it takes to disagree with them.

An AI chatbot and image platform left millions of images exposed. They show what people are actually using the AI for: taking random women's yearbook, graduation, and social media photos and making super realistic hardcore porn with them. We are, of course, not surprised by this. Aya Jaff already warned us a few weeks ago that this would happen.
•

"Generative AI isn’t magic. It’s the product of millions of invisible creators—poets, coders, photographers, musicians—whose work has been harvested, scraped, & absorbed into AI models without consent or compensation…
This is not innovation. It is expropriation."
•

One reason LLMs are so popular is the promise of seemingly private, judgment-free interaction. But once users internalize that their data is being recorded, studied, or reused, it could shift how people think, speak, and even feel when using AI, says computer scientist Koustuv Saha.
You've reached the end. Thank you for reading!


