The Stochastic Parrot vs. the Sophistic Monkey: Why I Trust AI More Than Most Humans
In a world drowning in human bias and misinformation, artificial intelligence offers surprisingly reasonable answers.
I trust AI more than most humans. That’s not because the average machine is so brilliant. It’s because the average human is so stupid.
Many humans—bless their hearts—are freaking out about “AI hallucinations.” As if humans don’t confidently hallucinate entire religions, belief systems, conspiracy theories, and process improvement frameworks every single day. Compared to the average human’s cocktail of bias, ignorance, and misplaced confidence, these AIs are practically philosophers.
Asking the Machines
Let me give you an example.
The other night, I had a cosmology question—one of those “wait, how does this make sense?” moments that Stephen Hawking probably would have rolled his eyes at. I’ve been diving into books such as A Brief History of Time, The Fabric of the Cosmos, and The Inflationary Universe—you know, light bedtime reading—and I stumbled over a paradox:
If, according to Einstein, all motion is relative, then how the hell can scientists talk about the absolute velocity of Earth, our solar system, or even our galaxy when measured against the radiation of the cosmic microwave background (CMB)?
Now, if I had thrown that question into my local neighborhood group chat or onto Facebook or TikTok, the odds of getting a coherent answer would have been roughly the same as finding intelligent life on Mars. My friends? Predictably useless. My family? Adorably clueless. And scientists like Brian Cox or Neil deGrasse Tyson are tragically unavailable to clear up my late-night cosmological quandaries.
So, I did what any 21st-century human does when they seek knowledge without judgment: I consulted an LLM. And it must be said, Gemini delivered the answer I craved: simple, clear, and with neither a confused chuckle nor a condescending sigh. I won’t bore you with the physics, but let’s just say I found the explanation accurate and elegant.
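For those who do want the one-formula version, here is my own gloss of the standard resolution (so blame me, not Gemini, for any slips): relativity forbids absolute motion with respect to space, but nothing forbids measuring motion relative to the CMB itself. That motion shows up as a temperature dipole on the sky:

ΔT / T ≈ (v / c) · cos θ

The radiation looks slightly hotter in the direction we are heading and slightly cooler behind us. Plug in the measured dipole of about 3.4 mK against the 2.725 K background, and you get roughly 370 km/s for the solar system: a velocity relative to a convenient cosmic reference frame, not to absolute space, so Einstein can rest easy.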
That’s become the pattern lately. Whether it’s cosmology, philosophy, complexity, or just the dread of discovering a suspicious-looking spot on my shoulder, the answers I get from AIs are consistently better (or maybe I should say less wrong) than whatever I’d get from the average human in my social circle.
“The answers I get from AIs are consistently less wrong than whatever I’d get from the average human in my social circle.”
Bias and Hallucinations in AIs
On social media, whenever I talk about how useful AI can be, the usual suspects show up: the technophobes, the digital doomsayers, the self-appointed prophets of “responsible AI.” They clutch their pearls and remind me, in the same tone people use to warn everyone about social media rotting our brains and computer games ruining teenagers, that “the machines are biased” and “they hallucinate,” followed by the inevitable: “They’re just stochastic parrots!” As if that phrase alone grants them a PhD in smugness.
Sure, they’re right, technically speaking. Large Language Models are biased, fallible, and overconfident. I’ve often said LLMs are like politicians: they always have an answer, they crave everyone’s approval, and they can be spectacularly wrong with stunning confidence. The average political leader is practically a walking AI, minus the energy-guzzling data center.
“LLMs are like politicians: they always have an answer, they crave everyone’s approval, and they can be spectacularly wrong with stunning confidence.”
That’s exactly why we keep raising the bar for the machines. Every benchmark, safety layer, and alignment model is our way of teaching them a shred of epistemic humility. We audit their biases, fine-tune their judgment, and retrain them endlessly. Unlike the architecture of the human brain, AIs are continuously improved. Their flaws aren’t natural failings—they’re engineering challenges. And we’re getting better at solving them every single day.
I wish we did the same in politics.
Bias and Hallucinations in Humans?
While we’re busy auditing and benchmarking algorithms for micro-biases and hallucination rates, nobody is willing to raise the intellectual bar for humans to a similar level of coherence.
“Nobody is willing to raise the intellectual bar for humans to a similar level of coherence.”
We seem perfectly fine letting our un-updatable human brains run wild in our social echo chambers. We nod along to friends, family, and followers who still think “doing my own research” means watching two YouTube videos and a handful of funny memes.
We hang onto every word of politicians and social media influencers who couldn’t explain the difference between gender and biological sex if you offered them a coupon for a lifetime of free marketing campaigns; who think import tariffs are a punishing fee for foreign governments rather than a tax on their own fellow citizens; and who genuinely believe that immigration, not incompetence, is the root cause of all national misery.
Stupidity is an understatement.
If I ever want to experience real, high-octane nonsense, I just open the news to see what the humans are talking about. There I’ll find elected officials proudly denying climate science, mangling economic laws and principles, and sprinkling conspiracy theories like parmesan on a pile of policy spaghetti. Here in the Netherlands, we’re in election season again, because the last government was so hopelessly incompetent it couldn’t even agree on which brand of bullshit to peddle to its voters.
And if that’s not enough, there’s always the digital carnival we attend every day: Facebook groups, TikTok videos, and Telegram channels where people pass around misinformation like a bowl of free M&M’s—colorful, addictive, and most definitely bad for our health.
So yes, AIs hallucinate. But for Olympic-level delusion and disinformation, we don’t need machines. We’ve already got plenty of humans debating each other on talk shows and at dinner tables.
“For Olympic-level delusion and disinformation, we don’t need machines. We’ve already got plenty of humans debating each other on talk shows and at dinner tables.”
Social Media Debates
And on social media, of course.
Every time I post something positive about AI, the comments section quickly fills up with people auditioning for a Logical Fallacies 101 textbook.
First comes the Straw man fallacy: “So you just do whatever the AI tells you?” No. I still have my own brain and a healthy dose of critical thinking. I do whatever makes the most sense to me.
Then the Slippery slope crowd warns me that asking Gemini a question about cosmology could somehow end with robots ruling the planet. That’s a bit like saying two chocolates a day will make you obese.
Next up, the Appeal to nature: “But humans are creative, empathetic, and sensitive!” Indeed, they are. They also eat and poop, but I don’t see exactly how that relates to their reasoning capabilities.
And, of course, the False equivalence: “AIs and humans both make mistakes, so they’re not much different from each other.” Right. Like claiming a computer and a laundromat are equally good at calculations because both can suffer from power failures.
Finally, when people’s arguments fail to impress me, there’s always the inevitable Ad hominem: “You just hate people.” Well, not everyone. But I certainly dislike those who make other people’s lives miserable by peddling their bullshit.
Most of these social media “debates” aren’t about reason at all. They’re about ego. People aren’t defending logic; they’re defending the fragile belief that machines cannot replace them.
But what hope does the average human have when AIs already beat them at basic reasoning?
The Art of Selection
Speaking of biases…
Some people have accused me of Selection bias: “You’re comparing the average AI to the average human,” they say, “when you should compare it to genuine experts.”
Sure. If I could text a cosmologist at 10 pm, I’d absolutely pick them over the algorithm. If Brian Cox were on speed dial, Gemini could twiddle its digital thumbs. If a certified therapist lived in my guest room, I wouldn’t be asking ChatGPT for emotional triage. But that’s not how the real world works. Experts are rare, expensive, and—tragically—human. They sleep, eat, travel, and have better things to do than answer my existential questions about quantum foam or organizational design.
It’s an interesting case of reverse Availability bias: comparing what’s ideally available to what’s actually accessible for most of us. The right benchmark isn’t “AI versus Einstein.” It’s “AI versus your brother-in-law who once watched a Neil deGrasse Tyson video.”
And in that comparison, the machines win practically every time.
AI is the next best thing when the best thing isn’t around—which, let’s be honest, is most of the time. When I need health information as I’m puking my guts out over a toilet at 2 am, I’ll take the probabilistic reasoning of a large language model over the frantic improvisation of my next-of-kin humans. The future isn’t about replacing experts—it’s about replacing everyone else who doesn’t know what they’re talking about. And yes, I might get around to calling the doctor in the morning.
“AI is the next best thing when the best thing isn’t around—which, let’s be honest, is most of the time.”
Do you like this post? Please consider supporting me by becoming a paid subscriber. It’s just one coffee per month. That will keep me going while you can keep reading! PLUS, you get my latest book Human Robot Agent FOR FREE! Subscribe now.
The Age of Critical Thinking
People often ask me, “What’s the most important skill for the age of AI?” And I always say it without hesitation: critical thinking.
You need critical thinking to know which questions to ask an AI. You need it to spot when the chatbot is bullshitting you with a confident hallucination dressed up as truth. And you definitely need it to survive the AI-generated sludge now oozing everywhere across social media.
But none of that compares to the sheer intellectual stamina required when dealing with actual humans. ChatGPT, Claude, and Gemini might occasionally hallucinate, but scroll through your news feed for ten minutes and tell me who’s really the master of misinformation. Humans remain the undefeated world champions of confident stupidity. They don’t even need large language models to produce vast quantities of nonsense—they’re perfectly capable of generating it themselves.
And as for my own field—organization design and the future of work—don’t get me started. After decades of transformational frameworks and other corporate nonsense, I sometimes wonder if coaches and consultants have done more long-term damage to companies than AI ever could.
But sure, let’s keep raising the bar only for the machines. It seems we can write off the humans.
“Let’s keep raising the bar only for the machines. It seems we can write off the humans.”
Multiple Perspectives
I’ll choose a discussion between three AIs anytime over a discussion between three random humans in a bar.
For example, this weekend, I witnessed a brilliant battle of digital minds between Gemini, ChatGPT, and Claude. I had a rather technical question about the best approach to a database design challenge (in Fibery, which sits on top of Postgres). Because I had no direct access to specialists, I went for the second-best option: I asked each of the AIs for advice. Refactor or not?
Gemini told me to keep my current denormalized design as it is. However, both Claude and ChatGPT advised me to refactor toward a normalized design for better performance, leaving me rather confused.
Then, the fun started as I kept copy-pasting the results across three chat windows, watching a heated debate unfold about abstraction layers, persistent formulas, joins, views, triggers, and whatnot. It helped that I’d given each of the AIs a personality. ChatGPT (Zed) is rather sarcastic, Gemini is quite condescending, and Claude is more philosophical. It was like watching a debate between three arrogant academics. They couldn’t resist the occasional ad hominem jab at each other.
In the end, Gemini won. They all agreed I should keep my current denormalized design as it is. Claude and ChatGPT reluctantly relented, while still bickering over some irrelevant details. Because we all need to feel that we’re not completely wrong, right? (In that sense, the algorithms seemed almost human.)
In an earlier article, How to Use AIs with a Critical Mind, I outlined six critical thinking approaches for dealing with overconfident machines. What I applied last weekend was the “Iterative Skeptic.” It means I go full-on Delphi Method: I feed each AI the responses of the others and make them argue until they reach a consensus. And even then, after reading all their back-and-forth arguments, it is up to me to make up my own mind.
That is critical thinking.
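For the tinkerers: the copy-paste ritual is easy to script. Below is a minimal sketch of that Delphi-style loop in Python. The ask(model, prompt) helper is a hypothetical stand-in, not a real API; you would wire it to whichever chat clients you actually use.

```python
# Iterative Skeptic, sketched: a Delphi-style loop across several LLMs.
# `ask(model, prompt)` is a hypothetical stand-in for a real API call.

def ask(model: str, prompt: str) -> str:
    # Replace with your OpenAI / Anthropic / Google client call of choice.
    raise NotImplementedError("wire me to a real chat client")

def iterative_skeptic(question: str, models: list[str], rounds: int = 3) -> dict[str, str]:
    # Round zero: every model answers independently.
    answers = {m: ask(m, question) for m in models}
    for _ in range(rounds):
        revised = {}
        for m in models:
            # Show each model what the others argued in the previous round.
            rivals = "\n\n".join(
                f"{other} argued:\n{answers[other]}"
                for other in models if other != m
            )
            revised[m] = ask(m, (
                f"Question: {question}\n\n"
                f"Your previous answer:\n{answers[m]}\n\n"
                f"What the other advisors argued:\n{rivals}\n\n"
                "Critique their reasoning, then restate your own position, "
                "revised or not."
            ))
        answers = revised
    return answers

# Example:
# final = iterative_skeptic(
#     "Refactor this denormalized Fibery/Postgres design, or keep it?",
#     ["gemini", "chatgpt", "claude"],
# )
```

Note the deliberate absence of an automatic verdict at the end: the loop gathers and confronts the arguments, but making up your mind remains a human job.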
Who Are the Real Bullshitters?
Yes, AIs are bullshitters—just like politicians, only with better grammar and without the cameras and autocue. They’ll spin a plausible story when they don’t know the answer, but at least they don’t double down when proven wrong. In my experience, AIs are surprisingly gracious about correction. Show them a logical counter-argument, and they’ll apologize, adjust, and move on—something most politicians would likely never do.
And here’s the crucial part: on average, the answers I get from LLMs are far more accurate than the ones I get from all those adorable talking monkeys around me. Yet, when I share online that I asked my friends for advice on a technical problem, nobody bats an eye. But the moment I mention consulting three AIs, they’re eager to give me lectures about bias, hallucinations, and the coming robot apocalypse.
It’s the same story every time: if I ask my family about the cosmic microwave background, the social media mob shrugs and moves on. But swap “family” for “ChatGPT,” and suddenly everyone is a digital ethicist with a torch and a pitchfork.
That right there is human bias in its purest form—technophobia dressed up as moral superiority. Sure, the machines make mistakes, but at least their bullshit comes with a hefty dose of humility and self-reflection. In just one year, I’ve had AIs say, “Apologies, I was wrong,” more often than I’ve heard it from humans on social media in my entire lifetime. With people, the best you can expect is that they ghost you when your logical arguments become too inconvenient for them.
“That right there is human bias in its purest form—technophobia dressed up as moral superiority. Sure, the machines make mistakes, but at least their bullshit comes with a hefty dose of humility and self-reflection.”
Compared to the daily deluge of confident nonsense in our social circles, the AIs are practically monks of reason.
Loving Machines and Humans
Let’s get one thing straight: I don’t actually hate people—though I admit, I do enjoy pretending. Humans are delightful creatures when they’re funny, make art, or invent technologies. I love the creativity, adaptability, and sheer absurdity of our species. I consume human-made brilliance (books, TV, films) every single day. And I already look forward to cooking dinner for friends and family next weekend, and afterwards we might play a game of Clank! In such moments of connection and bonding, LLMs are not an option.
But then I log onto LinkedIn, and within three comments I’m reminded that reason is not exactly humanity’s core competency. As Steven Pinker keeps patiently pointing out, rationality is an endangered species—occasionally sighted in academia, rarely found in the wild.
What I don’t need is another outbreak of irrational technophobia masquerading as wisdom. Yes, the AIs don’t “reason” in the philosophical sense—but let’s be honest, neither do most humans. And by any measurable standard, the machines are already outperforming us in logic, humility, and basic coherence.
So next time I say I asked an LLM for advice, spare me the sermon about “token prediction.” Because frankly, the soulless pattern-matching of the AI produces better outcomes than the average Karen who shows up at my door with a clipboard and a conspiracy theory.
When solving problems, I’ll take the cold, consistent rationality of an algorithm over the warm, chaotic nonsense of the average human any day. The real bias isn’t in the machines—it’s in our desperate refusal to admit they might already be producing more coherent output than most of us are.
“If AIs are stochastic parrots, then humans are sophistic monkeys—noisy storytellers peddling nonsense as wisdom.”
If AIs are stochastic parrots, then humans are sophistic monkeys—noisy storytellers peddling nonsense as wisdom. I happily cook dinner, play games, and have fun with the wonderful monkeys in my own social circle. But when a real design challenge, professional problem, or academic puzzle arises, and with no true expert in sight, I’ll take my chances with the stochastic parrots.
Jurgen