Politicians Lie. LLMs Bullshit. Here's How I Survive Both.
AI tools are confidently wrong more often than you think. Here are six techniques I use to catch them before they catch me.
TL;DR: Critical Thinking for Skeptical AI Users
Treat every AI like a vote‑grubbing politician—smooth talk, shaky facts. Survive the algorithmic gig economy by weaponizing critical thinking through six skeptic attitudes:
Sequential Skeptic – Bounce drafts across multiple models in sequence, forcing each one to clean up the last one’s mess.
Parallel Skeptic – Fire the same prompt at a gang of AIs; assume truth lives somewhere in their bickering.
Iterative Skeptic – Make the models argue with each other (Delphi‑style) until consensus or glorious meltdown.
Adversarial Skeptic – Order an AI to shred its own answer, revealing hidden gaps and dumb assumptions.
Contextual Skeptic – Flip audience, tone, or purpose to expose bias; if the answer shape‑shifts, beware.
Verifying Skeptic – Demand sources, then fact‑check them yourself like a responsible adult.
Bottom line: never outsource your brain. Question everything—especially confident silicon chatterboxes. Read more to dig deeper…
If there’s one thing you’ll never hear a politician say, it’s “I don’t know.” The electorate wants answers, not ambiguity. (And unfortunately, most voters are more than happy to inhale any confident nonsense that confirms their biases.) For politicians, admitting ignorance is career suicide. Instead, they radiate unwarranted certainty like it's an expensive cologne.
Does that sound familiar? Welcome to the world of large language models.
Why You Shouldn't Trust Any Single AI
I’ve been trying out Cal AI, a popular app that promises to help me track my food intake. I figured, hey, maybe staring at numbers all day will guilt me into eating less chocolate and fewer pastries. (So far, it’s working.) The app calculated my recommended daily intake goals: 1530 kcal, 120 grams of carbs, and other nutritional wizardry. But something smelled off, and it wasn’t my oat milk latte.
Being the skeptical nerd I am, I consulted my trusty cabal of AI advisors: Gemini, ChatGPT, and Claude. Think of it as getting a second, third, and fourth opinion—except none of these experts went to med school. Their suggestions for daily calories ranged from 1900 to 2050, and their fat and carb targets were all over the place. There was consensus around my protein target, but it was significantly lower than the goal set by the Cal AI app.
The message was clear: don’t trust any single AI. Especially not when your health or future depends on it. I glanced at the spread of suggestions, used common sense, and set my own targets, like the sentient rebel I am.
Why AI Hallucination Is Really AI Bullshitting
Gary Marcus recently reminded us that LLMs still refuse to say, “I don’t know.” Even after several years of technological progress, they remain glorified improvisers—spouting whatever sounds good, whether or not it’s true.
This isn’t “hallucinating.” This is bullshitting. LLMs don’t malfunction—they perform. They’re not broken; they’re just too eager to please, like politicians at a town hall: full of conviction, light on facts, and ready to charm their way into your confidence. The phrase “I have no idea” seems to be hard-coded out of their training data.
And sadly, just like voters, users believe what they want to hear.
But not me.
Why "I Don't Know" Is the Smartest AI Response
At a recent conference in Berlin, someone told me my keynote was “refreshingly authentic.” I took that as a compliment. It probably had something to do with my answers during the Q&A. “How will Kanban boards evolve in the age of AI?” “I don’t know.” “Is training LLMs on copyrighted content legal?” “I don’t know.” “Is Elon Musk a genius or a moron?” “Still undecided.”
I say “I don’t know” because I don’t want to be caught saying something that turns out to be false later. I’m not a politician. If anything, I’m annoyingly anti-politician, and some people seem to appreciate that. I get paid to say what I think, and that includes, “I don’t know.”
LLMs don’t say “I don’t know.” Authentic humans do. And it’s one of our few remaining advantages.
At an event in Brussels, someone else asked me, “What’s the number one skill for future leaders?” I immediately blurted out, “Critical thinking.” Ironically, it just flew out of my mouth, as if I were very sure about it. To be fair, it was top of mind. But in hindsight, I’m not sure it’s really number one. Top three? Definitely. Number one? Eh, I don’t know.
You're reading The Maverick Mapmaker—maps and models for Solo Chiefs navigating sole accountability in the age of AI. All posts are free, always. Paying supporters keep it that way (and get a full-color PDF of Human Robot Agent plus other monthly extras as a thank-you)—for just one café latte per month.
Six Critical Thinking Techniques for AI Users
Here’s how I stay sane using LLMs: I assume they’re full of crap and work backward from there.
1. The Sequential Skeptic — Chain Multiple AI Models
When writing an article, I start with Claude to help me shape the outline from notes. Then I draft the piece myself (because I still believe in the outdated art of writing). Then I toss it to ChatGPT for style suggestions. Then Gemini critiques it. Then it’s back to Claude again for a final review. It’s like a daisy chain of second opinions. None of them are fully trustworthy, but together, they lift me up.
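For the technically curious, here’s a minimal sketch of that daisy chain in Python. It assumes the OpenAI Python SDK as a stand-in for whichever chat APIs you actually use; the model names, the prompts, and the ask() helper are illustrative placeholders, not my exact workflow:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and an API key in the environment

def ask(model: str, prompt: str) -> str:
    """Hypothetical helper: send one prompt to one model, return the text reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = open("draft.md", encoding="utf-8").read()  # the piece I wrote myself

# Each model cleans up after the previous one; names and jobs are placeholders.
review_chain = [
    ("model-a", "Suggest style improvements for this draft:"),
    ("model-b", "Critique the argument and flag weak claims in this draft:"),
    ("model-c", "Do a final review of this draft and list remaining issues:"),
]
for model, job in review_chain:
    feedback = ask(model, f"{job}\n\n{draft}")
    print(f"--- {model} ---\n{feedback}\n")
    # I revise the draft myself between rounds; the models only advise.
```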
2. The Parallel Skeptic — Compare AI Answers Side by Side
Sometimes, I send the same prompt to three or four AIs at once, like I did with the food-tracking example. It’s a bit like polling three shady consultants without letting them talk to each other. When their answers disagree, I assume the truth is somewhere in the middle. You can compare it to double-entry bookkeeping. Or triple-entry, in this case. It’s not foolproof, but it definitely brings down the error rate.
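If you want to automate the fan-out, a sketch might look like this. It reuses the hypothetical ask() helper from the sequential sketch above, and the model names are again placeholders:

```python
from typing import Callable

def parallel_skeptic(
    ask: Callable[[str, str], str], prompt: str, models: list[str]
) -> dict[str, str]:
    """Send the same prompt to several models and collect the answers side by side."""
    return {model: ask(model, prompt) for model in models}

# Usage, with placeholder model names:
# answers = parallel_skeptic(ask, "Suggest daily calorie and macro targets for ...",
#                            ["model-a", "model-b", "model-c"])
# Where the answers disagree, apply common sense; don't just pick a winner.
```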
3. The Iterative Skeptic — Make AIs Argue With Each Other
Occasionally, I go full-on Delphi Method. I feed each AI the responses of the others and make them argue until they reach a consensus. “Hey Claude, ChatGPT disagrees with you—thoughts?” “Gemini says you’re wrong—care to respond?” It’s delightfully dysfunctional, and nobody’s feelings get hurt. Machines don’t sulk. Yet. And sometimes the best insights come from getting the AIs to duke it out. Check my “The Four Moats Theory” post for an example of this approach.
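Here’s a rough sketch of one Delphi-style round, again assuming the hypothetical ask() helper from the first sketch. How many rounds you run before declaring consensus (or glorious meltdown) is your call:

```python
from typing import Callable

def delphi_round(
    ask: Callable[[str, str], str], question: str, answers: dict[str, str]
) -> dict[str, str]:
    """One Delphi-style round: each model sees the others' answers and may revise its own."""
    revised = {}
    for model, own_answer in answers.items():
        others = "\n\n".join(
            f"{other} said:\n{answer}"
            for other, answer in answers.items()
            if other != model
        )
        revised[model] = ask(
            model,
            f"Question: {question}\n\n"
            f"Your earlier answer:\n{own_answer}\n\n"
            f"Other models answered differently:\n{others}\n\n"
            "Defend or revise your answer.",
        )
    return revised

# Start from a parallel fan-out, then iterate a few rounds:
# answers = parallel_skeptic(ask, question, ["model-a", "model-b", "model-c"])
# for _ in range(3):
#     answers = delphi_round(ask, question, answers)
```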
4. The Adversarial Skeptic — Force AI to Challenge Itself
Instead of using multiple AIs, you can turn a single AI against itself: ask it for counterarguments and the limitations of its own reasoning. This reveals blind spots and weaknesses that consensus approaches might miss, and it trains you to probe beyond the model’s initial confidence.
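In code, the self-attack is just a second prompt that feeds the model its own answer. This sketch reuses the hypothetical ask() helper from earlier:

```python
from typing import Callable

def adversarial_skeptic(
    ask: Callable[[str, str], str], model: str, question: str
) -> tuple[str, str]:
    """Get an answer, then make the same model argue against it."""
    answer = ask(model, question)
    critique = ask(
        model,
        f"Here is your answer to '{question}':\n\n{answer}\n\n"
        "Now argue against it: list counterarguments, hidden assumptions, "
        "and the conditions under which it would turn out to be wrong.",
    )
    return answer, critique
```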
5. The Contextual Skeptic — Reframe Prompts to Expose Bias
Sometimes, you can change the framing of a prompt—audience, tone, purpose—just to see how wildly the AI’s answers shift. It’s a way to test for bias, tone-dependence, and hidden assumptions. When one version sounds smart and another sounds like LinkedIn sludge, that tells me something. Context matters, and this technique helps expose just how easily LLMs can be steered—intentionally or not.
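A sketch of the reframing test, with a few example frames I invented for illustration; swap in whatever audiences and tones matter to you. As before, ask() is the hypothetical helper from the first sketch:

```python
from typing import Callable

# Example frames, invented for illustration; substitute your own audiences and tones.
FRAMES = [
    "Explain to a skeptical scientist: {q}",
    "Explain to a five-year-old: {q}",
    "Write a LinkedIn post answering: {q}",
    "You are briefing a government regulator. Answer: {q}",
]

def contextual_skeptic(
    ask: Callable[[str, str], str], model: str, question: str
) -> dict[str, str]:
    """Ask the same question under different framings and compare how the answers shift."""
    return {frame: ask(model, frame.format(q=question)) for frame in FRAMES}
```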
6. The Verifying Skeptic — Demand and Check AI Sources
Finally, I demand verifiable sources and then, crucially, fact-check the key claims myself against reliable, non-AI references. That grounds the output in actual evidence instead of an internally consistent narrative potentially woven from sophisticated guesswork.
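The prompting half of this is easy to sketch; the fact-checking half is deliberately absent from the code, because that part is your job. Once more, ask() is the hypothetical helper from the first sketch:

```python
from typing import Callable

def verifying_skeptic(ask: Callable[[str, str], str], model: str, question: str) -> str:
    """Demand checkable sources; the actual checking stays a human job."""
    return ask(
        model,
        f"{question}\n\n"
        "For every factual claim, cite a verifiable, non-AI source "
        "(author, title, year, and URL if available). If you cannot cite one, "
        "say 'I don't know' instead of guessing.",
    )

# Then open the sources yourself. A citation that exists is not yet a citation
# that supports the claim; that last step cannot be delegated to the model.
```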
The Critical Conclusion
Whichever technique I use, I’m practicing critical thinking. It’s safe to assume they’re all bullshitting, like politicians on a podium, and I act accordingly. Sure, it’s harmless when you’re asking for a book recommendation or Muppetizing your family photos. But when you’re making health decisions, product plans, or anything remotely important, blind trust in AIs is pure recklessness.
Critical thinking might be the number one skill for future leaders. Or it might just be top three. Whatever the rank, one thing’s certain:
Don’t trust politicians. Don’t trust LLMs. Don’t even trust me.
Question everything—especially the stuff that sounds a little too confident.
Use your brain.
Often.
Jurgen, Solo Chief
P.S. Which AI has bullshitted you the hardest? I want to hear your war stories.