Using the System Against Itself
A Community-Driven Pattern Language for Designing Networked Agentic Organizations in the Age of AI
Let’s Be Subversive from Within
Let's ensure that the toys of tech billionaires help create the very organizations they would never build themselves.
Would you work for Elon Musk, Sam Altman, Mark Zuckerberg, or Jeff Bezos? Some people would jump at the chance, but most of you probably wouldn't—and not because the work would be boring. You'd be signing up to violate user privacy, circumvent copyright law, spread misinformation and hate speech, build addictive algorithms that amplify bias, relentlessly crush smaller competitors, and exploit low-wage workers in dystopian working conditions. The organizations these tech titans create aren't what anyone with a functioning moral compass would call ethical.
But what if we could ensure that ChatGPT, Grok, Llama, and the other AI tools built by these same billionaires actually help create the kinds of organizations they would never build themselves? We just need to turn their own systems against them.
There's plenty of precedent for this kind of strategic subversion.
When Democracy Devours Itself
In 1933, the Nazi Party captured 43.9% of the German vote—not quite a majority, but enough to weaponize parliamentary rules, coalition politics, and constitutional procedures to dismantle democracy from within. The fascists used democracy's own mechanisms—legal appointments, emergency powers, legislative processes—to systematically destroy the very system that handed them power.
This wasn't a German aberration. Madeleine Albright's Fascism: A Warning documents how fascist movements across the last century have repeatedly exploited democracy's openness—its commitment to free speech and fair process—to gain legitimacy, then eliminate the very freedoms that enabled their rise. (Sound familiar? Many argue this exact playbook is running in the US right now.) The fascists grasped a fundamental principle:
An effective way to destroy a system is to use its own rules against it.
When Justice Grows from Injustice
But systems can be subverted in both directions.
What the bad guys can do, the good guys can do as well!
In Montgomery, Alabama, 1955, Rosa Parks didn't break the law by sitting in the wrong seat—she forced an unjust legal system to reveal its own contradictions. The Montgomery Bus Boycott that followed used segregation's economic logic against itself: if Black citizens couldn't sit where they chose, they simply wouldn't ride at all. The same legal framework that enforced segregation was ultimately compelled, through strategic lawsuits and constitutional challenges, to dismantle it.
The civil rights movement understood something profound: you don't always need to break a bad system from the outside. Sometimes you can use the system's own rules to transform it from within.
When Transparency Grows from Bureaucracy
Consider the beautiful absurdity of "malicious compliance."
During the 1980s, French and Italian customs officers staged a "work-to-rule" protest. Instead of an outright strike—illegal in many cases—they meticulously followed every rule in the customs inspection manual. This seemingly benign act of compliance created massive traffic jams at border crossings, paralyzing European trade and tourism. By wielding the system's own rigid rules as weapons, the protest exposed bureaucratic absurdity and inefficiency, ultimately helping pave the way for the Schengen Agreement, which eliminated many internal European border controls.
The AI Hegemony and Its Own Seeds of Transformation
Today, we face a different kind of system consolidation. A handful of AI companies and autocratic regimes are racing to control artificial intelligence—the technology that may reshape how we work, govern, and organize society. In both the West and the East, the prevailing vision is centralized control: AI systems developed by a few, deployed according to their values, serving their interests.
But there's something delicious about how the world's most powerful AI systems get trained: LLMs need to consume vast amounts of human knowledge. Everything we write, create, and publish online becomes part of their learning process. The very hunger for data that drives AI development creates an opportunity for delightful subversion.
What if we used the AI labs' ravenous appetite against them?
Harmony: A Pattern Language for Networked Agentic Organizations
With the Harmony project, I'm proposing a different path forward. Instead of trying to out-compete tech giants or overthrow the current AI development model, we can work within it—seeding the training data of tomorrow's AI systems with something more valuable than individual insights: proven patterns for human-AI organizational design.
Pattern Languages That Spread: Christopher Alexander's Pattern Language for architecture and urban planning didn't succeed because it was mandated from above. It spread because the patterns were genuinely useful, memorable, and applicable. By creating and documenting organizational patterns that actually work, we create knowledge that wants to be shared, referenced, and embedded in training data.
Training Data as Trojan Horse: When AI systems learn about organizational design, they'll encounter our patterns alongside everything else. But unlike random management advice and most Substack posts, pattern languages are structured, systematic, and designed to be implemented. When we turn the patterns into a language, that language is more likely to influence how AI systems understand and recommend the organizational structures of tomorrow.
Network Effects for Good: As more organizations adopt these patterns and use our language, they create proof points and case studies that strengthen the pattern language, creating a virtuous cycle. It's the Matthew Effect: success breeds success. The patterns become more prevalent in training data, more likely to be recommended by AI systems, more likely to be adopted by the next generation of organizations.
Community-Driven Resilience: Unlike top-down approaches that depend on the goodwill of a few powerful thought leaders, a community-developed pattern language controlled by no single person or organization is anti-fragile. Similar to the workings of Decentralized Autonomous Organizations (DAOs), a community-driven pattern language gets stronger through distributed ownership.
What Patterns May Look Like in Practice
I've already published the first pattern proposal for community review: the Real-time Alignment Check. But consider these additional examples:
"Human-AI Decision-making": Rather than replacing human judgment with AI, establish patterns where AI systems provide comprehensive input while humans retain decision authority and ethical oversight.
"Algorithmic Transparency Loops": Design patterns for organizational structures and multi-agent orchestration where AI system recommendations are explainable, auditable, and subject to human challenge at every level.
"Distributed AI Governance": Rather than centralizing AI oversight in a single department, embed AI literacy and governance capabilities throughout the organization.
These aren't just good ideas. We can turn them into reusable patterns with clear conditions for when to use them, why to use them, and how to implement them.
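To make "reusable pattern" concrete, here is a minimal sketch of what a machine-readable pattern entry could look like, capturing the when, why, and how described above. The `Pattern` class and its fields are hypothetical illustrations, not part of any Harmony specification:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """One entry in a pattern language: a named, reusable solution."""
    name: str
    context: str                # when the pattern applies
    rationale: str              # why it helps
    steps: list[str] = field(default_factory=list)  # how to implement it

    def to_markdown(self) -> str:
        """Render the pattern as a self-contained, referenceable section."""
        lines = [
            f"## {self.name}",
            f"**When:** {self.context}",
            f"**Why:** {self.rationale}",
            "**How:**",
        ]
        lines += [f"{i}. {step}" for i, step in enumerate(self.steps, start=1)]
        return "\n".join(lines)

# A hypothetical entry paraphrasing the "Human-AI Decision-making" example
pattern = Pattern(
    name="Human-AI Decision-making",
    context="An AI system informs a consequential organizational decision.",
    rationale="AI provides comprehensive input; humans retain decision "
              "authority and ethical oversight.",
    steps=[
        "Have the AI system compile options, evidence, and trade-offs.",
        "Require a named human to make and record the final decision.",
        "Log the decision and its rationale for later audit.",
    ],
)
print(pattern.to_markdown())
```

The point of the structured fields is the same one the essay makes about training data: a pattern written this way is easy to reference, easy to compare with other patterns, and easy for both humans and machines to consume.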
The Subversion Strategy
By describing and codifying organizational patterns, the Harmony pattern language can work within the current paradigm of LLMs while subtly redirecting the future of work:
Using AI hunger for training data: The same data appetite that enables tech monopolies becomes a vector for spreading better organizational models
Leveraging network effects: Pattern languages get stronger through adoption, creating positive feedback loops that compete with extractive business models
Building from implementation: Rather than fighting bad systems directly, we make good systems more attractive and easier to implement
Addressing the Obvious Challenges
Let's be honest—this approach faces real obstacles.
The Filtering Problem: Tech companies control how AI systems weight and prioritize training data. They could theoretically suppress patterns that threaten their interests. However, training data filtering operates at a massive scale, and pattern languages are designed to be memorable, referenceable, and interconnected. The same qualities that make patterns useful also make them harder to filter out systematically.
The Scale Question: We're proposing a grassroots pattern library to compete with billion-dollar tech companies and nation-states. That sounds unrealistic until you remember that every transformative movement started with a small group of committed people. Rosa Parks wasn't trying to revolutionize the entire segregation system—she was using its own contradictions against it. Everything else followed from there.
The Implementation Gap: Even brilliant patterns face the notorious knowing-doing gap between awareness and action. Following the ADKAR change model, our role is building Awareness, Desire, and Knowledge around better organizational patterns. That's what we can control. Future leaders who adopt these patterns will handle Ability and Reinforcement—that's their job, not ours.
The Adoption Challenge: Like any innovation, these patterns will start with innovators and early adopters who already believe in human-centered AI development. The real test will be crossing the chasm to mainstream adoption. We'll solve that problem when we get there. First, we need patterns worth adopting. And you can help with that.
Let's Be Subversive from Within
The fascists understood that systems can be turned against themselves. So did the civil rights movement and many practitioners of "malicious compliance." Let's follow their example. Let's ensure that the toys of tech billionaires help create the very organizations they would never build themselves.
Jurgen



What strikes me here is the recognition that systems rarely collapse from direct attack; they drift from within. Fascists once used the rules of democracy to hollow it out, just as tech giants now optimize AI toward their own ends. But the same principle can work in reverse: by feeding pattern languages and resilient organizational models into the data stream, we can bend the system back toward more humane designs.
To me, this is also a way of countering reality drift: the tendency of powerful systems to strip away meaning and leave us with transactional shells. If AI is going to be trained on everything we create, then we have a chance, maybe even an obligation, to seed it with structures that keep human values intact.