The Sassy AI Glossary for 2026
Essential and nonsensical AI terminology with 250+ definitions for the future of work
Trying to keep up with the future of work means understanding what the industry is talking about. And lately, the industry has been talking a lot.
I asked my team of AIs—ChatGPT, Claude, Gemini, and Perplexity—to perform deep research into the terminology around the future of work in the age of AI. What followed was something like a competitive sport. Each model seemed determined to outdo the others, and I ended up with more than 250 terms. (I had expected maybe 50.)
Gemini and Claude then collaborated on crafting the definitions you’ll find below—aiming for accuracy but with a hint of sass and attitude, because glossaries don’t have to be boring. Meanwhile, Perplexity and Gemini worked together to track down sources and origins, though most of these terms have emerged organically across the industry and can’t be traced to a single inventor.
I invite you to browse through the list and see which ones you didn’t know yet. If you’re like me, the answer will be “more than I’d like to admit.”
The Terms
100x engineer – A developer who uses AI to produce enough code to keep a traditional team busy for a century.
10x engineer – The person who does the work of ten people, usually by drinking ten times more coffee.
accountability gap – The legal vacuum created when a chatbot makes a mistake and the CEO is “out of the office.”
agent boss – A supervisor who doesn’t need a desk, just a server and a stable API connection.
agent escalation logic – The protocol for when a robot realizes it has no idea what you’re talking about.
agent misalignment – When the AI’s goals and your goals are on different planets.
agent orchestrator – The person who discovered that managing AI agents is still just management.
agent supervisor – The person who watches the robots watch the humans.
Agent-as-a-Service (AaaS) organization – Outsourcing your workforce to a fleet of autonomous scripts.
agentic AI engineer – A programmer who builds software capable of making its own bad decisions.
agentic AI staircase – The gradual ascent of software from “tool” to “unsupervised colleague.”
agentic AI workforce – A team that never sleeps, never complains, and occasionally hallucinates.
agentic organization – A company where the software has more autonomy than the interns.
McKinsey & Company, People & Organizational Performance Practice (2025), “The agentic organization: A new operating model for AI” – introduces and defines the “agentic organization” as an AI‑era paradigm uniting humans and AI agents around AI‑first workflows across five enterprise pillars. (source)
agentic team – A squad of bots and humans trying to find a common language.
agentic workflow – A sequence of tasks that largely completes itself while you have coffee.
agentic workforce – Employees who spend more time managing agents than doing “work.”
AI agent developer – The person teaching the machine how to act like a person.
AI agent manager – A middle manager for digital entities.
AI bias auditor – The professional who checks if the algorithm is being unfairly judgmental.
AI capability owner – The person who “owns” a skill that a machine actually performs.
AI content creator – Someone who prompts a machine to write what they used to write.
AI content strategist – Planning how to flood the internet with machine-generated wisdom.
AI deference threshold – The point at which you stop questioning the machine and just do what it says.
AI ethicist – Someone who asks “should we?” while everyone else is asking “can we?”
AI ethics specialist – The person who ensures the AI’s moral compass is at least pointing North.
AI fluency – Speaking “Prompt” as a second language.
AI governance – The rules meant to keep the algorithms from going rogue.
AI governance lead – The person writing rules for technology that changes before the ink dries.
AI leverage skills – Knowing how to make a machine do 90% of your job.
AI literacy – Understanding that the chatbot is not actually a real person.
AI Operations (AIOps) engineer – The mechanic for the AI engine.
AI Operations (AIOps) specialist – Someone who keeps the “intelligent” systems from crashing.
AI orchestration specialist – Getting different AIs to play nicely in the same sandbox.
AI Orchestrator (AIO) – The central nervous system of a company’s AI tools.
AI platform owner – The person who gets blamed when the playground catches fire.
AI portfolio manager – Diversifying your company’s bets on which AI will win.
AI product manager – Managing a product that might change its personality overnight.
AI Quotient (AIQ) – An IQ test, but for how well you work with algorithms.
McKinsey & Company (2023–2024), “Digital and AI leaders outcompete” and related research – introduces AI Quotient (AIQ) as an assessment of organizational AI maturity across strategy, talent, operating model, technology, data, and adoption/scaling. (source)
AI raters – Humans paid to tell the AI it’s doing a good job.
AI red teamer – A professional “bad influence” paid to find an AI’s flaws.
AI resentment – The feeling you get when the software gets a bigger upgrade than you.
AI risk register – A list of all the ways the technology could go wrong.
AI safety engineer – Trying to ensure the “intelligence” doesn’t accidentally end the world.
AI safety researcher – Thinking about how to stop a superintelligence before it starts.
AI solutions architect – Someone who draws boxes and arrows until the AI problem looks solvable.
AI strategist – Someone with a five-year plan for a technology that changes every five days.
AI studio model – Treating AI development like a high-end creative agency.
AI teammate – A colleague who is incredibly helpful but has no social skills.
AI trainer – A digital tutor for a very fast-learning student.
AI transformation journey – The long road from “What is GPT?” to “Everything is automated.”
AI transparency analyst – Trying to explain why the black box did what it did.
AI UX designer – Making sure the robot-human interaction doesn’t feel creepy.
AI wellness coordinator – Someone who schedules your ‘digital detox’ between your twelve daily AI tools.
AI whisperer – Someone who knows exactly how to phrase a prompt to get the right answer.
AI workforce manager – Balancing the headcount between carbon-based and silicon-based workers.
AI-augmentable job – A role that could be much easier if you just let the robot help.
AI-augmented worker – A human who has essentially become a centaur: half-person, half-processor.
AI-displaced workers – Professionals whose jobs evolved faster than the retraining budget.
AI-embedded organization – A company where you can’t tell where the software ends and the people begin.
AI-first organization – A company that asks the computer its opinion before the board.
AI-human interaction designer – Architecting the digital handshake.
AI-native engineer – A developer who doesn’t remember life before Copilot.
AI-native organization – A startup born in the age of intelligence, with no legacy baggage.
AI-native worker – Someone who thinks of AI as a tool, not a threat.
AI-proof job – Careers that require a level of human messiness machines can’t replicate.
AI-resistant job – A role that is surprisingly difficult to automate (for now).
AI-retrainable job – A job that is changing so fast you need a permanent “Learning” sign.
algorithm appreciation – When you finally stop fighting the math and embrace the recommendation.
Jennifer M. Logg, Julia A. Minson & Don A. Moore (2019), “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment” – introduce and define the term “algorithm appreciation” for people’s tendency to rely more on algorithmic than human advice. (source)
algorithm auditing – Checking the math to see if it’s being biased.
algorithm steward – The guardian of the company’s secret formulas.
algorithmic accountability – Making sure there’s a human to blame when the math fails.
algorithmic deference – The habit of assuming the computer is always right.
algorithmic fairness specialist – Ensuring the software treats everyone with equal indifference.
algorithmic fragility – When a tiny change in data makes the whole system collapse.
algorithmic management – Being managed by an app that doesn’t care about your weekend.
Min Kyung Lee, Daniel Kusbit, Evan Metsky & Laura Dabbish (2015), “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers” (CHI 2015) – coin and define “algorithmic management” for software algorithms that assume managerial functions over workers. (source)
algorithmic operating model – Running a business based on data-driven logic rather than gut feeling.
ambient AI worker – The AI assistant that is always there, even when you didn’t ask it to be.
augmented collective intelligence – A group of people and bots who are smarter together than apart.
augmented connected worker – A worker wearing enough tech to be tracked from space.
augmented intelligence – Using AI to make humans smarter, rather than replacing them.
Douglas C. Engelbart (1962), “Augmenting Human Intellect: A Conceptual Framework” – foundational vision of using computers to augment human problem‑solving capacity (later framed as “augmented intelligence” / “intelligence augmentation”). (source)
augmented worker – A person with digital superpowers.
authenticity anxiety – The fear that your “original” idea was actually influenced by an LLM.
authority inversion – When the intern knows more about the tech than the CEO.
automation anxiety – The classic fear that a robot is coming for your cubicle.
automation complacency – Assuming the machine has everything under control (it doesn’t).
automation drag – The extra work created by trying to automate a simple task.
automation overreach – Trying to automate things that really require a human touch.
automation product owner – The person responsible for the “Easy” button.
automation ROI fallacy – Believing automation is always cheaper (spoiler: it isn’t).
automation threshold – The point where it’s finally cheaper to buy a bot than hire a person.
autonomous company – A business that practically runs itself while the owners go fishing.
bionic company – A firm where human creativity and machine scale are perfectly fused.
capability stack – The layers of skills (human and digital) a company possesses.
capability-based organization – Focusing on what people can do rather than what their title is.
cascade failure – When one small error in an automated system breaks everything else.
centaur worker – A human who knows exactly when to let the AI gallop and when to take the reins.
Ethan Mollick et al. (2023), “Centaurs and Cyborgs on the Jagged Frontier” and associated BCG research – popularize “centaur” workers as humans who strategically allocate tasks between themselves and AI across the jagged frontier of AI capabilities. (source)
Chief Agentic Officer (CAgO) – The executive in charge of the company’s autonomous agents.
Chief AI Ethics Officer (CAIEO) – The person paid to make sure the company stays on the right side of history.
Chief AI Officer (CAIO) – The person tasked with putting “AI” in every slide deck.
Chief AI Transformation Officer – The leader of the “Let’s automate everything” movement.
citizen developers – Non-techies who build apps because the tools are finally easy enough.
Gartner (c. 2009), research notes by Eric Knipp – coins “citizen developer” for non‑IT employees who create business applications or capabilities with sanctioned (often low‑code/no‑code) tools. (source)
clinical AI specialist – Using AI to figure out why you’re sick.
clone worker – Creating a digital twin of yourself to handle your emails.
cognitive atrophy – Forgetting how to do things because the AI does them for you.
cognitive capital – The value of what’s inside your head (the parts that aren’t automated).
cognitive offloading – Letting the machine remember things so you don’t have to.
Evan F. Risko & Sam J. Gilbert (2016), “Cognitive Offloading” in Trends in Cognitive Sciences – formalize the concept and term “cognitive offloading” for using external resources (including digital tools) to reduce internal cognitive load. (source)
cognitive surplus redistribution – Using the time saved by AI to do... more work, probably.
Clay Shirky (2010), Cognitive Surplus: Creativity and Generosity in a Connected Age – coins “cognitive surplus” for society’s aggregate free time and attention that can be redirected from passive consumption to collaborative production; the “redistribution” framing here is an AI‑era extension of this idea. (source)
cognitive task allocation – Deciding which parts of a job require a brain and which require a chip.
co-intelligence – The polite term for ‘I did 10% and the AI did 90%.’
Ethan Mollick (2024), Co‑Intelligence: Living and Working with AI – popularizes “co‑intelligence” as a framework for humans and AI thinking and working together (AI as co‑worker, co‑teacher, coach). (source)
collaboration fatigue – Being tired of talking to both humans and bots all day.
collaborative intelligence – The hopeful belief that humans and AI complement each other rather than compete.
collaborative robot (cobot) – A robot that won’t accidentally hit you in the warehouse.
J. Edward Colgate & Michael A. Peshkin (patent filed 1996, published 1999), “Cobots” (US5952796A) – introduce the term “collaborative robot” or “cobot” for robots designed for safe, direct physical collaboration with human operators. (source)
composable organization – A company built like LEGO bricks, ready to be rearranged.
context collapse – When the AI treats your CEO the same way it treats your intern, which might be an improvement.
danah boyd (early 2000s) develops “collapsed contexts” in her work on networked publics; the concept is later formalized as “context collapse” with Alice Marwick in “Twitter users, context collapse, and the imagined audience” (2011). (source)
context engineer – Someone who ensures the AI has all the facts before it starts talking.
context monopoly – When only one person (usually you) holds all the knowledge the AI needs to function.
context rot – When an AI’s understanding of a situation degrades over time.
credit attribution crisis – Trying to figure out if you or the AI deserves the promotion.
cybernetic teammate – A colleague who might be 100% code.
cybertariat – The digital working class who traded the rhythmic clanging of the industrial loom for the soft, soul-crushing glow of a 13-inch monitor.
Ursula Huws (2000/2001), “The Making of a Cybertariat? Virtual Work in a Real World” (Socialist Register) – coins “cybertariat” for the emerging class of precarious digital workers; expanded in her later books, including Labor in the Global Digital Economy: The Cybertariat Comes of Age. (source)
cyborg worker – A human who is digitally enhanced to the point of no return.
Ethan Mollick et al. (2023), “Centaurs and Cyborgs on the Jagged Frontier” – defines “cyborg” workers as humans whose workflows are tightly integrated with AI, with human and machine tasks deeply intertwined. (source)
data annotator – The human labeling thousands of pictures of cats so the AI knows what a cat is.
data labeler – The unsung hero of the AI revolution.
Decentralized Autonomous Organization (DAO) – A company run by code and votes, with no boss in sight.
decision bottleneck – When the AI moves fast but the human manager takes three days to reply.
decision delegation architecture – The plan for which decisions the robot can make on its own.
decision provenance – Tracking why a decision was made (was it data or a hallucination?).
deskilling – Losing your ability to write a letter because you only use templates.
digital co-founder – An AI that helped you start your business.
digital coworker – The software bot that “sits” next to you in Slack.
digital dignity – Maintaining your self-worth when a script is more efficient than you.
digital native organization – A company that has never seen a paper filing cabinet.
digital worker – An autonomous script with a job description.
ethical AI compliance officer – The person making sure the robots follow the rules.
ethical debt – The future problems you’re creating by ignoring ethics today.
FOBO (Fear Of Becoming Obsolete) – The fear that you’re one software update away from irrelevance.
fractional executive – A part-time boss for hire.
fractional leader – A leader who splits their time between multiple companies.
fractional worker – The ultimate gig economy participant.
frontier firm – A company living on the absolute edge of technology.
Term originates in economics (OECD, 2010s) to describe the top 5–10% of most productive firms at the technological frontier. The 2025 Microsoft Work Trend Index, led by Jared Spataro, repurposes “Frontier Firm” for organizations that deeply embed AI and redesign work around it. (source)
fusion team – A mix of techies and business folks working as one.
future-built organization – A company designed to survive the next ten hype cycles.
Boston Consulting Group (2024–2025), Build for the Future / The Widening AI Value Gap – defines “future‑built companies” as the ~5% of firms with the critical AI capabilities and AI‑first operating model needed to generate outsized, compounding value from AI. (source)
gen AI maturity stages – The levels of a company’s “AI puberty.”
generative AI engineer – Someone who builds systems that create things from scratch.
ghost worker – The invisible human making sure the “automation” actually works.
Mary L. Gray & Siddharth Suri (2019), Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass – coin “ghost work” and describe the invisible “ghost workers” who keep automated systems running. (source)
gig-to-agent shift – When “gig workers” are replaced by “agentic software.”
governance theater – Making it look like you’re controlling the AI when you’re actually not.
healthcare AI coordinator – The person managing the digital doctors.
human capital augmentation lead – A fancy title for “The person who gives you better tools.”
human feedback providers – People who tell the AI “Yes, that looks like a person’s hand.”
human judgment layer – The final check before a machine’s decision becomes reality.
human review debt – The backlog of AI decisions that humans haven’t checked yet.
Human-Agent Team (HAT) – A group of people and bots working together.
human-AI collaboration specialist – A digital therapist for human-bot relationships.
human-AI handoff – The delicate moment when the robot gives the task back to the human.
human-AI interaction designer – Making sure the robot doesn’t annoy the human.
human-AI operating model – A business plan that accounts for both brains and bits.
human-AI task arbitration – Deciding who gets to do the fun parts of the job.
Human-Centric AI Design (HCAID) – Designing AI that actually likes people.
human-intelligence collectives – Groups of people who are smarter because they use AI.
Human-in-the-Loop (HITL) – Keeping a human around to prevent digital disasters.
Human-on-the-Loop (HOTL) – A human watching the system, but not touching the controls.
human-out-of-the-loop – The point where humans receive notifications rather than make decisions.
human-over-the-loop – The human is the ultimate supervisor of an automated process.
hybrid intelligence – Using your gut feeling to choose which AI recommendation to follow.
hybrid team – A team split between the office, the home, and the cloud.
hybrid workforce – A mix of full-time, part-time, and digital employees.
hybrid-augmented intelligence – When you use multiple AIs to make yourself smarter.
hyper-productive team – A team that uses AI so well they finish a week’s work by Tuesday.
hyperscale small company – A tiny team with a massive global reach, thanks to AI.
inference engineer – Someone who optimizes how the AI “thinks” in real-time.
intelligence abundance – A world where smart advice is free and everywhere.
intrapreneur – Someone acting like a startup founder inside a giant corporation.
jagged frontier – The unpredictable line where AI shifts from ‘genius’ to ‘confidently wrong’, sometimes mid-sentence.
Ethan Mollick et al. (2023), “Centaurs and Cyborgs on the Jagged Frontier” – introduces “the jagged frontier of AI” to describe uneven AI capabilities across tasks. (source)
judgment atrophy – Losing the ability to make decisions because you always ask the bot.
judgment-as-a-skill – The new premium skill: knowing when the AI is wrong.
knowledge graph architect – Someone who builds a map of everything the company knows.
knowledge worker 2.0 – A worker who curates information rather than just processing it.
labor-AI substitution curve – The economic model that treats ‘cheaper’ and ‘better’ as synonyms.
lean AI startup – A company run by two people and a dozen subscriptions.
legal AI specialist – A lawyer who knows how to argue with an algorithm.
liability distribution – Figuring out whose fault it is when the robot breaks the law.
LLM engineer – A person who speaks fluent “Large Language Model.”
managerial deskilling – Managers forgetting how to lead people because they only manage dashboards.
marginal cost of intelligence – The price of one “smart” thought (currently trending toward zero).
meta-skills – The skills you need when you don’t know which skills you’ll need tomorrow.
micro-CXO – An executive for a tiny, highly automated company.
micro-entrepreneurs – People running small businesses with big-tech tools.
micro-multinational – A one-person company that operates in ten countries.
minimal-employee company – A company where every human hire needs to justify their existence against a spreadsheet.
MLOps engineer – The person who makes sure the machine learning actually works in production.
No single originator. The role and term “MLOps engineer” emerged in the late 2010s by analogy with DevOps, strongly influenced by concerns articulated in Google’s “Hidden Technical Debt in Machine Learning Systems” (Sculley et al., NIPS 2015). (source)
model collapse – When AI starts eating its own tail and becomes stupid.
Shumailov, Shumaylov, Zhao, Gal, Papernot & Anderson (2023), “The Curse of Recursion: Training on Generated Data Makes Models Forget” – introduces and names the “Model Collapse” phenomenon. (source)
model drift risk – The risk that your AI will get “weirder” over time.
model governance – The HR department for algorithms.
model literacy – Understanding the difference between GPT-4 and a spreadsheet.
model risk officer – Someone who monitors the AI for “bad behavior.”
modular workforce architecture – Treating your org chart like a LEGO set, where people are also bricks.
moral sedation – Becoming indifferent to ethical issues because “the AI handled it.”
M-shaped professional – Someone with deep expertise in several unrelated areas.
Networked Agentic Organization (NAO) – A company that is just a web of autonomous bots.
Jurgen Appelo (2025), “From DAO to NAO” and “When Do I Get My Networked Agentic Organization (NAO)?” – introduces and defines the term “Networked Agentic Organization (NAO)”. (source)
N-shaped professional – A specialist who can also handle the “new” world of tech.
one-person enterprise – A company of one, powered by a thousand scripts.
one-person unicorn – A billion-dollar company with only one human employee (the dream).
Sam Altman (2024) – interview remarks predicting the first “one‑person billion‑dollar company” enabled by AI, widely discussed as the “one‑person unicorn”; timeline‑specific forecasts later amplified by Anthropic CEO Dario Amodei (2025). (source)
one-to-many leverage – Using AI to let one person do the work of a hundred.
operational loneliness – The feeling of running a business where you are the only human.
orchestration layer – The software that tells all your other software what to do.
orchestration management – Being the “conductor” of your digital tools.
organizational sensemaking – Trying to figure out what the hell is going on in a complex company.
organizational sensing lead – Someone who monitors the company’s “vibe” via data.
outcome-based organization – Caring about the results, not how many hours you sat at your desk.
pi-shaped professional – Someone with two deep areas of expertise.
polycorporate professional – Someone who works for five companies at once via AI.
portfolio career professional – Having five jobs instead of one career.
post-functional organization – A company where traditional departments no longer exist.
post-labor economics – Studying a world where work is optional (we’re not there yet).
productivity capture gap – When the AI makes you faster, but your boss just gives you more work.
productivity paradox – Having the best tools ever and still feeling like you’re behind.
Erik Brynjolfsson (1993), “The Productivity Paradox of Information Technology: Review and Assessment” – introduces the original “productivity paradox,” later echoed as “productivity paradox 2.0” in the AI era. (source)
productized workforce – Employees with version numbers and quarterly updates.
prompt designer – Someone who writes the perfect “questions.”
prompt engineer – A person who gets paid to talk to a computer.
prompt-literate worker – A worker who knows how to ask a machine for help.
quantitative AI engineer – Using AI to do very complex math.
recovery debt – The time you need to spend fixing things the AI broke.
red team worker – A professional hacker (the good kind).
reskilling – Learning a new job because your old one was automated.
responsible AI lead – The person who makes sure the AI isn’t a jerk.
reversibility loss – The point where you can’t go back to doing things the “old way.”
re-founding – Blowing up your company to rebuild it around AI.
Reid Hoffman (2023–2024), “re‑founders” / “re‑founding” concept on Masters of Scale and LinkedIn, with Satya Nadella’s Microsoft used as the canonical example of refounding an established company for a new era (including AI). (source)
RLHF workers – Humans who teach AI what “good” looks like.
self-automator – Someone who writes code to do their own job so they can nap.
Randazzo, Dell’Acqua, Kellogg et al. (2024), “Cyborgs, Centaurs and Self‑Automators: The Three Modes of Human‑GenAI Knowledge Work and Their Implications for Skilling and the Future of Expertise” – introduces “Self‑Automators” as a distinct worker type. (source)
shadow AI – Using AI tools that your IT department doesn’t know about.
shadow automation – Secretly automating your tasks without telling anyone.
silent automation failure – When a bot stops working and nobody notices for a month.
singularity – The moment AI gets smart enough to make itself smarter, after which predictions become meaningless.
Vernor Vinge (1993), “The Coming Technological Singularity: How to Survive in the Post‑Human Era”. (source)
skill decay – When you forget how to do something because an app does it for you.
skill half-life – How long your knowledge stays useful (it’s getting shorter).
skill wage premium inversion – When “soft skills” suddenly pay more than “tech skills.”
skills marketplace platform – An eBay for what you know how to do.
skills-based hiring – Hiring for what you can do, not where you went to school.
slop – AI-generated content that nobody asked for, filling the internet like grey goo.
@deepfates (2024), X/Twitter – early use of “slop” as the term for unwanted AI‑generated content; popularized and elaborated by Simon Willison (2024). (source)
social orchestration manager – Managing the relationships between your human and bot workers.
solo chief – A leader with full accountability and no one to delegate to except algorithms.
Jurgen Appelo (2026), “The Solo Chief” and The Maverick Mapmaker – defines the Solo Chief as a sole‑accountability leader orchestrating AI, tools, and systems. (source)
solo scaling – Growing a business without hiring a single person.
solopreneur – A person who is their own entire company.
strategic reservation – The choices you explicitly refuse to delegate to algorithms, no matter how smart they get.
super individual – One person with the power of an entire 1990s corporation.
superworker – An AI-augmented human who is annoyingly productive.
Josh Bersin (2025), “The Rise of the Superworker: Delivering On The Promise of AI” – research defining the AI‑augmented “Superworker”. (source)
symbiotic learning – You learn from the AI, and the AI learns from you.
synergistic agentic AI operating model – When agents work together to create something bigger.
IBM Institute for Business Value (2025), “Agentic AI’s strategic ascent: Shifting operations from workflows to autonomous systems” – introduces the “synergistic ‘always‑on’ agentic AI operating model” framework. (source)
synthetic colleague – An AI that has a name, a “personality,” and a seat in the meeting.
synthetic data expert – Someone who creates fake data to train real AI.
synthetic employee – A digital person with a payroll entry.
talent marketplace platform – Where you go to sell your human-only skills.
task atomization – Breaking a job down into tiny pieces so a bot can do them.
task-level automation – Automating the boring parts of your job.
technostress – The headache you get from trying to keep up with 2026.
Craig Brod (1984), Technostress: The Human Cost of the Computer Revolution. (source)
tokenomics (organizational) – Using digital tokens to run an organization.
trust calibration – Deciding exactly how much you should trust a machine’s output.
upskilling – Learning how to be better than the robot.
value alignment – Making sure the AI shares your company’s “vibe.”
value leakage – When your AI tools are costing more than the value they create.
vibe coder – A programmer who codes by describing the “feel” to an AI.
Andrej Karpathy (2025), X/Twitter – original post coining “vibe coding” for AI‑assisted development. (source)
work unbundling – Stripping a “job” down into its component “tasks.”
workflow orchestration manager – The person who keeps the digital assembly line moving.
workforce transition architect – The person who figures out what to do with the humans.
X-shaped professional – A leader with deep expertise and massive cross-functional reach.
zero-employee company – A revenue-generating machine with no humans on payroll.
You’re reading The Maverick Mapmaker—maps and models for Solo Chiefs navigating sole accountability in the age of AI. All posts are free, always. Paying supporters keep it that way (and get a full-color PDF of Human Robot Agent plus other monthly extras as a thank-you)—for just one café latte per month. Subscribe or upgrade.
A Living Document
This is obviously not a complete list. The vocabulary of AI and work is expanding faster than any glossary can capture, with new terms popping up every week. Consider this a snapshot in time—a map of where we are in early 2026, not a permanent monument.
Is anything incorrect? Any terms or sources missing? Let me know in the comments. I might even update this glossary once in a while as the language of work continues to evolve.
Jurgen, Solo Chief.
P.S. Which term in this glossary surprised you most?