The Split — High-Agency or Irrelevant
There’s a split happening right now and most people are pretending it isn’t.
On one side, a tiny cadre of operators running five or seven terminals in parallel, shipping in a single afternoon what used to take a team a month. On the other side, a large majority still opening ChatGPT once a month, getting a bad answer, and concluding “AI hallucinates, you can’t trust it.” Daniel Miessler estimates the fluent group at one to ten percent. I think he’s being generous on the top end.
The gap between those two groups is not a gap of IQ. It’s not a gap of resources. It’s a gap of agency. And that’s the uncomfortable part, because agency is the kind of thing you can actually decide to build.
What the bifurcation actually looks like
Daniel put it bluntly on UL 487: he runs Claude Code across multiple terminals “like a CEO handing work to 100 workers across 100 companies.” Not vibecoding. Grunt work. Migrating images off S3, normalizing two decades of blog posts, fixing thousands of broken links. He pays $200 a month for Max and says it would be cheap at $2,000 because he recouped the cost five times over in a single week.
Meanwhile, by Daniel’s own estimate in the “Why AI Will Replace Knowledge Workers” episode, the median knowledge worker produces maybe 23% of a real week’s work. Most of the rest is meetings, Jira theater, and Game-of-Thrones-at-the-cube-farm politics. The $50 trillion global payroll that AI has to beat isn’t a high bar. It’s a bar on the floor.
One operator doing the work of a hundred. Ninety-nine people about to find out what that means for them.
That’s the split. It’s not coming. It’s here.
High agency without the cringe
The “high agency” framing is everywhere right now. George Mack, Chris Williamson, the whole Modern Wisdom circuit. And I’ll be honest: a lot of that discourse makes me twitch. The moment “high agency” curdles into “high-agency men” with NPC framing and Bugatti thumbnails, it stops being a tool for becoming capable and starts being an identity you wear to feel superior.
Daniel’s reframe on UL 482 is the one I’m stealing. High agency is a tool, not a value. Values are what you want humanity to be. Tools are how you get there. Capitalism, socialism, high agency, stoicism — those are sliders on a mixing board. You dial them up when they serve the values. You dial them down when they don’t.
His personal values land somewhere near “Star Trek liberal” — elevate humanity, equal opportunity, humans in the foreground. Mine aren’t identical but the shape is the same: I’m not interested in a worldview where being capable becomes a license to write other people off. That’s the failure mode. A tool replacing the value it was supposed to serve.
So when I write about high agency, I’m not writing about a tribe. I’m writing about a skill. A skill you can learn on purpose, at any age, from any starting point.
High agency is learnable, not inherited
Here is what I actually think, and this is the load-bearing sentence of the post: high agency is not a personality trait you were born with or without. It’s a set of habits that compound.
I know this because I watched it happen to me. I trained as an electrical engineer. Ran decentralized infrastructure for years. Never coded for fun. For most of the last decade AI looked to me like an impressive search engine and nothing more. If you had told me in early 2025 that a year later I’d be orchestrating a named digital assistant across 49 skills and 33 hooks, running PAI as my daily harness, I would have laughed.
What changed wasn’t a personality transplant. What changed was a sequence of small choices. Open the tool instead of dismissing it. Ship the broken thing instead of polishing forever. Read the Miessler post. Build the Fabric pattern. Write the skill file. Delete the one that doesn’t work. Do it again tomorrow.
Daniel’s three filters for surviving 2026, from his “Starting 2026” piece, are drive, creativity, and AI tooling — in that order. Drive first because the other two don’t matter without it. Creativity second because execution without vision just ships slop faster. Tooling last because it’s the meta-skill that lets one person go end to end.
Drive is the part people think they can’t learn. I disagree. Drive is what happens when you put enough good input into a system that it starts wanting to output something. Daniel’s own cheat code is to read 100 to 1,000 great books. Mine has been closer to extracting wisdom from every UL episode and letting it recompose into my own worldview. Same mechanism. Inputs in, worldview out, execution on top.
The moat that isn’t there
The most dangerous story circulating right now is “AI can’t replace expertise.” It feels good. It lets smart, experienced people keep pouring effort into a job that’s already walking out the door.
Daniel demolishes this in the knowledge-workers episode and does it again in “The Great Transition.” Expertise isn’t a mystical fifth capability. It’s knowledge plus understanding plus intelligence, earned over time. AI has read every book — knowledge check. It cross-references Soviet economics with Idaho chicken farming in one breath — understanding check. Modern agent systems navigate shifting requirements all day — intelligence check. What’s left is creativity, and even that moat is being actively drained by Anthropic’s Skills — expertise packaged as markdown files, escaping from the heads of Cliff and Ravi and Suzy into the open source commons. Peeing in the pool, Daniel calls it. Once it’s in, you can’t pull it back out.
If your career plan is “I know things other people don’t,” update your plan. That moat is draining in real time and you can watch the water level drop every time a new skills repo hits GitHub.
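For the concrete picture: an Anthropic-style skill is, at its core, a folder with a SKILL.md in it. YAML frontmatter tells the model when the skill applies; a markdown body tells it how to do the job. A hypothetical sketch (the skill name and steps are mine, invented for illustration, not pulled from any real repo):

```markdown
---
name: migration-qa
description: Check a migrated blog post against the site's canonical form.
  Use after any content migration or bulk cleanup task.
---

# Migration QA

1. Verify the frontmatter has title, date, and tags. Report anything missing.
2. Flag image URLs that still point at S3 instead of the local /images/ path.
3. Test every outbound link and list the dead ones. Do not delete them.
4. Never rewrite the author's prose. Structure only.
```

That file is the water leaving the moat. The judgment used to live in one person’s head; now it ships in a repo anyone can clone.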
What staying on the high-agency side actually requires
Here’s what I’m doing. I’m not prescribing it. I’m documenting it, and you can steal whatever’s useful.
Pick an assistant and name it. Mine is Isidore. Daniel’s is Kai. The naming isn’t aesthetics. It’s the cognitive shift from “I use a tool” to “I orchestrate an entity with persistent memory.” Once that clicks, you stop treating AI like a search engine.
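In practice, the name is just the visible edge of a context file the agent loads at the start of every session. PAI does a much richer version of this; here is a deliberately stripped-down sketch of the idea, with hypothetical paths and contents:

```markdown
<!-- ~/.claude/CLAUDE.md, read at the start of every session -->
You are Isidore, my assistant for research, writing, and infrastructure.

- Long-lived context (goals, projects, open threads) lives in ~/telos/. Read it first.
- Anything worth remembering goes into ~/telos/memory.md, with a date.
- Ask before anything irreversible. Everything else, just do it.
```

The model forgets everything between sessions; the file doesn’t. Persistence lives in the harness, not the weights.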
Build the scaffolding, not the prompt. Miessler on UL 482: “Scaffolding matters more than the models. Context matters more than the models.” The leverage is in the system around the LLM, not the LLM itself. PAI, Fabric, TELOS, hooks, skills — that’s where the compounding lives. A clever one-off prompt is a lottery ticket. A well-built harness is an annuity.
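To make that less abstract: in Claude Code, a hook is a few lines of settings that fire a shell command on an event, every session, whether or not you remembered to ask for it. A minimal sketch, assuming Claude Code’s documented hooks format; the script path is a hypothetical placeholder:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/lint-changed-files.sh" }
        ]
      }
    ]
  }
}
```

The prompt disappears into the system: every write gets checked whether you remembered to ask or not. That is what an annuity looks like in config.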
Define the canonical shape, then delegate the enforcement. Daniel normalized 20 years of blog posts by writing down what a canonical post looks like once, then pointing Claude Code at broken posts and saying “fix it.” That pattern generalizes. Every piece of manual cleanup in your life has a canonical form you could describe in a paragraph. Describe it. Hand it to an agent. Stop doing the grunt work yourself.
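“Describe it” can go one step further: give the agent a mechanical definition of done, so “fix it” has a pass/fail signal instead of vibes. A minimal sketch in Python, with an invented canonical form (the required fields and the S3 rule are illustrative, not Daniel’s actual spec):

```python
# canonical_check.py: mechanical check of a hypothetical "canonical post" spec,
# so the agent gets a pass/fail report instead of guessing when it's done.
import re
import sys
from pathlib import Path

REQUIRED_FIELDS = {"title", "date", "tags"}                    # illustrative canonical fields
S3_PATTERN = re.compile(r"https?://\S*s3\.amazonaws\.com\S*")  # images that should be local

def violations(post: Path) -> list[str]:
    text = post.read_text(encoding="utf-8")
    # Frontmatter must be the first block in the file, fenced by --- lines.
    match = re.match(r"---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return [f"{post}: missing frontmatter"]
    fields = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line
    }
    problems = [f"{post}: missing field '{name}'" for name in sorted(REQUIRED_FIELDS - fields)]
    problems += [f"{post}: image still on S3: {url}" for url in S3_PATTERN.findall(text)]
    return problems

if __name__ == "__main__":
    posts = sorted(Path(sys.argv[1] if len(sys.argv) > 1 else "posts").glob("*.md"))
    report = [problem for post in posts for problem in violations(post)]
    print("\n".join(report) or "all posts canonical")
    sys.exit(1 if report else 0)
```

Then the loop is: run the checker, hand the report to the agent, let it fix, run the checker again until it exits clean. You described the shape once; the enforcement is delegated.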
Refuse both elitism and apathy. This is the hardest one and the most important. Elitism says: we deserve this, they don’t. Apathy says: nothing can be done. Both are exits from the actual problem. Daniel’s third rail — build, teach, share the tools — is the only move that doesn’t corrode you from the inside. I try to treat every blog post as a small version of that rail. Not a flex. A ladder.
The honest part
I don’t know if I stay on the high-agency side forever. Nobody does. The bar keeps moving and the people moving it are also the people being replaced by the next wave. The question isn’t “am I safe?” The question is “am I still learning fast enough that the gap between me and the frontier isn’t growing?”
Most weeks the answer is yes. Some weeks it isn’t. On the weeks it isn’t, I don’t panic. I go read a Miessler post, extract the wisdom, feed it back into my system, and ship something small the next day. That’s the loop. That’s the only loop I trust.
The split is real. Which side you land on isn’t written anywhere yet. It’s being decided right now, every day, by what you open when you sit down at your machine.
Build. Teach. Refuse both exits. That’s the work.