Engineers felt it first.
A few years ago, the productivity conversation in software development centered on writing code faster. Developers used AI assistants to autocomplete functions, generate boilerplate, and debug errors. The promise was simple: better tools = faster work.
Then something shifted. The bottleneck wasn’t writing code anymore; it was waiting for a single agent to finish a task. Engineers who adopted tools like Cursor or GitHub Copilot hit a ceiling: one task at a time, serialized execution, constant context-switching.
Now, engineers at companies like Superposition run five coding agents in parallel. Their CTO orchestrates multiple features simultaneously — one agent handles the API integration, another rewrites the database schema, a third generates test coverage. He doesn’t wait. He delegates, monitors, and ships. The result isn’t 2x productivity. It’s 10–20x, because the leverage comes from managing AI, not just using it.
Knowledge workers are about to experience the same unlock.
The Real Productivity Gain Isn’t Only Smarter AI
For the past few years, the AI conversation has obsessed over model intelligence. Bigger context windows. Better reasoning. Fewer hallucinations. These improvements matter, but the biggest gains now come from a shift in how we work with AI. We’ve moved from using AI to managing it.
This distinction is critical. Using AI means you prompt, wait for a response, verify the output, then prompt again. Managing AI means you set up workflows, delegate tasks, and let the system maintain state while you orchestrate the work.
Engineers understood this intuitively because coding is iterative. You write a function, test it, refine it, ship it. When AI agents could complete tasks end-to-end, the workflow changed. Instead of prompting for help, engineers started delegating entire features. Instead of waiting for one agent, they ran multiple agents in parallel.
Knowledge workers haven’t made this leap yet — not because they can’t, but because the tooling wasn’t there. That’s changing.
A Concrete Example: Portfolio Management at Ardent
At Ardent, we track dozens of portfolio companies. Each one has quarterly board meetings, monthly check-ins, urgent action items, and long-term strategic goals. Before we automated this workflow, portfolio management looked like this:
Before:
- Someone took notes during portfolio meetings (sometimes detailed, sometimes not)
- We manually updated a tracker with action items (when we remembered)
- Before each meeting, we’d spend a day reconstructing context: What did we discuss last time? What follow-ups are still open? Did they hire that CTO we talked about?
- Things fell through the cracks. We’d promise to intro a founder to a potential customer, then forget. Urgent issues would get mentioned casually in a call, but never make it into formal notes.
The problem wasn’t intelligence; it was state management. Every meeting started from scratch. Every follow-up required manual memory retrieval. The coordination friction was exhausting.
Then we built an app using Claude Projects and Lovable. The workflow now looks like this:
After:
- Weekly portfolio discussions get transcribed and ingested into the app
- The app extracts action items, tracks their status (red/yellow/green), and flags urgent issues
- Before each meeting, we open the app and see: What’s open? What’s been completed? What needs attention?
- When a founder mentions they hired a CTO, we note it in the app, and it automatically updates the tracker. No manual follow-up.
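The core of the app is state management, not intelligence. A minimal sketch of the kind of state it maintains (all names are hypothetical — the actual app was assembled with Claude Projects and Lovable, not hand-written Python):

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    RED = "red"        # urgent or blocked
    YELLOW = "yellow"  # open, needs attention
    GREEN = "green"    # completed or on track

@dataclass
class ActionItem:
    company: str
    description: str
    status: Status = Status.YELLOW

@dataclass
class PortfolioTracker:
    items: list[ActionItem] = field(default_factory=list)

    def ingest(self, company: str, description: str) -> None:
        """Record an action item extracted from a meeting transcript."""
        self.items.append(ActionItem(company, description))

    def open_items(self, company: str) -> list[ActionItem]:
        """Everything still open before the next meeting."""
        return [i for i in self.items
                if i.company == company and i.status != Status.GREEN]

tracker = PortfolioTracker()
tracker.ingest("Acme", "Intro founder to potential customer")
tracker.ingest("Acme", "Confirm CTO hire")
tracker.items[1].status = Status.GREEN  # founder mentioned the hire on a call
print([i.description for i in tracker.open_items("Acme")])
# → ['Intro founder to potential customer']
```

The point isn’t the code — it’s that the whole “app” reduces to ingest, update, and query over persistent state, which is exactly what made it buildable in a weekend.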
The time savings are real: a day of prep becomes minutes. But the real unlock is correctness. We don’t forget to follow up on business development intros. We don’t miss casual mentions of customer churn. The app maintains state, so we can focus on decision-making instead of memory retrieval.
This wasn’t built by a dev team. It was built in a weekend, using Claude Projects to create the workflow and Lovable to turn it into an app. The AI underneath is capable, but not magical. The value comes from the orchestration: ingesting data, maintaining memory, routing tasks, and presenting the right information at the right time.
The “Good Enough Model” Threshold
Here’s the counterargument: “Model quality still matters most. You can’t orchestrate your way around a bad model.”
That’s true for frontier problems — tasks that require cutting-edge reasoning, complex multi-step logic, or nuanced judgment. But most knowledge work doesn’t sit at the frontier.
Most knowledge work is repetitive, structured, and predictable. Summarizing meeting notes. Tracking action items. Routing customer inquiries. Generating weekly reports. These tasks don’t need GPT-5. They need workflow + memory + a decent model.
We’ve crossed a threshold. Models are now good enough that individuals, not dev teams, can build custom workflows to solve their own problems.
The difference isn’t just cost. It’s who can solve the problem. When automation requires a dev team, only high-value, frequently repeated tasks get automated. When automation is accessible to individuals, the long tail of workflows becomes addressable.
Knowledge Workers Are Next
Engineers experienced this shift first because coding is iterative and measurable. But the pattern is universal.
Knowledge workers are starting to see the same dynamics:
- Individual productivity: Claude Projects with persistent instructions and preloaded context (docs, policies, transcripts) let you work faster without re-explaining the state
- Parallel execution: Running multiple projects simultaneously — one for research, one for drafting, one for data analysis — removes the serialization bottleneck
- Orchestration over intelligence: The leverage comes from managing workflows, not from prompting smarter models
Anthropic’s Cowork, released this week, formalizes this model for non-coders. Instead of treating work as something you visit (open a chat, prompt, close the chat), Cowork embeds agents directly in your file system. Every file becomes ambient context. Iteration happens in a single loop — you edit, react, spin up sub-agents, and ship work without switching apps.
This is the same shift engineers experienced with tools like Cursor and Conductor. The productivity gain doesn’t come from faster models. It comes from removing coordination friction.
What Changes When Anyone Can Automate Workflows
If individuals can now build custom workflows without dev teams, what happens?
First, the long tail of automation becomes viable. Tasks that were “too small” or “too niche” to justify developer time can now be solved by the people who experience the problem. Portfolio tracking at a VC firm. Client intake at a law practice. Expense reporting at a small business.
Second, the bottleneck shifts from building workflows to designing them. The question isn’t “Can we afford to automate this?” It’s “What’s the right workflow?”
Third, productivity compounds. Engineers running five agents in parallel aren’t just 5x faster — they’re rethinking how work gets structured. Knowledge workers will do the same. Instead of serialized tasks (research → draft → edit → publish), workflows become parallel (research and drafting happen simultaneously, sub-agents handle formatting and fact-checking, the final output ships faster and with fewer errors).
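The serialized-to-parallel restructuring can be sketched in a few lines (task names are hypothetical; real sub-agents would be API calls to a model running while you do something else, not local functions):

```python
import asyncio

# Stand-ins for sub-agent calls; each simulates agent latency.
async def research(topic: str) -> str:
    await asyncio.sleep(0.1)
    return f"notes on {topic}"

async def draft(outline: str) -> str:
    await asyncio.sleep(0.1)
    return f"draft from {outline}"

async def fact_check(text: str) -> str:
    await asyncio.sleep(0.1)
    return f"checked: {text}"

async def serialized() -> str:
    # One task at a time: you wait on each step before starting the next.
    await research("market sizing")
    body = await draft("report outline")
    return await fact_check(body)

async def parallel() -> str:
    # Research and drafting run simultaneously; fact-checking
    # starts as soon as the draft lands.
    _notes, body = await asyncio.gather(
        research("market sizing"), draft("report outline")
    )
    return await fact_check(body)

result = asyncio.run(parallel())
print(result)  # → checked: draft from report outline
```

Both versions produce the same output; the parallel one simply spends less wall-clock time waiting — which is the entire productivity argument in miniature.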
The 10x Knowledge Worker
You’ve heard of the 10x engineer. The developer who ships 10 times more output than their peers, not because they type faster, but because they understand leverage.
The 10x knowledge worker is emerging. Not because they use smarter AI, but because they’ve learned to manage AI. They build workflows, delegate tasks, and orchestrate work without coordination friction.
The tools are here. The models are good enough. The question is: Are you still using AI, or are you managing it?
