
What If Most Knowledge Work Isn't Knowledge?
The dominant narrative right now is that AI is coming for knowledge work. That's the wrong framing.
Most of what we call "knowledge work" was never the knowledge part. It was the harness; the setting up, reviewing, checking, processing, formatting, coordinating. The scaffolding that holds the actual thinking in place.
The interesting thing is that "harness" is a term we use in AI today - the infrastructure around a model that makes it useful. It maps perfectly onto what knowledge workers have been doing for decades. We were spending more time building the harness for each task and less time on the actual knowledge work. We just didn't have a word for it because there was no alternative to doing it ourselves, and so we never distinguished it from the knowledge work itself.
Quick Note: This Isn't Paranoia or Doomerism
When it comes to AI, I am a little paranoid. But frankly, when change is happening this fast, at this scale, what I am trying to be is hyper-aware - and I feel like I am already behind. All of this might sound like doomerism, but it's not.
I'm simply analyzing what's happening right now at the frontier - the actual product launches, how top teams are already working, and the direction the biggest platforms are heading. Writing this post (and others like it) is how I sharpen my own thinking and prepare for the new reality we're all entering. Putting these observations into words forces clarity. My only goal is preparation and better decision-making, not fear.
The Harness Is Where the Time Goes, and It Is the First to Go
Think about what a knowledge worker actually does all day.
A software engineer writes boilerplate, writes tests, reviews PRs, debugs issues. The actual architectural insight or algorithmic breakthrough? That's a fraction of the day.
A designer pushes pixels in Figma, creates variants, maintains the design system. The actual design taste, the judgment about what feels right for the user, that's a sliver of the effort.
A data analyst writes SQL, cleans data, formats reports, checks for anomalies. The actual "what does this mean for the business" interpretation? Minutes, not hours.
A writer researches, fact-checks, formats, optimizes, edits for grammar. The actual argument lives in a few sharp paragraphs buried inside hours of process.
If we were to take the Pareto principle into account, the ratio might be roughly 80% harness, 20% knowledge. With large enterprises calling out that their best developers are no longer writing a single line of code¹, much of past software engineering may have been harness work.
AI agents will replace all the harness work. They're better at it than we are. They can set up, review, check, and process at superhuman speed. The bottleneck was always the harness, not the knowledge.
I Already Work This Way
This isn't a prediction. I'm living it.
I no longer build design mockups by hand. I have skill files for the design system and let AI ideate the first few versions. My job was always to get an idea across by building a screen - the components and the design system were just the harness to get there. Now the AI handles that part, and I focus on the idea and the user experience.
I no longer spend time in Excel or SQL. I ask the agent to pull, clean, merge, transform, and build me a dashboard to review. The time I used to spend setting up a pivot table is now the time it takes an AI agent to build me a reusable dashboard. My core job was always to get the analysis, not to set up the spreadsheet.
And this post? I'm using Claude to brainstorm, edit, and type it out for me. I'm typing only parts of the draft. Claude is like a writing agency - I'm the editor and the creative director.
I wrote recently about how building variants has become a first-class part of exploration. That observation was about code, but it applies everywhere.
Task vs. Purpose
Jensen Huang frames it better than I can: task versus purpose. Tasks are execution - the things you do. Purpose is ownership and outcome - the reason you do them.
He extended this further - what happens when the limitations around "task" type work disappear? On the All-In Podcast, Jensen said:
Well, first of all, things that, "wow, this is too hard" - that thought is gone. "This is going to take a long time" - that thought is gone. "We're going to need a lot of people" - that thought is gone. This is no different than in the last Industrial Revolution. Somebody goes, "boy, that building really looks heavy." Nobody says that. Nobody. "Wow, that mountain looks too big." Nobody says that. Everything that's too big, too heavy, takes too long - those ideas are all gone.
What Jensen calls the task, I call the harness, and most of the time spent on knowledge work in pre-AI tools went into the harness - work that, at times, felt too hard, took too long, and required too many people. When that disappears, we're left with creativity, taste, judgment, and problem-framing, i.e., the real purpose. Huang's point is simple: stop identifying with the tasks you do. Start identifying with the outcomes you own.
You Don't Have to Take My Word for It
You don't have to trust my analysis alone - and you could fairly point out that Jensen isn't neutral, since his job is to sell more chips. The strongest signal is coming straight from the companies that built the original harness tools.
- Figma: The platform that defined modern design launched Figma Make, a prompt-to-app tool that turns text or sketches into interactive, high-fidelity prototypes and working apps. No more manual pixel-pushing or component wrangling.
- Adobe: The company behind Photoshop and Illustrator rolled out AI Assistants powered by agentic AI directly in Photoshop, Illustrator, and Firefly. These agents now autonomously handle the classic harness work of generating variants, compositing, masking, resizing for breakpoints, bulk editing, and asset export - exactly the repetitive pixel-pushing and production tasks that used to eat designers' days.
- Microsoft: Just shipped Copilot Cowork (March 2026), turning the entire 365 suite into real agents that plan and execute multi-step work across Outlook, Teams, Excel, and more.
- Cursor: Pushed hard into advanced agent mode and Automations - always-on agents that run continuously on your codebase, triggered by events you define.
These aren't fringe experiments. These are the very platforms that sold us the scaffolding for decades, now actively commoditizing it. When the tool makers themselves bet that the harness can be fully automated, the direction is no longer debatable. I explored what this means for dev tools and their moats in a recent post; the short version is that agentic tools strip away the layers, and the moats go with them.
The Uncomfortable Part
Here's where it gets hard to talk about.
A lot of what we called "skilled knowledge work" was actually skilled harness operation. Learning Excel macros, Figma component libraries, or clean SQL wasn't trivial; it was real expertise. But it was expertise in the scaffolding, not the insight. That distinction was invisible until AI made the scaffolding free. This doesn't make those people less valuable; it just means their value was hidden inside tools that no longer need human operators.
We never drew that line because we didn't need to. The harness was inseparable from the work. You couldn't get to the insight without first building the spreadsheet. You couldn't ship the design without first pushing the pixels.
Now that AI can build the scaffold, and operate the harness, the line becomes visible.
The people who were mostly doing harness work will be the first to face disruption.
But the people who were mostly doing knowledge work and were bottlenecked by the harness? They get massively amplified. The ratio of harness-to-knowledge in your job is probably the best predictor of how much AI changes it.
Folks entering the workforce will need to be AI- and agent-native. The way they work is not the way some of us worked when we started. They won't "carry the water"; they'll manage agents and think strategically from Day 1.
Less Work, More Knowledge
The future of knowledge work is not less knowledge work. It's less work and more knowledge, and new kinds of work we haven't imagined yet.
The harness is being replaced. What remains is the part that was always the point: taste, judgment, intent, problem framing, and the ability to define what "good" looks like. Frankly, I'm still unsure whether even those have a moat against AI agents, but that's its own post.
The question worth sitting with is simple: in your day-to-day, how much of what you do is the knowledge, and how much is the harness? What work do you do that takes time for you, but is close to instant for an AI agent?
-
Spotify says its best developers haven't written a line of code since December: TechCrunch ↩︎