Published at: 2026-05-04T21:03:31+05:30
2026-03-12 — AI — AI agents will not take your job; they will take your attention
Thesis
AI agents will not primarily change the economy by “replacing humans.” They will change it by competing for the scarcest input in knowledge work: directed attention. The organizations and individuals who win will treat agents like production systems (scoped permissions, instrumentation, evaluation), and will redesign workflows so that human attention is reserved for judgment, taste, and responsibility.
Context
The public story about AI and work still swings between two extremes.
One extreme is substitution: models get smarter, jobs disappear.
The other extreme is reassurance: “AI will only be a tool,” and nothing fundamental changes.
Both miss the near-term mechanism of change: agents turn lots of tiny cognitive actions into software calls. That is not just “automation.” It is a new interface between intention and execution.
When you can delegate actions (draft, schedule, summarize, route, triage, search, reconcile) to systems that operate across apps, the bottleneck stops being “how many tasks can one person do.” The bottleneck becomes:
How many tasks can one person choose well?
How many times can one person switch context without losing the plot?
How many agent outputs can one person verify?
In other words: attention.
This is also why the impact will feel uneven. The organizations that build agentic workflows as controlled systems will get leverage. The organizations that bolt agents onto messy information environments will mostly get noise.
Key ideas
1. Agents repackage work from “tasks” into “pipelines”
Classic software automation was brittle. It required precise inputs and narrow paths.
Agents change that because they can do something humans do well: translate ambiguous intent into a sequence of small actions. The actions might be:
Retrieve context.
Draft text.
Execute a tool call.
Compare results.
Ask a clarifying question.
Retry with a different plan.
McKinsey’s framing is useful here: work is moving toward partnerships between people, agents, and robots, and the big lever is not automating single tasks, but redesigning workflows around collaboration and handoffs.
When work becomes a pipeline, the manager’s job changes too. You stop asking “How many people do we need?” and start asking:
What are the pipeline stages?
Where do errors matter?
Where do we require human sign-off?
What is the service level for this pipeline (latency, quality, cost)?
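Those design questions can be written down as data. Here is a minimal sketch of what that might look like; the stage names, error tiers, and service-level numbers are all hypothetical illustrations, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    requires_signoff: bool  # does a human approve before the next stage runs?
    error_cost: str         # "low", "medium", or "high": where do errors matter?

# Hypothetical email-triage pipeline, expressed as the questions above.
pipeline = [
    Stage("retrieve_context", requires_signoff=False, error_cost="low"),
    Stage("draft_reply",      requires_signoff=False, error_cost="medium"),
    Stage("send_reply",       requires_signoff=True,  error_cost="high"),
]

# Service level for the whole pipeline, not per stage (illustrative numbers).
slo = {"p95_latency_s": 60, "min_quality": 0.9, "max_cost_usd": 0.05}

# Human sign-off points fall out of the design rather than being ad hoc.
signoff_points = [s.name for s in pipeline if s.requires_signoff]
print(signoff_points)  # ['send_reply']
```

The point of writing it this way is that "where do we require human sign-off?" becomes a reviewable property of the system, not tribal knowledge.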
The first-order effect is not layoffs. It is a structural shift in operations: from managing humans doing steps, to managing systems that produce outcomes.
2. The new scarcity is not intelligence. It is verification capacity.
If an agent can draft 30 emails in a minute, the question becomes: who reads them?
If an agent can summarize 50 documents, the question becomes: who checks the summary against reality?
As output volume rises, the limiting resource becomes verification capacity.
This is where many “AI productivity” claims quietly break. They measure speed of generation. They under-measure:
Cost of review.
Cost of error.
Cost of downstream confusion.
The deeper point: as agents get more capable, they also get more plausible. Plausibility is dangerous. A mistake that looks confident is harder to detect, which increases the time and attention required to verify.
This is why the practical future of work looks less like “autonomy everywhere” and more like bounded autonomy:
Agents act within clearly defined permissions.
Agents propose actions; humans approve.
Agents execute low-risk actions automatically.
High-risk actions require explicit sign-off.
Human attention becomes the control surface.
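The bounded-autonomy pattern is small enough to sketch directly. This is an illustrative shape, not a real framework; the action names and the stand-in approver are hypothetical.

```python
def execute(action, risk, approve):
    # Bounded autonomy: low-risk actions run automatically; everything
    # else is only proposed, and runs solely after explicit human sign-off.
    if risk == "low":
        return action()
    if approve(action):
        return action()
    return "proposed, awaiting sign-off"

# Hypothetical actions, and a stand-in approver. In a real system,
# approve() would route the proposal to a person's review queue.
draft_reply = lambda: "draft saved"
send_reply  = lambda: "email sent"
human_declines = lambda action: False

print(execute(draft_reply, "low", human_declines))   # draft saved
print(execute(send_reply, "high", human_declines))   # proposed, awaiting sign-off
```

Note that the human never appears in the low-risk path: the design decides up front which actions may consume attention, which is the whole point of treating attention as the control surface.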
3. Attention is a budget, and agents make it easier to overspend
Knowledge work already suffers from attention fragmentation:
Notifications.
Meetings.
Slack threads.
Email.
Switching between docs.
Switching between “modes” (deep work, admin, coordination).
Agents promise relief by handling the fragments. But they also make it easier to create more fragments.
When delegation is cheap, requests multiply.
This is the same dynamic we have seen in every system where the cost of an action falls:
When sending messages is cheap, we send more messages.
When making documents is cheap, we make more documents.
When creating tickets is cheap, we create more tickets.
Agents reduce the cost of producing work artifacts. That can be good. But if governance does not evolve, you get a flood of artifacts that demand human attention for review, alignment, and trust.
The right mental model is not “agents give you time back.”
The right model is “agents change your spending pattern.” The question becomes: what do you spend your recovered attention on?
4. Treat agents like production systems, not “smart interns”
A common mistake is to treat an agent as a clever assistant and then be surprised when it behaves unpredictably.
In real organizations, value comes when agents are treated like production software:
Instrumentation: measure success rate, failure rate, latency, cost.
Evaluation: maintain tests and counterexamples, update them as workflows change.
Permissions: least privilege, explicit scopes, approval gates.
Observability: logs that let you answer “what did it do, and why?”
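A minimal sketch of that instrumentation, assuming nothing beyond the standard library: wrap each agent step so that every call records success, latency, and result. The step itself (`summarize`) is a hypothetical stand-in for a real agent call.

```python
import time

def instrumented(agent_step, log):
    # Wrap an agent step so every invocation answers:
    # what did it do, did it succeed, and how long did it take?
    def wrapped(task):
        start = time.monotonic()
        try:
            result = agent_step(task)
            ok = True
        except Exception as exc:
            result, ok = repr(exc), False
        log.append({
            "task": task,
            "ok": ok,
            "latency_s": round(time.monotonic() - start, 3),
            "result": result,
        })
        return result
    return wrapped

log = []
# Hypothetical agent step; a real one would call a model or a tool.
summarize = instrumented(lambda t: f"summary of {t}", log)
summarize("thread-42")

# Success rate falls straight out of the log.
success_rate = sum(entry["ok"] for entry in log) / len(log)
print(success_rate)  # 1.0
```

Nothing here is sophisticated, and that is the point: even this much gives you the evidence base that evaluation and approval gates need to stand on.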
This connects to broader AI governance work: frameworks like NIST’s AI Risk Management Framework emphasize mapping, measuring, and managing risk through the system lifecycle.
If you skip this, you will still ship something. But it will function like an attention vampire: constantly asking humans to rescue edge cases.
5. The “human skills” story becomes more true, and more demanding
As automation increases, human skills matter more, but not in a vague motivational way.
They matter because:
Someone must define goals.
Someone must set constraints.
Someone must judge tradeoffs.
Someone must be accountable for outcomes.
McKinsey notes that many skills are used in both automatable and non-automatable work. That overlap is the point. The skills persist, but the context shifts.
For individuals, this means the valuable skill is not “being smarter than the model.”
It is:
Being clearer than the model.
Having better taste than the model.
Owning the consequences that the model cannot own.
And because attention is limited, the meta-skill becomes attention management: the ability to keep the real objective in view while delegating the mechanical parts.
Counterarguments
Counterargument 1: “Agents will just replace jobs. Attention is a distraction from the real issue.”
There will be job displacement. Some roles exist primarily to move information between systems or to produce routine artifacts.
But even there, the attention argument is not a dodge. It is the mechanism.
Jobs are not replaced by intelligence in the abstract. They are replaced when:
The cost of producing an output drops.
The cost of coordinating production drops.
The cost of verifying quality is manageable.
If verification costs remain high, companies do not get full substitution. They get partial substitution plus a new layer of coordination work.
So attention is not a feel-good topic. It is the constraint that determines whether substitution happens quickly, slowly, or not at all.
Counterargument 2: “Agents will eliminate context switching. They will protect attention, not consume it.”
Sometimes, yes.
A good agentic workflow can reduce switching by bundling steps:
“Summarize this thread, draft a reply, and propose the next meeting time.”
But the second-order effect is that requests become easier to make and easier to route. That tends to increase volume.
Unless you redesign norms, agents will eliminate one kind of switching (manual clicking) and amplify another (reviewing, approving, aligning).
Counterargument 3: “This is all speculative. We do not have evidence.”
We do have evidence for the direction of travel, even if the exact timeline is uncertain.
Major institutions are modeling work as human-machine partnerships and measuring skill exposure.
Governments and standards bodies are formalizing risk management frameworks.
Industry reporting and usage studies suggest early adoption patterns concentrate in knowledge work and productivity tasks.
The essay’s claim is not “agents will become perfectly autonomous tomorrow.” It is that the pressure point is already visible: attention and verification are becoming the dominant constraints.
Takeaways
AI agents shift the unit of change from “jobs” to “workflows,” and from “tasks” to “pipelines.”
The limiting resource in an agentic organization is verification capacity, not generation capacity.
Agents can reduce context switching in the small, but increase it in the large unless norms and governance evolve.
Bounded autonomy beats total autonomy for most real work: permissions, gates, logs, and evaluation.
Treat agents like production systems, not like smart interns.
The enduring human advantage is judgment, taste, and accountability, which all require protected attention.
The winners will be those who intentionally spend the attention they recover.
Sources
McKinsey Global Institute — “Agents, robots, and us: Skill partnerships in the age of AI” (report page): https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai
McKinsey Global Institute — “Human skills will matter more than ever in the age of AI” (summary page): https://www.mckinsey.com/mgi/media-center/human-skills-will-matter-more-than-ever-in-the-age-of-ai
NIST — AI Risk Management Framework (AI RMF 1.0) (PDF): https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
Euronews (Dec 2025) — Reporting on Perplexity agent usage and associated preprint: https://www.euronews.com/next/2025/12/10/most-people-use-ai-agents-for-productivity-and-learning-perplexity-says
IESE Insight (Nov 2024) — “How to understand AI’s potential impact on knowledge jobs”: https://www.iese.edu/insight/articles/artificial-intelligence-ai-impact-knowledge-jobs/