AI agents are being sold as assistants, copilots, and helpers. That language is doing a lot of work. It makes the technology sound harmless, even friendly. It suggests support rather than substitution. But that framing is too soft for what is actually happening. What is emerging is not just a better chatbot. It is a new way of reorganising labour.

Over the last few weeks, tools like OpenClaw, Claude Code, and similar systems have drawn attention because they can do more than answer questions. They can browse the web, write code, move across interfaces, run workflows, and complete multi-step tasks with limited human input. That is the real shift. These systems are no longer sitting on the edge of work as passive tools. They are beginning to perform parts of work itself.

This is where the labour question returns with force. Marx’s old insight about capital as “dead labour” still feels uncomfortably current. Machinery stores past labour and then confronts living workers as something external, something they do not control. AI extends this logic into areas that many people assumed were relatively protected: writing, coding, research, analysis, design, coordination, even fragments of judgment. The machine no longer only amplifies the hand. It is starting to imitate parts of the mind.

There is a tendency to treat this as just another wave of innovation. That misses the point. Capitalism has always searched for ways to reduce dependence on labour, weaken worker bargaining power, and capture more of the value produced in society. AI fits neatly into that story. What is new is the combination of speed, scale, and ownership.

These systems are built from the accumulated intellectual labour of millions of people: books, articles, code repositories, documentation, forum posts, visual material, and everyday writing. The knowledge inside them is social in origin, but private in ownership. A handful of firms control systems trained on humanity’s accumulated output. They stand to capture the gains. Everyone else is asked to adjust.

This is why the cheerful language around AI assistance matters. An “assistant” sounds like a productivity tool. In practice, under existing market pressures, it often means something else: a way to cut labour costs, restructure workflows, reduce entry-level hiring, and increase managerial control. Firms do not adopt these systems in a vacuum. They adopt them in a world where the pressure to do more with fewer people is constant.

Anthropic’s recent report on AI and employment is useful here, though not for the reasons its more optimistic readers might think. The report says there is not yet clear evidence of a large unemployment shock caused by AI. Fine. But that should not be confused with reassurance. Their own findings suggest that highly exposed occupations include programmers, customer service workers, analysts, and data-entry workers, and that younger workers may already be seeing slower hiring in more exposed fields. That matters. Labour markets do not only deteriorate through headline-grabbing mass layoffs. They also deteriorate quietly: fewer junior jobs, fewer openings, weaker bargaining power, more pressure on those trying to enter the market.

That pattern should sound familiar. Work can be degraded without disappearing. A profession can remain intact on paper while its structure changes underneath. Tasks are broken up. Expectations rise. Fewer workers are hired. More output is demanded from those who remain. The absence of dramatic unemployment figures does not mean there is no displacement. It may simply mean the process is happening in a slower, more selective, and less visible way.

This is also why the debate cannot be reduced to the lazy question of whether AI will “replace all jobs.” It won’t, at least not in any simple sense. The more serious question is how AI changes the organisation of work, who gains from that reorganisation, and who absorbs the costs. On that front, the warning signs are already there.

Anyone who has studied platform labour will recognise the pattern. The gig economy taught us what digitally mediated labour control looks like: opaque systems, unilateral rule changes, constant surveillance, risk shifted downward, and shrinking space for negotiation. Platforms sold flexibility. Workers got tighter control without corresponding protections. AI agents extend a similar logic into white-collar and knowledge work. What platforms did to delivery work through algorithmic management, AI is beginning to do elsewhere through cognitive automation.

That is why this is a global labour issue, not just a niche concern for the tech industry. It affects a growing range of workers whose communicative, analytical, and creative labour has already been absorbed into systems they do not own and cannot meaningfully govern. Some will be “augmented.” Some will be displaced. Many will simply find that the labour market no longer opens to them in the same way it did a few years ago.

The political response cannot stop at vague calls for ethical AI. That phrase has already become a safe way of saying very little. What is needed is a politics of labour, power, and ownership. At minimum, that means transparency when firms use AI to substitute for workers; stronger social protections for workers affected by technological displacement; serious debate about how training data, knowledge, and value are appropriated; and collective worker voice over how AI is introduced into workplaces. If these systems are built from social knowledge, then the future they shape cannot be left entirely to private firms.

The deeper problem is that the language of adjustment is arriving before the politics of resistance. By the time everyone accepts that these systems are just assistants, copilots, and productivity tools, much of the argument has already been lost. The terms of change get settled in advance. The public is told to adapt before it has even had a chance to ask who benefits, who pays, and who decides.

That is the labour question in the age of AI agents. Not whether technology changes work. Of course it does. The real question is whether workers and the public get any say over how that change happens. If they do not, then AI will deepen a familiar pattern: social knowledge turned into private power, and technological progress used to tighten labour’s dependence rather than reduce it.


Bio

Abhinav Kumar is a researcher working on labour, technology, and platform economies. He recently submitted his PhD thesis at the Centre for Informal Sector and Labour Studies, Jawaharlal Nehru University.