
Anthropic, the company behind Claude, published a study last month on AI’s impact on the labour market. The headline takeaway, repeated across tech media, was reassuring: no systematic increase in unemployment for workers in AI-exposed occupations. AI, it seems, hasn’t cost anyone their job.

I read the full paper. The picture is less comforting than the headline suggests.

What the study actually measured

Anthropic introduced a new metric they call “observed exposure” — not just whether AI could theoretically do a task, but whether people are actually using AI to do it, right now, in professional settings. They cross-referenced their own usage data from Claude with the O*NET task database and earlier exposure estimates from Eloundou et al. (2023).

The distinction matters. Previous studies measured theoretical capability — can an LLM grade homework, draft legal memos, write code? Anthropic’s contribution is measuring what’s actually happening. And they found a significant gap: actual AI usage covers only a fraction of what’s theoretically possible. Legal constraints, software requirements, organizational inertia, and simple habit all slow adoption.
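To make the metric concrete, here is a minimal sketch of how an "observed exposure" score of this kind could be computed: the share of an occupation's task list that actually appears in AI usage logs. The occupation names, task IDs, and usage set below are invented for illustration; this is not Anthropic's actual pipeline or data.

```python
# Hypothetical sketch of an "observed exposure" metric: the fraction of an
# occupation's O*NET-style tasks that show up in real AI usage records.
# All names and data below are invented for illustration.

# Occupation -> set of task IDs (stand-in for O*NET task statements)
occupation_tasks = {
    "computer_programmer": {"write_code", "debug_code", "review_code", "plan_architecture"},
    "paralegal": {"draft_memo", "file_documents", "summarize_case", "schedule_meetings"},
}

# Task IDs actually matched against AI usage data (stand-in for real logs)
observed_usage = {"write_code", "debug_code", "review_code", "draft_memo", "summarize_case"}

def observed_exposure(tasks, usage):
    """Fraction of an occupation's tasks with observed AI usage."""
    if not tasks:
        return 0.0
    return len(tasks & usage) / len(tasks)

for occ, tasks in occupation_tasks.items():
    print(occ, round(observed_exposure(tasks, observed_usage), 2))
```

The point of the distinction is visible even in this toy: theoretical exposure would ask whether an LLM *could* do each task, while observed exposure only counts tasks that turn up in usage data, so it is always the smaller number.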

So far, so reasonable. AI can do more than it’s currently doing. Adoption takes time. No mass unemployment yet.

But then you get to the parts that didn’t make the headlines.

The entry-level problem

Anthropic’s data shows that hiring of younger workers in AI-exposed occupations has slowed since late 2022. Not layoffs — the people already in these jobs are mostly fine. It’s the people trying to enter who are getting shut out. A separate analysis from Stanford researchers Brynjolfsson, Chandar, and Chen found the same pattern: employment declines in AI-exposed sectors are concentrated among workers under 25.

Tyler Atkinson at the Dallas Fed put numbers to this. Employment in computer systems design has fallen 5% since ChatGPT’s launch. Across the top 10% of AI-exposed industries, employment is down 1%, while employment in the rest of the economy has grown 2.5%. And critically, Atkinson argues, the decline comes not from layoffs but from a collapsing job-finding rate for young workers.

The job market hasn’t gotten worse for the people inside it. It’s gotten worse for the people trying to get in.

This is not unemployment. It won’t show up in the statistics that Anthropic’s study tracks. But it is displacement, of a particularly quiet kind. Firms don’t fire existing workers; they just stop hiring new ones. The position gets absorbed. The junior role disappears. Nobody makes the news, nobody files for benefits, but the pathway into the profession narrows.

Who’s actually exposed

Here’s where it gets counterintuitive. The workers most exposed to AI are not, as many assume, in routine manual or low-skill jobs. Anthropic’s data shows they tend to be older, female, more educated, and higher-paid. Computer programmers top the list at 75% observed task exposure. Knowledge workers — the people who were supposed to be safe from automation — turn out to be sitting directly in the path of LLMs.

The Dallas Fed analysis adds nuance. Atkinson distinguishes between codified knowledge (textbook-learnable, replicable) and tacit knowledge (gained through experience, harder to formalize). AI replicates codified knowledge effectively. It cannot replicate tacit knowledge. This means entry-level workers, whose value comes primarily from formal training, are more substitutable than experienced workers, whose value comes from judgment and pattern recognition built over years.

The wage data reflects this. In occupations with high experience premiums — where experienced workers earn significantly more than entry-level ones — AI exposure correlates with rising wages. In occupations with low experience premiums, AI exposure is associated with wage stagnation or decline. Same technology, opposite effects, depending on where you sit in the experience curve.
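The "experience premium" the wage comparison turns on can be illustrated with a toy calculation: experienced-worker pay as a multiple of entry-level pay within the same occupation. The occupations and wage figures below are invented, not drawn from the Dallas Fed analysis.

```python
# Toy illustration of an "experience premium": the ratio of experienced-worker
# pay to entry-level pay within an occupation. All figures are invented.

wages = {
    # occupation: (median entry-level wage, median experienced wage)
    "software_engineer": (85_000, 170_000),
    "data_entry_clerk": (32_000, 38_000),
}

def experience_premium(entry, experienced):
    """Experienced pay as a multiple of entry-level pay."""
    return experienced / entry

for occ, (entry, senior) in wages.items():
    print(f"{occ}: {experience_premium(entry, senior):.2f}x")
```

On the article's account, the hypothetical software engineer (a 2.00x premium) sits in the group where AI exposure correlates with rising wages, while the hypothetical data entry clerk (a 1.19x premium) sits in the group facing stagnation or decline.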

Augmentation or automation? Both.

Anthropic found that 57% of Claude’s professional use is augmentative — workers using AI as a tool to do their existing job faster or better. The remaining 43% is automated — AI directly performing tasks that a human would have done. The company frames this as evidence that augmentation dominates.

I’m not sure that’s the right framing. Forty-three percent direct automation is not a small number. And the boundary between augmentation and automation isn’t stable. A task that starts as “AI helps me draft this faster” can become “AI drafts this and I review it” and eventually “AI drafts this and nobody reviews it.” The augmentation phase is often just the automation phase with a human still in the loop — for now.

More importantly, even augmentation has distributional consequences. If one experienced worker with AI tools can do what previously required that worker plus two juniors, the experienced worker is augmented and the juniors are displaced. Productivity goes up. Employment goes down. Everyone cites the augmentation statistic. Nobody counts the juniors.

The codified knowledge trap

The Dallas Fed’s codified-vs-tacit framework is useful but incomplete. It assumes the boundary between codifiable and tacit knowledge is stable. It may not be.

What counts as “tacit” depends partly on the state of the art. Medical diagnosis was long considered deeply tacit — the experienced clinician’s intuitive pattern recognition. AI systems now match or exceed experienced clinicians on a range of diagnostic tasks, not because they acquired tacit knowledge but because they made the tacit codifiable through brute-force pattern matching on massive datasets. The same process could come for legal judgment, strategic decision-making, and other domains currently considered safe because they require “experience.”

If that happens, the experience premium that currently protects senior workers from AI displacement could erode. The shelter is real, but it may be temporary.

What this means

The Anthropic study is careful, methodologically serious, and honest about its limitations. The researchers explicitly acknowledge that their framework is most useful when impacts are ambiguous — and that they may become unmistakable later. They plan to revisit the analysis periodically, which is the right approach.

But the “AI hasn’t killed jobs” framing that travelled through media coverage misses what the data actually shows. The effects are not absent. They are early, uneven, and concentrated on the people with the least power to respond: young workers trying to enter professions, entry-level employees whose skills are most easily replicated, and workers in occupations where experience premiums are low.

None of these groups have political voice commensurate with the disruption they face. Junior workers don’t have unions or lobbying power. New graduates who fail to get hired often don’t register in unemployment statistics; many stay in education or stop searching and drop out of the count altogether. The statistical invisibility of these effects is part of why the “no job losses” narrative persists.

We have seen this pattern before with other technologies. The initial effects are subtle, concentrated on the margins, and easy to dismiss. By the time they become unmistakable, the window for proactive policy — retraining programs, educational reform, labour market protections — has narrowed. The question is not whether AI will reshape labour markets. The Anthropic study’s own data suggests it already is. The question is whether anyone is paying attention to the people being quietly pushed out before the numbers become impossible to ignore.


Abhinav Kumar is a development researcher at Jawaharlal Nehru University, specializing in labour, digital platforms, and the political economy of technology.
