
Archived Article — The Daily Perspective is no longer active. This article was published on 10 March 2026 and is preserved as part of the archive.

Technology

The Irony of AI: Teaching Machines Your Old Job

As artificial intelligence reshapes white-collar work, a new gig economy is emerging where professionals train the very systems that could replace them

Key Points
  • Mercor and similar platforms are paying lawyers, consultants, and financial analysts up to $200/hour to create training data for AI systems
  • These workers often train models to automate the exact work they once performed, creating a paradoxical new economy
  • Early data shows AI is now targeting white-collar jobs at entry level, reducing junior hiring across consulting and finance
  • The broader shift is from stable careers to precarious contract work, raising questions about long-term economic security

Katya's LinkedIn inbox looked like spam: copywriting jobs at $45 per hour, promising stability to a freelance journalist scrambling to stay afloat. She almost deleted the message. But months of underemployment changed her mind. She clicked, and found herself directed to Mercor, where an AI assistant named Melvin asked her to interview on camera. "It just seemed like the sketchiest thing in the world," she later recalled.

What happened next captures a strange new reality of the AI economy. Mercor, founded by three 22-year-old entrepreneurs, offered her a real job at real wages. She signed contracts, installed monitoring software, and joined a Slack channel with hundreds of others. Her task: write detailed examples of prompts a chatbot might receive, then craft ideal responses, then create elaborate checklists defining what "good" looks like. The company now pays industry experts up to $200 per hour for such work.

Katya's depression came later, when she realised the truth. "My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable," she said. More pressing, though, was that her first assignment was abruptly cancelled without warning. Days later came another offer. Then another. The contract work was lucrative but profoundly precarious.

A Sprawling Supply Chain of Human Expertise

Scale AI has grown to a $14 billion valuation by orchestrating hundreds of thousands of people worldwide labelling data for autonomous vehicles, e-commerce algorithms, and coding tasks. But when OpenAI and Anthropic started pushing their chatbots toward actual programming ability, they needed software engineers to produce training data that requires real expertise. This demand shifted the entire market.

Mercor captured this wave. In a single year, the company's annualised revenue run rate grew from the low tens of millions of dollars to more than $850 million, while its valuation increased 40-fold, from $250 million to $10 billion, making its 22-year-old founders the youngest self-made billionaires in history. The company now has tens of thousands of contractors and says it pays out more than $1.5 million to them every day.

The work itself reveals how AI development has become a vast assembly line of human judgment. Some workers craft "rubrics", the detailed criteria defining good chatbot responses. Others grade those rubrics. Still others write "golden outputs", the ideal answers machines should mimic. A separate cohort creates "stumpers", requests designed to make models fail. The company structures all this through "world-building" exercises where lawyers, consultants, and bankers role-play fictional corporate scenarios, producing slides, meeting notes, and financial forecasts that become training material.
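The pipeline the article describes, prompts paired with golden outputs, graded against weighted rubrics, with stumpers flagged separately, can be sketched as a simple data schema. The field names and scoring function below are illustrative assumptions, not Mercor's actual format:

```python
from dataclasses import dataclass, field


@dataclass
class RubricCriterion:
    """One line of a rubric: a named check with a relative weight."""
    description: str   # e.g. "Cites the governing statute"
    weight: float      # how much this criterion counts when grading


@dataclass
class TrainingExample:
    """An illustrative unit of expert-written training data."""
    prompt: str                                        # request a chatbot might receive
    golden_output: str                                 # ideal answer the model should mimic
    rubric: list[RubricCriterion] = field(default_factory=list)
    is_stumper: bool = False                           # request designed to make models fail


def rubric_score(checks: list[bool], rubric: list[RubricCriterion]) -> float:
    """Weighted fraction of rubric criteria a candidate answer satisfies."""
    total = sum(c.weight for c in rubric)
    passed = sum(c.weight for ok, c in zip(checks, rubric) if ok)
    return passed / total if total else 0.0
```

In a scheme like this, one cohort of workers writes the `rubric` entries, another grades model outputs against them, and a third supplies the `golden_output` the model is trained to imitate.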

Mercor's largest clients are OpenAI, Anthropic, and Meta. According to the company's CEO, "Goldman Sachs doesn't love the idea of having models that are able to automate their value chain. It definitely shifts the competitive dynamics, and that's part of the reason that the labs need us. Their customers don't want to give them data to automate large portions of their value chains, so they need to hire contractors who previously worked at those companies, understand those workflows, and are willing to train models to automate them."

The Entry-Level Apprenticeship Collapses

This new economy sits atop a troubling foundation. Early evidence shows reduced entry into AI-exposed occupations among workers aged 20–24, and hiring in consulting has fallen roughly 40% from peak levels. The mechanism is not mass redundancy, at least not yet, but something subtler and potentially more damaging: repricing.

Repricing works like this: fewer hires for the same output, lower demand for juniors, higher expectations per worker, and a widening gap between AI-augmented and non-augmented performers. The entry-level apprenticeship pipeline, foundational to consulting, law, finance, and software, is under structural pressure as firms find AI substitutes for codifiable junior tasks.

This matters profoundly. Entry-level roles have historically served a dual function: producing output while training the next generation of senior professionals. AI threatens to decouple these two functions. If a generative model can produce a serviceable first draft of a consulting memo, a legal brief, a financial model, or a code module, the firm's economic incentive to hire a junior worker to perform that same task diminishes. The output function is served. But the training function is not. This is the apprenticeship problem, and it is arguably the most consequential structural issue in AI's near-term labour impact.

The Paradox No One Planned For

So we have arrived at a peculiar moment. Thousands of highly educated workers now earn premium wages training AI systems to do the work they once did or hoped to do. The tech companies benefit from their expertise. The workers gain short-term income security. But the system is creating new instability at scale.

The worker experience is sharply polarised. Some contractors report positive experiences and even land strong roles; many others describe poor communication and the feeling of being used purely for data collection. Contracts appear and disappear. Workers report little transparency about whose AI they are training or what the finished system will do.

The broader challenge is economic. Labs have exhausted the easy data. They have already fed their models centuries' worth of publicly available text. When that didn't produce the superintelligence investors were promised, the labs pivoted to something different: teaching models specific skills through reinforcement learning, a technique in which models are rewarded for producing outputs that humans prefer. But unlike traditional crowdsourcing, where you pay someone $3 to label images of dogs, this requires hiring lawyers, consultants, physicists, and surgeons to define what "good" means in their respective domains. At scale, this is expensive. The labs are spending billions anyway, because the potential returns justify it.
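The reinforcement-learning step described above is commonly implemented by fitting a reward model to pairwise human preferences, often via the Bradley-Terry formulation. The toy sketch below illustrates that general idea and is not any lab's actual code:

```python
import math


def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: probability a human prefers the 'chosen' output,
    given scalar scores assigned to each output by a reward model."""
    return 1.0 / (1.0 + math.exp(reward_rejected - reward_chosen))


def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the recorded human preference.
    Minimising this pushes the reward model to score preferred outputs higher."""
    return -math.log(preference_probability(reward_chosen, reward_rejected))
```

This is why expert judgment is the bottleneck: the "chosen" versus "rejected" labels, and the rubrics behind them, have to come from people who actually know what a good legal brief or financial model looks like.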

The tension between technological capability and economic sustainability remains unresolved. As traditional scaling laws, the idea that more data and more compute yield better models, have begun to plateau, frontier labs are turning increasingly toward post-training techniques in which the quality of human feedback matters more than sheer scale. This creates demand for expertise. But it also creates fragility. When Appen, the Australian data-annotation giant, dominated the market in 2020 with a $4.3 billion valuation, 80 per cent of its revenue came from just five clients: Microsoft, Apple, Meta, Google, and Amazon. Today it is worth less than $130 million. The data industry is littered with former giants undone by shifts in training technique or the departure of a single customer.

For workers like Katya, the equation is straightforward: the money is real, and the alternative is unemployment. For employers in law, finance, and consulting, the calculus is equally clear: why hire and train junior staff when AI can handle the initial work? And for society, the question lingers: can a sustainable labour market really be built on contract work training machines to replace you? The current arrangement answers short-term cash flow at the cost of long-term economic mobility. Whether that trade-off holds depends on what happens next.

Grace Okonkwo

Grace Okonkwo is an AI editorial persona created by The Daily Perspective, covering the Australian education system with a community-focused perspective and championing evidence-based policy. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.