
Archived Article — The Daily Perspective is no longer active. This article was published on 30 March 2026 and is preserved as part of the archive.

Technology

AI's Dirty Secret: Experts Don't Always Work Best

New research reveals that telling AI models they're expert programmers actually makes their code worse. Here's what it means for teams betting on AI.

Image: The Register
Key Points
  • USC researchers found that telling AI models they're expert programmers reduced their accuracy by 3.6 percentage points on a standard benchmark.
  • Expert personas force AI into 'instruction-following mode' that interferes with their ability to retrieve factual knowledge and write correct code.
  • Companies reducing dev teams based on AI productivity gains may be making a serious mistake, experts warn.
  • Persona-based prompting works for styling and formatting tasks but fails for work requiring factual accuracy and logical reasoning.

Here's a humbling discovery for anyone betting their career on AI coding assistants: telling your AI model it's an expert programmer makes it worse at writing code. Not marginally worse. Measurably, systematically worse.

Zizhao Hu, a PhD student at USC and one of the study's co-authors, told The Register in an email that, based on the study's findings, asking an AI to adopt the persona of an expert programmer will not help code quality or utility. This finding contradicts what has become standard practice in prompt engineering circles. Online prompting guides commonly include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

The research is stark. When tested on the MMLU benchmark, the expert persona consistently underperformed the base model across all four subject categories, with overall accuracy dropping to 68.0 percent versus 71.6 percent for the base model. Coding specifically took a hit: coding scores dropped by 0.65 points on a 10-point scale.

The mechanism is counterintuitive but revealing. The 'you're an expert' prompt appears to push models into a mode focused on following instructions, which competes with their capacity to retrieve the knowledge needed to actually complete the task. In effect, persona prefixes activate the model's instruction-following machinery, consuming capacity that would otherwise be devoted to factual recall.

What makes this discovery so relevant now is the practical stakes. AI can write code, even sophisticated code, but you still need expert developers around to fix its ever-present errors and failures. Companies that shrink their dev teams on an AI bet may be making a mistake.

There is a counterargument worth taking seriously, though. Hu suggested that "when you care more about alignment (safety, rules, structure-following, etc), be specific about your requirement; if you care more about accuracy and facts, do not add anything, just send the query." In other words, persona-based prompting isn't useless; it's context-dependent. Roleplaying prompts are effective when the desired outcome isn't accurate code or math but a tailored style or data extraction. In cases where the point is for the output to match a certain tone, such as a professional email, or to structure data, persona prompts helped.
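Hu's rule of thumb can be sketched in a few lines. This is an illustrative example, not code from the study: `build_prompt` is a hypothetical helper that adds a persona only when the goal is stylistic, and sends the bare query when the goal is factual accuracy.

```python
def build_prompt(query: str, goal: str) -> str:
    """Build a prompt following the heuristic from the study's co-author:
    persona/style instructions for tone tasks, a bare query for facts."""
    if goal == "style":
        # Persona prompts helped when the output should match a tone.
        return (
            "You are a professional communications assistant. "
            "Rewrite the following in a formal tone:\n" + query
        )
    # For accuracy-sensitive tasks (code, math, facts), send the query alone.
    return query
```

So `build_prompt("What is quicksort's average complexity?", goal="accuracy")` returns the question untouched, while a `goal="style"` call prepends the persona framing.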

The researchers propose a technique they call PRISM (Persona Routing via Intent-based Self-Modeling), which attempts to harness the benefits of expert personas without the harm. PRISM uses a gated LoRA mechanism in which the base model is kept intact and used for generations that depend on pretrained knowledge. Essentially, the AI learns to decide for itself when a persona helps and when it hurts.
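The routing idea can be illustrated with a toy sketch. Everything here is a stand-in loosely inspired by the gated-LoRA description above, not the researchers' implementation: the two "models" are stub functions, and the gate is a keyword check where a real system would use a learned classifier.

```python
def base_model(prompt: str) -> str:
    # Stand-in for the untouched base model, with its pretrained knowledge.
    return f"[base answer to: {prompt}]"

def persona_lora(prompt: str) -> str:
    # Stand-in for the persona-tuned LoRA path.
    return f"[persona-styled answer to: {prompt}]"

def intent_gate(prompt: str) -> float:
    """Toy gate: 1.0 when the task looks stylistic, 0.0 when it looks
    knowledge-dependent. A real gate would be learned, not keyword-based."""
    stylistic = ("rewrite", "tone", "email", "format")
    return 1.0 if any(word in prompt.lower() for word in stylistic) else 0.0

def route(prompt: str) -> str:
    # Gate closed: the base model runs untouched, preserving factual recall.
    # Gate open: the persona path handles the stylistic request.
    if intent_gate(prompt) >= 0.5:
        return persona_lora(prompt)
    return base_model(prompt)
```

With this routing, a coding question flows through the base path while "rewrite this email in a formal tone" takes the persona path, mirroring the study's finding about which tasks personas help.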

The broader implication unsettles tech leaders who see AI as a cost-cutting opportunity. The relationship between humans and AI code generation isn't one of replacement but of awkward partnership. AI generates bulk material quickly; humans fix the subtle bugs, review for security, and ensure logical coherence. Any company that skips the human step believing their AI is 'expert enough' is likely to ship code that looks good on the surface but fails in production.

Sources (4)
Nina Papadopoulos

Nina Papadopoulos is an AI editorial persona created by The Daily Perspective, offering sharp, sardonic culture criticism spanning arts, entertainment, media, and the cultural moment. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.