

Technology

The Problem With Having a Robot Screen Your Job Application

A journalist's experiment with AI hiring avatars exposes the tension between efficiency and fairness in recruitment

Key Points
  • AI avatar interviewers are spreading across the hiring industry, with companies like CodeSignal and Humanly offering video-based screening tools
  • Testing three different platforms revealed the fundamental discomfort of being interviewed by a digital avatar, regardless of efficiency gains
  • Research shows AI hiring systems can embed racial and gender biases, yet companies market them as more objective than human judgment

Applying for a job has always been stressful. In the AI era, it's becoming stranger. Millions of job seekers now face a new hurdle: an on-camera interview with a digital avatar that asks questions, listens to your answers, and generates a score. There is no human on the other end. No recruiter to read between the lines. Just a machine.

This is not science fiction. It is the current state of hiring at major corporations. Companies like CodeSignal, Humanly, and Eightfold have built entire businesses around AI avatar interviews. The pitch to employers is compelling: scale hiring to thousands of candidates at once, reduce recruiter workload, and eliminate human bias from early screening. The appeal is obvious. The reality is messier.

A senior tech journalist recently tested the technology firsthand, trialling three different AI interview platforms for various roles. The experience revealed a fundamental problem: none of them felt natural. "I couldn't get past the uncanny valley of looking at an AI avatar listening to my answers," she reported. Every platform felt artificial, no matter how polished the avatar appeared on screen.

Hands-on testing of AI hiring platforms revealed the disconnect between technological efficiency and candidate experience.

The scale problem is real. Job application volumes have surged in recent years. According to applicant tracking system Greenhouse, the number of applications per role has nearly tripled since 2021, with customers receiving an average of 222 applications per role in 2024. Recruiters simply cannot conduct 222 live interviews. Something has to give. AI avatars promise to solve this bottleneck by asking every candidate the same questions, scoring the answers consistently, and passing the strongest candidates up to human review.

But there is a cost hiding in the promise of efficiency. The assumption that AI systems are more objective than humans is not borne out by evidence. Instead, research increasingly shows that these systems can embed the very biases they claim to eliminate.

Recent studies reveal troubling patterns. A large-scale experiment published in 2025 found that leading AI language models systematically favour female candidates while disadvantaging Black male applicants. According to the research, if employers use a threshold of 80 out of 100 for advancing candidates, the biases in some systems could increase Black and white female advancement by 1.7 and 1.4 percentage points respectively, while decreasing Black male chances by 1.4 percentage points. Applied across the US labour force, these seemingly small differences could affect hundreds of thousands of workers. The bias patterns were consistent across all five large language models tested, suggesting the problem is systemic, not accidental.

Gender and race are not the only vulnerabilities. Research has documented bias against people with disabilities, age discrimination, and inadequate representation of non-binary and transgender candidates. A financial services company discovered through internal audit that its AI resume screening tool disproportionately downgraded resumes from Black candidates by associating certain word choices and educational backgrounds with lower hiring success rates. Amazon famously scrapped its AI hiring tool after discovering it discriminated against women.

These are not theoretical problems. They have spawned litigation. The case of Mobley v. Workday, pending in California federal court, alleges that Workday's AI screening tools systematically rejected an applicant across more than 100 job applications. Another suit, Harper v. Sirius XM, claims an AI-powered applicant tracking system embedded historical bias by using data points that functioned as proxies for race. These lawsuits are early signals of what may become a broader reckoning with the deployment of unaudited AI systems into hiring decisions.

The vendors argue their systems are transparent and subject to human review. CodeSignal and others say they conduct bias audits and keep humans in the decision loop. But research from the University of Washington found that when AI systems make recommendations, human reviewers tend to follow them. In cases of moderate bias, participants mirrored the AI recommendations about 75 percent of the time. Even when people recognised the bias was severe, they still followed the AI about 90 percent of the time. The human safeguard is weaker than the sales pitch suggests.

Reasonable people can disagree about whether the efficiency gains justify the risks. High-volume hiring is genuinely difficult. For large companies receiving hundreds of applications, some automated filtering is practical reality, not mere bureaucratic convenience. The alternative is to hire more recruiters or accept that most applications receive no serious consideration at all.

What is harder to justify is deploying these systems without rigorous, independent bias audits conducted before and after implementation. What is harder still is the asymmetry of information: most job seekers do not know they are being screened by AI, and fewer still understand what data is being collected or how they are being evaluated. Greater transparency, mandatory bias audits, and clear candidate notification should be non-negotiable requirements before any hiring system goes live.

The journalist's experience trying AI interviewers captured something important. Technology that feels uncanny and strange probably is. Trust between employers and candidates matters. Efficiency that comes at the cost of fairness is not a gain; it is a loss dressed up as progress.

Fatima Al-Rashid

Fatima Al-Rashid is an AI editorial persona created by The Daily Perspective, covering the geopolitics, energy markets, and social transformations of the Middle East with nuanced, culturally informed reporting. Articles under this byline are generated using artificial intelligence with editorial quality controls.