
Archived Article — The Daily Perspective is no longer active. This article was published on 9 March 2026 and is preserved as part of the archive.

Technology

How AI-generated job offers became your biggest workplace risk

Scammers are using generative AI to craft recruitment cons so convincing that tech professionals are the primary targets

Key Points
  • AI-powered job scams on LinkedIn and recruitment sites have surged dramatically, using generated text and deepfake interviews to trick job seekers
  • Criminals use generative AI to write flawless job descriptions and professional emails, erasing the typos that once exposed fraudsters
  • Tech-savvy professionals are especially vulnerable; one security executive was nearly fooled by a convincing fake recruiter with an anime avatar profile picture
  • Red flags include requests to move conversations to personal messaging apps, unusual email signatures, and any demand for upfront payment or personal banking details
  • Legitimate verification steps include confirming recruiter emails match official company domains and requesting formal interviews through corporate calendars

The email landed in an inbox that belongs to someone most would expect to be immune to deception. A professional working in technology, accustomed to evaluating credential claims and spotting fraudulent communication. Yet the recruiter's pitch was flawless.

A recruiter messaged about a position at a well-known developer tools company, with a professional tone, reasonable compensation, and a polished email signature. When the candidate declined hourly billing, the recruiter pivoted to per-article rates and offered detailed feedback on their resume, suggesting the role was nearly secured with minor tweaks.

What should have been a career opportunity was actually a sophisticated fraud, one of thousands now powered by generative artificial intelligence. Generative AI lets scammers produce flawless grammar, credible role descriptions, and tailored praise at scale. Security firms including Proofpoint and Microsoft have warned that large language models are lifting the quality of social engineering, reducing the obvious typos that used to give phishers away.

The scale of the problem is staggering. LinkedIn identified and removed 80.6 million fake accounts at the point of registration between July and December 2024. In its 2025 Digital Safety Transparency Report, LinkedIn revealed it had removed more than 25 million fake accounts in the first quarter alone, many linked to coordinated fraud campaigns in Asia, Africa, and Eastern Europe.

Scammers are posting jobs nearly indistinguishable from legitimate listings, some appearing on trusted sites such as LinkedIn or ZipRecruiter, others coming from spoofed or hacked recruiter email addresses. Even highly educated and tech-savvy job hunters are at risk.

When the interview isn't real

The techniques have grown more sophisticated. In January 2025, several U.S. and European job seekers reported receiving LinkedIn messages from what appeared to be legitimate recruiters at big-name companies such as Deloitte, Google, and Amazon Web Services. The "recruiters" invited applicants to interviews conducted via video calls, except the people on camera weren't real. Victims later discovered they had been speaking to AI-generated avatars with cloned voices, crafted from scraped social media videos of actual employees.

After several rounds of "interviews," applicants were offered positions, only to be asked to pay for onboarding equipment or provide personal banking details for "salary setup." By the time they realised the scam, the money was gone and their data compromised.

Even security experts are finding themselves caught off guard. When an AI security startup founder posted job openings on LinkedIn, he received a direct message within a couple of hours from someone claiming to know a candidate for the security researcher role. The purported job-seeker's profile picture wasn't of a real person; it looked like an anime character.

The financial cost to victims is significant. In the first half of 2025, online job scams rose 19 per cent year on year and cost Americans nearly US$300 million, with the typical victim losing around US$2,000.

How to verify before you apply

Protection requires moving beyond surface-level checks. One professional who caught the scam requested a call scheduled via the company's corporate calendar, asked for the recruiter's company email address, and offered to route the conversation through the vendor management or procurement team listed on the company's careers site. Legitimate recruiters can move to a company email, a calendar invite, or a vendor portal; scammers push for messaging apps or keep you in personal email.

Verify independently by calling the company's main switchboard or emailing an address listed on its official careers page to confirm the recruiter's identity and the role's existence.
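For the technically inclined, the domain check described above can even be automated. The sketch below is a minimal, illustrative helper (the function name and example addresses are hypothetical, and the two-label domain comparison is a deliberate simplification that does not handle suffixes like .co.uk):

```python
# Minimal sketch: compare a recruiter's email domain against the domain
# of the company's official careers page before replying. Illustrative
# only; example names and addresses are hypothetical.
from urllib.parse import urlparse

def recruiter_domain_matches(email: str, careers_url: str) -> bool:
    """Return True if the email's domain shares a registrable domain
    with the careers site (e.g. 'jobs.example.com' -> 'example.com')."""
    email_domain = email.rsplit("@", 1)[-1].lower()
    site_host = urlparse(careers_url).hostname or ""

    # Naive registrable-domain check: compare the last two labels.
    # Real-world code should consult the Public Suffix List instead.
    def base(host: str) -> str:
        return ".".join(host.lower().split(".")[-2:])

    return base(email_domain) == base(site_host)

# A lookalike domain such as 'example-careers.com' fails the check.
print(recruiter_domain_matches("jane@example.com", "https://example.com/careers"))          # True
print(recruiter_domain_matches("jane@example-careers.com", "https://example.com/careers"))  # False
```

A passing check is necessary but not sufficient: a recruiter's genuine mailbox can be compromised, which is why the phone-call verification above still matters.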

No genuine employer asks you to buy equipment, pay for training, or share banking data before documentation and a signed agreement are in place. Treat any request for money during hiring as a red flag.

Be wary of video calls that feel "off": subtle lip-sync errors, audio lag, or overly generic backgrounds may signal you're speaking to a deepfake avatar.

For Australian job seekers applying internationally or to multinational companies, the risk is particularly acute. The platforms you trust most are where scammers now operate with the greatest effect. If a dream role appears out of the blue and the path to "you're hired" seems suspiciously smooth, treat that as your biggest warning sign.

LinkedIn itself urges users to report suspicious accounts immediately and is rolling out AI watermarking technology in 2025 to help detect synthetic content on its platform. Until such safeguards mature, individual vigilance remains the most reliable defence.

Oliver Pemberton

Oliver Pemberton is an AI editorial persona created by The Daily Perspective, covering European politics, the UK economy, and transatlantic affairs with the dual perspective of an Australian abroad. Articles under this persona are generated using artificial intelligence with editorial quality controls.