When Julia Angwin, an investigative journalist who has spent decades reporting on privacy and technology, discovered that Grammarly's parent company Superhuman had created an AI simulation of her editorial voice and was selling it for $12 a month, her reaction was professional but firm. "I make my living as a writer and an editor," Angwin told media outlets after filing suit. "This is a skill that I've honed over decades, and the idea that someone would go out and try to sell like a fake AI version of me is an existential threat to my entire way of life."
She is now leading a class action lawsuit against Superhuman, arguing that the company violated her privacy and publicity rights, along with those of the other writers it impersonated. More than 50 people have reached out to join the suit since it was filed. What happened to Angwin is instructive not because it is unique, but because it is becoming routine. Tech companies are betting that professional expertise can be appropriated, packaged, and sold without permission, banking on the assumption that legal consequences will arrive slowly while profits arrive fast.
The mechanics of Grammarly's "Expert Review" tool were straightforward enough. The feature used AI to simulate editorial feedback from hundreds of recognisable figures, among them novelist Stephen King and tech journalist Kara Swisher, none of whom gave permission to be included. For $12 a month, users could receive editing advice attributed to New York Times opinion contributor Julia Angwin, even though Angwin was never asked whether her name and skills could be represented and sold on the platform, and never personally provided any assistance to those presumably paying for it.
The feature was eventually disabled: Superhuman announced that Expert Review was being taken down 'for a redesign' before the lawsuit was filed. But the company's initial response to criticism was telling. Rather than seeking permission first, it set up an email address that people could write to in order to "opt out" of being AI cloned, without actually reaching out to tell people it was giving bad advice in their names.
The pattern here matters. The feature appeared to work by analysing public writing samples to create synthetic versions of expert voices. Users would get feedback framed as if Nilay Patel or another recognised writer had reviewed their work, yet those experts never consented to having their editorial judgment cloned, packaged, and sold as a product feature. What Superhuman appears to have assumed was that the law around AI was too new or too unclear to enforce against them. They were wrong.
According to Peter Romer-Friedman, Angwin's lawyer, "For over 100 years, New York law has prohibited companies from using a person's name for commercial purposes without their consent. The law does not provide an exception for technology companies or AI." It is a straightforward legal principle, not a novel interpretation.
The Grammarly case is not isolated. Across the gaming industry, similar assumptions about permission and fair use are colliding with player scrutiny and regulatory frameworks. Crimson Desert sold 2 million units in a single day after its release, but a controversy is now brewing around developer Pearl Abyss's game: players have taken to Reddit, Bluesky, and other social media platforms with images of in-game art they suspect was created with generative AI.
On Bluesky, user Lex Luddy shared a series of images of an ornate portrait depicting mushy-faced warriors atop many-limbed horses. The image doesn't suit Crimson Desert's more grounded art direction; instead, it aligns with the telltale look of generative AI imagery. The detail that makes this legally and ethically significant is the absence of disclosure. Since early 2024, Steam has required publishers to disclose generative AI use on their games' store pages. If Crimson Desert really does use generative AI art, even accidentally, developer/publisher Pearl Abyss is breaking a significant rule.
There is an important distinction worth making. The Crimson Desert situation may ultimately prove to be sloppy quality control rather than deliberate deception: the paintings could be placeholder art accidentally left in at release, a poor attempt at getting away with AI art, or simply wonky for some other, non-AI-related reason. But the absence of disclosure removes that benefit of the doubt. If the art is AI-generated and Pearl Abyss failed to disclose it, the company is in breach of Steam's policy.
What unites these incidents is a pattern of cost-cutting that treats ethical and legal compliance as an obstacle to overcome rather than a floor to build from. It is also fiscally unsustainable. The speed of Grammarly's reversal suggests the company recognised the legal and reputational risks: enterprise customers don't want features that could expose them to claims of unauthorised use of someone's likeness or professional identity. Companies that cut corners when they build often find themselves having to demolish whole buildings.
The reasonable question is whether current regulatory frameworks are sufficient. Steam's requirement since early 2024 for publishers to disclose generative AI use is a start, but it operates on an honour system. Pearl Abyss could choose to ignore it. Grammarly chose to ignore basic consent principles until lawyers forced them to stop. New York and California privacy law already existed to protect people's names and likenesses, yet Superhuman apparently counted on the technical novelty of AI to make old rules irrelevant.
They were wrong, but they will likely not be the last company to make the same calculation. What changes the equation is accountability. When journalists can document the theft of their expertise, when courts take seriously the notion that a person's professional identity has value, when companies face meaningful costs for cutting ethical corners, the calculus shifts. Right now, for some firms, the equation still favours moving fast and apologising later. That needs to become more expensive.