The “Expert Review” Debacle: How Grammarly’s AI Ambitions Hit a Wall of Ethics


The transition from a helpful grammar checker to an all-encompassing AI agent is a path fraught with legal and ethical landmines. This is precisely what happened to Grammarly (now rebranding as Superhuman) following the controversial launch and rapid demise of its “Expert Review” feature.

What began as an attempt to lend authority to AI-generated suggestions has spiraled into a crisis involving unauthorized use of likenesses, broken citations, and a looming class-action lawsuit.

The Rise and Fall of “Expert Review”

In an effort to move beyond simple spell-checking, Grammarly launched a feature called Expert Review. The premise was ambitious: the AI would provide writing suggestions “inspired by” world-renowned professionals, authors, and academics.

To add a veneer of credibility, the interface displayed these suggestions alongside the names and verification-style icons of famous figures. However, the implementation was deeply flawed:

  • Unauthorized Likenesses: The feature used the names of living journalists (including staff from The Verge), famous authors like Stephen King, and even deceased academics like Carl Sagan—all without their consent or compensation.
  • Hallucinated Authority: Instead of providing genuine insights, the AI often generated generic “word salad.” In one instance, advice attributed to journalist Nilay Patel simply suggested adding “urgency” and “intrigue” to headlines.
  • Broken Links and Paywall Bypassing: While the feature claimed to be “inspired” by published works, the provided source links were often broken or redirected to web archives of paywalled articles that contained no relevant editing advice.

A Failure of Consent and Attribution

The fallout from the feature’s discovery sparked a heated debate over the definition of attribution versus appropriation.

When confronted, Superhuman CEO Shishir Mehrotra defended the practice by arguing that the AI was merely referencing publicly available work. However, critics—including the very journalists whose names were used—argued there is a fundamental difference between citing a source and “making something up” and slapping a person’s name on it to sell a service.

“This wasn’t an attribution,” Nilay Patel argued during a confrontation on the Decoder podcast. “You just made something up and put my name on it… It’s not something I would ever say.”

The company’s initial response—an email inbox through which experts could request to “opt out”—was widely criticized as an inadequate remedy for the unauthorized use of professional identities. Under intense pressure, Superhuman eventually disabled the feature entirely, promising to “reimagine” it with better controls for experts.

The Legal and Cultural Fallout

The “Expert Review” saga is not just a PR blunder; it has entered the courtroom. Investigative journalist Julia Angwin has filed a class-action lawsuit against Superhuman, alleging violations of privacy and publicity rights under New York and California law.

Beyond the legalities, this incident highlights a growing tension in the AI era: the extractive nature of generative models.

The trend is clear: AI companies are ingesting vast amounts of human intellectual property to create products that mimic the expertise of the very people they are “learning” from—often without permission, credit, or compensation. This creates a parasitic relationship where the creator’s work is used to build a tool that could eventually compete with them.

Conclusion

The Grammarly/Superhuman controversy serves as a cautionary tale for the AI industry, proving that attaching a famous name to an AI suggestion does not create authority—it creates liability. As companies race to build “AI agents,” the industry must decide whether it will collaborate with human experts or continue trying to automate their identities without consent.