Grammarly has deactivated a contentious artificial intelligence feature known as Expert Review, which generated editing suggestions by imitating the writing styles of notable authors and academics. The move comes as the company confronts a multimillion-dollar class-action lawsuit, filed in the Southern District of New York, alleging unauthorized commercial use of real individuals' identities.
Lawsuit Alleges Unlawful Use of Names
The lawsuit targets Superhuman, Grammarly's parent company, arguing that using personal names for profit without permission violates protections against the unauthorized commercial use of a person's identity. Plaintiffs claim damages exceeding $5 million (approximately £3.7 million), underscoring the financial stakes in this dispute over digital identity and intellectual property.
Backlash from Featured Writers
Since the feature gained public attention, several of the writers it imitated have voiced strong objections to their inclusion. Tech journalist Casey Newton, one of those featured, criticized Grammarly for monetizing identities without their consent, calling it a deliberate and unethical choice. Similarly, Vanessa Heggie, an associate professor at the University of Birmingham, expressed outrage on LinkedIn, describing the inclusion of the late academic David Abulafia as obscene.
Investigative journalist Julia Angwin, the lead plaintiff, told the BBC she was shocked, saying she had never thought of her editing skills as something that could be stolen. Her lawyer, Peter Romer-Friedman, reported interest from more than 40 writers within 24 hours of filing the suit, indicating widespread concern in the literary and academic communities.
Grammarly's Response and Apology
Grammarly, originally launched in 2009 as a spelling and grammar checker, expanded into generative AI features last year, including Expert Review. The tool was marketed as providing subject-matter expertise and personalized feedback to enhance writing for academic or professional purposes. However, in response to the backlash, Superhuman's CEO, Shishir Mehrotra, issued a public apology on LinkedIn, acknowledging that the feature misrepresented voices and promising a redesign.
Mehrotra told the BBC that Expert Review saw minimal usage during its brief availability and was already slated for removal before the lawsuit was filed. He maintained that the legal claims lack merit and that Superhuman will vigorously defend against them, even as the company reevaluates its approach to AI-driven features.
Broader Implications for AI and Ethics
This incident raises critical questions about the ethical boundaries of artificial intelligence in creative and professional fields. As AI tools become more advanced, the potential for misuse of personal identities and styles without consent poses significant legal and moral challenges. The case underscores the need for clearer regulations and consent mechanisms in the tech industry to protect individuals' rights and livelihoods.
Moving forward, Grammarly's experience may serve as a cautionary tale for other companies developing AI features that involve real people, emphasizing the importance of transparency and permission in innovation.