My years-long dislike of Grammarly has finally been vindicated


I first learned about Grammarly, an AI writing assistant, about 10 years ago, when its YouTube ads suddenly appeared everywhere, clinging to my precious videos like a swarm of spotted lanternflies. At first this seemed harmless: the high-pitched whine of a buzzy new startup that would soon fade into obscurity. Mostly I was confused by the size of its advertising budget. I wasn’t alone. But the ads never slowed down, and after enough unskippable Grammarly spots, the script was ingrained in my head: “Writing isn’t that easy, but Grammarly can help.” The ads irritated me so much that, on principle, I tried to coat my brain in Teflon so that any information about Grammarly would slide right off. This meant I rarely registered what exactly the company did. But even back then, I hated Grammarly before I had a real reason to.

Founded in 2009 and rebranded as Superhuman last fall, Grammarly uses tools like machine learning to proofread people’s writing. Think of it as Clippy, Microsoft’s Office Assistant, without any of Clippy’s charm: it checks grammar and spelling and suggests fixes. The ad I saw most often starred Tyler, who needs Grammarly’s help to email his boss, Anita. The logic of the ad is that Tyler wants Anita to like him, but he doesn’t want to seem unsure of himself. So Grammarly helps him swap out phrases like “really helpful” for more polished words like “informative,” which, we are told, will land better with Anita. Tyler’s successful email means Anita replies within minutes, and soon the two are standing companionably close, riding the elevator together. I remember wondering, every time I was forced to watch this ad: will Tyler and Anita fall in love?

The product seemed rather benign, considering the miserable modern climate of obviously malicious tech companies running far more offensive ads. But not to be outdone, Grammarly introduced generative AI features in 2023, which, among other things, now write for users. In the years since, the company has expanded its generative AI suite with features like its “AI Instagram Caption Generator” and its “Improve It” feature, which lets you rewrite text in one of the following tones: diplomatic, exciting, inspiring, friendly, empathetic, assertive, confident, or persuasive. All of this registered as the usual stupid, mediocre gen-AI bluster.

Then, last August, Grammarly launched a particularly grotesque AI tool called “Expert Review.” The tool provided subscribers with “insights from subject matter experts and trusted publications,” according to a since-deleted Grammarly FAQ. These “subject matter experts” include writers living and dead, like Stephen King and Carl Sagan, and Grammarly promised they would critique your text and offer tips for revision. But let me be clear: these experts are not Stephen King or Carl Sagan. They are not people at all, but AI-generated sock puppets that have nothing to do with the people whose names they bear. (The writer Ingrid Burrington called them “sloppelgangers.”) None of these writers consented to having their names or likenesses impersonated by the company. And it goes without saying that the advice was often bad or completely meaningless.

The people running Grammarly clearly saw the potential illegality of this feature and appended the following disclaimer: “References to experts on Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement of that person or entity,” as reported by Casey Newton, one of the many journalists and writers impersonated by Expert Review.

After a surge of reports put the tool under scrutiny, Grammarly graciously offered any author whose identity had been appropriated by Expert Review the chance to opt out by emailing expertoptout@superhuman.com, an email that, one assumes, was not composed with the product’s help. This placed the onus on living writers to discover for themselves whether they were being impersonated. Grammarly published no list of its “experts” and offered no way to search for them; short of signing up and running text through the tool to see whom it suggested, you had no way to know if your own name was in there. And of course, the opt-out process did nothing for the many dead authors whose identities the company had appropriated.

On Wednesday, the journalist Julia Angwin filed a class action lawsuit against Superhuman for using her name and identity, and those of others, without consent. “I worked for decades to hone my skills as a writer and editor, and was distressed to discover that tech companies were selling fake versions of my hard-earned expertise,” Angwin said in a statement. The same day, Superhuman CEO Shishir Mehrotra apologized and announced via LinkedIn that the company was disabling Expert Review and “reimagining the feature” to make it more useful to users while giving experts direct control over how they are represented, or whether they are represented at all.

The idea of an AI company ventriloquizing the living and the dead to sell a product virtually indistinguishable from ChatGPT is repulsive, but above all it is predictable. Like Sora and Anthropic, Grammarly is a business built on theft, one that treats consent as an afterthought. LLMs are trained on real writing: books and articles, the labor of real people who shared ideas, perspectives, and skills across centuries. “Most of the work I have published already appears to be within these models, shaping the results in ways I have never agreed with and will never fully understand,” Newton wrote, adding that attaching his name to the product was simply bad manners.

Grammarly has never been interested in writing, or in improving writing. Grammarly’s only interest is making money. In May 2025, the company announced that it had closed a $1 billion financing round, which will presumably fund, among other things, more ads. The company’s business model depends on making people feel insecure about their own writing, convincing them they need an AI assistant to do simple things like send an email. You don’t need a computer wearing Stephen King’s skin, or bad advice from a fake bell hooks, to write better emails. It makes a certain sense that a company that fundamentally misunderstands what makes writing good would name its features after adverbs, the part of speech notorious for diluting or obfuscating meaning. As Stephen King actually wrote, “I believe the road to hell is paved with adverbs, and I will shout it from the rooftops.”

It is unclear what repercussions Grammarly will face beyond this retraction and its meaningless apology. The class action lawsuit reportedly seeks at least $5 million in damages. Of course, a class action payout alone is not enough. In an ideal world, the plaintiffs would sue Grammarly out of existence and into the fiery depths of hell, setting a precedent for other institutions built entirely on theft. Until then, let’s hope for the swift bankruptcy of a company that was terrible long before it succumbed to AI psychosis.
