Study finds AI is changing the style and substance of human writing



Does money lead to happiness?

Researchers from a consortium of West Coast universities asked 100 human participants this age-old question, but not out of their own pursuit of happiness. Instead, the researchers wanted to know how AI systems could manipulate the participants’ written answers.

The research team found that users who relied heavily on large language models (LLMs) produced responses that differed significantly in meaning from the answers of participants who partially relied on LLMs or avoided their use altogether, suggesting that heavy AI use alters the writing style as well as the substance of humans’ arguments.

“LLMs are pushing essays away from anything that could have been written by a human,” said Natasha Jacques, one of the study’s lead authors and a professor of computer science at the University of Washington, highlighting the “blandification” of writing that relies on AI systems. “They just change human writing in a very big way, and it’s different from what humans would have done otherwise.”

The new research, peer-reviewed and accepted at an upcoming workshop at a major AI conference, found that people who relied heavily on LLMs produced essays that answered the happiness question with a neutral response 69% more often than participants who did not use AI or only used it for light edits. Participants who used AI sparingly or avoided it altogether tended to submit essays that took a clear positive or negative stance on the relationship between money and happiness.

In addition to the impact of AI on the meaning of essays, the researchers found that heavy reliance on AI systems changed the overall style of users’ outputs, causing their language to become less personal and more formal.

After the experiment, participants who relied more on the AI reported that their essays were significantly less creative and less in their own voice. At the same time, these participants reported similar satisfaction rates with their end results compared to participants who used AI less, raising concerns from the authors and outside experts about the long-term effects of increased human use of AI systems.

“This research highlights that LLMs cannot adhere to people’s preferences and personalize how a human writes an essay,” said Jacques, who is also a senior research scientist at Google DeepMind, one of the world’s leading AI companies. “An ideal LLM should write the essay you would have written yourself and save you time.”

“It’s not doing that. It’s writing a very different essay.”

The study evaluated the impacts of three major AI systems widely used in 2025: Claude 3.5 Haiku from Anthropic, GPT-5 Mini from OpenAI, and Gemini 2.5 Flash from Google. In an initial test, the researchers found that half of the participants either refused to use the LLM or only used it to find information rather than to create new content. To better categorize the large batch of participants, the researchers defined heavy AI users as participants who said they created more than 40% of the text written for the experiment with an LLM.

The authors found that users who relied heavily on LLMs submitted essays with 50% fewer pronouns, reflecting a larger shift away from personal language, with fewer anecdotes and references to human experiences.

In addition to the experiment on money and happiness, the new paper analyzes how LLMs edit existing essays compared with human editors, and examines how the use of AI affects the criteria scientists use to judge whether to accept papers at major AI conferences.

To compare how LLMs and humans edit existing writing, Jacques and her collaborators relied on a database of human-authored essays from 2021, allowing them to evaluate writing published before LLMs were widely adopted.

When LLMs were asked to revise human essays based on human feedback from an original human-written dataset, the study authors found that three major AI systems made significantly larger edits than human editors in the same situation, and that the AI-driven edits changed the meaning of the underlying essays.

While human editors often made changes that replaced individual words and left much of the original vocabulary untouched, the LLMs “changed a larger portion of the original writing than humans do when revising their own work,” according to the paper.

“This substitution of words contributes to the loss of individual voice, style, and meaning, as each writer’s unique lexical fingerprint is overwritten by a particular model’s preferred vocabulary,” the authors wrote.

Thomas Juzek, a professor of computational linguistics at Florida State University who was not involved in the research, said the paper is a valuable contribution to a rapidly growing field of interest.

“It’s a really good paper,” Juzek told NBC News. “What really struck me was the fallacy of using LLMs to do grammar checking. This research shows that while users may think they’re asking for a simple language check, the model is doing a lot more.”

“Going forward, what does this mean for thinking, language, communication and creativity?” Juzek asked.

For her part, Jacques posited that the language-changing behavior of AI systems may be a consequence of how they are currently trained, which may reward models for manipulating graders’ preferences.

“If you’re training a model on human feedback, the model has no boundary or understanding of the difference between satisfying the human and changing the human so their preferences are easier to meet,” Jacques said. She suggested that humans’ reliance on LLMs for writing is similar to how YouTube recommendations can change people’s preferences about which types of videos they enjoy most.

Looking ahead, Jacques is excited to see more research on the long-term effects of AI systems on human values, expression and organizations, especially as more AI researchers rely on AI systems in their own work.

“Humans care about clarity, relevance and impact, while AI cares about scalability and reproducibility,” Jacques told NBC News. “It’s already changing our decisions in ways that affect our existing institutions.”

In her own work, Jacques said she avoided using AI to write the new paper. Instead, she said, she uses LLMs and their shortcomings as inspiration to write her own prose.

“Sometimes, I put a poor version of what I’m trying to say in a conversational style into the LLM,” Jacques said. “That usually produces something and then inspires me to write it myself.”
