In short
- A study of more than 15 million biomedical abstracts on PubMed found a spike in AI-associated words such as "delves" and "showcasing."
- Experts warn that word frequency alone cannot prove AI use and may unfairly cast suspicion on writers.
- With detection tools still unreliable, the debate over ethics, authorship, and access in academia is growing.
What words give AI away? A new study of more than 15 million biomedical abstracts on PubMed found that at least 13.5% of scientific papers published in 2024 show signs of AI-assisted writing, in particular with OpenAI's ChatGPT.
The study, by researchers from Northwestern University and the Hertie Institute for AI in Brain Health at the University of Tübingen, found a sharp rise in 2024 in word patterns associated with AI-generated writing. These include both unusual terms, such as "delves," "underscores," and "showcasing," and more common words such as "potential," "findings," and "crucial."
To measure this shift, the researchers compared word frequencies in 2024 against baseline data from 2021 and 2022. They ultimately identified 454 words disproportionately favored by AI models, including "encapsulates," "noteworthy," "underscores," "scrutinizing," and "seamless."
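The core of that approach is simply comparing how often a word appears in recent abstracts versus a pre-ChatGPT baseline. The sketch below illustrates the idea only; the corpora, counts, and the `min_ratio` threshold are hypothetical and are not the study's actual data or cutoffs.

```python
from collections import Counter

# Hypothetical token counts per corpus; the real study analyzed ~15 million PubMed abstracts.
baseline_counts = Counter({"delves": 120, "crucial": 9_000, "patients": 500_000})
baseline_total = 10_000_000   # total words in the 2021-2022 baseline (made-up figure)

recent_counts = Counter({"delves": 3_400, "crucial": 28_000, "patients": 510_000})
recent_total = 10_500_000     # total words in the 2024 corpus (made-up figure)

def excess_words(baseline, base_total, recent, rec_total, min_ratio=5.0):
    """Flag words whose relative frequency rose sharply compared with the baseline."""
    flagged = {}
    for word, count in recent.items():
        base_freq = (baseline.get(word, 0) + 1) / base_total   # +1 smoothing for unseen words
        rec_freq = (count + 1) / rec_total
        ratio = rec_freq / base_freq
        if ratio >= min_ratio:
            flagged[word] = round(ratio, 1)
    return flagged

print(excess_words(baseline_counts, baseline_total, recent_counts, recent_total))
# {'delves': 26.8} -- "delves" surged, while ordinary words stay near a ratio of 1
```

A word that keeps pace with the corpus (like "patients" above) stays near a ratio of 1 and is never flagged, which is why only the sudden outliers stand out.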
However, experts told Decrypt that word frequency alone is not sufficient proof of AI use.
"Language changes over time," said Stuart Geiger, assistant professor of communication at UC San Diego. "'Delve' shot up, and that word is now part of society's vocabulary, partly because of ChatGPT."
Geiger emphasized that detecting AI writing is not just a technical challenge; it is also an ethical one.
"The only way to reliably detect LLM use is to monitor the writing process itself," he said. "That comes with high costs: logistical, moral, and technical."
Geiger, however, warned against jumping to conclusions based on surface-level cues without knowing the full context.
"It could be that they've just seen a lot of ChatGPT-generated writing and now think that's what good writing looks like," he said. "That's the whole issue we're struggling with in academia, especially if we can't simply sit students down and make sure it's just pen and paper."
As AI-generated text becomes more common, educators have turned to tools that claim to detect it; however, the quality of these tools varies.
In October 2024, Decrypt tested leading AI detection tools, including Grammarly, QuillBot, GPTZero, and ZeroGPT. The results varied wildly: ZeroGPT claimed the U.S. Declaration of Independence was 97.93% AI-generated, while GPTZero put it at just 10%.
"A lot of snake oil is being sold," said Geiger.
According to Geiger, concerns about AI writing aids echo earlier debates about spell check, Wikipedia, and CliffsNotes, and reflect deeper questions about the purpose of writing, authorship, and trust.
"People are worried because, when you had to write the words yourself, you had to think them through," he said. "That's what people react so strongly to when they see something that feels suspicious."
Kathleen Perley, professor of business at Rice University, argued that while AI-assisted writing often shows telltale patterns, such as repetitive structures or overused words like "delve," what matters most is whether it helps researchers without compromising quality. That is especially true, she said, for non-native English speakers or people facing other challenges.
"If AI helps researchers overcome challenges such as language barriers or learning disabilities, without compromising the originality or quality of their work, I don't see a problem with it," she told Decrypt. "I think it can be a net benefit, because it allows people with different backgrounds, ideas, and experiences to participate in something that a lack of formal writing skills might otherwise have kept them out of."
Perley, who serves as AI advisor to the dean at Rice Business, noted that another dilemma is people's tendency to change the way they write to avoid being accused of using AI, adding that she herself has become more conscious of certain words that could be flagged as potentially AI-generated.
While some criticize this style for its lack of personality, Perley sees AI-assisted writing as a tool that can democratize participation in formal research.
"Sure, we might get more 'delves' and em dashes," she said. "But if AI helps people from different backgrounds share important research, I don't care how polished it sounds; it's worth it."