News

ChatGPT Fools Scientists By Writing Fake Research Paper Abstracts

Written by Senoria Khursheed

Artificial Intelligence has taken a prominent position in the technology world. One of its applications is the chatbot, software that simulates human-like conversation with users via chat.

Its basic task is to answer user questions through instant messages. One such chatbot, ChatGPT, has now written fake research paper abstracts that scientists could not reliably spot.

ChatGPT (Generative Pre-trained Transformer) is a chatbot released by OpenAI in November 2022. It quickly garnered attention for its detailed responses and articulate answers across many areas of knowledge, and since its launch it has become a hit across the education sector.

It can quickly churn out answers to life's smallest and biggest questions. Moreover, it can draw up college essays, fictional stories, job application letters, and even haiku.

Catherine Gao, who led a research team at Northwestern University in Chicago, used ChatGPT to generate artificial research paper abstracts and tested whether scientists could spot them.

According to the report, the researchers asked the chatbot to generate 50 medical research paper abstracts, based on a selection of papers published in JAMA, The New England Journal of Medicine, The BMJ, Nature Medicine, and The Lancet.

They then compared the fake abstracts with the originals by running both through a plagiarism detector and an AI output detector, and asked a group of medical researchers to point out the fabricated abstracts.
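For readers curious how such a comparison might be wired up, here is a minimal Python sketch of the evaluation pipeline. Every helper function below is a hypothetical stand-in for an external tool (a chatbot, a plagiarism checker, an AI output detector) rather than anything the team actually used; the stubs return dummy values purely so the script runs end to end.

```python
import random
from statistics import median

def generate_abstract(title: str) -> str:
    """Hypothetical stand-in: ask a chatbot to write an abstract for a paper title."""
    return f"Generated abstract for: {title}"

def originality_score(text: str) -> float:
    """Hypothetical stand-in: a plagiarism checker; 100.0 means no overlap found."""
    return 100.0

def ai_detector_score(text: str) -> float:
    """Hypothetical stand-in: an AI detector's probability that the text is generated."""
    return random.random()

# The study used 50 real paper titles; these placeholders mimic that setup.
titles = [f"Paper {i}" for i in range(1, 51)]
fakes = [generate_abstract(t) for t in titles]

originality = [originality_score(f) for f in fakes]
flagged = [ai_detector_score(f) >= 0.5 for f in fakes]  # 0.5 is an arbitrary cutoff

print(f"Median originality: {median(originality):.0f}%")
print(f"Detector flagged {sum(flagged)}/{len(flagged)} generated abstracts")
```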

Astonishingly, the ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, indicating that no plagiarism was detected.

Meanwhile, the AI output detector spotted 66% of the generated abstracts. The human reviewers did not do much better: they correctly identified 68% of the generated abstracts and 86% of the genuine abstracts.

In other words, the reviewers incorrectly identified 32% of the generated abstracts as real and, according to the Nature article, flagged 14% of the genuine abstracts as generated.
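Read as a confusion matrix, those percentages are easy to sanity-check. The back-of-the-envelope below assumes, purely for illustration, that reviewers saw 50 generated and 50 genuine abstracts (the article only confirms the 50 generated ones):

```python
# Turn the reported percentages into counts, assuming 50 abstracts of each kind.
generated, genuine = 50, 50

spotted_fakes  = round(0.68 * generated)  # fakes correctly flagged      -> 34
missed_fakes   = round(0.32 * generated)  # fakes believed to be real    -> 16
kept_genuine   = round(0.86 * genuine)    # real abstracts accepted      -> 43
flagged_real   = round(0.14 * genuine)    # real abstracts wrongly flagged -> 7

accuracy = (spotted_fakes + kept_genuine) / (generated + genuine)
print(f"Overall reviewer accuracy: {accuracy:.0%}")  # ~77%
```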

“It’s a time to think over,” said Sandra Wachter of the University of Oxford, who was not part of the research. “If we are now in a situation where the experts cannot determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics.”

OpenAI, a Microsoft-backed software company, released the tool for public use in November, free of cost.
“Since its release, researchers have been grappling with the ethical issues surrounding its use, as much of its output can be difficult to distinguish from human-written text,” the report noted.

Read more:

Microsoft to Invest $10 Billion in ChatGPT AI

Cybercriminals Are Using ChatGPT To Create Hacking Tools And Code