ChatGPT’s Impact on Research Papers: Unveiling Anomalies and Ethical Considerations

2023-09-12 05:00:00

Embarrassing: while writing a physics paper, the authors accidentally copied the label of a ChatGPT button into the manuscript.

The paper not only survived two months of peer review, it was eventually published in a journal.

The person who spotted it was Guillaume Cabanac, a well-known research-integrity sleuth and associate professor at the University of Toulouse in France, who was once named one of Nature's ten people of the year.

Nature's latest report states that this is not an isolated case; it may be only the "tip of the iceberg."

If a stray "Regenerate response" in a paper is relatively easy to miss, there are even more blatant and obvious cases.

For example, some authors paste in the entire sentence "As an AI language model, I…" verbatim.

Hmm… that's a little too careless.

Just the tip of the iceberg

A few days ago, the journal Physica Scripta published a paper aiming to discover new solutions to complex mathematical equations.

Unexpectedly, the phrase "Regenerate Response", the label on a ChatGPT button, appeared on the third page of the paper.

The publisher’s head of peer review and integrity said the authors later confirmed to the journal that they used ChatGPT to help draft the manuscript.

Before that, the anomaly went unnoticed through the original submission in May, the revised submission in July, and subsequent typesetting.

The publisher has now decided to retract the paper, arguing that the authors' failure to declare their use of the tool at submission violated its ethics policy.

In fact, this is not the only case. According to an informal tally on PubPeer, more than a dozen articles containing "Regenerate Response" or "As an AI language model, I…" have surfaced in the past four months.

Take "As an AI language model, I…" as an example: a search turns up 8 results, the latest found just three days ago.

This puts peer reviewers under even greater strain: they usually do not have time for a thorough inspection, and there are not enough gatekeepers to keep up.

There are, however, some telltale signs: large models such as ChatGPT are prone to fabricating references. Retraction Watch once reported that an AI-written millipede preprint was withdrawn because it contained fake citations, and was later re-posted.
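Screening for the most obvious leftovers can be automated. As an illustration only (this is not a tool mentioned in the article, just a minimal sketch), a short Python script could flag the boilerplate phrases the article describes:

```python
import re

# ChatGPT leftovers that have appeared verbatim in published papers,
# per the phrases quoted in the article above.
BOILERPLATE = [
    "regenerate response",
    "as an ai language model",
]

def find_ai_boilerplate(text: str) -> list[tuple[str, int]]:
    """Return (phrase, character offset) for every boilerplate hit, in order."""
    hits = []
    lowered = text.lower()  # case-insensitive matching
    for phrase in BOILERPLATE:
        for match in re.finditer(re.escape(phrase), lowered):
            hits.append((phrase, match.start()))
    return sorted(hits, key=lambda h: h[1])

# Usage: scan a suspicious manuscript snippet.
snippet = "novel solutions are obtained. Regenerate response. As an AI language model, I"
for phrase, pos in find_ai_boilerplate(snippet):
    print(f"found {phrase!r} at offset {pos}")
```

A phrase scan like this only catches the careless cases; fabricated citations, the harder problem the article mentions, would require checking each reference against a bibliographic database.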

It can be used, just declare it

In fact, this does not mean that researchers cannot use ChatGPT and other large-model tools to help write manuscripts.

Many publishers, including Elsevier and Springer Nature, have stated:

It can be used, just declare it.

Previously, Som Biswas, a radiologist at the University of Tennessee Health Science Center, used ChatGPT to write 16 papers in four months and published 5 of them in 4 different journals.

When he submitted his first paper, he told the editor frankly: everything you see was written by AI.

Just a few days later, the paper passed peer review and was published in the journal Radiology.

After that, he got carried away: the papers he wrote were no longer limited to radiology, spanning education, agriculture, law, and more.

For some people, large models like ChatGPT really have improved their productivity.
