The text written by the artificial intelligence consisted of 1,992 words and 17 citations. What about the quality of the final result?
By Johanna Amselem
Published on 2023-07-11
Artificial intelligence is everywhere; Pandora's box has been opened. In medicine, it can offer valuable assistance to practitioners. For example, a recent study showed how it could predict the risk of aggressive breast cancer spreading. In another advance, an artificial intelligence model used alongside echocardiography allows better diagnosis of valvular heart disease.
Reports, works of art, recipes and more: a lot of content has already been created with ChatGPT. The Journal of Medical Internet Research has published a new study showing that it is also possible to create fraudulent scientific articles from scratch that closely resemble genuine ones.
In the Czech Republic, Dr. Martin Májovský and his colleagues studied the ability of artificial intelligence to produce high-quality medical articles in the field of neurosurgery. Throughout the drafting process, the prompts were refined to improve the quality of the output. "The AI-generated article included standard sections such as Introduction, Materials and Methods, Results and Discussion, as well as a fact sheet. It consisted of 1,992 words and 17 citations, and the entire article creation process took about an hour, without any special training of the human user," the study reports.
Inaccuracies and semantic errors
While the article seemed convincing at first glance, certain flaws appeared when the text was scrutinized more closely. Experts highlighted semantic inaccuracies and errors in the references (incorrect information or omissions). "Some specific concerns and errors have been identified in the generated article, particularly in the references. The study demonstrates the potential of current AI language models to generate completely fabricated scientific papers. Although the articles appear sophisticated and seemingly flawless, expert readers can identify inaccuracies and semantic errors upon closer inspection."
Accordingly, the researchers stress the importance of increased vigilance and better detection methods to combat the potential misuse of artificial intelligence in scientific research, and hence the need to combine human knowledge with the capabilities of artificial intelligence. "At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing," the authors conclude.