AI startup OpenAI, creator of the artificial intelligence (AI) chatbot ‘ChatGPT’, has released a tool that detects text written by AI. However, the tool's detection rate and accuracy were found to be low.
OpenAI unveiled the tool on its website on the 31st of last month (local time) and explained the results of a self-evaluation it conducted before launch.
The texts used in the evaluation were all in English and included texts generated by humans, by ChatGPT, and by other chatbots. The tool judged only 26% of AI-written texts to be “likely written by AI.”
Even for texts written by real people, the ‘false positive’ rate, in which human writing is wrongly judged to be AI-generated, reached 9%. The company explained, however, that the tool tends to become more reliable as the text gets longer.
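To make the reported rates concrete, here is a small illustrative sketch of what they imply for a batch of texts. Only the 26% detection rate and 9% false-positive rate come from the article; the function name and the sample sizes are hypothetical assumptions for illustration.

```python
# Illustrative only: expected outcomes implied by the reported rates.
# tpr = 0.26 (share of AI texts correctly flagged) and fpr = 0.09
# (share of human texts wrongly flagged) are taken from the article.

def classifier_outcomes(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Expected counts for a batch of AI-written and human-written texts."""
    flagged_ai = n_ai * tpr           # AI texts correctly flagged as AI
    flagged_human = n_human * fpr     # human texts wrongly flagged as AI
    missed_ai = n_ai * (1 - tpr)      # AI texts the tool fails to flag
    return flagged_ai, flagged_human, missed_ai

# With a hypothetical sample of 100 AI-written and 100 human-written texts:
flagged_ai, flagged_human, missed_ai = classifier_outcomes(100, 100)
print(round(flagged_ai))     # 26 AI texts flagged
print(round(flagged_human))  # 9 human texts wrongly flagged
print(round(missed_ai))      # 74 AI texts missed
```

As the numbers suggest, the tool lets most AI-written texts through while still mislabeling a meaningful share of human writing, which is why OpenAI frames it as an aid rather than a definitive detector.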
OpenAI decided to distribute the tool openly because of concerns that ChatGPT, released at the end of November last year, could be abused for spam, plagiarism, and fraud. The company explained, “The tool is not completely reliable and is a work in progress. It cannot detect all AI-written texts, but it can be used as an aid for educators and others to identify the source of a text.”
The company also said the tool has no way to sort out content for which there is only one clear answer, and added that it is difficult to determine whether computer code was written by a human or by AI.
ChatGPT became a hot topic for its ability to produce, in a matter of seconds, articles that read as if written by an expert on a given topic. Since its release, however, schools have raised concerns that students could use ChatGPT to write their assignments. In December of last year, ChatGPT received a passing score on the written portion of the US medical licensing exam.