Why OpenAI’s AI detection tool may stay under wraps
OpenAI built a system for watermarking ChatGPT-generated text, along with a tool to detect the watermark, about a year ago, but the company is unsure about releasing it, a report by The Wall Street Journal revealed. The AI firm is reportedly worried that doing so could hurt its profits.
An AI detection tool could make it easier for teachers to catch students who submit AI-written assignments and discourage the practice. A survey commissioned by OpenAI found that people worldwide supported the idea of an AI detection tool by a margin of four to one, the report said.
However, almost 30% of respondents also said they would use ChatGPT less often if OpenAI watermarked its text.
Since the news broke, OpenAI has confirmed in a blog post that it has been working on the tool. The company says its watermarking method is 99.9% effective and resistant to tampering techniques such as paraphrasing. But if the text is then reworded with another model, that makes it “trivial to circumvention by bad actors.”
OpenAI also noted in the blog post that it did not want to stigmatise the use of AI tools by non-native English speakers.
The tool would reportedly focus only on detecting writing from ChatGPT, not text from other companies’ AI models. It would make tiny changes to how ChatGPT predicts words, creating an invisible watermark in the writing that a separate tool could easily detect later.
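OpenAI has not published the details of its scheme, but one well-known approach to this kind of statistical watermarking works by pseudo-randomly splitting the vocabulary into a “green” list at each step (seeded by the previous word) and nudging the model toward green words; a detector then counts how often the text lands on green words. The sketch below is a minimal, hypothetical illustration of that general idea, not OpenAI’s actual method — all function names and parameters here are invented for the example.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly pick a 'green' subset of the vocabulary,
    seeded by the previous token so the split is reproducible
    by anyone who knows the seeding rule."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Return a z-score for how often each token falls in the green
    list determined by its predecessor. Watermarked text (which was
    biased toward green tokens) scores high; ordinary text stays near 0."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = n * fraction
    variance = n * fraction * (1 - fraction)
    return (hits - expected) / variance ** 0.5
```

A generator that always favours green tokens produces text with a high z-score, while rewording the text with another model scrambles the token sequence and drags the score back toward zero — which is exactly the paraphrasing weakness the blog post describes.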