OpenAI says it’s taking a ‘deliberate approach’ to releasing tools that can detect writing from ChatGPT

OpenAI has built a tool that could potentially catch students who cheat by asking ChatGPT to write their assignments — but according to The Wall Street Journal, the company is debating whether to actually release it.

In a statement provided to TechCrunch, an OpenAI spokesperson confirmed that the company is researching the text watermarking method described in the Journal’s story, but said it’s taking a “deliberate approach” due to “the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”

“The text watermarking method we’re developing is technically promising, but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers,” the spokesperson said.

This would be a different approach from most previous efforts to detect AI-generated text, which have been largely ineffective. Even OpenAI itself shut down its previous AI text detector last year due to its “low rate of accuracy.”

With text watermarking, OpenAI would focus solely on detecting writing from ChatGPT, not from other companies’ models. It would do so by making small changes to how ChatGPT selects words, essentially creating an invisible watermark in the writing that could later be detected by a separate tool.
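To illustrate the general idea only (OpenAI has not disclosed its method), here is a minimal sketch of the kind of statistical text watermark described in academic work: the generator is nudged toward a secret, pseudorandomly chosen subset of words at each step, and a detector that knows the secret checks whether that subset appears more often than chance. Every name and parameter below (VOCAB, SECRET_KEY, GREEN_BIAS, and so on) is hypothetical.

```python
# Toy "green list" watermark sketch, in the spirit of published academic
# schemes. This is NOT OpenAI's method; all values here are made up.
import hashlib
import random

VOCAB = ["the", "a", "quick", "brown", "fox", "jumps", "over", "lazy", "dog",
         "and", "runs", "fast", "slow", "very", "quite"]
SECRET_KEY = "demo-key"   # hypothetical secret shared by generator and detector
GREEN_FRACTION = 0.5      # fraction of the vocabulary favored at each step
GREEN_BIAS = 0.8          # probability of sampling from the green list

def green_list(prev_word: str) -> set:
    """Deterministically derive the 'green' subset of the vocabulary from the previous word."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_word).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(n_words: int, rng: random.Random) -> list:
    """Generate text, nudging each word choice toward the green list (the watermark)."""
    words = ["the"]
    for _ in range(n_words):
        greens = green_list(words[-1])
        pool = list(greens) if rng.random() < GREEN_BIAS else VOCAB
        words.append(rng.choice(pool))
    return words

def detect(words: list) -> float:
    """Fraction of words that fall in each step's green list.
    Unwatermarked text scores near GREEN_FRACTION; watermarked text scores higher."""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    return hits / max(len(words) - 1, 1)

if __name__ == "__main__":
    rng = random.Random(42)
    watermarked = generate(200, rng)
    plain = ["the"] + [rng.choice(VOCAB) for _ in range(200)]
    print(f"watermarked score: {detect(watermarked):.2f}")  # well above 0.5
    print(f"plain score:       {detect(plain):.2f}")        # near 0.5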

Following the publication of the Journal’s story, OpenAI also updated a May blog post about its research around detecting AI-generated content. The update says text watermarking has proven “highly accurate and even effective against localized tampering, such as paraphrasing,” but has proven “less robust against globalized tampering, like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character.”

As a result, OpenAI writes, circumventing this method would be “trivial” for bad actors. OpenAI’s update also echoes the spokesperson’s point about non-English speakers, writing that text watermarking could “stigmatize use of AI as a useful writing tool for non-native English speakers.”
