
OpenAI Has Software That Detects AI Writing With 99.9 Percent Accuracy, Refuses to Release It


“It’s just a matter of pressing a button.”

Water Marker

ChatGPT creator OpenAI has developed an internal tool that watermarks and detects AI-generated text with 99.9 percent accuracy, the Wall Street Journal reports, but the company is refusing to release it.

Effective tools for flagging AI-generated text could be useful in any number of situations, from cracking down on cheating students to sorting through the AI-generated sludge filling the web.

Which is why it’s so surprising that OpenAI, as the WSJ reports, has been quietly hanging onto tools that could do exactly that.

“It’s just a matter of pressing a button,” a source familiar with the project told the WSJ.

Bot Detector

Per the WSJ, OpenAI began to discuss the need for a functional watermarking tool, or software that embeds an artifact denoting a piece of content as AI-generated, upon the release of ChatGPT in 2022. The software in question was created shortly thereafter, and according to internal documents is known to function with 99.9 percent accuracy when applied to “enough” ChatGPT-created text.
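OpenAI hasn't disclosed how its watermark actually works, but the general idea behind statistical text watermarking is well documented in public research: the model is quietly nudged toward a secret, context-dependent "green list" of tokens as it writes, and a detector later checks whether a passage contains suspiciously many green tokens. The toy sketch below is our own illustration of that published approach, not OpenAI's method; the vocabulary hashing scheme and parameters are invented. It also shows why the "enough" text caveat matters: the statistical signal only becomes unmistakable once a passage is long enough.

```python
# Toy illustration of "green list" statistical watermark detection, in the
# spirit of publicly described schemes (e.g. Kirchenbauer et al., 2023).
# This is NOT OpenAI's method; the hash, split, and thresholds are invented
# purely for demonstration.
import hashlib
import math

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" in any context


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically (and secretly) assign each (context, token) pair to the green list."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256


def watermark_z_score(tokens: list[str]) -> float:
    """How many standard deviations the green-token count sits above chance."""
    pairs = list(zip(tokens, tokens[1:]))
    n = len(pairs)
    if n == 0:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in pairs)
    expected = n * GREEN_FRACTION
    std_dev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std_dev


# A watermarking generator would steer its sampling toward green tokens, so its
# output scores several sigma above zero, while ordinary human text hovers near
# zero. The z-score grows roughly with the square root of the text length,
# which is why detection is only reliable given "enough" text to measure.
```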

But OpenAI has waffled on whether it should release the tool.

According to the WSJ, in April 2023, an OpenAI-conducted global survey found overwhelming support amongst the general public for implementing watermarking tools. Another survey conducted the same month among OpenAI’s users, however, found that 30 percent of its customer base said they would stop using ChatGPT if it started deploying watermarks and a competing company didn’t. As the WSJ put it, this latter survey has “loomed large” over further watermarking discussions.

A spokesperson for OpenAI told the WSJ that the company’s hesitation comes down to an abundance of caution.

“The text watermarking method we’re developing is technically promising but has important risks we’re weighing while we research alternatives,” said the spokesperson. “We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.” (After the WSJ’s report was published, TechCrunch noticed that OpenAI updated a previously published blog post about its watermarking efforts.)

But while we’re as much in favor of AI risk mitigation as the next guy, this is a wildly paradoxical line for OpenAI to take. ChatGPT, which is now just one of OpenAI’s many AI tools, has been in public hands for years now. It’s hard to see what could be so risky about introducing its watermarking software — beyond OpenAI’s need to grow and maintain its userbase, that is.

More on OpenAI: OpenAI's GPT-4o Voice Mode Says It Needs to Breathe
