- ChatGPT lets you create AI-generated content with just a prompt.
- It doesn't have any watermarking just yet, but that might change, as the company is working on a watermarking system that has reportedly achieved a high accuracy rate.
- If the feature ships, it could prove both a boon and a bane, depending on the use case.
With the advent of GenAI, ChatGPT has become one of the most popular AI tools for creating AI-generated content with ease. At present, it doesn't apply any watermarking whatsoever, but what if it started putting watermarks on AI-generated content? That's what the latest buzz is all about, and here we explore the possibilities.
Brace for Impact as ChatGPT Readies a Watermark for AI-Generated Content
According to a report from The Wall Street Journal, OpenAI has reportedly been working on a system to watermark AI-generated content for almost a year. The company is internally divided on whether to release the feature, which is likely why it remains in internal testing at the time of writing.
Tools like OpenAI’s ChatGPT draw on existing content from the web (and other sources), take a user’s prompt, interpret it, and generate content based on their respective AI models. ChatGPT, for instance, generates text-only responses that can range from e-books to technical documentation, social media campaigns, and beyond.
That said, there are 100 million ChatGPT users out there, each generating AI content for their own use cases. Watermarking AI-generated content could hurt their bottom line by revealing that they are using AI. Still, it is arguably the responsible thing to do at a time when almost anything can be conjured out of thin air and presented as authentic or original.
ChatGPT Watermark for AI-Generated Content Can Be A Double-Edged Sword
ChatGPT’s watermarking system could act as a double-edged sword. We don’t know exactly how it works, but it is believed to subtly adjust the likelihood of certain words and phrases during generation, leaving a statistical pattern that a detector can recognize. Reports suggest the technique is tamper-resistant and can even flag paraphrased content, but it still has limitations. For instance, a user could potentially bypass detection by running ChatGPT’s output through other AI tools, as the sketch below suggests.
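To make the idea concrete, here is a minimal sketch of how such probability-based watermark detection might work, assuming a simple "green list" scheme in which the generator favors a pseudorandom subset of words and the detector measures how often they appear. The key, function names, and 50/50 split here are illustrative assumptions, not OpenAI's actual (undisclosed) design.

```python
# Minimal sketch of a statistical text watermark detector (illustrative only).
# Assumption: the generator was nudged to prefer "green" words chosen by a
# keyed hash, so watermarked text shows an unusually high green fraction.
import hashlib

SECRET = "watermark-key"  # hypothetical provider secret

def is_green(word: str) -> bool:
    # Deterministically assign roughly half of all words to the green set.
    digest = hashlib.sha256((SECRET + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Share of words in the green set; ~0.5 for ordinary text,
    # noticeably higher for text generated with the green-list bias.
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

sample = "The quick brown fox jumps over the lazy dog"
print(f"Green fraction: {green_fraction(sample):.2f}")
```

Because the signal is statistical, heavy rewriting or chaining the output through other tools dilutes the green-word bias, which is exactly the bypass scenario described above.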
According to a report by The Verge, AI watermarks could deter students from using AI to produce their assignments. Offloading work that would otherwise require extensive research onto AI can erode the skills students need for such real-world tasks, so in this respect the watermark could be a boon.
OpenAI further notes that internal checks on watermarked content achieved an accuracy of a whopping 99.9%. If you have used AI-detection tools before, you will know this is far higher than what they typically manage, which would make the watermark invaluable to anyone trying to detect AI content in the first place.
For now, the company itself remains divided on whether to roll out watermarking. A survey of ChatGPT users run by OpenAI found that 30% of participants said they would use the software less if watermarking were introduced, adding to the company's hesitation.
Alternatively, OpenAI is exploring cryptographically signed metadata, which is resistant to tampering and offers a far more reliable way to trace the origin of text. Should OpenAI introduce watermarks for AI-generated content, it could build trust in such content, although it remains a double-edged sword for regular ChatGPT users who don't want their text tagged as AI-generated at all.
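For illustration, here is a minimal sketch of tamper-evident metadata using a keyed signature, assuming an HMAC over the metadata payload. OpenAI has not published its design, and a production system would more likely rely on public-key signatures, but the sketch shows why any edit to signed metadata becomes detectable.

```python
# Minimal sketch of tamper-evident provenance metadata (illustrative only).
# Assumption: the provider signs the metadata with a secret key; any change
# to the payload invalidates the signature.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-signing-key"  # hypothetical provider secret

def sign_metadata(metadata: dict) -> str:
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(metadata: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_metadata(metadata), signature)

meta = {"model": "example-model", "ai_generated": True}
sig = sign_metadata(meta)
print(verify_metadata(meta, sig))   # True: metadata is intact
meta["ai_generated"] = False        # tampering with the record...
print(verify_metadata(meta, sig))   # ...breaks the signature
```

The trade-off is that metadata only travels with the file or API response; once the text is copied and pasted as plain text, the signature is left behind, which is why an in-text watermark is being considered alongside it.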