
Our Take on AI


Research Shows Watermarking's Shortcomings in Preventing Misinformation

Watermarking has been touted as a way to address a number of the concerns and risks arising with AI. This past summer, several technology companies agreed to develop watermarking in their AI models to help prevent misinformation.

Just as physical watermarks are embedded in paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books.

However, researchers suggest that watermarking is hard to put into practice, and that even when it is implemented correctly, it is difficult to prevent the marks from being faked.
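To make that fragility concrete, here is a minimal, purely illustrative Python sketch. It is not any vendor's actual scheme (real AI-text watermarks typically work by subtly biasing the model's word choices); it simply embeds an invisible marker in a piece of text and then "detects" it, showing how easily such a naive mark can be stripped or forged.

```python
# Toy illustration only: hide a "watermark" in text using zero-width
# characters, then detect it. The marker string is an arbitrary assumption.

ZERO_WIDTH_MARK = "\u200b\u200c\u200b"  # invisible signature

def embed_watermark(text: str) -> str:
    """Append the invisible marker to the text."""
    return text + ZERO_WIDTH_MARK

def has_watermark(text: str) -> bool:
    """Report whether the invisible marker is present."""
    return ZERO_WIDTH_MARK in text

original = embed_watermark("This paragraph was generated by a model.")
print(has_watermark(original))   # True: the mark survives copy-paste

# An adversary can strip the mark by deleting zero-width characters...
stripped = original.replace("\u200b", "").replace("\u200c", "")
print(has_watermark(stripped))   # False: detection fails

# ...or forge it, making human-written text look machine-generated.
forged = embed_watermark("A sentence a person actually wrote.")
print(has_watermark(forged))     # True: a false positive
```

Production schemes are far more sophisticated than this sketch, but the researchers' point is similar: marks can often be removed, and fake marks can be added to content that was never AI-generated.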

Transparency and authenticity are important for AI. Public concern over false media and manipulated images is high. Companies using and building AI models are also concerned about attribution and ownership of the content fed into, and developed within, those models. Finding a way to source and track content in an AI model is important, and would help mitigate some of the risks associated with the “black box” of AI algorithms. This research suggests, however, that watermarking alone may not be a sufficient answer, and that improvements and other technologies are needed.

The hope is that these tools will flag AI content as it’s being generated, in the same way that physical watermarking authenticates dollars as they’re being printed. It’s a solid, straightforward strategy, but it might not be a winning one.