Watermarking has been promoted as a way to address a number of the concerns and risks arising from AI. This past summer, several technology companies agreed to build watermarking into their AI models to help curb misinformation.
Just as physical watermarks are embedded in paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books.
However, researchers suggest that watermarking is hard to put into practice, and that even when it is implemented correctly, it is difficult to stop attackers from removing or forging the marks.
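To make the idea concrete, here is a toy sketch, in Python, of one published family of text-watermarking schemes (a "green list" approach in the spirit of Kirchenbauer et al., 2023). It is not any particular company's implementation; the vocabulary, seeding, bias, and thresholds are invented for illustration. The generator quietly prefers tokens from a secret, recomputable "green" subset, and the detector checks whether green tokens appear more often than chance would allow. The sketch also hints at the fragility: paraphrasing or regenerating the text swaps tokens and washes out the statistical signal.

```python
# Toy "green list" text watermark: illustration only, not a real system.
import hashlib
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically pick a 'green' subset of the vocabulary,
    seeded by the previous token, so a detector can recompute it later."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens: int, bias: float = 0.9) -> list:
    """Stand-in for a language model: at each step, prefer green tokens
    with probability `bias`, otherwise sample from the full vocabulary."""
    tokens = ["the"]
    for _ in range(n_tokens):
        greens = green_list(tokens[-1])
        pool = list(greens) if random.random() < bias else VOCAB
        tokens.append(random.choice(pool))
    return tokens

def detect(tokens: list) -> float:
    """Return a z-score: how far the observed count of green tokens sits
    above what unwatermarked text would produce by chance."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    n = len(tokens) - 1
    expected, std = 0.5 * n, math.sqrt(0.25 * n)
    return (hits - expected) / std

if __name__ == "__main__":
    watermarked = generate(200)
    plain = [random.choice(VOCAB) for _ in range(200)]
    print("watermarked z-score:", round(detect(watermarked), 2))  # well above chance
    print("plain text z-score:", round(detect(plain), 2))         # near zero
```

Even in this simplified form, the weakness is visible: anyone who rewrites the text with different word choices, or who learns how the green lists are seeded, can erase the signal or stamp a fake one onto text the model never produced.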
Transparency and authenticity matter for AI. Public concern over false media and manipulated images is high, and companies building or using AI models worry about attribution and ownership of the content fed into and produced by those models. A reliable way to source and track content through an AI model is important, and would mitigate some of the risks associated with the “black box” of AI algorithms. However, this research suggests that watermarking alone may not be a sufficient answer; improvements and complementary technologies are needed.