Requiring generative AI systems to add watermarks that identify content as AI-generated has been receiving a lot of buzz. The AI companies that signed President Biden's risk management commitments last month all agreed to adopt "robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system." Differentiating AI-generated work from human work has also been proposed in EU regulations. However, as a recent article in Scientific American points out, adding watermarks to AI outputs is not "as simple as, say, overlaying visible copyright information on a photograph." Watermarking a digital image requires clusters of pixels in different colors to act as a barcode; an audio watermark needs signals embedded in soundwaves; and a text watermark requires dividing the available words into two groups and having the AI generator "slightly favor one set of words and syllables over the other."
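The text-watermarking approach the article describes can be illustrated with a toy sketch. This is a simplified, hypothetical version of the "green list" technique discussed in academic research, not any particular company's actual implementation: the vocabulary is deterministically split in half based on the preceding word, a watermarking generator always picks from the favored half, and a detector measures how often that half was used.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically split the vocabulary into a 'green' half,
    seeded by the previous token (toy stand-in for a secret key)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def detect(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from the green list. Ordinary text
    scores near 0.5; heavily watermarked text scores near 1.0."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)

# Generate toy "watermarked" text by always choosing a green word.
vocab = [f"word{i}" for i in range(20)]
tokens = ["word0"]
for _ in range(30):
    tokens.append(sorted(green_list(tokens[-1], vocab))[0])
```

Note that detection only works for someone who knows how the vocabulary was split, which is exactly why, as discussed below, the splitting rule must stay secret: anyone who learns it can paraphrase the text to erase the statistical signal.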
Importantly, the article notes, it is crucial that watermarks be kept secret: once discovered, a watermark can be deleted or manipulated. When implementing watermarks, AI companies should therefore take measures to minimize their exposure, both so that the mark remains useful and so that it can be protected as a trade secret, adding value to the company's IP portfolio.
Reach out to us with any legal questions on best practices for protecting watermarks as trade secrets.