AI Governance in Focus for Proxy Season
Glass Lewis, one of the leading proxy advisory firms, recently updated its policy guidelines for 2025 to include recommendations regarding board oversight of AI. The new guidelines emphasize the importance of effective oversight and disclosure of AI practices for companies engaging with AI but stop short of prescribing the structure of that oversight. Instead, Glass Lewis has advised that it will generally not make voting recommendations based on a company’s oversight or disclosure of AI unless there is evidence that insufficient AI oversight or management has harmed shareholders. In such cases, Glass Lewis intends to review the company’s governance practices generally and the board’s oversight of AI matters specifically, and it may recommend voting against one or more directors if it determines that the board’s oversight of, response to, or disclosure of the AI incident was insufficient.
The addition of these AI-related governance considerations reflects investor demand for companies to adopt responsible AI practices and to align AI technologies with broader corporate governance and ethical standards. While AI is not the top governance concern for all constituencies (notably, it was not addressed in the 2025 guidelines of ISS, another leading proxy advisory firm), we expect the 2024 trend of AI-related shareholder proposals to continue into the 2025 proxy season.
NJ Guidance Applies Antidiscrimination Law to AI
According to the New Jersey attorney general’s recently issued guidance, algorithmic discrimination falls within the scope of the state’s antidiscrimination law. Accordingly, regulated entities could be held liable for discrimination resulting from the use of their AI tools, even if such tools were developed by a third party. In addition to publishing the guidance, New Jersey has launched a Civil Rights Innovation Lab that will work with stakeholders to develop best practices for responsible AI development, testing, and use. Several other states, including Colorado, Illinois, and California, have taken similar steps to address algorithmic discrimination stemming from the use of AI tools.
Stakeholders should thoroughly review the antidiscrimination laws of each state in which they operate to determine whether their services fall within the scope of those laws. Regulated entities should also account for the requirements of relevant antidiscrimination laws when designing and testing their AI tools.
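One common starting point for the kind of testing described above is screening a tool's outcomes for disparate impact. As a minimal, illustrative sketch (not legal advice, and not a substitute for the state-specific analysis above), the snippet below applies the "four-fifths rule" familiar from U.S. employment-discrimination guidance: a group is flagged if its selection rate falls below 80% of the highest group's rate. The group labels and counts are hypothetical.

```python
# Minimal sketch: screening an automated decision tool's outcomes for
# adverse impact using the "four-fifths rule." Group labels, counts,
# and the 0.8 threshold are illustrative assumptions only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected_count, total_count)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold` times
    the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical outcomes: 48 of 100 applicants selected in group_a,
# 30 of 100 in group_b.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
flags = adverse_impact_flags(outcomes)
# group_b: 0.30 / 0.48 = 0.625 < 0.8, so group_b is flagged
```

A real compliance program would go well beyond this single metric, but a check of this shape is easy to run at design time and in ongoing testing.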
The UK Government Signals the “Status Quo” Cannot Continue for AI and Copyright
Across the Atlantic, the UK government has made it clear that existing rules on AI and copyright will not suffice in the face of rapidly advancing technology. A recent government consultation proposes a text and data mining (TDM) exception with an opt-out mechanism for rightsholders who “reserve their rights.” This approach aims to balance enabling “world-leading AI models” with preserving creators’ control over their works. The consultation also highlights practical solutions—such as machine-readable “Do Not Train” protocols—to safeguard both innovation and copyright. Although no final decisions have been made, the UK’s emphasis on interoperability and international collaboration suggests that forthcoming regulatory changes could influence how other jurisdictions tackle AI and copyright.
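For a sense of what machine-readable opt-outs look like in practice today, some rightsholders already use robots.txt directives aimed at AI training crawlers. The fragment below is purely illustrative; the user-agent tokens shown (GPTBot for OpenAI's crawler, CCBot for Common Crawl) are published by those operators, and this mechanism is not necessarily the one the UK consultation would adopt.

```text
# robots.txt — illustrative "do not train" opt-out using
# vendor-published AI crawler user-agent tokens
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Proposals in this space, such as the W3C community group's TDM Reservation Protocol, aim to standardize similar signals so that rights reservations can be expressed once and honored across crawlers.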
Getty Images v. Stability AI – Class Definitions and Representative Claims
The High Court’s recent reserved judgment in Getty Images v. Stability AI marks the latest development in arguably the most significant AI and copyright infringement litigation in the UK. Getty Images, a media company with a large repository of stock and copyrighted images, claims that Stability AI used those images (and other Getty media) to train several AI models and thereby infringed Getty’s copyrights. The reserved judgment, delivered by Mrs Justice Joanna Smith, highlights the difficulties of litigating large-scale copyright infringement claims, including class definitions and representative claims. Notably, the court held that individual photographers (e.g., those who upload their work to Getty) could not join the copyright infringement claim because they do not share the same interest in the claim. The court further held that the proposed class definition was impermissible because it depended on the outcome of the litigation (i.e., whether there was copyright infringement).

These rulings underscore the need for precise, strategic class definitions and representative claims when pursuing copyright infringement claims at scale. Clients should craft definitions and representative claims that balance the number and quality of claimants against the desire for efficient judicial proceedings. Clients should also watch this case closely, as final judgments on the substantive issues will help resolve legal ambiguities surrounding the use of copyrighted materials in developing and training AI systems.