Boards, General Counsel and Chief Compliance Officers are increasingly well informed about the laundry list of legal and operational risks implicated in adopting AI tools; which risks apply, and how severely, depends on the company and context. But they also know that, just as with email, mobile devices and cloud computing, those risks will not prevent AI tools from being adopted. Instead of a binary choice ("do we use AI or not?"), the questions and answers for effective risk management are almost always found on a continuum: What use cases are implicated? Which tools will be used? And how can controls mitigate the risks?
Effective risk management certainly requires a holistic understanding and management of the risks, with functional risk leaders in IT, Privacy, Compliance and Procurement working together in a coordinated way. But each function should also take the lead on certain issues and tasks that must be addressed.
- Convene an AI Governance Steering Committee of key business leaders and key risk stakeholders.
- Perform an AI risk assessment identifying all potential and existing uses of AI tools within the organization.
- Monitor and perform detailed assessments of the applicability and impact of both existing laws implicating AI tools (e.g., data privacy) and emerging AI-specific regulatory proposals, such as those in the EU, including whether such laws reach pre-existing business process tools not previously considered AI.
- Develop specific preventative and detective controls to identify AI tools, ensure compliance with identified laws, and create a real-time evidence trail of that compliance.
- Following a risk assessment and identification of likely use cases, lead a multi-disciplinary review of existing company policies to update each for the emerging risks of AI and to clarify the organization's expectations of employees.
- Use endpoint security tools to identify employee access to known generative AI platforms, to better scope the volume and nature of usage, and to log instances of data leakage to AI tools via prompts and other uploads.
- Review and update data loss prevention (DLP) controls.
- Review and update existing data governance protocols.
- Review existing and potential AI use cases against existing and emerging privacy obligations.
- Review and update privacy data flow representations made to regulators and in contractual commitments.
- Review AI vendor selection criteria and prioritize each vendor's posture toward key contractual terms alongside solution and commercial terms.
- Conduct robust factual due diligence of vendors in addition to contract negotiation.
- Revise playbooks on representations of compliance with laws and non-infringement, as well as on new issues: transparency and explainability, anti-discrimination in model inputs and outputs, data consents for past and future training data, allocation of IP ownership, and indemnification for third-party infringement claims.