
Our Take on AI


A ChatGPT Ban is Not an AI Policy

In what has been touted as the first enforcement action targeting an artificial intelligence application, the EEOC obtained a resolution against a tutoring company that programmed its resume screening application to exclude female candidates over 55 years old and male candidates over 60 years old.  

This enforcement action is an excellent reminder that artificial intelligence does not equal ChatGPT. If your organization’s response to the emergence of AI was simply to adopt a policy banning employees from using ChatGPT, you have likely done both too much and too little. Too much, because a blanket ban on ChatGPT and other generative AI tools hands your competitors enormous productivity and other competitive advantages, while the widely publicized risks of IP loss and hallucinations can be substantially mitigated.

But you have also almost certainly done too little, because a policy prohibiting employees from using ChatGPT does nothing to identify and manage the risk posed by other artificial intelligence applications. Here, the violation arose from an automated resume filtering program that systematically discriminated against older applicants; it had nothing to do with ChatGPT.

GCs and Compliance Officers need a comprehensive approach to artificial intelligence, one that accounts for the growing appetite among enforcement agencies worldwide to scrutinize AI use cases. We should also recognize that the definition of artificial intelligence increasingly includes any automated decision-making system. In the short term, the following steps can help an organization scope and manage its risk:

  1. Perform an AI risk assessment. A policy prohibition is not self-executing, and you cannot wait for employees to self-diagnose their AI-influenced processes and come to you for permission. A reliable understanding of your risk will come only from going out and asking questions about what systems are actually in use.
  2. Isolate and concentrate on the highest-risk use cases. Does the system disclose sensitive IP? Is it used to create IP? Is it incorporated into a product? Is it customer-facing? Is it used in employment decisions (hiring, compensation, promotions, layoffs)?
  3. “Pressure test” the high-risk use cases. Why do we need it? How does it work? What instructions were provided to create it? How was it trained? How do we know any of the above? What value does it provide? What are the worst-case outcomes?
An AI-bias-in-hiring lawsuit that the Equal Employment Opportunity Commission settled this week was the first case of its kind brought by the agency. But employment lawyers expect many more to come, as well as more suits filed by job candidates who believe they were victims of AI bias.