

Establishing an Artificial Intelligence Strategy and Plan

In the Midwest, severe weather is not uncommon, and there is a backhanded compliment sometimes paid to television meteorologists that goes something like: “He has accurately predicted thirteen of the last four tornados.”  Observers may be forgiven for feeling the same way about artificial intelligence and past predictions of its disruptive effect on their businesses and careers.  Of course, hundreds of tornados, many of them lethal, are spawned across the Midwest every summer. So, despite the false alarms, a tornado warning often sent us into the basement, especially when a twister was spotted nearby. Smart leaders are similarly experienced at assessing risk likelihood and potential disruption.  They don’t overreact to every potential issue, but neither do they neglect potentially catastrophic risks just because those risks have failed to materialize in the past.

Some industries may be more immune than others to the effects of artificial intelligence and the resulting advances in automation. But every organization should consider its risks and, equally important, its opportunities. Organizations should consider not just a narrow policy response but a comprehensive approach that includes the following elements.

Management Vision

– The identification of threats and opportunities, both short term and long term, and the development of a strategy to counter the threats and take advantage of the opportunities is inherently the responsibility of senior leadership.  These responsibilities cannot be delegated. Management must roll up its sleeves and understand the potential applications and use cases.  This is particularly challenging because the capabilities are incredibly dynamic, expanding at a compounding rate, with each new advance fueling multiple new possibilities.  Leaders and strategists must ask: What prevailing constraints, costs, or revenue streams in the current business model could be undermined?  What previously contemplated investments or strategic moves may no longer make sense?  What new capabilities will be made possible?  What happens if we do nothing and our competitors transform their offerings and their cost structures?  Of course, all this reimagining of the business model post-AI must happen without missing a beat in execution of the current strategy.

Board Engagement

– One of the most significant developments in corporate board governance over recent years has concerned risk management. Under Caremark, Delaware courts had historically embraced very reactive and lenient expectations of corporate directors in overseeing risk. In recent years, however, Delaware courts have articulated a different set of expectations, requiring directors to proactively identify the most significant or “mission critical” risks facing the organization, take steps to assure themselves that management has an effective plan to mitigate those risks, and actively monitor management’s execution to ensure it is effective. Today, most surveys of CEOs and directors list AI among the top three enterprise risks, so it is likely a risk that is, or should be, on the agenda of every board.  Directors should be posing many of the questions above (and below) to their own management teams and assessing the reasonableness of the responses. Once satisfied that management is focusing adequately on both the risks and the opportunities, directors should ensure that they remain in the loop and are able to monitor the effectiveness of the company’s strategy and response.

Risk Management

– The mapping, assessment, and prioritization of the multitude of risks and opportunities associated with artificial intelligence is not a “back of the envelope” exercise. If an organization has an ERM function, it will likely have the skills and methodologies to comprehensively map the reasonable possibilities (there is no time to boil the ocean) and ensure that nothing important falls through the cracks. If there is no ERM function, another function with risk assessment skills and experience, such as the Chief Compliance Officer or Internal Audit, might be best suited.  But this task should be assigned to someone who can be held accountable.  What is the methodology?  Have we considered the traditional strategic, reputational, regulatory, and operational risks? Are we now including the risk of AI tool bias being unwittingly introduced? The pace of tool development and regulatory response is incredibly fast; how often do we need to refresh our analysis?  Who should receive the assessment? A diligent board member would want to see it to ensure the board is meeting its Caremark duties. Will the assessment be protected by the attorney-client privilege? A diligent GC would want to be involved in the process to ensure legal risks are reliably discussed and assessed, and so that the privilege applies.
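
To make the exercise concrete, here is a minimal sketch of how a risk team might structure an AI risk register with simple likelihood-times-impact prioritization and a refresh cadence. The entries, owners, scoring scale, and review interval are hypothetical placeholders, not a prescribed methodology.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import IntEnum

class Rating(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIRiskEntry:
    """One row in a hypothetical AI risk register."""
    name: str
    category: str             # strategic, reputational, regulatory, operational, bias
    likelihood: Rating
    impact: Rating
    owner: str                # accountable function (ERM, CCO, Internal Audit, ...)
    last_reviewed: date
    review_interval_days: int = 90  # fast-moving area: refresh quarterly, not annually

    @property
    def score(self) -> int:
        # Simple likelihood x impact prioritization; real ERM scoring is richer.
        return self.likelihood * self.impact

    def is_stale(self, today: date) -> bool:
        return today - self.last_reviewed > timedelta(days=self.review_interval_days)

register = [
    AIRiskEntry("Chatbot leaks confidential data", "operational",
                Rating.MEDIUM, Rating.HIGH, "CISO", date(2023, 6, 1)),
    AIRiskEntry("Biased screening tool in hiring", "regulatory",
                Rating.MEDIUM, Rating.HIGH, "CHRO / CCO", date(2023, 4, 15)),
    AIRiskEntry("Competitor transforms cost structure with AI", "strategic",
                Rating.HIGH, Rating.HIGH, "CEO / Strategy", date(2023, 5, 1)),
]

# Prioritize by score and flag entries overdue for a refresh.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "REFRESH OVERDUE" if risk.is_stale(date(2023, 8, 1)) else "current"
    print(f"{risk.score}  {risk.name}  [{risk.owner}]  ({flag})")
```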

Governance

– Some risks or activities arise consistently within a particular function and nowhere else; they rarely require broad governance structures. But other activities are undertaken by everyone, and effective governance is more complicated.  IT security is a good example of the need for broad governance because, in many industries, everyone in the organization has a computer.  For AI governance, an organization will need to select a governance structure that is well suited to its risk profile.  If the possibility of new AI applications arising is substantial across multiple functions (and it should be), then a governance model with broad participation will be most effective.  A governance structure composed solely of risk-avoiding lawyers, internal auditors, and security professionals will be too risk averse because they will often lack the perspective of the business or function and the value that an application can provide.

Legal Compliance

– Much discussion has focused on the new regulatory initiatives undertaken by the EU, the US federal government, and various states. New proposals are emerging weekly, and they contemplate very intrusive controls over both new technologies and even software tools that we never thought of as AI.  But a great number of existing laws, including copyright, IT security, data privacy, anti-discrimination, competition, and government contracting laws, also apply to any current or proposed AI uses.  Organizations should ask themselves:  What processes do we have to monitor emerging new laws? How are we monitoring new enforcement initiatives or interpretations of existing laws? What processes do we have to ensure our use of AI is consistent with existing or emerging compliance obligations?  What preventative or detective measures can we employ to ensure compliance with the law and our own policies?

Data Security & Privacy

– This issue has properly received much attention in recent months. However, it is not readily solved by the adoption of a simplistic ChatGPT ban.  A policy is not self-executing; employees will evade it, whether on the network or through their personal devices. And the risk that confidential information might be loaded into a third-party service is hardly new or unique to chatbots.  Companies have dealt with this risk for many years.  Instead of a simplistic ban, and just as they did with other third-party applications that had value, companies need to take proactive steps to embrace artificial intelligence tools and provide their employees with a safe environment to use them without endangering sensitive information. Enterprise versions of various generative AI tools offer secured instances in which employee queries are not recorded or used for model training and the company’s own data can be safely queried, which also provides far greater value.  For those systems, the questions are familiar: What controls are in place to protect and safeguard data?  The privacy and data security risks are real, but they must be engaged proactively and constructively.  “Just say no” is not a good strategy here.
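
As one illustration of a constructive control short of a ban, the sketch below screens prompts for sensitive patterns before they leave the network and routes hits to monitoring rather than blocking employees outright. The patterns and the “Project Falcon” codename are hypothetical; a real deployment would rely on the company’s own data classification rules and DLP tooling.

```python
import re

# Illustrative patterns only; a real program would use the organization's
# own classification rules and a proper DLP engine.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_project": re.compile(r"\bProject\s+Falcon\b", re.IGNORECASE),  # hypothetical codename
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches before a prompt leaves the network.

    Returns the redacted prompt and the names of the rules that fired,
    which can feed a monitoring log rather than a hard ban.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

clean, fired = redact_prompt("Summarize the Project Falcon deal memo for SSN 123-45-6789.")
print(clean)   # sensitive spans replaced with [REDACTED:...] markers
print(fired)   # ['ssn', 'internal_project'] -> route to monitoring, not a blanket ban
```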

AI “Assurance”

– The era of AI has introduced us to the expectation that an AI model and its algorithms must be explainable, reliable, and trustworthy.  We also want to know what data was used to train the model or system. Where was it sourced? Did the training company have a lawful right to use the data? Was it open source? Was the consent of any data subjects obtained? What potential biases may arise from the training algorithm used, from the data that was used, or from the data that was not used? How do we know that potential bias or discrimination issues are being avoided? Some of these concerns are familiar, but others are new to many of us and derive from the “black box” aspect of AI tools. A company should begin to develop a framework for assuring itself, and assuring others, that its use of AI tools is free from these issues.
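
One lightweight way to start such a framework is to keep a model-card-style provenance record per tool and derive the open questions from it. The sketch below is illustrative only; the fields and checks are assumptions about what an assurance record might track, not an established standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataProvenance:
    source: str             # where the training data came from
    license: str            # e.g. "open source", "licensed", "unknown"
    consent_obtained: bool  # were data subjects' consents captured?

@dataclass
class AssuranceRecord:
    """A lightweight, model-card-style record for one AI tool."""
    model_name: str
    vendor: str
    intended_use: str
    training_data: list[DataProvenance] = field(default_factory=list)
    bias_tests_run: list[str] = field(default_factory=list)
    explainability_notes: str = ""

    def open_questions(self) -> list[str]:
        """Flag unresolved provenance and bias issues for this tool."""
        issues = []
        if any(d.license == "unknown" for d in self.training_data):
            issues.append("Training data with unknown licensing rights")
        if any(not d.consent_obtained for d in self.training_data):
            issues.append("Data subjects' consent not confirmed")
        if not self.bias_tests_run:
            issues.append("No bias or discrimination testing documented")
        return issues

record = AssuranceRecord(
    model_name="resume-screener-v2",   # hypothetical tool
    vendor="Acme AI",                  # hypothetical vendor
    intended_use="first-pass resume triage",
    training_data=[DataProvenance("scraped job boards", "unknown", False)],
)
print(record.open_questions())  # -> three open questions to resolve before deployment
```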

Third Party Risk Management

– Almost all companies will deploy AI-based capabilities, but almost none of them will develop their own generative AI, LLM, or other models.  Those models will be procured from third parties. Even where a model receives more advanced instructions or “fine-tuning,” there will be underlying training and data integrity issues that need to be examined.  Companies will need to set AI standards for their third parties.  This is true not only of the AI technology provider supplying a generative AI tool but also of other vendors who may be providing a simple service. In a recent case, the EEOC obtained a settlement from a company that was screening resumes with a tool programmed to exclude applicants above a certain age. Your vendor's improper use of AI in supporting your business is often your problem. Companies procuring services should consider requiring vendors to disclose to the company any use of AI tools in the fulfillment of their contracts.
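
A minimal sketch of how a procurement team might triage such vendor disclosures follows. The intake questions and triage thresholds are hypothetical, intended only to show how disclosure answers could route a vendor to further diligence.

```python
from dataclasses import dataclass

@dataclass
class VendorAIDisclosure:
    """Hypothetical intake form capturing a vendor's AI usage under a contract."""
    vendor: str
    uses_ai: bool
    ai_purpose: str = ""               # e.g. "resume screening", "chat support"
    affects_individuals: bool = False  # hiring, credit, housing, etc.
    bias_testing_documented: bool = False
    will_notify_of_changes: bool = False

def triage(d: VendorAIDisclosure) -> str:
    """Route a disclosure to a review tier; thresholds are illustrative."""
    if not d.uses_ai:
        return "standard onboarding"
    if d.affects_individuals and not d.bias_testing_documented:
        return "enhanced diligence: require bias-testing evidence before signing"
    if not d.will_notify_of_changes:
        return "contract gap: add an AI-use change-notification clause"
    return "routine AI review"

print(triage(VendorAIDisclosure(
    vendor="Acme Staffing",            # hypothetical vendor
    uses_ai=True,
    ai_purpose="resume screening",
    affects_individuals=True,
    bias_testing_documented=False,
)))  # -> enhanced diligence: require bias-testing evidence before signing
```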

Training and Education

– Finally, employees will need to be trained on the impermissible uses of AI tools and on how to safely use and harness the capabilities of AI applications. Companies should establish a strategy and training curriculum. What are the key risks?  Who will require training? Will the training be role specific? How will we measure its effectiveness?
