The US federal government is likely to be the biggest customer of AI technology in the next few years, and it has just provided a list of its demands. Since the release of the October 30th executive order (EO) on artificial intelligence, legal commentary has focused on the broad regulatory mandate articulated within the EO. While the regulatory scope is indeed broad, perhaps the most impactful language is found in Section 10, titled "Advancing Federal Government Use of AI."
How the US federal government decides to purchase AI technology may signal how the AI industry will develop in the short term. In the absence of explicit laws from Congress, companies should pay attention to the industry standards shaped by this purchasing behavior.
The EO is clear in its mandate that the government acquire AI tools.
"Within 180 days of the date of this order, to facilitate agencies’ access to commercial AI capabilities, the Administrator of General Services...shall take steps... to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products... Specified types of AI capabilities shall include generative AI and specialized computing infrastructure." (Sec. 10.1.h)
The above suggests the federal government will purchase both hardware and software per the guidance set out by the EO. With the US federal government poised to become perhaps the biggest AI customer in the world, companies should look to the EO and its subsequent reports as a guide that will steer investment and innovation within the industry over the next few years. Companies should pay special attention to agency reports on the requirements for AI systems that will be used within the government. For example, the Director of the Office of Management and Budget (OMB) will provide AI guidance for the rest of the federal government.
"The Director of OMB’s guidance shall specify... required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI." (Sec. 10.1.b.iv)
The guidance referenced above may serve as a blueprint for companies developing AI technologies that want to be best positioned to comply with any future regulations or to take advantage of customer demands flowing from these industry standards. Furthermore, the Director of the OMB will provide recommendations and best practices for the following:
- "External testing for AI..."
- "testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs..."
- "reasonable steps to watermark or otherwise label output from generative AI"
- "minimum risk-management practices" as defined in the EO
- "independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings"
- "documentation and oversight of AI"
- and "requirements for public reporting on compliance"
AI companies looking for guidance on what industry standards will emerge may adopt these recommendations and best practices from the OMB. Additionally, future legislation is likely to be congruent with the language of this EO. Accordingly, companies may use the EO as a roadmap for investing in their AI future.
The EO can be found here: “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
A fact sheet of the EO can be found here: “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.”