

NIST's Latest Guidance on Secure AI Development and Global Standards

On July 26, 2024, the National Institute of Standards and Technology (NIST) released four guidance documents on artificial intelligence (AI) development and implementation. The documents were issued pursuant to Executive Order 14110, which directed several U.S. government agencies to promulgate guidance and regulations supporting safe, secure, and trustworthy AI.

The first document, titled "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" (RMF GAI), defines risks that are novel to, or exacerbated by, generative AI (GAI) and suggests actions organizations can take to govern, map, measure, and manage those risks. The RMF GAI applies the functions and categories of NIST's 2023 AI Risk Management Framework specifically to GAI technology, providing a cross-sectoral profile for managing GAI implementation risks that addresses both current concerns and potential future harmful scenarios.

NIST also released three additional documents:

"Secure Software Development Practices for Generative AI and Dual-Use Foundation Models" (SSDF), which updates prior NIST software development guidance to add recommendations for implementing secure development practices specifically tailored to generative AI systems. It covers the entire AI model development lifecycle and emphasizes practices to secure AI elements and mitigate risks from malicious tampering.

"A Plan for Global Engagement on AI Standards" (AI Plan), which provides directives to drive worldwide development and implementation of AI-related consensus standards, cooperation, and information sharing. It emphasizes the need for context-sensitive, performance-based, and human-centered AI standards.

"Managing Misuse Risk for Dual-Use Foundation Models" (MMRD), which offers comprehensive guidelines for identifying, measuring, and mitigating misuse risks associated with powerful AI models. This document is open for public comment through September 9, 2024.

Together, these guidance documents attempt to define best practices for reducing the risks that arise when developing and deploying AI models. While the NIST guidance is not legally binding, those developing or deploying AI models should take note: deviation from prevailing practices or recommendations could introduce insurance or liability risks, particularly for organizations that operate or support federal information systems.
