NIST debuts long-anticipated AI risk management framework
With the launch of the AI RMF 1.0, federal researchers outlined four core functions to structure how organizations evaluate and introduce more trustworthy AI systems.
The National Institute of Standards and Technology unveiled its long-awaited Artificial Intelligence Risk Management Framework on Thursday morning, representing the culmination of an 18-month-long project that aims to be universally applicable to any AI technology across all sectors.
Increasing trustworthiness and mitigating risk are the two major themes of the framework, which NIST Director Laurie Locascio introduced as guidance to help organizations develop low-risk AI systems. The document outlines types of risk commonly found in AI and machine learning technology and how entities can build ethical, trustworthy systems.
“AI technologies have significant potential to transform individual lives and even our society. They can bring positive changes to our commerce and our health, our transportation and our cybersecurity,” Locascio said at the framework’s launch event. “The AI RMF will help numerous organizations that have developed and committed to AI principles to convert those principles into practice.”
The framework offers four interrelated functions as a risk mitigation method: govern, map, measure, and manage.
“Govern” sits at the core of the RMF’s mitigation strategy and is intended to establish a foundational culture of risk prevention and management for any organization using the RMF.
Building atop the “Govern” foundation, “Map” comes next in the RMF game plan. This step contextualizes the potential risks of an AI technology and broadly identifies the positive mission and uses of any given AI system, while simultaneously taking into account its limitations.
This context should then allow framework users to “Measure” how an AI system actually functions. Crucial to the “Measure” component is employing sufficient metrics that reflect universal scientific and ethical norms. Measurement is then applied through “rigorous” software testing and further analyzed with feedback from external experts and users.
“Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact,” the report cautions. “Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact and human-AI configurations.”
The final step in the AI RMF mitigation strategy is “Manage,” whose main function is to allocate risk mitigation resources and ensure that previously established mechanisms are continuously implemented.
“Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks and verify efficacy of the metrics,” the report states.
Business owners participating in the AI RMF rollout also expressed optimism about the framework’s guidance. Navrina Singh, the CEO of AI startup Credo.AI and a member of the U.S. Department of Commerce’s National Artificial Intelligence Advisory Committee, said that customers seeking AI solutions want more holistic plans to mitigate bias.
“Most of our customers…are really looking for a mechanism to build capacity around operationalizing responsible AI, which has done really well in the ‘Govern’ function of the NIST AI RMF,” she said during a panel following the RMF release. “The ‘Map, Measure, Manage’ components and how they can be actualized in a contextual way, in all these specific use cases within these organizations, is the next step that most of our customers are looking to take.”
The new guidance was met with broad bipartisan support, with Rep. Zoe Lofgren, D-Calif., and Rep. Frank Lucas, R-Okla., both sending congratulatory messages for the launch event.
“By taking a rights-affirming approach, the framework can maximize the benefits and reduce the likelihood of any degree of harm that these technologies may bring,” Lofgren said at the press briefing.
Community participation from a diverse group of sectors was critical to the development of the framework. Alondra Nelson, the deputy director for science and society at the White House Office of Science and Technology Policy, said that her office was one of the entities that gave NIST extensive input on the AI RMF 1.0. She added that the framework, like the White House AI Bill of Rights, puts the human experience of and impact from AI algorithms first.
“The AI RMF acknowledges that when it comes to AI and machine learning algorithms, we can never consider a technology outside of the context of its impact on human beings,” she said. “The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology and we should be proud of that.”
Much like the AI Bill of Rights, NIST’s AI RMF is a voluntary framework, with no penalties or rewards associated with its adoption. Regardless, Locascio said she hopes the framework will be widely utilized, and she asked for continued community feedback as the agency plans to issue an update this spring.
“We're counting on the broad community to help us to refine these roadmap priorities and do a lot of heavy lifting that will be called for,” Locascio said. “We're counting on you to put this AI RMF 1.0 into practice.”
Comments on the AI RMF 1.0 will be accepted until February 27, 2023, with an updated version of the playbook set to launch in Spring 2023.
Editor's Note: This article has been updated to reflect the planned launch of an updated AI RMF playbook.