What’s Missing From NIST’s New Guidance on Generative AI?

Jason Green-Lowe
May 13, 2024

Last week, the National Institute of Standards and Technology (NIST) released several resources on how to deal with Generative AI, i.e., machines that can generate new content on demand, such as essays, music, videos, and even software code.

Several of these resources are urgently needed and well thought out. The GenAI program will accelerate work on benchmark datasets that can be used to evaluate the capabilities and limitations of advanced AI models. Such benchmarks are indispensable for researchers and activists because they provide a quantitative basis for comparing models and identifying which ones pose the most significant risks. GenAI’s initial “challenge exercise,” which will focus on automatically detecting AI-generated content, is a well-chosen task for sharpening our thinking about these problems.

Similarly, the NIST resource A Plan for Global Engagement on AI Standards (NIST AI 100-5) is useful for setting “norms for governance and accountability processes.” In the long run, safety standards must be global to be effective. Moreover, those standards must contain enough operational detail to be workable in practice.

As NIST AI 100-5 correctly points out, we need more than just “the high-level AI policy principles discussed in multilateral settings such as the Organisation for Economic Co-operation and Development or the G7” in order to “provide actionable guidance for developers, project managers, senior leaders, and other hands-on AI actors.” This document provides an insightful roadmap laying out how we can get that actionable guidance. 

The Center for AI Policy (CAIP) applauds NIST AI 100-5’s careful analysis of which standards are needed, which obstacles stand in the way of developing workable standards on those topics, and which specific diplomatic or scientific actions can be taken to overcome those obstacles.

However, Section 2 of NIST AI 600-1, the Generative Artificial Intelligence Profile, is one area where the NIST documents missed the mark. This section lays out twelve categories of risk that are “unique to or exacerbated by generative artificial intelligence,” including “dangerous or violent recommendations,” the “environmental” hazards posed by the massive amounts of energy used to train generative AI systems, and violations of “intellectual property.”

Although these risks are serious and worthy of attention, no list of the harms posed by generative AI would be complete without at least mentioning takeover risk: the risk that AI systems will permanently escape from human control and begin acting as autonomous agents, pursuing their own goals at the expense of human well-being. The ability to generate a to-do list or a strategic plan is not fundamentally different from the ability to generate an essay, a poem, or a stockholder report. Generative AIs already have some ability to engage in strategic planning simply by imitating the language found in human-authored plans, just as AIs have used their mastery of patterns to write working code, create architectural blueprints, and steer self-driving cars. Over the next few years, this ability is likely to improve to the point where AI agents become as common as ordinary chatbots.

Once millions of AIs begin acting as agents, they are likely to disrupt our society and economy enormously. AI agents can reproduce themselves in a few minutes; raising a new human takes at least 18 years. AI agents can absorb an entire library’s worth of information in seconds; a human might not be able to read (or retain) that much information in an entire lifetime. Humans need to be paid a minimum wage; AI agents can be run for a few cents’ worth of electricity. Even if AI agents are no more intelligent than ordinary humans, they will still have significant structural advantages because of their electronically enhanced memory, vision, cost-effectiveness, and speed. We need to start preparing for AI agents today. The Center for AI Policy strongly urges NIST to revise its draft profile to account for the unique risks of AI agents.

From a macro policy perspective, relying on voluntary standards is not enough. No matter how complete NIST’s profiles become, many companies will recklessly choose to ignore them. In the long run, the only way to protect against the unique risks posed by generative AI is for Congress to enact laws requiring developers to pass minimum safety standards. 

CAIP recommends the policies contained in our model legislation, the Responsible Advanced Artificial Intelligence Act (RAAIA), including hardware monitoring, mandatory licensing based on safety features, and whistleblower protections for employees who report unsafe AI practices.

The scale and breadth of the risks discussed in NIST AI 600-1 should make it clear that we need legislative protections to keep Americans safe from those risks. The stakes are too high to leave it up to each individual company to decide whether it feels like making safe products.