America Needs a Better Playbook for Emergent Technologies

Kate Forscey, July 19, 2024

This morning, a cybersecurity company accidentally shut down computers and critical systems worldwide. A single bug in a single product disrupted businesses everywhere, from airlines to banking to pastry shops.

This is a warning sign of worse problems to come. As far as we know, the CrowdStrike bug was accidental and the result of human error; it was promptly corrected because humans remained in control of the relevant systems and could revert to a safer version of the software.

This will not always be the case: the current trend is to delegate more and more control over vital infrastructure to artificial intelligence. What problems might we expect to see when no reversion is possible, because AI is in control? When billing systems go down for one morning, we can give out free coffee and check patients into the hospital by hand. What happens when they go down for weeks or months at a time under the pressure of AI-guided cyberattacks?

For the past year, the Center for AI Policy (CAIP) has been pushing Congress to require safe AI. We want the government to prepare for these problems before a true emergency strikes. This is, sadly, not the way the federal government has been coping with new technologies for the last few decades.

Instead, Congress has been using a deeply flawed four-step process.

Step 1: Stand back while American companies deploy exciting new products, occasionally giving a speech about the importance of innovation.

Step 2: Hold hearings to listen to the experts warning that the new products come with important new risks. If necessary, introduce a few messaging bills to show that Congress cares about these problems.

Step 3: Focus on other priorities, letting the messaging bills die in committee.

Step 4: Wait for a catastrophe to strike, and then convene a hearing to scold the executives who were responsible – but don’t make a serious effort to solve the underlying problem, and don’t require the executives to pay for the harm they caused.

This is basically the process we used for the Exxon Valdez oil spill (which didn’t stop the BP oil spill), for the Equifax data leaks (which didn’t stop the AT&T leaks), and for the Cambridge Analytica disinformation campaign (which isn’t stopping election deepfakes in 2024).

If we keep using this flawed approach, then in the near future we will have to cope with problems far worse than this morning’s comparatively mild inconveniences. AI is rapidly approaching a level of capability that will allow it to develop biological weapons, permanently cripple essential infrastructure, and launch lethal autonomous weapons. The techniques for controlling those advanced capabilities are still in their infancy – for the most part, AI is still a black box. The software that crashed on CrowdStrike’s watch had millions of lines of code, far too many for any one person to fully understand, but at least it was, in principle, the kind of thing that humans could label, study, and reorganize. This is not true for AI. At the core of every AI system is a list of billions of unlabeled numbers whose meanings are utterly opaque.

We cannot adequately protect the American public just by hoping that those numbers do what we want them to and then yelling at executives when they don’t. We need a better way of responding to emerging technologies. We need mandatory safety requirements to be put in place before the next catastrophe.
