The US Has Committed to Spend Far Less Than Peers on AI Safety

Claudia Wilson, September 23, 2024

In September’s markup by the House Committee on Science, Space, and Technology, Congresswoman Haley Stevens (D-MI) made a salient point: the US is punching below its weight when it comes to funding its AI Safety Institute (AISI). Compared to peers such as the UK, EU, Canada, and Singapore, the US has announced the least funding so far, with a maximum of $10 million authorized for FY24/25. Even annualized, US AISI funding is middling at best. Nor are these ‘apples to apples’ comparisons: the US invests far more in AI than any other country with an AISI. In other words, the US’s investment in safety simply doesn’t match its investment in advancement.

Why does this matter? One of AISI’s responsibilities is to advance a nascent field: AI safety evaluations. We often hear that it is too soon to introduce mandatory evaluations of AI because the methodology has yet to be perfected. Yet without sufficient resources and planning, the US AISI is unlikely to fulfill this mission.

The opportunity to deploy these capabilities is imminent. OpenAI and Anthropic have already agreed to let AISI evaluate their models. The Bureau of Industry and Security (BIS) has also issued a proposed rule requiring the reporting of any red-teaming of advanced models, informed by AISI guidelines. Yet AISI is housed in a leaky building and receives roughly a quarter of Tom Brady’s annual salary at Fox.

Beyond the likelihood of insufficient risk mitigation, this lack of investment and strategy also impedes US leadership on international standards. If international leadership is truly an issue the Executive Branch and Congress care about, it should be funded accordingly.

Notes

1. Total funding announced so far, sourced from International Center for Future Generations (ICFG) and converted into USD. France and Japan have yet to announce their safety institute budgets. Other countries such as Korea and Australia are planning to set up similar safety institutes.

2. Private-sector investment funding sourced from the Stanford AI Index. Duration of AISI funding sourced from the UK Department for Science, Innovation, and Technology; ICFG; and the Infocomm Media Development Authority (IMDA). The duration of the EU’s “initial budget” was not specified, so it is assumed to be 5 years. Total AISI funding sourced from ICFG and converted into USD.
