Two Easy Ways for the Returning Senate to Make AI Safer

Mark Reddish
September 9, 2024

With the Senate returning today from its August recess, two strong legislative measures are ready for action and would make AI safer if passed.

  1. Call a Floor Vote on the PREPARED for AI Act

The Senate Committee on Homeland Security and Governmental Affairs approved a bipartisan bill that would strengthen the foundation for realizing the benefits of AI while improving federal procedures for mitigating its risks. The “Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act,” or “PREPARED for AI Act” (S. 4495), was introduced by Senators Gary Peters (D-MI) and Thom Tillis (R-NC). Among other things, the bill would ensure federal agencies:

  • Implement procurement plans for AI tools that include minimum cybersecurity standards and risk mitigation techniques
  • Coordinate on practices related to AI risk management and adverse outcome reporting 
  • Perform testing and monitoring of AI proportionate to the risk level
  • Hire and train a workforce that understands the risks and benefits of using AI
  • Develop a process for identifying high-risk uses of AI, such as uses that impact civil rights, privacy, or safety
  • Establish agency training programs to limit the use of AI to its intended purpose and interrupt the operation of AI where appropriate
  • Avoid using AI for certain prohibited use cases, such as categorizing and taking action against an individual based on biometric data to deduce race, political opinion, religious beliefs, or other personal traits 

These measures align with the broader need to ensure that the federal government takes a proactive approach to evaluating and mitigating the risks from AI, as well as the need for AI systems to be protected with strong cybersecurity safeguards.

The Center for AI Policy (CAIP) supports the PREPARED for AI Act, hopes it will soon be considered by the full Senate, and looks forward to the House moving companion legislation to enact these commonsense initiatives. 

  2. Add S.Amdt. 3139’s Cybersecurity Protocols to the NDAA

On July 24th, Senators Chuck Schumer (D-NY), Mike Rounds (R-SD), and Martin Heinrich (D-NM) introduced an amendment to S. 4638, the Senate version of the National Defense Authorization Act (NDAA), that would empower the Secretary of Commerce and the Secretary of Homeland Security to jointly develop mandatory cybersecurity and insider threat protocols for all covered artificial intelligence firms. Now that the Senate is back in session, it’s time to get this amendment adopted and the FY 2025 NDAA enacted.

CAIP applauds Senate Amendment 3139 (S.Amdt. 3139) and supports its addition to the FY 2025 NDAA.

In today's rapidly evolving technological landscape, the need for mandatory AI oversight has never been more critical. As America advances the frontier of AI capabilities, our government’s ongoing lack of authority to regulate AI poses significant risks to our economy and security. Ensuring the safe deployment of AI is not just a technological imperative but a national security and public safety necessity.

The United States must take proactive steps to establish robust frameworks that govern AI development and deployment, particularly in defense and national security. This commonsense amendment would safeguard our nation against potential threats and allow us to deploy advanced AI without leaking dangerous secrets to criminals and rival states.
