Comment on BIS Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

Claudia Wilson, October 8, 2024

Response to Bureau of Industry and Security

On September 11, 2024, the Bureau of Industry and Security (BIS) released a proposed rule, “Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters.” In line with Executive Order 14110, BIS has proposed quarterly reporting on the development and safety activities surrounding the most powerful AI models.

The Center for AI Policy (CAIP) supports these reporting requirements and urges Congress to explicitly authorize them. These requirements will give BIS valuable visibility into the state and safety of America’s AI industry. Such insight will enable BIS to assess whether innovation is keeping pace with America’s military needs and whether models are being safety tested before they are released to the wider public.

Beyond the design of the rule itself, sufficient resources and communication between government departments will be crucial to achieving the intent of these reporting requirements. For example, BIS may wish to establish ongoing meetings with representatives of the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) to understand what innovation is relevant to military use.

Although the proposed rule is a step toward AI safety, reporting requirements are no guarantee that companies will act responsibly. Given corporate incentives, companies may rush to develop and release AI models without sufficient safety testing. Powerful but insufficiently tested models may prove deadly when deployed in high-stakes critical infrastructure contexts. Similarly, we don’t want malicious actors armed with the capability to develop new pathogens. Moreover, the tendencies of current generative AI models toward deception and power-seeking grow all the more concerning as the autonomy of AI agents increases. Only by shifting corporate incentives, through required safety measures or clarification of liabilities, can we ensure that companies don’t put society at risk with technically faulty or easily misused models.

CAIP replied to BIS’s request for comments to help refine the proposed reporting requirements. Below is an executive summary of our full comment.

Executive Summary

Thank you for the opportunity to provide feedback on the proposed rule Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters. The Center for AI Policy (CAIP) commends the Bureau of Industry and Security (BIS) on a well-designed process for reporting information. CAIP also strongly agrees with the intended aim of the proposed rule: “to ensure and verify the continuous availability of safe, reliable, and effective AI … including for the national defense and the protection of critical infrastructure”. Leading AI developers plan to build stronger foundation models with capabilities that could pose catastrophic national security risks, while complex safety and security challenges remain unsolved. This unprecedented situation warrants careful, vigilant oversight.

In this response, we share the following feedback on three topics highlighted by BIS.

  1. Quarterly notification schedule: Support the quarterly notification schedule.
  2. Collection and storage: Suggest using an encrypted file sharing platform.
  3. Collection thresholds: Support computing power as an interim and evolving threshold.

We also share additional commentary on the following topics:

  1. Clear description of information required: Suggest that BIS clarify the required information for collection in the final rule.
  2. Cost of compliance: Agree that compliance costs are minimal relative to operational costs of developing large models or running computing clusters.
  3. Relevance of critical infrastructure: Suggest inclusion of “critical infrastructure” in the background as mentioned in the EO and the DPA.

Read the full comment here.
