The Need for AI Safety Has Bipartisan Consensus at the Highest Ranks

Kate Forscey, October 1, 2024

The verdict is in: the need to move on AI safety is one of the rare points of bipartisan agreement these days.

In the past week, both President Joe Biden and former (and possibly future) first daughter Ivanka Trump have made significant statements on the need for AI safety regimes.

In his recent speech to the United Nations General Assembly, President Biden spent a decent amount of time discussing the need for AI safety:

“But let’s be honest.  This is just the tip of the iceberg of what we need to do to manage this new technology. Nothing is certain about how AI will evolve or how it will be deployed.  No one knows all the answers. [...]

“Will we ensure that AI supports, rather than undermines, the core principles that human life has value and all humans deserve dignity?  We must make certain that the awesome capabilities of AI will be used to uplift and empower everyday people, not to give dictators more powerful shackles on human — on the human spirit.” 

Meanwhile, Ms. Trump has made public her concern about the issue, going so far as to do a deep-dive study of the subject. In one of her most recent comments on X (formerly Twitter), she notes: "Leopold Aschenbrenner's SITUATIONAL AWARENESS predicts we are on a course for Artificial General Intelligence (AGI) by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks." The document she shared highlights the fact that "leading AI labs treat security as an afterthought."

Yet unfortunately, California Governor Gavin Newsom, who has generally been strong on consumer rights and AI issues, vetoed SB 1047, the strongest state bill to date intended to codify AI safety regulations. Even Elon Musk, of Silicon Valley fame, backed the bill, which passed through both chambers. Anthropic, a leading AI lab, said "we believe its benefits likely outweigh its costs." At the end of the day, the Governor was the lone dissenter.

The EU has moved forward on AI regulation. California has tried and so far failed to initiate significant AI safety regulations. Yet there is clear bipartisan support for safety in AI at the highest levels, no matter how the election plays out come next month. 

Congress has been working on this issue for months and months, and we now have further validation from both sides of the aisle. And to give some credit, multiple bipartisan bills have been introduced, but none show signs of getting across the finish line. It’s time to make a concerted effort to do so, to ensure AI safety on a national level while we still have the time, because AI waits for no one.
