Democratic Platform Nails AI Strategy But Flubs AI Tactics

Jason Green-Lowe
August 21, 2024

Last Monday night (8/19/24), the Democratic Party approved its 2024 Party Platform.

The platform’s general rhetoric hits all the key themes of AI safety:

  • “Artificial intelligence (AI) holds extraordinary potential for both promise and peril.”
  • “We need to act now and act fast to realize the promise of AI and manage its risks to ensure that AI serves the public interest.”
  • “Democrats are committed to ensuring that workers get a voice in how AI is used in their workplace and that they share fairly in any economic gains AI produces.”
  • The Biden administration took credit for asking NIST “to set rigorous standards for extensive testing of powerful AI models to ensure safety before public release.”

The Democratic Party is clearly aware of the possibility that advanced AI could cause catastrophic damage. At his listening forum in December 2023, Senate Majority Leader Schumer (D-NY) asked, “could AI systems be used to more easily create a deadly novel pathogen or surpass the capabilities of even the smartest humans?” 

As Rep. Don Beyer (D-VA) told Time magazine, “As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligence—the end of humanity—I don't think we can afford to ignore that. Even if there's just a one in a 1000 chance, one in a 1000 happens. We see it with hurricanes and storms all the time.”

The question is: what will the Democratic Party do with this awareness?

Based on their platform, Democrats appear to believe they can solve the problem with voluntary guidelines and best practices. Although they call for an outright ban on voice impersonations, the remainder of the policies in their seven paragraphs on AI seems to lack any enforcement mechanism.

The Center for AI Policy (CAIP) welcomes the Democratic Party’s proposal to “invest in the AI Safety Institute to create guidelines, tools, benchmarks, and best practices for evaluating dangerous capabilities and mitigating AI risk.” However, such investments cannot and will not solve the problem by themselves, because it remains each company’s choice whether to follow the guidelines.

Such flimsy oversight would be readily recognized as insufficient in any other industry: we do not rely on ‘voluntary guidelines’ for combating wildfires, for avoiding food poisoning, or for making sure that airplanes stay in the air.

Why, then, is the Democratic Party apparently content to stick to voluntary guidelines for AI safety? This is not what the voters want. A poll conducted in June 2024 showed that 75% of voters from both parties favor “taking a careful, controlled approach” to AI over “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” If we leave the decision up to private companies, as we have done so far, then at least one company will always choose to race ahead in pursuit of shareholder profits and fame. 

In the long run, the only way to give Americans the careful, controlled approach to AI that they demand and deserve is by working with Congress to pass binding AI safety legislation that levels the playing field and requires all companies to develop AI safely. 

Senators Blumenthal (D-CT), Reed (D-RI), Klobuchar (D-MN), and Hickenlooper (D-CO) have already acknowledged the need for mandatory third-party evaluations. CAIP urges the rest of the Democratic Party to join them.
