You Can’t Win the AI Arms Race Without Better Cybersecurity

Jason Green-Lowe
August 13, 2024

Last night I returned home from my first DEFCON, a cybersecurity conference that many attendees affectionately call “summer camp for hackers.”

The nickname comes in part from the relatively playful, meme-infested style of the talks and presentations, and in part from the interactive tech demos that give everyone a chance to learn new skills and engage in friendly competition.

As an attorney who hasn’t done much hacking since middle school, I found both the talks and the demos to be a startling wake-up call. 

Want to break into an office building? You don’t need a Batman-style grappling hook; it turns out you can just use a standard key to physically unlock the entry pad and then silently short-circuit it with a piece of bent wire. If they ask to check your company badge, you can create one using a $20 keycard cloning device. Once you’re inside, you can plug in a USB drive with preloaded attack software. For example, the $120 Bash Bunny Mark II advertises that it “goes from plug to pwn in 7 seconds – so when the light turns green, it’s a hacked machine.”

Physically breaking into a secure facility might sound like a plot out of a James Bond movie, but the stakes will increasingly justify such stunts as AI gets more and more expensive. By the end of the year, some models will likely be trained on $1 billion worth of compute – far more than the average haul from any of the thousands of successful bank robberies that occur each year.

In addition to the commercial incentives, state actors are encouraging or requiring their citizens to commit acts of industrial espionage. According to an intelligence officer I spoke with, if you’re a Chinese citizen working at an American tech company and the Chinese Communist Party orders you to steal the latest AI model, you really have no choice but to comply; returning empty-handed would likely lead to imprisonment for you and your family. According to FBI Director Christopher Wray, China “aims to ransack the intellectual property of Western companies so it can speed up its own industrial development and eventually dominate key industries,” including AI.

Not all cybertheft requires a physical intrusion or an employee on the inside – one of the more interesting talks at DEFCON was about vulnerabilities in Amazon Web Services that allowed for remote execution of hostile code and the online “theft or manipulation of AI training datasets.” The vulnerabilities were open from February to June 2024 and affected six different Amazon services… and these are only the problems that we know about. We have to assume that there are other, similar vulnerabilities somewhere in the Big Tech ecosystem that have not yet been patched and that are continuing to leak valuable data and code to rival governments.

The Manhattan Project famously failed to keep the secrets of the atom bomb, even though it was a single off-the-books project run by the military and hidden in the middle of the desert. The idea that multiple private companies conducting their AI operations in the open are all going to successfully prevent their tech from leaking to the Chinese is absurd – at least, it’s absurd to expect that outcome if we keep conducting business as usual.

If we actually want America and its allies to win the AI arms race against rivals like China and Russia, we need large, creative, and fast upgrades to our AI cybersecurity. In terms of great power competition, there’s little point in inventing better missile control systems or better drone swarms if the Chinese are going to steal and adapt those inventions within months of their deployment.

That’s why the Center for AI Policy’s 2024 Action Plan calls for mandatory reporting on what (if anything) America’s largest AI companies are doing to upgrade their cybersecurity. The government might not know exactly which technical upgrades are best, but it’s obvious that something needs to be done. 

We think it’s reasonable to insist that Big Tech tell America how they plan to solve the problem.
