OpenAI Employees File Complaint Alleging Violations of SEC Regulations

Claudia Wilson
July 15, 2024

OpenAI employees filed a complaint with the SEC. Regardless of the outcome, they still need stronger whistleblower protections.

Two weeks ago, OpenAI whistleblowers filed a complaint claiming that the tech behemoth had violated SEC regulations with overly restrictive NDAs and employment agreements. They alleged that these agreements "prohibited employees from communicating concerns to the SEC about securities violations, forced employees to waive their rights to whistleblower compensation, and required employees to notify the company of communication with government regulators."

These are serious allegations. Whistleblowers are one of the key accountability mechanisms for AI companies, since there is no government enforcement of the companies' voluntary commitments. Without whistleblowers, AI companies may continue to deprioritize safety measures in order to meet ambitious launch dates. Just this week, we've seen revelations that safety evaluations for GPT-4o were condensed into a single week.

While the Center for AI Policy (CAIP) hopes that the SEC investigates these claims and acts accordingly, the fact remains that existing protections are insufficient. The complaint invokes Dodd-Frank and the Sarbanes-Oxley Act (SOX), which were designed to protect employees of public companies against retaliation for reporting securities violations or fraud to the SEC. These protections cannot be extended to employees of private AI companies. Even employees of public companies are not covered by these whistleblower protections unless they can link their concerns about AI safety to securities violations.

Outside of SEC protections, some states have a "public policy exception" that protects whistleblowers against termination for disclosures that serve the public interest. However, the definition of public interest is entirely at each state's discretion, and there is no guarantee it would encompass AI safety concerns. How can we expect whistleblowers to put their careers and livelihoods on the line for protections that are subject to interpretation?

California's proposed AI bill, SB 1047, is a step in the right direction. The bill includes protections for whistleblower employees who report noncompliance with its provisions, but, of course, these protections are geographically limited.

The US needs dedicated, federal whistleblower protections for employees of AI companies who reveal safety, privacy, or other ethical violations. Given how far away federal AI safety regulation may be, it is critical that these protections are not limited to reporting illegal activity.

We already have dedicated whistleblower protections for sectors such as aviation, food safety, environmental protection, and mining. Dodd-Frank and SOX were both introduced within the last twenty-five years to address specific gaps in whistleblower protections. In both cases, it took severe consequences for Congress to legislate: the 2007-2008 Financial Crisis prompted Dodd-Frank, and Enron's corporate fraud triggered SOX.

Let’s not wait any longer to see what the consequences of poor AI safety could be. Congress should introduce federal whistleblower protections for AI employees today.
