TikTok Lawsuit Highlights the Growing Power of AI

Jason Green-Lowe
September 4, 2024

Last week, the Third Circuit Court of Appeals issued a ruling in Anderson v. TikTok, the tragic case of a 10-year-old girl who accidentally hanged herself while trying to replicate a “Blackout Challenge” shown to her by TikTok’s video feed.

Algorithmic responses

The judges distinguished between two different types of algorithmic responses:

  1. A passive algorithm that responds to a customer’s apparent preferences by recommending videos similar to those she has already watched, versus
  2. An active algorithm that affirmatively recommends content to a customer based on her age and demographics.
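
To make that distinction concrete, here is a minimal Python sketch of the two approaches. It is purely illustrative – nothing here reflects TikTok’s actual system, and every name and scoring rule in it is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        tags: set

    @dataclass
    class User:
        age: int
        watch_history: list  # Videos the user chose to watch

    def passive_recommendations(user, catalog):
        """Passive: rank videos purely by overlap with what the user
        has already watched; the platform adds no judgment of its own."""
        watched_tags = set()
        for video in user.watch_history:
            watched_tags |= video.tags
        return sorted(catalog,
                      key=lambda v: len(v.tags & watched_tags),
                      reverse=True)

    def active_recommendations(user, catalog):
        """Active: the platform itself decides what to push, keyed to
        who the user is (here, boosting 'challenge' videos to minors –
        a hypothetical rule for illustration only)."""
        def score(v):
            boost = 2 if user.age < 18 and "challenge" in v.tags else 0
            return len(v.tags & {"trending"}) + boost
        return sorted(catalog, key=score, reverse=True)

In the passive version, the user’s own choices drive everything; in the active version, the platform’s scoring rule expresses an editorial judgment about what a particular kind of user should see.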

According to the judges, a purely passive algorithm would be immune under the increasingly notorious Section 230 of the Communications Decency Act. Most other courts that have considered Section 230 issues have also found that the defendants were immune.

However, in this case, the judges refused to apply Section 230 and instead allowed the girl’s family to try to “hold TikTok liable for its targeted recommendations of videos it knew were harmful.” The Third Circuit decided that TikTok’s algorithm went so far beyond the original understanding of what Section 230 was designed to cover that no immunity was appropriate.

Section 230 was originally designed to protect third-party bulletin board services like AOL, Prodigy, and Yahoo, which merely hosted content created by third parties and allowed interested users to download that content if they wished. An online bulletin board might censor some of the most offensive posts to create a more family-friendly environment, but early Internet providers were not in the business of recommending content to their users.

By contrast, today it is so common for algorithms to decide the content of our ‘feeds’ that we sometimes forget this used to be a decision left in the hands of the reader. Many of us consume social media for two hours a day without ever knowing – much less controlling – where that content comes from.

Maximal attention hijacking

As Cal Newport noted in his book Digital Minimalism:

“Hundreds of billions of dollars have been invested into companies whose sole purpose is to hijack as much of your attention as possible.”

The more you watch, the more companies like TikTok and YouTube get paid by advertisers. This is a serious enough social problem when editorial decisions are made by corporate executives—but what about when the decisions are made by AI? 

The AI algorithm that decided to show videos of hangings to 10-year-old girls did not know or care about child psychology and did not have any sense of shame or morality. By their very nature, artificial intelligences are amoral. They have no sense of right and wrong – all they can do is accomplish their task as efficiently as possible. If a careless trainer tells them their task is to maximize advertising revenue, they will do so with ruthless and single-minded purpose, even if that means killing some of their customers.

We should think very carefully before continuing to expand the power of these amoral AIs. They are already controlling our social media feeds, restaurant recommendations, dating suggestions, and housing prices. Before we also put them in charge of our healthcare system, the power grid, and the stock market, it is critical to have the government set some minimum safety standards to ensure that powerful AIs are trained well enough to avoid killing their customers.
