
SCOTUS decision in Google, Twitter cases a win for algorithms too


In a pair of lawsuits targeting Twitter and Google, the Supreme Court had its first chance to take on the 1996 law that helped give rise to social media. But instead of weighing in on Section 230, which shields online services from liability for what their users post, the court decided that the platforms didn’t need any special protections to avoid liability for hosting terrorist content.

That finding, issued Thursday, is a blow to the idea, gaining adherents in Congress and the White House, that today’s social media platforms ought to be held responsible when their software amplifies harmful content. The Supreme Court ruled that they should not be, at least under U.S. terrorism law.

“Plaintiffs assert that defendants’ ‘recommendation’ algorithms go beyond passive aid and constitute active, substantial assistance” to the Islamic State of Iraq and Syria, Justice Clarence Thomas wrote in the court’s unanimous opinion. “We disagree.”

The two cases were Twitter v. Taamneh and Gonzalez v. Google. In both, the families of victims of ISIS terror attacks sued the tech giants over their role in distributing and profiting from ISIS content. The plaintiffs argued that the recommendation algorithms of Twitter and Google’s YouTube aided and abetted the terror group, in violation of U.S. terror laws, by actively promoting its content to users.

Many observers anticipated the cases would allow the court to pass judgment on Section 230, the portion of the Communications Decency Act passed in 1996 to protect online service providers like CompuServe, Prodigy, and AOL from being sued as publishers when they host or moderate information posted by their users. The goal was to shield the fledgling consumer internet from being sued to death before it could spread its wings. But in the end, the court didn’t even address Section 230, deciding it didn’t need to once it concluded the social media companies hadn’t violated U.S. law by automatically recommending or monetizing terrorist content.

As social media has become a primary source of news, information and opinion for billions of people around the world, lawmakers have increasingly worried that online platforms like Facebook, Twitter, YouTube and TikTok are spreading lies, hate and propaganda at a scale and speed that are corrosive to democracy. That has led to claims from both right and left that today’s social media platforms have become more than neutral conduits for speech, like telephone systems or the U.S. Postal Service, putting their thumbs on the scale through their algorithms and their content moderation decisions.

The court ruled, however, that those decisions are not enough to find the platforms had aided and abetted ISIS in violation of U.S. law.

“To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal — and sometimes terrible — ends,” Thomas wrote. “But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large.”

The rulings leave open the possibility that social media companies could be found liable for their recommendations in other cases, and perhaps under different laws. In a brief concurrence, Justice Ketanji Brown Jackson took care to point out that the rulings are narrow. “Other cases presenting different allegations and different records may lead to different conclusions,” she wrote.

But there was no dissent from Thomas’ view that an algorithm’s recommendations were not enough to hold a social media company liable for a terrorist attack.

This is a developing story and will be updated.
