Right now, two Supreme Court cases (Gonzalez vs. Google and Twitter vs. Taamneh) have the potential to upend the world’s top social media platforms. That’s because both cases pose challenges to Section 230, a provision of the 1996 Communications Decency Act that has been memorably described as “the twenty-six words that created the internet.” No Section 230, no Facebook or Twitter. It’s that simple.
The internet as we know it could be coming to an end
To understand what’s at stake in the court system, it’s first important to understand what Section 230 is, and why it played a deciding role in the development of social media. There are two main provisions at the heart of Section 230.
First and foremost, Section 230 ensures that online platforms such as Twitter and Facebook are not liable for what their users post. A scandalous tweet, or hate speech posted to a public Facebook page, can’t be used in a lawsuit against the company that runs the platform. In other words, you can’t sue Facebook because some neo-Nazi posted a vile manifesto there. This is the big difference between being a platform and a publisher: a publisher can be sued for defamatory content or incendiary hate speech, but a platform can’t.
Secondly, Section 230 protects companies’ ability to moderate content online. Twitter or Facebook can “clean up” content they don’t like, or remove it entirely, and as long as they act in good faith, they can’t be held liable for censorship. This is why they are constantly updating their Terms of Service: by referencing those terms, platforms can take a range of actions against content posted on their sites (such as applying warning labels to certain types of content).
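To make that concrete, here is a minimal sketch in Python of how Terms-of-Service enforcement might be wired up. Everything in it is invented for illustration; the policy categories, the actions, and the moderate() function are assumptions, and no real platform’s rules are anywhere near this simple.

    # Illustrative only: a toy Terms-of-Service enforcement table.
    # Categories and actions are hypothetical, not any platform's real policy.
    TOS_ACTIONS = {
        "graphic_violence": "warning_label",  # show the post behind a warning
        "misinformation": "warning_label",    # show it with a context label
        "hate_speech": "remove",              # take the post down entirely
        "spam": "remove",
    }

    def moderate(flagged_category: str | None) -> str:
        """Return the action a platform might take on a flagged post."""
        if flagged_category is None:
            return "allow"  # nothing flagged: leave the post up
        return TOS_ACTIONS.get(flagged_category, "allow")

    print(moderate("graphic_violence"))  # -> warning_label

This kind of discretionary labeling and removal is exactly the activity that the second provision of Section 230 protects.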
Why the two Supreme Court cases matter
The two Supreme Court cases, interestingly, both involve terrorist-related content appearing online. In Gonzalez vs. Google, for example, the family of a terrorist attack victim claims that Google should be held accountable for ISIS recruitment videos that were later recommended by YouTube (which is owned by Google).
The legal logic here is a bit tricky to understand, but yes, it involves Section 230. Under Section 230, YouTube should not be liable at all, no matter how despicable the ISIS videos might be. Remember: Section 230 shields companies from liability for content appearing on their platforms, including terrorist-related content. But this case alleges that companies should be liable for content recommended by an algorithm (i.e., the YouTube algorithm that predicts what people want to watch next). In short, the case revolves around the algorithm, because the algorithm is something YouTube itself controls. It might be unfair to blame YouTube for an ISIS terrorist posting content online, but is it unfair to blame YouTube for recommending that content to someone else?
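For readers wondering what “recommended by an algorithm” actually means mechanically, here is a deliberately toy sketch of engagement-based ranking in Python. The fields, weights, and titles are all made up for illustration; YouTube’s real system is vastly more complex, but scoring content by predicted engagement is the general family of technique at issue.

    # Illustrative only: a toy engagement-based recommender.
    # Fields and weights are invented; real systems are far more complex.
    videos = [
        {"title": "Cat compilation",  "watch_time": 0.9, "like_history": 0.2},
        {"title": "News clip",        "watch_time": 0.5, "like_history": 0.8},
        {"title": "Extremist upload", "watch_time": 0.7, "like_history": 0.9},
    ]

    def score(video: dict) -> float:
        # The system optimizes predicted engagement, not content quality:
        # whatever is likeliest to keep the viewer watching ranks highest.
        return 0.6 * video["watch_time"] + 0.4 * video["like_history"]

    recommended = max(videos, key=score)
    print(recommended["title"])  # -> Extremist upload

Notice that nothing in the scoring function knows or cares what a video actually says. That indifference is precisely what the Gonzalez plaintiffs want courts to treat as the platform’s own conduct, outside Section 230’s shield.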
And that’s where things get really complex, because just about every social media platform has gone algorithmic these days. If the Supreme Court rules that algorithmic recommendations can create liability, there will be a massive domino effect across every social media company that uses them. If a mean tweet appears in my Twitter feed, can I now sue Twitter?
The case of Twitter vs. Taamneh is even more convoluted. Here, the family of a terrorist attack victim says that Twitter should be held liable for something it did NOT do, rather than something it did: Twitter failed to remove terrorist content from its platform, leaving it visible to other users. By showing those tweets, the argument goes, Twitter was “aiding and abetting” a terrorist organization, and should face the consequences. That logic would also seem to imply that Section 230 no longer applies to content moderation efforts.
When you consider just how much social media content is posted each day, doesn’t that sound unworkable? It would put Twitter (or any other company) in an impossible position: hiring armies of human moderators and AI-powered filters to guarantee that no offending content ever gets published.
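Some rough arithmetic makes the point. Both inputs below are back-of-the-envelope assumptions, not measured figures, but any plausible numbers lead to the same conclusion.

    # Back-of-the-envelope only: both inputs are assumptions for illustration.
    posts_per_day = 500_000_000   # order-of-magnitude guess at Twitter-scale volume
    seconds_per_review = 30       # assumed time for a human to vet one post

    review_hours_per_day = posts_per_day * seconds_per_review / 3600
    moderators_needed = review_hours_per_day / 8  # assuming 8-hour shifts

    print(f"{review_hours_per_day:,.0f} review-hours per day")
    print(f"~{moderators_needed:,.0f} full-time moderators needed")

Under these assumptions, guaranteed pre-publication review works out to roughly half a million full-time moderators for a single platform, before accounting for breaks, appeals, or borderline calls.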
Free speech and censorship
What makes things especially complex is how politicized Section 230 has become. Republicans (or, at least, supporters of Donald Trump) are looking for ways to roll back Section 230. They claim it has allowed the likes of Twitter and Facebook to censor anyone they like and to de-platform key voices, including the former U.S. president, and that only a rollback will bring free speech back to social media.
But is that really the case? Facebook’s parent company Meta argues the opposite: if full Section 230 protections are not extended to algorithmic recommendation engines (such as the kind that powers the Facebook feed), the result will be more censorship, not less. Companies like Meta will be so afraid of recommending anything that offends or triggers people that they will serve up nothing but the blandest of the bland.
Conclusion
At the end of the day, Section 230 has tremendous consequences for the future of social media. As these two Supreme Court cases show, it’s too simplistic to say that Section 230 is either “good” or “bad” for social media. In the case of Gonzalez vs. Google, for example, an argument could be made that a rollback of Section 230 would do more harm than good.
So keep an eye on this space. What happens next with Section 230 could forever change the way you use Facebook, Twitter, or YouTube.