The Supreme Court can fix the Internet... but only if it stands firm against Big Tech.
When the Communications Decency Act was passed in 1996, the Internet was a nascent technology that was in need of special protections and legal consideration. Fewer than 1 in 10 households had access to the Internet. Those who had a connection spent less than 30 minutes a day surfing the web. None could have imagined what the World Wide Web would become.
Section 230 of the CDA was designed to prevent litigious companies, special interest cabals, and other bad actors from attacking a new and exciting technology, potentially killing it in its infancy.
Flash forward nearly three decades and today's Internet is the proverbial Wild West, complete with monopolistic tycoons who've amassed their wealth largely thanks to the fertile anarchy Section 230 has furnished.
Soon, the Supreme Court will hear a case (Gonzalez v. Google) that could modestly rein in some of the legal immunity Section 230 has provided. And the "sky is falling" claims from the FAANG-likes are as overblown as they are expected.
In full, Section 230 reads:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The spirit of Section 230 is simple: websites (including YouTube, Twitter, etc.) hosting things their users upload should not be held to the same legal standard as, say, The New York Times Op-ed section. That's the spirit. But Big Tech reads that as "we're not responsible for anything our service does with your content."
And look, I owe so much of my success on the Internet to YouTube and Section 230. Indeed, I have over 100,000 subscribers on their platform. So it may seem hypocritical of me to be arguing against Google in this case.
But I intend to persuade you, dear reader, that the Supreme Court's ruling in favor of Gonzalez would make for a better overall Internet and a friendlier experience on social media in general.
What Actually is Free Speech?
First, I'm a staunch free speech advocate. I believe that the First Amendment to the US Constitution guarantees your right to speak freely, without fear of censure... from the Government. Furthermore, that right is essential to liberty across the globe.
Still, reasonable minds question how far that freedom should extend to corporations. Any reasonable person would agree that the news media should never face reprisal from the government for doing their job. We also know that advertisers should be punished for employing deceptive practices.
The way I think about it is simple: paid speech can never be free speech.
Free Speech on the Web
Now, you may have noticed that I believe in the spirit of Section 230. I am in no way calling for Section 230 to be repealed.
Websites that allow user content to be publicly accessible should not be held to the same standards as a magazine or TV station or newspaper. Most experts agree that it would be impossible for a site like YouTube to exist without a provision like Section 230.
However, that does not mean that social media should have a free hand to act with reckless indiscretion or impunity.
The question is: where do we draw the line? What behaviors and functionality of a platform should cause it to be seen as a "publisher" under the law?
The Line
The answer, to me, is a simple and objective one: when a website implements a "curation" system.
Whether that's an automated discovery system like the Facebook/Twitter/YouTube algorithms, or a hybrid algorithmic/human-curated system like TikTok. I believe that's when a platform crosses the line into publisher territory. And that's when it should be held to the same standards as a magazine or movie studio.
At that point, the site is no longer simply hosting user content (which I believe is the spirit of Section 230). Instead, they are curating content. If the algorithm pushes another user's post to your feed? That is no different from a newspaper publishing an op-ed. And if the algorithm knows that a post like that will keep you outraged and send you down the engagement rabbit hole? They profit. Paid speech.
Legal Jeopardy
This is essentially what Gonzalez v. Google is arguing: that YouTube and Facebook are acting as publishers when they promote content algorithmically, and that they should be held liable for the damaging and dangerous content they promote.
And frankly, I think it's fair to expect a company to take responsibility for such content. A duty to ensure the content it pushes to its users is, at the very least, not harmful.
So, I believe a company should be considered a publisher if it promotes user-generated content, whether that content is harmless or something like (though not limited to) the following:
- Someone doxes another user and it shows up in your feed,
- Someone distributes malicious software through an ad service,
- Someone shares copyright-infringing material and the platform promotes it,
- Someone spreads actual libel or real slander and the platform puts it in your timeline,
- Someone publishes self-harm instructions and the platform recommends it,
- A criminal organization has their recruitment propaganda shared by the platform.
The platform should be at least partially legally responsible in any of the above circumstances. Doubly so if you have not followed or subscribed to the original creator of said content.
But why does any of that matter? Well, publishers can be held responsible for the content they publish. But thanks to Section 230, platforms cannot be.
A Supreme Court ruling in favor of Gonzalez would put the current, lawless business model of these sites in legal jeopardy. It would make for a legal gray area that would need new, clarifying legislation and scare off the FAANG-likes from their current discovery algorithms.
From my perspective, platforms like YouTube and Facebook should continue to exist, but their discovery models should not. And the only way that's going to change is if these companies can be sued for the stuff they promote.
Furthermore, platforms should only be liable for the content they promote. If they don't promote a piece of user-generated content for public consumption, then they should have the same Section 230 immunity for that content.
Essentially, I think of promotion as almost synonymous with endorsement. What if YouTube endorsed every video they shoved into your recommended videos?
The Alternative
There already exists an alternative. A better way to do content discovery: user-generated discovery. In fact, it's how YouTube and Facebook and Twitter used to work. And what's great about it is that user-generated discovery mechanisms are in the spirit of Section 230!
In reality, when you strip out advertising and algorithms and publications, it's the natural way humans discover things.
The term "going viral" was coined when videos on YouTube were spread from person to person like a virus. Ask yourself, when was the last time a YouTube video went viral?
So what would user-generated discovery look like? Well, it wouldn't look like what we have today.
Today we have automated, black-box kingmakers which amplify (or muzzle) content on each platform. In most instances, what the algorithm decides to promote are things it believes will keep you engaged. The outrageous stuff that will keep you using the service the longest... with almost no regard for the substance or quality of the content itself.
Whether it's a TikTok time suck, an Insta Reel promoting eating disorders, an amateur documentary proffering baseless, cultish conspiracy theories, or an article about building an improvised explosive device, all of it is fair game for the algorithm to skillfully keep you addicted and scrolling.
But what if, tomorrow, YouTube looked like a completely different place? What if every YouTube account had a way to "retweet" videos from other creators? You'd see the activity of your favorite creators in a chronological timeline. No more outrage bubbles, no more destructive rabbit holes of misinformation. Just the experience you want from the platforms and users you choose.
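To make the idea concrete, here is a minimal sketch of what follow-based, chronological discovery could look like in code. This is purely illustrative, assuming hypothetical names (`Post`, `build_feed`) and a simplified model where a post can carry a "reshared by" field when one of the accounts you follow boosts it:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Post:
    author: str
    title: str
    posted_at: datetime
    # Set when a user "retweets" someone else's post into their own timeline.
    reshared_by: Optional[str] = None

def build_feed(posts, following):
    """Return posts authored or reshared by followed accounts, newest first.

    No engagement scoring, no black-box ranking: a post appears in your
    feed only because you chose to follow the person who made or shared it,
    and the order is strictly chronological.
    """
    visible = [
        p for p in posts
        if p.author in following or p.reshared_by in following
    ]
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)
```

The key design point is that the platform makes no editorial choice here: the selection criterion is entirely user-generated (who you follow, what they reshare), which is exactly the distinction drawn above between hosting and curating.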
In my book, this is the ultimate expression of the spirit of Section 230. Though Reddit is by no means perfect, it's a platform that provides users with a means of sharing interesting things. And the users are mostly in control. Mostly.
Conclusion
Overall, Section 230 is a good thing for small websites, local businesses, hobby projects, and platforms that host their user's content. And it should continue to protect a healthy and diverse web.
But I believe there ought to be a provision that, should a platform cross the line into curating user-generated content, they may be held responsible for said content.
I think that it's time the lawless web starts to mature and the only way that will happen is through regulation. I hope that Gonzalez v. Google is a turning point for the Internet. I hope we see the SCOTUS ruling followed up with ethical, effective legislation aimed at delivering a better user experience for everyone online.
As for search, platforms should be in the clear so long as they:
- provide reasonably relevant results to search queries, and
- don't promote their own content over search results.
I'd love to know what you think about any of this, though. You can sign up below and leave a comment. I can't wait to hear from you!