How The Communications Decency Act Paved The Way For The Internet
Dubbed "the twenty-six words that created the internet," Section 230 has become one of the most important, albeit controversial, laws in the country.
The provision became even more controversial when then-President Donald Trump issued an executive order that sought to limit the legal protection it offers.
President Joe Biden also echoed this sentiment, saying that this provision of the Communications Decency Act “should be revoked, immediately should be revoked, number one.”
But what is Section 230, and why is it so controversial?
Section 230 of the Communications Decency Act (CDA) of 1996 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This means that online intermediaries hosting or republishing speech or content are protected against liability for what users post on their platforms.
Section 230 recognizes that holding websites legally responsible for user-generated content would cripple the rapidly developing online world.
It basically creates a broad protection that has allowed innovation and free speech online to flourish.
Since it was passed by Congress and signed into law by President Bill Clinton on February 8, 1996, Section 230 has laid the foundation for how we know and use the internet today.
It has made possible everything from unfiltered opinions and comments to the social media phenomenon, and it has enabled platforms to moderate their online content. On the flip side, it has also enabled mass disinformation, hate speech, and other objectionable content.
The Controversy That Is Section 230
The CDA was originally enacted to regulate online platforms and encourage them to enforce community standards.
Platforms are expected to moderate content and remove or flag posts considered "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" without being liable for that moderation or for user-generated content.
The controversy, then, centers on where to draw the line on that moderation.
According to John Bergmayer, Legal Director of Public Knowledge, "Section 230 was designed to let different platforms have different approaches about what content they choose to host, how they moderate that content, and what they take down." Much of that variety, however, has been lost with the rise of a few dominant platforms.
In the early days this was not much of a concern, because internet platforms were smaller and had little influence on the public.
Nowadays, however, companies and entities such as tech giants are powerful enough to sway public opinion and, therefore, influence and affect significant matters such as the economy and election outcomes.
There is also considerable debate over whether internet platforms moderate too little or too aggressively.
This makes it harder for the public to distinguish legitimate information from fake information, and harder for platforms to do the "right" thing.
Furthermore, Section 230 acts merely as a guide for moderating content and does little to encourage the production of quality, legitimate content.
Much harmful content and misinformation is deliberately crafted to be engaging and profitable. As a result, consumer well-being is not really prioritized.
The First Amendment And Section 230
The First Amendment protects free speech by prohibiting the government from restricting it, including most forms of hate speech.
It may seem like one contradicts the other, but in reality, Section 230 actually enhances the First Amendment.
The First Amendment dictates that Congress "shall make no law…abridging the freedom of speech, or of the press."
Many have argued that Section 230 conflicts with the First Amendment, but this is not the case.
Section 230 does not limit free speech. In fact, it has allowed free speech to flourish online, since users can post and share content on any platform of their choosing.
Section 230 gives platforms the power to moderate content within what is allowed by law.
The law's main exceptions cover intellectual property violations, content related to sex trafficking, and violations of federal criminal law.
Outside those exceptions, online platforms cannot be held responsible for comments and content that their users post.
They are also given the authority to moderate such content without fear of liability.
This protection also encourages competition: without it, the threat of defamation liability would push platforms to over-censor valuable user speech simply to avoid lawsuits.
Improving Section 230
Around 23 bills seeking to amend Section 230 have been introduced in Congress, and more appear to be coming.
Most of these bills reflect a belief that the provision gives big tech companies too much protection and that this protection should be curtailed.
These amendments fall under the following categories:
1. The "carveout" approach, which reduces the scope of protection or requires platforms to change their behavior if they want to keep those protections.
The argument is that holding platforms liable will encourage them to protect consumers from content that is harmful or discriminatory.
However, there should be a safeguard against over-censorship or over-moderation when this path is chosen.
2. The "bargaining chip" approach, which makes Section 230 protections conditional on platform behavior, such as fact-checking or restricting moderation to promote a freer flow of ideas.
However, it is difficult to see where a compromise could come from when the underlying assumptions about how platforms go wrong are directly opposed to each other.
3. Eviscerating Section 230 entirely. These proposals, however, seem designed to get the attention of big tech companies and online platforms rather than to offer a tangible, rational solution.
The Big Tech Companies’ Response To The Proposed Amendments Of Section 230
Facebook recently released a white paper that laid down what the company prefers regulators to do.
According to the white paper, online platforms are global and are subject to different laws and cultural values.
Furthermore, platforms act more as an intermediary for speech and not as publishers in the traditional sense of the word.
As is the nature of the online world, platforms must also change constantly to remain competitive, so moderation decisions will not always be right.
Facebook therefore recommended that platforms be held accountable to key metrics, such as keeping the prevalence of violating posts below a certain number of views or meeting a mandatory median response time for removing violating posts.
However, other tech companies such as Google and Twitter have voiced concerns about extensive changes and amendments to the law.
The debate over the controversy of Section 230 and its amendments will continue, but finding middle ground on the potential changes will be a challenge.
Editor’s Note on The Section 230 Debate And Why It Is Important To Speech:
This piece was published to raise awareness of Section 230, a law that concerns every American's freedom of speech on the internet.