No one has the right to stop you from saying your opinion about anything unless what you are saying is knowingly false and said with the intent to harm, distress, harass, or destroy another person's rights.
Editor, ADVOCATZ blog
Editor, New York Court Corruption
Editor, NYC Rubber Room Reporter
Editor, NYC Public Voice
Editor, National Public Voice
Editor, Inside 3020-a Teacher Trials
Regardless of one’s political persuasion, most of us can agree that First Amendment expression is, indeed, a bedrock constitutional principle.

by Tom Kulik, June 1, 2020, Above The Law
First Amendment expression is a significant pillar of our constitutional freedoms in the United States, and when it comes to free expression online, the protections for vigorous debate over the internet should be no exception. Now, more than ever, online platforms such as Facebook and Twitter provide incredible means through which to share not only ideas but news and events. Interestingly, none other than President Donald Trump himself enjoys using Twitter to reach his more than 81 million followers directly. His tweets, however, are not without controversy, and some of them have now fanned the flames of debate over “censorship” of content (or users) by online platforms, with the president claiming that Twitter (and other platforms) may be engaging in activity that erodes the very bedrock principle of First Amendment expression. Whether you agree with him or not, the underlying premise and its context are worth a look, and may even open your eyes to seeing online content liability in a new light.
For those who are not familiar, Section 230 of the Communications Decency Act of 1996 helped shape the internet as it stands today. Under Section 230(c)(1), “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In essence, Section 230 protects internet service providers from being treated like publishers, affording them immunity from liability for content posted on their platforms by others. Further, Section 230(c)(2) allows such providers to avoid liability for taking action “in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” What does this mean? It means that such providers can remove content meeting those criteria without fear of civil liability for doing so.
From my experience with Section 230 since its inception, I find the current debate striking because many policymakers (and many lawyers) seem to misunderstand certain aspects of Section 230 and its application, and those misunderstandings are distorting the debate. Here are the three biggest misconceptions regarding Section 230 that everyone needs to keep in mind:
Don’t Get Caught Up With “Publisher” And “Platform.” Given the text of Section 230(c)(1) and the jurisprudence prior to its enactment, it is easy to fall into the trap of seeing a legal distinction between “platform” and “publisher” turning on the extent of control over the content; however, this would be in error. The focus should remain on whether a platform is the “speaker” of the content. For example, if someone posted a defamatory reaction (i.e., a comment) to an article written by a staff writer for Yahoo News, then Yahoo News would not be liable for such defamation simply because the comment appeared on its site. On the other hand, if any of Yahoo’s news editors or staff writers posted defamatory content on the Yahoo News website, then Yahoo News could be held liable for that posting because it would be the “information content provider.” Put simply, the online platform must not be the originator of the defamatory content at issue for Section 230 immunity to apply.
Copyrights Are NOT The Issue In Section 230. The fact that an internet service provider may store content it does not know to be infringing, or “take down” such content under its policies and procedures, without being held liable for doing so should not be confused with Section 230 immunity. The Digital Millennium Copyright Act (DMCA), and more specifically Section 512, addresses not only immunity for the transmission and caching of infringing content through automated means, but also the requirements for receiving immunity from liability for storing content on the platform that the provider does not know to be infringing. Of course, the DMCA is a lot more involved than the thumbnail reference above, but the point is that the DMCA addresses immunity from liability for actions taken with respect to copyright infringement. Section 230, by contrast, deals with immunity from liability for the posting of defamatory, obscene, excessively violent content, etc., whether or not such material is constitutionally protected.
Regardless of one’s political persuasion, most of us can agree that First Amendment expression is, indeed, a “bedrock” constitutional principle. Does this mean that Twitter’s actions on Trump’s tweets merit a remake of Section 230? At best, Twitter’s action seems ill-advised because it is not applied consistently across the entire service — the notion of a social media platform potentially “taking sides” is repugnant to our notions of justice and fair play and undermines legitimate discourse. That said, do these facts merit a re-evaluation of Section 230 immunity? Given the broad interpretation of Section 230 by the courts since the law’s enactment, there is a good chance that a more restrictive interpretation of Section 230 in line with Trump’s executive order will face an uphill constitutional battle. Perhaps that is the point. Inquiring minds will definitely differ, but the point here is that any debate should maintain the correct perspective on Section 230 and what it does (and does not) do. Anything else is just, well, idle chatter.