On the largest internet platforms, content moderation is bad and getting worse. Getting it right is difficult, and at the scale of millions or billions of users it may be impossible. It’s hard enough for humans to sort through spam, illegal content, and offensive but legal speech. Bots and AI haven’t gotten the job done, either.

Thus, it is inevitable that services will make mistakes: suppressing user speech that doesn’t violate their policies, or terminating user accounts with no explanation or opportunity to appeal. And inconsistent moderation often falls hardest on oppressed groups.

The dominance of a handful of online platforms like Facebook, YouTube, and Twitter magnifies the impact of their content moderation decisions and mistakes on internet users’ ability to speak, organize, and participate online. Poor content moderation is a real problem that harms internet users.

There is no perfect solution to this problem. But US lawmakers seem enamored with trying to force platforms to follow a government-prescribed editorial line: host this kind of speech, take down that kind. In hearing after hearing, members of Congress hammered executives of the largest companies over what content stayed up and what came down. The hearings ignored the smaller platforms and services that could be damaged or destroyed by many of the proposed new internet regulations.

Lawmakers have also largely ignored worthier efforts to address the outsized influence of the largest online services, such as legislation supporting privacy, competition, and interoperability. Instead, in 2021, many lawmakers decided that they themselves would make the best content moderators. So EFF has pushed back, and continues to fight, against repeated government attempts to undermine free expression online.

The Best Content Moderators Don’t Come From Congress

It is a well-established principle of internet law that individual users are responsible for their own speech online. Users and platforms that host other users’ speech are generally not responsible for the words of others. These principles are enshrined in a key internet law, 47 U.S.C. § 230 (“Section 230”), which prevents online platforms from being held liable in most legal actions over their users’ speech. The law applies to small blogs and websites, to users who repost other people’s speech, and to the largest platforms alike.

In Congress, lawmakers have introduced a series of bills premised on the idea that online content moderation will improve if these legal protections are stripped away. It’s not at all clear how a barrage of costly lawsuits against platforms would improve online discourse. In fact, the prospect of litigating every content moderation decision would make hosting online speech prohibitively expensive, creating strong incentives to censor user speech whenever anyone complains. Anyone who isn’t a Google or a Facebook would struggle to afford running a website that hosts user content while staying on the right side of the law.

Nonetheless, we’ve seen bill after bill that actively sought to increase the number of lawsuits over online speech. In February, a group of Democratic senators took a shotgun approach to undermining internet law with the SAFE TECH Act. The bill would have prevented Section 230 from applying to speech for which “the provider or user has accepted payment.” Had it passed, SAFE TECH would have both increased censorship and undermined data privacy, as online providers shifted toward invasive advertising and away from “accepting payment,” which would have cost them their protections.

The following month brought the introduction of a revised PACT Act. Like the SAFE TECH Act, PACT would reward platforms for over-censoring user speech. The bill would mandate a “notice and takedown” system in which platforms remove user speech when a requester provides a court order finding the content illegal. That sounds reasonable at first glance, but the PACT Act provided no safeguards and would have let would-be censors suppress speech they dislike by obtaining preliminary or default judgments.

The PACT Act would also mandate certain types of transparency reporting, an idea we expect to see return next year. While we support voluntary transparency reporting (indeed, it is a key part of the Santa Clara Principles), we do not support reporting mandates backed by federal enforcement or the threat of losing Section 230 protections. Besides being bad policy, such mandates would likely violate services’ First Amendment rights.

Last but not least, later in the year we took on the Justice Against Malicious Algorithms (JAMA) Act. The bill’s authors blamed problematic online content on a new mathematical bogeyman: “personalized recommendations.” The JAMA Act would remove Section 230 protections from platforms that use a loosely defined “personalized algorithm” to recommend third-party content. JAMA would make it nearly impossible for a service to know what kind of content curation might expose it to legal liability.

None of these bills has passed. Still, it was disheartening to watch Congress head down the same dead ends again and again this year, trying to devise some kind of internet speech-control regime that wouldn’t violate the Constitution or provoke widespread public dismay. Worse yet, lawmakers seem largely indifferent to exploring real solutions, such as consumer privacy legislation, antitrust reform, and interoperability requirements, which would address the dominance of online platforms without violating users’ First Amendment rights.

State Legislatures Attack Online Free Expression

While Democrats in Congress expressed outrage that social media platforms fail to suppress user speech quickly enough, Republicans in two state legislatures passed laws to combat the alleged platform censorship of conservative users’ speech.

First came Florida, where Governor Ron DeSantis decried President Donald Trump’s Twitter ban and other “tyrannical behavior” by “Big Tech.” The state legislature passed a bill this year that bans social media platforms from banning political candidates or deprioritizing posts by or about them. The bill also prohibits platforms from banning large news sources or appending an “addendum” (i.e., a fact-check) to the news sources’ posts. Non-compliant platforms can be fined up to $250,000 per day, unless the platform also happens to own a large theme park in the state. The Florida state representative who sponsored the bill explained that this exemption was designed to let the Disney+ streaming service escape regulation.

This law is clearly unconstitutional. The First Amendment prevents the government from forcing a service to let a political candidate speak on its website, just as it cannot force traditional radio, television, or newspapers to host particular candidates’ speech. EFF, together with Protect Democracy, filed a friend-of-the-court brief in a lawsuit challenging the law, NetChoice v. Moody. We won a victory in July, when a federal court blocked the law from taking effect. Florida appealed the decision, and EFF filed another brief in the United States Court of Appeals for the Eleventh Circuit.

Next came Texas, where Governor Greg Abbott signed a bill targeting social media companies that he claimed “silence conservative viewpoints and ideas.” The bill prohibits large online services from moderating content based on users’ viewpoints. It also requires platforms to follow transparency and complaint procedures. Those requirements, if carefully crafted to account for constitutional and practical concerns, might be appropriate as an alternative to editorial restrictions. But in this bill, they are part and parcel of a retaliatory, unconstitutional law.

This bill has also been challenged in court, and EFF weighed in again, telling a federal court in Texas that the measure is unconstitutional. The court recently blocked the law from taking effect, including its transparency requirements. Texas has appealed the decision.

A Way Forward: Questions Lawmakers Should Ask

Proposals to rewrite the legal underpinnings of the internet came so frequently this year that at EFF we created a more thorough process for analyzing them. Having championed users’ speech online for over 30 years, we developed a series of questions lawmakers should ask when crafting any proposal to change the laws governing online speech.

First, we ask what the proposal is trying to accomplish. If the answer is something like “rein in Big Tech,” the proposal should not hamper smaller services’ ability to compete, or it may in fact cement the existing dominance of the largest services. We also examine whether the proposal addresses the right internet intermediaries. If the goal is something like stopping harassment, abuse, or stalking, those activities are often already illegal, and the problem may be best addressed through more effective law enforcement or civil actions targeting those actually perpetrating the wrong.

We have also heard growing calls to enforce content moderation at the infrastructure level: in other words, to shut down content by requiring an ISP, content delivery network (CDN), or payment processor to take action. These intermediaries are potential “chokepoints” for speech, and there are serious questions policymakers should consider before attempting moderation at the infrastructure level.

We hope that 2022 will bring a more constructive approach to internet lawmaking. Whether it does or not, we will be there to fight for users’ right to free expression.

This article is part of our Year in Review series. Read more articles on the fight for digital rights in 2021.