Trump’s social media order reaches the FCC

President Donald Trump’s quixotic pursuit of an internet fairness doctrine continued this week, as the administration petitioned the Federal Communications Commission (FCC) to adopt rules enacting Executive Order 13925. As I discussed when it was released, this executive order sought potentially significant changes to Section 230 of the Communications Decency Act (CDA), a key law that shaped much of the modern internet ecosystem. But the petition differs from the executive order, which reduces the already slim likelihood that the executive order will lead to meaningful change.


Can the FCC interpret Section 230?

Perhaps surprisingly, the petition is correct that the FCC can regulate social media platforms. Section 201(b) gives the FCC general rulemaking authority to carry out provisions of the Communications Act. Congress enacted the CDA as an amendment to the Communications Act, which brings Section 230 within the FCC’s purview, so the agency may clarify the meaning of ambiguous statutory language, and courts must defer if the clarifying regulation is reasonable. Given this legal framework, the key questions are whether Section 230 is ambiguous, and if so, whether the administration’s proposed interpretation is reasonable.

What is the administration asking the FCC to do?

The petition identifies four supposedly ambiguous provisions needing clarification. First, it seeks regulations clarifying the distinction between Sections (c)(1) and (c)(2). As discussed earlier, Section (c)(1) (“posting immunity”) protects intermediaries (such as Twitter or Facebook) from liability for publishing material written by their users. Section (c)(2), by contrast, shields intermediaries from liability for removing content (“takedown immunity”). The administration is arguably correct that courts often conflate these provisions, allowing the general posting immunity to swallow the more qualified takedown immunity. But this does not mean the statute itself is ambiguous. Moreover, the proposed regulation — limiting posting immunity to claims arising from “failure to remove information” provided by a user — is unduly narrow given the statute’s broad, sweeping language.

From there, the petition focuses on narrowing takedown immunity. Section (c)(2) protects against liability for “action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The administration seeks to limit “otherwise objectionable” to material “similar in type” to the preceding adjectives, and to further limit “good faith” to actions that treat similarly situated users similarly (a swipe at allegedly pretextual blocking of conservative content). This seems inconsistent with the statute, which grants platforms broad discretion over “material that the provider . . . considers to be” problematic. The petition also asks the FCC to impose a notice-and-response procedure before removing material, which finds no basis in the statute.

Two interesting observations flow from this effort. First, the petition focuses heavily on narrowing takedown immunity. While this is unsurprising given the executive order’s focus on platforms that supposedly suppress conservative content, it also minimizes the petition’s effect. A recent review of 500 Section 230 cases by the Internet Association found only 19 focused on (c)(2). This is in part because Section (c)(2) only provides a defense. A plaintiff must have a separate legal basis to argue that taking down content somehow violated the law, which is difficult given that most platforms reserve the right to remove content as part of their terms of service. As the Internet Association notes, often plaintiffs assert that removal violates their First Amendment rights, but these claims fail because as private companies, platforms are not bound by the First Amendment.

Second, the petition seems to have abandoned the executive order’s suggestion to apply (c)(2)’s “good faith” qualification to both takedown and posting immunity. The executive order argued that in at least some circumstances, a platform that restricts access to content in a manner not specifically protected by (c)(2) may also not be able to claim protection under (c)(1). As I discussed in the earlier post, this would have been significant, as unlike the takedown immunity, the posting immunity is vital to modern platforms — Facebook and Twitter allow us to post freely specifically because they face no liability for doing so. Presumably, the petition’s authors realized there was no way to read the statute this way. The petition does seek to limit posting immunity by broadly defining when a post might be considered the platform’s own speech, but my first reading found this proposal fairly incoherent and difficult to apply.

Conclusion

Overall, my sense is that the proposed rules generally fail to identify or reasonably resolve ambiguities in Section 230. I remain convinced there is little reason the FCC would act on this petition. Even if it did, platforms retain a robust First Amendment right of editorial control that largely insulates them from government-mandated evenhandedness. The Fairness Doctrine was a bad idea for 20th-century broadcast, and it remains a bad idea for 21st-century platforms.
