I write this blog post on an auspicious day (my birthday)
while the audio stream in the background describes the 2020 US presidential
election — closely contested, and one that will likely go down to the wire.
A recurring theme in the commentary — albeit a Radio New Zealand-curated
stream, one step removed from the hurly-burly of Washington — is the sense that
the outcome, whichever way it goes, will likely leave a US populace deeply
divided at a time when a sense of cohesion and common purpose is needed as the
nation seeks to find a path through the economic, social, and health havoc
wrought by COVID-19.
The 2020 campaign has been bitterly fought. Consistent with
the trends over recent years, the record of what has transpired — and will
subsequently take place — is increasingly being held on social media.
Regardless of the result, the ensuing tensions from a contested
election, combined with uncertainties due to COVID-19, provide fertile ground for
conflicts that will certainly play out on platforms such as Twitter and Facebook.
Leading up to the election, much was made of the role these
platforms play in disseminating information that could distort the
result. Facebook, for example, took concrete steps to ensure that content posted by politicians
was fact-checked and that offending content was removed.
Moving forward, however, a new potential challenge emerges:
the role the platforms could play in fueling conflicts between resentful losers
and gloating winners — and in coordinating protests and actions in the physical
world. The risk is not purely hypothetical. Social media was credited with
enabling the organization of demonstrations, and at times more violent protests,
during the Arab Spring (2010–12). That occurred in Middle Eastern and North
African countries with comparatively few social media users and poor
telecommunications networks. What could be achieved in a first-world country
with the best networks available and near-ubiquitous uptake of the platforms in
question?
Fortunately, this risk appears to have been foreseen by
Facebook, at least. Last week, it was reportedly evaluating tools developed for
so-called at-risk countries for use in the US after the election. These tools —
developed for use in politically volatile countries such as Myanmar and Sri Lanka, where racial and religious tensions and
political divisions prevail — include slowing the spread of posts as they begin
to go viral, altering the News Feed algorithm to change what content users see,
and changing the rules about what kinds of content are dangerous enough to warrant removal.
While at first blush the use of these tools appears to raise some concerns about freedom of speech, note that all of them were proposed in the “Christchurch Call” — the collaborative agreement between social media companies and several nations (notably not the US) to constrain the dissemination of objectionable material in the wake of last year’s massacre at two mosques in the New Zealand city of Christchurch. There appears to be broad international acceptance that social media platforms will, of their own accord, introduce such tools where they see a risk. Facebook executives have indicated they will deploy the tools in the US only if election-related violence or other dangerous circumstances unfold. Nonetheless, the option is on the table precisely because the tools already exist.
So what does this mean for social media policy in the United States?
First, it demonstrates that in the modern world, the
diffusion of social media is indeed a two-way street. Tools developed in the
third world can and should be used in the first world just as first-world tools
can be used in the third. This is a feature of globalization that so far does
not seem to have been constrained by increasing national partisanship.
Second, it shows that firms are willing to take steps to
manage content on their sites without regulatory dictates. What
matters more is that they are transparent about how they use these tools, so
platform users can make informed decisions about which sites to use and how much weight
to give to content viewed on them. There is debate about whether these firms
should be required to adopt codes of conduct analogous to those used in the
conventional media to govern such actions. But in the short run, when decisions
must be made today about content that could be viewed tomorrow, it seems that
at least some social media firms are willing to take some responsibility for
the content they carry.