This article is the fourth of five pieces from our summer series for 2020. The theme this summer is “Challenging Narratives.” In the coming days and weeks, The Generation will publish more articles where our writers challenge various notions to provide new and different perspectives on the debates and events shaping your world.
State violence, racial protest, and debates over the freedom of speech are not novel phenomena, but with the advent of social media and its global proliferation, 2020 in particular has witnessed a fundamental change in how citizens criticize their governments and conduct political protest. To actualize their fundamental rights in the 21st century, citizens must navigate the complex relationship between free speech, democratic institutions, and the broadcasting power of social networks. From the Arab Spring a decade ago to the Belarusian election and George Floyd protests of this summer, social media platforms have democratized the ability to rapidly spread information about injustice to millions. Yet as these platforms become increasingly inseparable from our lives, online spaces have become battlegrounds between governments and protesters, and between information and misinformation.
Now, the private companies that created these virtual domains must take on a balancing act previously reserved for states: protecting the freedom of speech while maintaining public safety. The complication of this role shift is that, as international corporations, they must balance many countries’ cultural norms regarding the freedom of speech. To what extent will individual domestic laws be able to influence private, global platforms? Social media companies also have the task of remaining politically neutral while protecting election security, and they are responsible for codifying checks on their own industry, one that inherently profits from scandalous, viral information. The U.S. has seen the influence of online conspiracy theories grow from the 2016 election until now, demonstrating that what goes on in online chat rooms can have effects in the physical world. Conspiracy theories, inflammatory accusations by state leaders, hate speech, and calls for revolution all violate some form of public order in physical space, so how will social media platforms design cyber policies that actualize people’s right to speech while protecting public order? Because social platforms have become both whistleblowing tools against injustice and democratic instruments for political organizing, social media companies must develop policies that help users actualize these freedoms online while fighting the misinformation and hatred that could threaten our democracies.
Social media has catalyzed social and political movements since the Arab Spring in 2011. A viral video filmed by an ordinary bystander ignited this powerful, paradigm-altering movement. Mohamed Bouazizi was a 26-year-old fruit vendor who lit himself on fire in front of a government building after refusing to pay a police bribe. He had been unable to find work as a lawyer due to his involvement in the opposition Progressive Democratic Party, and he publicly took his own life in protest of police harassment and President Zine El Abidine Ben Ali’s regime. Bouazizi’s death and his funeral were both filmed and sparked mass protests across Tunisia; President Ben Ali fled the country, ending a 23-year reign, within two weeks of Bouazizi’s funeral. Like dominoes, protests erupted across Egypt, Bahrain, Libya, Syria, and Yemen, all objecting to autocratic regimes. Social media gave ordinary citizens, especially young people, the ability to collect and share evidence of state brutality. This technology has served as a system for fact-checking government propaganda and has allowed opposition movements in repressive regimes to quickly galvanize and organize people in real life. While the Arab Spring did not deliver the “political reform and social justice” that was hoped for (Tunisia was the only nation that gained substantial change, in the form of a new constitution), technology was instrumental in sparking these revolutionary movements and in defining a new, 21st-century method of actualizing one’s right to speak and assemble.
More recent examples of how social media has empowered everyday people and helped corroborate accounts of state brutality can be found in Lebanon and Belarus. In Lebanon, the government’s handling of the harrowing August 4th explosion in Beirut’s port, in addition to existing outrage over its fiscal policy decisions during the nation’s grim economic recession, has ignited protests around the country. Videos of the explosion were disseminated widely across global news, and soon after, videos of excessive force, such as the use of tear gas on peaceful protesters, began circulating on social media. According to Human Rights Watch, researchers “observed security forces fire a tear gas canister directly at a protester’s head, in violation of international standards, severely injuring him.” A video of the aftermath, in which the man is being treated for his injuries, was posted on Twitter and currently has 119,000 views. In Belarus, protests have broken out after attacks on members of the opposition and over the corruption of the sitting president. Aisha Jung, Amnesty International’s Senior Campaigner on Belarus, said:
“The people who have gathered to denounce the elimination of opposition presidential candidates from the election list have every right to take to the streets…Protesters claim that the sole reason Alyaksandr Lukashenka’s political opponents have been purged from the campaign is so that he can seek a sixth consecutive term as President, effectively unopposed.”
Peaceful protesters are now camped out in front of the parliament building in Minsk, in what is known as Independence Square. Anyone in the world with access to Twitter can bear witness to these powerful moments; they can see the overwhelming crowd from a bird’s-eye view and hear the ringing echo of thousands of voices bounce off the buildings. Yet one can also find graphic videos of unnecessary and excessive force used against peaceful Belarusian protesters circulating online. These videos and images serve as evidence of state violence, dispel propaganda, and have captured the world’s attention.
One final example of social media’s role in galvanizing protests is the recent wave of demonstrations for racial justice across the United States, which may constitute the largest social movement in U.S. history. Everyday people of all races and backgrounds, along with celebrities, sports teams, and corporations, have contributed their support to the Black Lives Matter Global Network. The protests were ignited by George Floyd’s murder by Minneapolis police on May 25th. 17-year-old Darnella Frazier captured the shocking footage of the killing, which showed one officer kneeling on Floyd’s neck as he begged for his life while three other officers looked on. In America, protests over police brutality and racism have been catalyzed in the past by similar videos taken by bystanders. The Rodney King riots of 1992 erupted after four Los Angeles police officers were acquitted of brutally beating Rodney King, a violent act caught on camera that made national and global news. Since then, technology has become heavily integrated into our lives and widely accessible, so now more than ever the everyday citizen has the ability to record chaotic events that can later be used to inform others or corroborate eyewitness accounts.
In an analysis of these events, one can view social media and technology as a widely accessible, democratic outlet where nearly anyone can exercise their human right to freedom of expression. People can share their voices, experiences, and opinions in ways that check political regimes. But social media platforms are also private companies that, especially after the 2016 U.S. election, are reviewing their policies and investigating how to balance the universal right to free speech with the need to maintain public safety and to stop the spread of maliciously false information that could lead to real-life disasters and threaten civic institutions.
In 2016, a bizarre example arose of how fake social media accounts and conspiracy theories can produce real consequences. Outlandish rumors about presidential candidate Hillary Clinton’s connection to a child sex trafficking ring hidden in a Washington, D.C. pizza parlor spread on social media. Bot accounts, which “automatically create tweets without direct human oversight,” helped share and promote information on what became known as “Pizzagate,” along with radical online “personalities” like conservative radio host Alex Jones. Edgar Welch, a believer in such theories, showed up at the supposed center of this satanic ring, Comet Ping Pong, with a “Colt AR-15 assault rifle, a .38-caliber Colt revolver and a folding knife, [and] fired his gun two or three times.” No one was injured, but this strange incident serves as an example of how fake accounts created for the purpose of smearing a political figure online can have real and violent consequences. More recently, a movement founded on beliefs similar to “Pizzagate,” known as QAnon, has amassed followers on mainstream platforms. The Washington Post defines QAnon’s beliefs as
“… a concoction of allegations against Democratic politicians, celebrities and supposed members of a ‘deep state’ government bureaucracy, against whom Trump is seen as waging a valiant battle. Purported pedophilia rings are central to the conspiracy theory, along with Satanism and secret judicial proceedings. QAnon believers await the ‘Great Awakening,’ or the moment the general public realizes the conspiracy exists, and the ‘Storm,’ when thousands of wrongdoers face justice.”
Despite the FBI labeling QAnon a domestic terror threat, Facebook only began removing QAnon groups and pages in August of 2020; the takedown included 790 groups and 440 pages. Conspiracy theorists with fringe opinions spread false information to promote their own interests, interests defined by a system in which the more followers one has, the more money one stands to make from advertising deals. For example, Alex Jones was “permanently suspended” from Twitter in 2018 on the grounds that his tweets violated Twitter’s community guidelines against abusive behavior, which state: “you may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.” In 2020, Jones’s “InfoWars” app was banned from the Google Play store after spreading false information about the coronavirus; Jones also “lost an appeal in a defamation case about his claims related to the 2012 Sandy Hook Elementary School mass shooting” and racked up nearly $150,000 total in legal fees before the trial. Yet cases where accounts or apps are actually taken down are the exception, and they usually come only after significant harm has been done. The spread of maliciously false information has terrible consequences in reality, especially during a pandemic that requires a collective effort and understanding, which this misinformation complex constantly undermines.
The benefit of social media is that a single person has the potential to reach millions of people at once, but that same reach complicates the balancing act between protecting one’s right to self-expression and ensuring public safety. In America, hateful or inflammatory speech is usually thought of as “lawful but awful” and is protected as long as it is not meant to incite imminent violence. But social media platforms span many nations, each with differing regulations on speech. An NPR podcast about free speech and social media discussed how hate speech is not protected in Europe, and how the EU’s solution has been to have platforms write its guidelines into the terms of service signed by users in the region, warning that restrictive laws could be passed against the platforms if those rules are not factored in. The Trump administration, meanwhile, is also threatening legislative action if platforms do not conform to its own standards. This creates a tension unique to an international platform that must balance various countries’ cultural norms and laws regarding free speech.
Another debate influencing free speech online centers on a major component of Trump’s platform known as “Fake News”: the idea that traditional media coverage and social media algorithms discriminate against conservative viewpoints. Despite this pervasive allegation from the Right, there has been no evidence to substantiate the claim. In one study, “The Ideological Landscape of Twitter: Comparing the Production versus Consumption of Information on the Platform,” researchers from the University of Pennsylvania and the National University of Singapore debunked the assertion of online conservative bias, finding that “while partisan opinion leaders are certainly polarized, centrist or non-political voices are much more likely to produce the most visible information on the platform”. Online platforms are venues where both politicians and citizens of many countries can actualize their freedom of expression, but the struggle between governments and these private companies over algorithms and guidelines will only continue as the platforms age and grow.
Finally, other corporations are also weighing in on the issue: Coca-Cola, Target, and other major companies have recently limited or stopped advertising on Facebook in order to push Facebook to codify new rules regarding hate speech. This debate over the platform’s role in protecting the public order has been reactivated not only by false information about the COVID-19 pandemic but also by false claims about the validity of the 2020 U.S. presidential election. Twitter has attached two disclaimers to President Trump’s tweets this past year about voter fraud and the validity of U.S. elections, as well as one to a tweet “implying [that] protesters in Minneapolis could be shot.” A key factor in this debate over expression on online platforms is the profile of the speaker. Should public figures, whose ability to reach so many is compounded by the legitimacy of their position, be held to different standards for what they can say on social media than the average person?
Social media has equipped the everyday person with the ability to document injustice and corroborate their experience with evidence. It has catalyzed protests against racial injustice and state brutality in the United States, facilitating one of the largest social movements the world has seen. It has been used to ignite revolutions and check autocratic regimes. But social media platforms must practice a balancing act. They must protect the public against false information spread with malicious intent – like defamatory rumors about a political figure spread by an opposition party or hostile government – while also confronting structural incentives that encourage peddlers of misinformation to make millions by circulating hateful conspiracy theories. But who has the right to determine what is a lawful expression of one’s opinion and what are lies intended to incite violence? We know that social media has the power to stir revolutions and instigate social movements. Which movements should be allowed to grow at the risk of violating public order? We also know that social media has the power to make fringe opinions or conspiracies appear to be mainstream understandings. How will companies protect both free expression and public safety? Should the widely followed accounts of public figures be held to different standards for what they may share? Ultimately, this election year will help define this gray area as private social media platforms write new cyber policies to govern virtual communities that have become inseparable from how we exercise our right to expression and protest in our physical ones.