Menlo Park, CA
Amid resurgent white supremacist fervor in the United States and abroad, Facebook tightened its restrictions on hate speech by banning support for white nationalism and separatism in March 2019. The new rule built on a previous policy that barred white supremacist content from the site and extended the ban to Instagram, which Facebook owns. Two months later, the platform also banned several prominent right-wing figures.
Facebook’s content restrictions carefully distinguished among white supremacism, white nationalism and white separatism, three ideologies that advocate white ethnic superiority. While Facebook has long banned white supremacist content on the site, the restrictions against the latter two came only in March 2019.
White supremacism asserts the natural superiority of white people, while white nationalism is a militant movement advocating for white supremacy and racial segregation, according to the Columbia Journalism Review. White separatism, deeply tied to these two other beliefs, advocates for the establishment of a white ethnostate.
White nationalism and related ideologies have seen a sharp rise in recent years, as encapsulated by the July 2017 Unite the Right rally and subsequent violence in Charlottesville, Va., which left one dead and many injured. The number of white nationalist groups, as documented by the Southern Poverty Law Center, grew from 100 to 148 in 2018, according to PolitiFact. The Federal Bureau of Investigation also logged a 17 percent increase in hate crimes between 2016 and 2017, PolitiFact notes.
In May 2017, VICE’s Motherboard published excerpts of internal Facebook training documents elucidating how Facebook viewed the differences between white nationalism and white supremacism, and why content associated with the former ideology “doesn’t seem to be always associated with racism (at least not explicitly).” The company argued that encroaching on white nationalist and separatist content could set a dangerous international precedent in relation to other, legitimate nationalist and separatist movements, such as Basque separatists seeking independence from Spain and France. With the site’s March 2019 decision, that thought process appears to have flipped.
The move to bar white nationalist and separatist content comes at a tense time for the social media company, which continues to grapple with moderating content — and, in particular, hate speech — on its site. Facebook took action against 8 million posts that violated its ban on hate speech during the first nine months of 2018 alone, The Washington Post reported; yet the company takes down only about half of such material as soon as it is uploaded, allowing some presumably objectionable posts to go viral.
Facebook’s steps also came just two weeks after a terrorist opened fire in two mosques in Christchurch, New Zealand, in March 2019, killing 50 people and injuring 50 more. The suspect, who was discovered to have written white nationalist manifestos, live-streamed the first of his two attacks on Facebook, again stirring the debate over social media companies’ roles and responsibilities in monitoring hate speech.
Facebook move wins cautious praise
Facebook’s move drew praise from civil rights groups and others involved in the fight against white nationalism, including Kristen Clarke, president and executive director of the Lawyers’ Committee for Civil Rights Under Law. “There is no defensible distinction that can be drawn between white supremacy, white nationalism or white separatism in society today,” Clarke said.
New Zealand Prime Minister Jacinda Ardern also praised the decision, but urged that there was “more work to do.”
“Arguably these categories should always fall within the community guidelines of hate speech,” Ardern said. “But nevertheless it’s positive the clarification has now been made in the wake of the attack in Christchurch.”
Facebook also said it would direct users who search for terms associated with white supremacy to groups that help people break with such movements, CNN reported.
Facebook removes prominent right-wing and other polarizing figures from its platform
In May 2019, Facebook announced it would remove several prominent individuals whom the platform had designated as “dangerous,” including conspiracy theorist Alex Jones; Louis Farrakhan, the anti-Semitic leader of the Nation of Islam; and right-wing personalities Laura Loomer and Milo Yiannopoulos. The figures, whom Facebook accused of hate speech, were also banned from Instagram.
“We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology,” a company spokesperson told CNN Business. “The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today.”
Facebook’s move drew significant criticism online, including from President Donald J. Trump, who called the ban “censorship” and accused the company of anti-conservative bias. Even supporters of the ban criticized the company for forewarning the figures, allowing them to redirect their followers to other platforms.
Prepared by Maya Gandhi ’20
Uploaded May 13, 2019