Facebook introduces hate-speech policing

Facebook will institute methods and regulations to tackle white supremacist thought in the upcoming weeks, although more needs to be done to fully tackle the proliferation of hate groups flourishing on its platforms.

Background

Facebook, a social media and social networking site, was launched by Mark Zuckerberg in 2004 along with some of his Harvard roommates. The site was an instant success, growing exponentially across the world. As of December 2018, Facebook had over 2.32 billion monthly active users, with 1.15 billion daily active mobile users of the service.

In March 2019, a suspected white supremacist live-streamed himself shooting worshippers at multiple mosques in Christchurch, New Zealand. The perpetrator, motivated by hatred of New Zealand’s Muslim residents, broadcast footage that was downloaded, re-uploaded and shared hundreds of times. The original video had been viewed over 4,000 times before it was expunged from the service, and Facebook said that within the first 24 hours of the attack it had intervened in 1.5 million posts in order to restrict its proliferation.

Since the start of 2018, Facebook has committed to making significant changes to its platform. In a post on his page on the social network, creator and CEO Mark Zuckerberg said the website was making too many errors in enforcing its policies and preventing misuse of its tools. Mr. Zuckerberg has famously set himself challenges every year since 2009, and in 2019 he said his “personal challenge” is to fix important issues with the platform to prevent misuse of the website. Mr. Zuckerberg has pinned the future of Facebook on a shift from its historic mission to make the world more “open and connected”, saying that “privacy-focused” communications were becoming more important than open platforms.

Analysis

The “praise, support and representation” of white nationalism and separatism will be banned across Facebook’s platforms, the company said in a blog post. The decision, it said, was made after months of deliberation with civil society activists and academics. Facebook will also institute a system in which users who search for terms associated with white supremacy are redirected to pages “focused on helping people leave behind hate groups.”

Facebook contends that, under its terms of use, hate speech in the form of white supremacy is already banned from its platforms. Its original policy, however, held that white nationalism and separatism were inseparable from “broader concepts of nationalism and separatism - things like American pride and Basque separatism”; the company now accepts that these ideas cannot be viewed separately from white supremacy. Facebook will use machine learning and artificial intelligence to watch for terrorist material and hate group content. The initiative will begin in the coming weeks.

The implementation of the system comes weeks after social media platforms were criticised for allowing live streams of graphic footage of mosque shootings in Christchurch, New Zealand. Prime Minister Jacinda Ardern welcomed the measures but said they did not go far enough, calling for a consolidated global response to the use of social media platforms by extremists.

Australia, for its part, is drafting a law that may impose prison sentences on social media executives whose platforms fail to address concerns related to hosting terror content. Australian Prime Minister Scott Morrison said that his government was aiming to prevent the weaponisation of social media platforms with terror content. A recent meeting between government officials and the company concluded with no indication that the government would back away from the legislation.

Facebook’s realisation that certain sentiments cannot be extricated from supremacist thought applies equally to other ideologies of exceptionalism. Any radical ideology that meets Facebook’s criteria for restriction should be subject to its hate speech filter; white supremacist thought should not be the only one singled out. Other xenophobic ideologies rest on a similar conception of hate and would benefit from the same search redirection Facebook intends to institute. This is of particular importance given the global reach of Facebook’s services.

Assessment

Our assessment is that Facebook’s decision to implement stronger filters against white supremacism is a step in the right direction for the beleaguered company, especially in the face of legal sanctions. We believe that all hate speech carries the potential for divisive and destructive behaviour; white supremacist thought should not be the only ideology targeted apart from radical Islamism. We also believe it likely that Facebook will struggle to implement a system of such wide scope while balancing security with freedom of thought and expression.

Image Courtesy: https://upload.wikimedia.org/wikipedia/commons/6/61/Ad-tech_London_2010_%285%29.JPG, Derzsi Elekes Andor [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)]
