Social media bosses to be held accountable: UK

April 9, 2019 | Expert Insights

The British government recently said it will explore making social media executives personally liable for harmful content published on their platforms, as part of a raft of new online safety proposals.

Background

Facebook, a social media and social networking site, was launched in 2004 by Mark Zuckerberg and some of his Harvard roommates. The site was an almost instant hit and grew exponentially across the world.

As of June 2017, Facebook had more than 2 billion monthly active users. Its popularity has led to prominent media coverage of the company, including significant scrutiny of its privacy practices and the psychological effects it has on users.

In May 2017, it emerged that Facebook was a key influence on the outcome of the 2016 US Presidential election and the Brexit vote: those who ran the digital campaigns believe the social network was decisive in both wins. In recent years, social media in general has come under scrutiny for hate campaigns and terrorist propaganda, the presence of bots, and the proliferation of so-called fake news ahead of elections.

Since the start of 2018, Facebook has committed to making significant changes to its platform. In a post on his page on the social network early that year, founder and CEO Mark Zuckerberg said the website was making too many errors in enforcing policies and preventing misuse of its tools. Zuckerberg has famously set himself challenges every year since 2009, and for 2018 he said his "personal challenge" was to fix important issues with the platform to prevent misuse of the website.

Analysis

The plans, unveiled in a policy paper, also include creating an independent regulator and aim to tackle all kinds of harmful content, from the encouragement of violence and suicide to the spread of disinformation and cyberbullying.

The moves follow steps taken by Australia and Singapore to counter fake news and to get social media companies to play their part in stopping the spread of harmful content online.

The issue has gained added urgency with Facebook's failure to immediately halt live streams of a March 15 attack by a self-avowed white supremacist on two mosques in New Zealand that killed 50 people.

British Prime Minister Theresa May warned tech companies they had "not done enough" to protect users, and that her government intended to put "a legal duty of care" on the firms "to keep people safe".

"For too long, these companies have not done enough to protect users, especially children and young people, from harmful content," she said in a statement. "That is not good enough, and it is time to do things differently. Online companies must start taking responsibility for their platforms, and help restore public trust in this technology."

The new laws envisaged will apply to any company that allows users to share or discover user-generated content or interact with one another online.

That will include file hosting sites and chat forums as well as the better-known social media platforms, messaging services and search engines. Firms could face tough penalties for failing to meet the standards.

"We are consulting on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management," the government said.

Under the proposals, a new regulator would have the power to force platforms and others to publish annual transparency reports.

These reports would detail the levels of harmful content disseminated on their sites and how they addressed the problem. The regulator would also be able to issue codes of practice, which could compel companies to meet certain requirements, such as hiring fact-checkers, particularly during election periods.

"The era of self-regulation for online companies is over," Digital Secretary Jeremy Wright said, adding that he wanted the sector to be "part of the solution". "Those that fail to do this will face tough action," he vowed.

Britain's move mirrors those steps. Singapore introduced the Protection from Online Falsehoods and Manipulation Bill in Parliament on April 1, which will add to its arsenal against fake news.

Under the draft law, Internet platforms, including social media sites like Facebook, would be required to act swiftly to limit the spread of falsehoods by displaying corrections alongside such posts or removing them.

Assessment

Our assessment is that the UK government's decision to hold senior executives accountable for their social media platforms is a step in the right direction. We believe, however, that modern democracies need a more expansive redress and compliance mechanism to address the otherwise unchecked power of social media platforms.

Image Courtesy: Anthony Quintano from Honolulu, HI, United States (https://commons.wikimedia.org/wiki/File:Mark_Zuckerberg_F8_2018_Keynote.jpg), "Mark Zuckerberg F8 2018 Keynote", https://creativecommons.org/licenses/by/2.0/legalcode