Brussels to act against tech companies

The European Union on Thursday turned up the pressure on Internet companies, including Alphabet Inc.’s Google, Facebook Inc. and Twitter Inc., unveiling sweeping guidelines for speedily scrubbing terrorist and other illegal content from their websites.

Background

The EU is a political and economic union of 28 member states located primarily in Europe, with a population of over 500 million.

The EU and European citizenship were established with the enactment of the Maastricht Treaty in 1993. The EU traces its origins to the European Coal and Steel Community (ECSC) and the European Economic Community (EEC), established by the Treaty of Paris in 1951 and the Treaty of Rome in 1957, respectively.

Analysis

The European Commission has decided to abandon a voluntary approach to get big internet platforms to remove terror-related videos, posts and audio clips from their websites, in favour of tougher draft regulation due to be published next month. 

The move comes amid pressure from some national governments to make such companies legally liable for the information that appears on their platforms.

Julian King, the EU’s commissioner for security, told the Financial Times that Brussels had “not seen enough progress” on the removal of terrorist material from technology companies and would “take stronger action in order to better protect our citizens”. 

The European Commission, which proposes EU legislation, said terrorist content should be removed within one hour of it being flagged by local law enforcement or Europol, the EU’s police agency. At the same time, web companies should have human staff monitor content removal, particularly when automated tools are used, to ensure that removals aren’t excessive.

EU officials say the commission wants to get ahead of possible plans by some national governments, including France and the U.K., to hold tech firms legally responsible for failing to take down illegal content.

Germany has already implemented legislation this year requiring social-media companies like Facebook and Twitter to delete illegal content—ranging from slander and libel to neo-Nazi propaganda and calls to violence—or face fines of up to €50 million ($61 million).

Besides fast-tracking the removal of terror material flagged by law-enforcement officials, companies are being asked to report any evidence of a serious criminal offence to the authorities.

In addition, the EU said big web platforms should share best practices and technological tools for automatic detection with smaller platforms, to which terrorists have shifted their operations but which have fewer resources.

The EU document also advises tech companies on how to handle child pornography, copyright infringement and counterfeit products sold on their platforms.

A study published last month by the not-for-profit Counter Extremism Project said that between March and June, 1,348 videos related to the Islamic State group were uploaded on to YouTube, via 278 separate accounts, garnering more than 163,000 views.

The report said that 24% of the videos had remained online for more than two hours.

The proposed regulation would be the first time that the EU has explicitly targeted tech companies’ handling of illegal content. So far, Brussels has favoured self-regulation for tech platforms that are not considered legally responsible for material on their websites.

The draft regulation, which needs to be approved by the European Parliament and a majority of EU member states to come into force, would help to create legal certainty for platforms and would apply to all websites, regardless of their size.

Brussels’ crackdown on extremist activity comes in the wake of high-profile terror attacks in London, Paris, and Berlin over the past two years.

Google said more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, with half of the videos having fewer than 10 views.

Facebook said it had removed the vast majority of the 1.9m examples of Isis and al-Qaeda content detected on the site in the first three months of this year.

Assessment

Our assessment is that the EU recognizes the threat of online radicalization and has started taking proactive steps to further deter this threat. 

Comments