Tech giant Google, which has been under pressure to ‘eliminate’ extremist content from its platforms, has outlined four main steps it will take to identify and remove terrorist or violent content from the web.
The threat of terrorism is evolving. Terrorists have capitalized on the digital age and have successfully used online platforms to spread radicalization. For this reason, it is important for industry to cooperate with the authorities, and online platforms like Google and Facebook have begun taking a proactive stance. On June 15th, Monika Bickert, Facebook’s Director of Global Policy Management, and Brian Fishman, its Counterterrorism Policy Manager, announced that “there’s no place on Facebook for terrorism.” The company issued the statement to back efforts to prevent the social media platform from being exploited by terrorists. Artificial intelligence, human expertise and partnerships between private and government agencies will be among the measures the company adopts.
Terrorists have become adept at exploiting the cyber world to promote radicalization and indoctrination. Facebook also owns WhatsApp, whose end-to-end encryption of messages makes counter-terrorism monitoring particularly challenging, according to authorities.
In the aftermath of the Manchester attack, the British Prime Minister, Theresa May, urged the G-7 leaders to put significant pressure on technology firms such as Facebook, Google and Twitter to curb extremist content online. The authorities’ plea to private tech companies stems from the need for smarter ways to tackle terrorism.
In fact, in June, France and Britain unveiled an anti-terrorism plan that would hold online companies liable if they do not remove extremist content.
These are the four steps outlined by Google:
- Google will reinforce its machine learning research so that extremist content can be identified more effectively.
- While machines will flag extremist content, Google aims to improve accuracy by ensuring human oversight through YouTube’s Trusted Flagger programme.
- The company will develop partnerships with experts and counter-extremism agencies to counter extremist rhetoric. Google will also ensure that inflammatory content is not monetized, recommended or endorsed to users.
- The fourth step is a more comprehensive measure: redirecting users who search for extremist content to anti-terrorist videos. This counter-radicalization move is intended to change the minds of individuals at risk of being recruited by groups like ISIS.
There is a social responsibility on the part of the industry to ensure that its platforms are not used to harm or endanger societies. Terrorists have long used social media and the internet to recruit, propagate extremist ideology and instil fear in the minds of people.
Our assessment is that if the means for extremists to communicate their ideas are curtailed, terrorists will find it difficult to propagate their ideology through visual channels like YouTube. Social media has aided terrorists by acting as a ‘force multiplier’: it amplifies fear and can eventually convince people that terrorists are stronger than state authorities. When people start believing that the state is ineffective in protecting them, they are intimidated and influenced to act out of fear. The efforts of technology giants like Google and Facebook, in collaboration with government authorities, to scrutinize content are a positive step and a ‘collective action’ against terrorism. However, much more needs to be done to counter online radicalization.