
Experts warn against AI

February 21, 2018 | Expert Insights

A group of experts has warned against the threats inherent in Artificial Intelligence. They state that rapid advances in the field, if left unchecked, could pose a threat to society in various forms, from an increase in “fake news” to cyber attacks on critical infrastructure.

Background

Artificial intelligence is the development of computer systems that can perform tasks that normally require human intelligence. This includes, but is not limited to, visual perception, speech recognition, decision-making, and translation between languages.

At present, most AI technology is what is properly known as narrow or weak AI. Self-driving cars and Siri are among the platforms that employ narrow AI. Researchers and experts believe that humanity is now on the path to creating Artificial General Intelligence (AGI). According to scientists, AGI would be able to outperform humans in nearly every cognitive task.

Elon Musk, the South African-born Canadian-American business magnate, investor and inventor best known as the founder, CEO, and CTO of SpaceX, has often spoken about the risks inherent in AI. He has even suggested that AI could be the cause of the next world war. In 2017, he argued that North Korea did not pose the same level of threat, writing: “Competition for AI superiority at national level most likely cause of WW3 imo (in my opinion).”

Similarly, other tech companies have also spoken out against the unregulated use of AI. In 2017, 116 robotics and artificial intelligence companies from across the world co-signed an open letter urging the members of the United Nations to ban ‘killer robots’. Apart from Tesla’s Elon Musk, Mustafa Suleyman, the co-founder of Google’s DeepMind, also signed the appeal. The letter stated, “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”


Analysis

In February 2018, a group of academics and experts published a report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” In it, they warn of the threats the technology could pose if advances continue unchecked.

“What struck a lot of us was the amount that happened in the last five years — if that continues, you see the chance of creating really dangerous things,” said Jack Clark, head of policy at OpenAI, a San Francisco-based AI group whose backers include Elon Musk and Peter Thiel.

Experts have stated that the pace of advancement is so rapid that the technology will become difficult to regulate. Some of the potential attacks involve speech synthesis and video creation tools, which can generate fabricated content realistic enough to fool people into believing it is real.

Dr Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors, said: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years. We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.”

Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.”

Assessment

Our assessment is that governments across the world must decide whether to heed the warnings of technology leaders and experts. Many working in the field have stated that this technology should be closely regulated and monitored.