Google and the Business of War

June 9, 2018 | Expert Insights

Google Inc. will bar the development of AI for use in weaponry or surveillance. CEO Sundar Pichai announced the policy following staff protests against Project Maven, the company’s contract with the US Department of Defence.

Background

Alphabet Inc. is an American multinational conglomerate headquartered in Mountain View, California. It was created through a corporate restructuring of Google in October 2015 and became the parent company of Google and of several former Google subsidiaries. Google LLC was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. It specializes in Internet-related services and products, including online advertising technologies, search, cloud computing, software, and hardware.

According to Google, its AI programme conducts research that advances the state of the art in the field, applies AI to products and to new domains, and develops tools to ensure that everyone can access AI.

The Algorithmic Warfare Cross-Functional Team, known as ‘Project Maven,’ is an undertaking of the US Department of Defence to integrate artificial intelligence and machine learning into weapons and drones. It was launched in 2017, with firms such as Google, Microsoft, and Amazon, among others, contributing analysis intended to increase actionable intelligence and enhance military decision-making.

Artificial Intelligence (AI)

The intellectual roots of AI, and the concept of intelligent machines, can be traced to Greek mythology. Intelligent artefacts have appeared in literature ever since, and mechanical devices have been built that display some degree of intelligent behaviour.

The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory. Much of modern AI relies on deep learning, which accomplishes tasks by processing large amounts of data and recognizing patterns in that data.
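
As a rough illustration of what “learning from past experience” means in practice, the sketch below fits a toy classifier to a handful of invented, labelled examples and then classifies a point it has never seen. The data, learning rate, and iteration count are all assumptions chosen purely for demonstration.

```python
# Toy sketch of learning from labelled examples (hypothetical data).
import numpy as np

# "Past experience": four two-feature examples with known labels.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])

w, b = np.zeros(2), 0.0
for _ in range(1000):                       # repeatedly adjust the weights
    p = 1 / (1 + np.exp(-(X @ w + b)))      # current predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # step towards fewer errors
    b -= 0.5 * np.mean(p - y)

# Generalization: classify a point the model has never seen before.
new_point = np.array([0.85, 0.75])
print(1 / (1 + np.exp(-(new_point @ w + b))) > 0.5)   # True, i.e. class 1
```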

Beyond its many possible applications in trade and commerce, education, healthcare, and other fields, governments are increasingly looking to revolutionise the domain of warfare by deploying autonomous weapons systems (AWS). Such systems promise precision capabilities that could reduce the deployment of soldiers in conflict areas and limit civilian loss of life as collateral damage.

Analysis

Over 3,000 Google employees signed an open letter urging CEO Sundar Pichai to abandon Project Maven, arguing that it violates the company’s central tenets. The letter, which opens with “We believe that Google should not be in the business of war,” captures the breadth of unease over Google’s participation in the project. Officials in Washington and at the Pentagon, however, have long supported large-scale investment in the defence sector. Defence Secretary Jim Mattis has stated in the past that the goal is to increase the “lethality” of the United States military.

The company has since released seven principles elaborating on its use of AI software. According to Google’s AI website, “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.” The principles go on to state the need to avoid reinforcing bias and to uphold standards of scientific excellence and accountability.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Pichai has said. The company is expected to end its involvement in the project when the current contract expires in 2019.

Google’s AI development, cloud computing, and artificial neural network research constitute a growing sector within the company. Millions of dollars have reportedly been spent on software to advance automation and artificial intelligence. Google contributed TensorFlow, its open-source AI framework, to the Pentagon under the Project Maven contract. Greater precision in locating and identifying possible threats in surveillance footage is imperative for the Department of Defence’s drone deployment. While Google’s software is central to developing more capable systems, the US Department of Defence has the capacity to develop AI without the participation of private entities.
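
To make concrete the kind of pattern recognition in imagery that an open-source framework such as TensorFlow enables, a minimal sketch is given below: a small convolutional network trained to flag whether an object of interest is present in an image patch. This is not Project Maven code; the synthetic data, network shape, and training settings are assumptions chosen only for illustration.

```python
# Minimal, illustrative TensorFlow sketch: a small convolutional classifier
# trained on synthetic image patches (not real surveillance data).
import numpy as np
import tensorflow as tf

# Stand-in data: 256 greyscale 64x64 patches with random binary labels.
x_train = np.random.rand(256, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(object present)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)

# Score a new patch; a real pipeline would use labelled frames, not noise.
print(model.predict(x_train[:1], verbose=0))
```

In a real detection system the network would be far larger and trained on carefully labelled imagery, but the workflow of preparing data, defining a model, training it, and scoring new inputs is the same.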

The CIA’s unmanned drones have reportedly flown into the airspace of Afghanistan, a declared war zone, as well as that of Pakistan, a sovereign state within whose territory no war has been declared. Since the 9/11 attacks, the United States military has been developing ever more lethal weapons in pursuit of the War on Terror. Nevertheless, numerous civilian deaths have been chalked up as “collateral damage” during so-called ‘precision attacks’ intended to disrupt terrorist activities.

Micah Zenko of the Council on Foreign Relations, who has studied the use of drones in the Afghanistan-Pakistan region, said, “Predator strikes are the worst kept covert secret in the history of U.S. foreign policy.”

It is important to note that while the use of AI can improve precision capabilities and significantly reduce the fallout on non-combatants, this depends directly on American military strategy, which has remained opaque to the public. Nevertheless, Google’s participation, like the development of lethal autonomous weapons systems itself, does not violate international law, as no comprehensive global standards on the matter have been formally established.

Counterpoint

Google is not the only company working on the project. Microsoft and Amazon have also participated, and neither has released any principles governing its participation.

Assessment

Our assessment is that Google’s AI technology will now be used more firmly within the ambit of ethics and international law. We feel that public scrutiny and debate have helped ensure that AI is deployed fairly and in a socially beneficial manner. By releasing a list of principles that are binding on the company, Google has taken a positive step toward protecting its interests within the scope of public welfare.