
Ethics in the Military Use of AI

October 20, 2020 | Expert Insights

The U.S. Department of Defense (DoD) has launched its ‘AI Partnership for Defense’ with military and defence organisations across different countries, heralding it as a futuristic multinational framework and a global norm-setter in the military use of Artificial Intelligence (AI). At the inaugural meeting, hosted by the U.S. Joint Artificial Intelligence Center (JAIC), the partnership was touted as a forum for incorporating ethical principles into AI delivery pipelines. It also proposes to discuss interoperability and data sharing, while leveraging AI-enabled defence capabilities.

This development has come in the wake of an earlier decision by the U.S. DoD to formally adopt a set of values (responsibility, equitability, traceability, reliability, and governability) that guide the building, testing, and deployment of American military AI. Now, with the newly formed partnership, the U.S. hopes to implement these norms globally and establish its status as a trustworthy ‘rule-maker’ in the AI-enabled defence sector.

TIME BOMB

Given the rapidly evolving military uses of AI, the need to institute best practices and adopt ethical principles can hardly be disputed. Advances in machine learning have made it possible for systems to analyse data and make decisions at a much faster pace than the average human. As a result, countries all over the world have been exploring the possibility of affording more autonomy to weapons systems. Although this promises to reduce human error and alleviate the ‘cognitive strain’ on soldiers, it raises several ethical and legal concerns.

Consider, for example, the case of a lethal autonomous weapons system (LAWS). Such a system can detect, select, and engage targets based on pre-programmed inputs, with little or no human intervention. This, in turn, can cause the weapons system to fix on targets that have not been approved by the military and to execute unintended attacks. In fact, it is debatable whether LAWS can comply with international humanitarian law at all. The ability of its algorithms to distinguish between civilians and combatants is highly suspect. It is also uncertain whether it can weigh the proportionality of ends and means or determine military necessity, both of which are core doctrines governing the lawful use of force.

Of course, LAWS is one of the more extreme examples. Military AI has other applications ranging from reconnaissance to logistics; for instance, it can aid intelligence efforts by processing big data and categorising images or texts. However, the fact remains that such applications can eventually lead to systems devoid of human control. The data sets on which machine learning is predicated may also be biased or flawed.

To mitigate the adverse consequences arising from these scenarios, there is a pressing need to establish international guidelines or ethical principles governing the development and deployment of AI-enabled military technologies. Otherwise, it is a ticking time bomb. It remains to be seen whether the newly launched partnership can rise to this task.

COUNTERING CHINA & RUSSIA?

At present, the partnership comprises a potpourri of traditional U.S. allies, including the UK, Israel, Canada, Denmark, Estonia, France, Norway, Australia, Japan, South Korea, Finland, and Sweden. It is being perceived as a coalition of ‘like-minded’ nations that offer a democratic alternative to the AI policies of Russia and China, countries that have been criticised for developing, deploying, and exporting AI systems in a manner contrary to human rights and humanitarian law.

China, in particular, has been denounced by the U.S. Secretary of Defense for using AI to create a surveillance state and for exporting ‘Orwellian’ capabilities to autocratic governments. This includes DNA phenotyping to profile ethnic populations and predictive policing by algorithms. There are also concerns that the Chinese military may deploy AI-enabled weapons systems that are unreliable and have not been sufficiently tested in operational conditions.

Even while chastising other countries for their allegedly irresponsible conduct, the ‘Western bloc’ must reflect on its own practices. History is replete with instances of legal and ethical violations vis-à-vis conventional weaponry. Despite the existence of international conventions like the Arms Trade Treaty, for example, Western states have continued to supply arms to parties that do not comply with the laws of war or have committed gross abuses of human rights. This raises serious questions about the implementation of ethical guidelines in relation to emerging forms of warfare, such as cyber-attacks or military AI. Indeed, the newly formed partnership will have to make a concerted effort to walk the talk on ethical military AI.

BAPTISM BY FIRE

At present, apart from alluding to a values-based approach to AI-enabled defence, the partnership has released very little information about its specific framework or functioning. JAIC personnel have, however, stressed the principles of data sharing and interoperability. Neither is without challenges.

From a political perspective, convincing states to share military and intelligence data will be an uphill task. While a military bloc like NATO may be better positioned to implement such a partnership, even among its members there may be apprehensions that sensitive data could leak and, in turn, compromise national security. Even if some agreement were to be reached on data sharing, technical hurdles would remain: the data might be stored in different formats, throwing a spanner in the works of data integration.

Legal interoperability is another issue. AI technologies are notoriously data intensive, and participating states have diverse legal obligations and regulatory frameworks governing the flow of data. For instance, in non-military uses of AI, the EU has sought to assert its ‘technological sovereignty’ by deliberately distinguishing its data regulations from those of America or China. It is entirely plausible that this will be replicated in the context of military AI as well.

Finally, there has been limited thinking in Europe about the implications of AI for military operations. The focus has primarily been on the digital economy and social spheres, with the possible exception of France, which published a military AI strategy in 2019. In the UK, the discussion has been largely limited to LAWS. To arrive at a comprehensive multinational strategy on military AI, therefore, European states will first need to deliberate on their own national policies. More broadly, the success of the U.S.-led partnership will depend on the ability of its members to survive this baptism by fire.

ASSESSMENT

  • Since the future of warfare will include AI-enabled autonomous weapons, it is important for the partnership to transcend symbolic cooperation and lay the groundwork for ethical applications of military AI. This requires both moral and intellectual clarity. The tech community must also be taken into confidence.
  • For the U.S. to spearhead such an initiative, it is important to identify and work on the technical and legal challenges inherent in interoperability and data sharing. A common lexicon needs to be developed amongst participating states.
  • In the long term, the partnership should facilitate a conversation on standards that govern the design and development of AI weapons systems. Repeated testing and prototyping should be emphasised. Commanders and operators need to be able to exercise appropriate levels of human judgement, irrespective of advances in autonomous military capabilities. Periodic review or documentation of legal and ethical gaps is also key.