Delivering Justice with AI

March 2, 2020 | Expert Insights

AI: Shaping Decisions out of Data 

Connectivity and personalisation, offered as logical perks to the consumer in exchange for data collection consent, have empowered the digital revolution to transform the way we evaluate privacy, communication systems and security. That same revolution has created the means to collect vast amounts of data, which artificial intelligence (AI)-linked analytical tools can mine for valuable insights. These processes have added value to many basic services, such as public programmes, healthcare and infrastructure development, but have, at the same time, given rise to new threats and challenges.

Accountability in AI Development 

There is a widening public debate concerning the validation of collected data, potential bias, the security of data storage, and protocols for access. In the U.S., the average citizen’s home loan approval, credit score, social media content, search engine results, ad recommendations and traffic alerts can all be determined by AI-based prediction tools.

A logical concern arising from the use of AI-assisted tools is the absence of adequate legal oversight of the processes involved. As more “decisions” are handed over to AI, the law must adopt a standard of traceability (for data collection and AI decisions) and a robust testing mechanism for accuracy and efficacy. These metrics can then be used to reduce errors in AI’s decision-making and their impact on individuals. For example, Clearview, a facial recognition firm working with over 600 law enforcement agencies, recently reported a data breach. Clearview had also been mining “publicly available images” from social media platforms, which prompted cease-and-desist letters from companies such as LinkedIn, Google and Facebook. The American Civil Liberties Union also noted the platform’s low accuracy rates in identifying women and minorities. The fact that a product with such technical and ethical issues is allowed to work with the government indicates that legal oversight of the use of AI must be strengthened.
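
To make the idea of a testing mechanism concrete, here is a minimal sketch of how an auditor might measure a recognition system’s accuracy per demographic group, the kind of disparity the ACLU flagged. The record fields, group labels and data are hypothetical, not drawn from Clearview or any real audit.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute identification accuracy per demographic group.

    `records` is a hypothetical list of dicts with keys
    'group', 'predicted_id' and 'true_id'; the field names
    are illustrative, not from any real system.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["predicted_id"] == r["true_id"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

# A gap like the one below is exactly what such an audit should surface.
records = [
    {"group": "group_a", "predicted_id": 1, "true_id": 1},
    {"group": "group_a", "predicted_id": 2, "true_id": 2},
    {"group": "group_b", "predicted_id": 3, "true_id": 4},
    {"group": "group_b", "predicted_id": 5, "true_id": 5},
]
print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.5}
```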

The Scope of Smart Policing

Many countries have incorporated AI products to better utilise police resources for patrolling, searching for missing persons, and building community safety reporting platforms. Data collection occurs at both the personal and anonymous level to produce “training data,” which is used to “teach” the AI to identify patterns. Facial recognition, combined with round-the-clock anomalous behaviour detection applied to CCTV surveillance systems, has reduced the need for police patrolling. This technology detects unusual behaviour in an area, identifies the individuals involved, and determines the scope of their movements, flagging them for human intervention. By combining individual and collective data, crime hot spots can be identified and suspects can be geofenced, as the sketch below illustrates.
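
As a rough illustration of the geofencing step, the sketch below checks whether a CCTV sighting falls inside a circular crime “hot spot” and flags it for human review. The coordinates, radius and field names are invented for the example; production systems would use far richer spatial models.

```python
import math

# Hypothetical hot spot: (latitude, longitude) and a radius in metres.
HOT_SPOT = (12.9716, 77.5946)
RADIUS_M = 500

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two coordinates."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_for_review(sighting):
    """Flag a CCTV sighting inside the geofence for human intervention."""
    inside = haversine_m(sighting["lat"], sighting["lon"], *HOT_SPOT) <= RADIUS_M
    return {"camera": sighting["camera"], "needs_review": inside}

print(flag_for_review({"camera": "cam-07", "lat": 12.9721, "lon": 77.5950}))
# {'camera': 'cam-07', 'needs_review': True}
```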

AI in the Courtroom

AI is already hard at work in many courts, albeit behind the scenes, as a risk assessment tool for evaluating bail bond calculations, recidivism, sentencing and parole. Data points such as the number of accused, the number of witnesses, reasons for adjournments or mistrials, filing dates, First Information Report (FIR) details, compensation, etc. can be used to extrapolate judgements from historical data. The Superior Court of Los Angeles has already laid the groundwork for AI in the judicial process with “Gina the Avatar” assisting residents with traffic citations and a jury chatbot in development. Estonia is currently conducting R&D on robot judges for minor cases involving claims under €7,000, to reduce the backlog in its courts.
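
To show how such data points might feed a score, here is a deliberately simplified linear risk model. The features, weights and threshold are invented for illustration; real tools (COMPAS in the U.S., for instance) rely on proprietary models trained on historical outcomes.

```python
# Hypothetical weights over the kinds of case data points mentioned above.
WEIGHTS = {
    "prior_convictions": 0.4,
    "adjournments": 0.1,
    "co_accused": 0.05,
}
THRESHOLD = 1.0  # invented cut-off separating low from high risk

def risk_score(case):
    """Weighted sum of case features; missing features count as zero."""
    return sum(WEIGHTS[f] * case.get(f, 0) for f in WEIGHTS)

def recommend(case):
    """Return a recommendation, keeping a human in the loop for the ruling."""
    score = risk_score(case)
    label = "high" if score >= THRESHOLD else "low"
    return {"score": round(score, 2), "risk": label}

print(recommend({"prior_convictions": 2, "adjournments": 3, "co_accused": 1}))
# {'score': 1.15, 'risk': 'high'}
```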

Defendants’ Rights

A robust strategy for policing AI itself is required to test non-human decision-making and ensure that future technology development does not result in false convictions, overturned verdicts or mistrials. If police can directly access data from companies like Google, the legal system should also take precautions to secure defendants’ right to use similar channels. To ensure that evidence can be accessed, presented and validated within the standard of proof, there must be traceability in the development of AI algorithms. There should also be established standards for the data collected, for the process of organising this data to remove historical bias, for documentation of algorithmic decision-making, and for verification of final results (quality control). These protocols can help police and private individuals understand and accept the outcomes of AI-driven law enforcement.
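
One minimal sketch of what “traceability” could mean in practice is a tamper-evident log entry recorded for every AI decision, so that either side in a case can later verify that the evidence was not altered. The schema below is an assumption for illustration, not a description of any existing system.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, inputs, output):
    """Build a tamper-evident log entry for one AI decision.

    Inputs, output and model version are stored together with a
    SHA-256 hash of the entry, so later alteration is detectable.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sha256"] = hashlib.sha256(payload).hexdigest()
    return entry

print(audit_record("facial-match-v1.3", {"image_id": "img-42"}, {"match": False}))
```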

India View

With 60,000 cases pending in the Supreme Court of India and 4.3 million in the High Courts, India is also looking to revamp judicial analysis and expedite decisions in taxation, recurring minor cases, and docket management, without removing human discretion from decision-making.

Assessment

  • AI is not inherently good or bad; rather, it is a decision-making mechanism trained on the historical data we provide it. Law enforcement is increasingly encouraged to use AI technology to overcome historical bias, reduce risk to officers, and proactively reduce the rate of crime over time. Agencies will need oversight to ensure traceability and accountability, so that officers can use the technology without creating additional room for error or liability.

  • Companies should be encouraged to provide clear terms and conditions and easily available control measures for data collection. This would give individuals control over the spheres of data collection, knowledge of how their data is distributed and, if need be, access to the same resources as law enforcement to prepare a defence.

  • AI still has many gaps in its applications, which can only be resolved through a feedback loop in which users (the data points), legislators and developers work together to understand its potential, its requirements, and its responsibility to the public.





Image Design: Chris Karedan, Synergia Foundation