Inspector AI: is facial recognition technology solving crimes or breaking privacy laws?

Posted by Ditto on May 20, 2019 8:50:00 AM

In July 2018, Amazon’s facial recognition technology – called ‘Rekognition’ – falsely identified 28 members of US Congress as people arrested for crimes. The study, conducted by the American Civil Liberties Union, compared photos of all federal lawmakers against a database of 25,000 publicly available images.

That amounts to a failure rate of five percent.

In the world of law enforcement, a five percent failure rate is not an option. At the time, Amazon CEO Jeff Bezos faced criticism over the incident. In fact, three of the misidentified legislators sent a letter to Bezos raising ‘serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.’

In this blog post, we explore how facial recognition AI is used to solve crimes, and what the technology’s lack of human accountability means for your civil liberties and human rights. 

Solving crimes using facial recognition

A few days ago (on May 15th 2019), news broke that the Metropolitan Police had stopped and fined a man for refusing to show his face as he walked past a camera that was trialling facial recognition software.

What started as a small experiment for the Metropolitan Police has quickly become the centre of controversy, and many people are protesting the use of facial recognition.


Police leaders commented on the use of the software, stating:

‘Officers make the decision to act on potential matches with police records, and images that do not spark an alert are immediately deleted.’

There’s no denying that artificial intelligence has the potential to influence the outcome of justice. If someone commits a crime and their face can be recognised by facial recognition software, they are more likely to be brought to justice. And so, the world becomes a safer place.

It’s a simple theory. But in practice, facial recognition technology raises serious ethical questions, and it is sparking a great deal of controversy.

An invasion of privacy

Privacy is a protected human right in the United Kingdom. Article 12 of the Universal Declaration of Human Rights states:

‘No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.’

Facial recognition AI breaches this human right. In a police surveillance scenario, for example, the technology works by a process of elimination: it collects and stores thousands of images of members of the public, then eliminates innocent people to find the suspect.

In short, it treats people as guilty until proven innocent, and in most cases, images of innocent people are kept in a database to be used again.
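
To picture the mechanics being described, here is a rough sketch of a one-to-many search against a stored face collection, using Amazon Rekognition’s boto3 API. The collection name and threshold are hypothetical, and the code is illustrative rather than a description of any real police deployment:

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Hypothetical: a collection of faces indexed earlier -- including,
    # inevitably, the faces of people suspected of nothing.
    COLLECTION_ID = "public-camera-faces"

    def search_stored_faces(cctv_frame_jpeg: bytes) -> list:
        """One-to-many search: compare one captured face against every
        face already stored in the collection."""
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": cctv_frame_jpeg},
            FaceMatchThreshold=80.0,  # hypothetical; the service default
            MaxFaces=10,
        )
        return response["FaceMatches"]  # everyone above the threshold

The paragraph’s point is visible in the API itself: a search like this only works because a collection of faces has already been captured and indexed, whether or not their owners consented.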

Of course, in today’s digital world, citizens give up a portion of their civil liberties for the sake of public safety measures. The average Londoner is caught on CCTV cameras 300 times a day, for example.

But when that CCTV camera can use AI to link your face to your name, address and contact details without consent, this unregulated technology invades our rights to privacy, freedom and liberty.

The GDPR is a step in the right direction for regulation

Thanks to the General Data Protection Regulation (GDPR), introduced in May 2018, the laws surrounding the collection of biometric data are now front of mind. Today, individuals have the following rights:

  1. The right to be informed
  2. The right of access
  3. The right to rectification
  4. The right to erasure
  5. The right to restrict processing
  6. The right to data portability
  7. The right to object
  8. Rights in relation to automated decision-making and profiling

Under GDPR Article 9, biometric data used to uniquely identify a person is among the ‘special categories’ of personal data whose processing is prohibited unless certain exceptional circumstances apply.

The question is: who decides which circumstances count as ‘exceptional’? While all laws are subject to interpretation by those who enforce them, AI still operates as a black box, and consequently, there’s a lack of justification and accountability when it comes to charging a suspect.

To counter this, regulation must be both internal and external, coming from the makers of the technology as well as from an outside (and impartial) governing body.

Regulating facial recognition AI

Since that letter to Jeff Bezos, Amazon (alongside other big AI players like Microsoft) has been working to introduce regulation for facial recognition software. In February 2019, Amazon Web Services’ VP of Global Public Policy, Michael Punke, stated:

‘We support the calls for an appropriate national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology.’

Amazon has proposed five guidelines for the responsible use of facial recognition technology:

  1. Facial recognition should always be used in accordance with the law, including laws that protect civil rights. 
  2. When facial recognition technology is used in law enforcement, human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.
  3. When facial recognition technology is used by law enforcement for identification, or in a way that could threaten civil liberties, a 99 percent confidence score threshold is recommended.
  4. Law enforcement agencies should be transparent in how they use facial recognition technology.
  5. There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings.

While guidelines on how we use facial recognition technology are important, there’s no guarantee they will limit abuse, and in places like San Francisco, city governments have decided to ban the technology outright.

To limit the chance of abuse, then, we must build AI software that is as transparent and explainable as possible. That way, we can hold organisations accountable, rather than letting them place the blame on ‘limited technological capabilities.’

Building trustworthy AI for responsible facial recognition

The guidelines above are certainly a step in the right direction, but too often, regulation is an afterthought: it’s a deterrent, not a solution.

To solve the problem of unregulated facial recognition AI, we must develop trustworthy, unbiased AI that can explain how it came to a decision. Introducing a 99 percent confidence score threshold is a good start, but it doesn’t help humans understand how these technologies reach their decisions.
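
To make that concrete, here is a minimal sketch of what enforcing such a threshold might look like, using Amazon’s boto3 SDK for Rekognition. The image inputs and the human-review flag are hypothetical placeholders, not a description of how any police force actually deploys the technology:

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def candidate_matches(suspect_jpeg: bytes, cctv_frame_jpeg: bytes) -> list:
        """Hypothetical sketch: compare a probe face against a suspect photo,
        keeping only matches at or above the 99 percent similarity threshold
        Amazon recommends for law enforcement use (guideline 3)."""
        response = rekognition.compare_faces(
            SourceImage={"Bytes": suspect_jpeg},
            TargetImage={"Bytes": cctv_frame_jpeg},
            SimilarityThreshold=99.0,
        )
        # Each match carries a similarity score -- a number, not a reason.
        return [
            {
                "similarity": match["Similarity"],
                "bounding_box": match["Face"]["BoundingBox"],
                "needs_human_review": True,  # guideline 2: a person decides
            }
            for match in response["FaceMatches"]
        ]

Notice what the sketch exposes: the API returns a similarity score and a bounding box, but nothing that explains why the model considers two faces the same. The threshold filters decisions; it doesn’t justify them.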

To lift the lid on unaccountable 'black box' AI, we must act now to develop technologies that are understandable. As explained by Microsoft’s president, Brad Smith, in a speech at the Brookings Institution:

‘The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.’


Topics: Accountability, Trust, Future of AI, Facial recognition technology