Facial recognition – another setback for law enforcement

So far this year, the use of facial recognition by law enforcement has been successfully challenged by courts and legislatures on both sides of the Atlantic.

In the US, for example, Washington State Senate Bill 6280 appeared in January 2020, and proposed curbing the use of facial recognition in the state, though not entirely.

The bill admitted that:

[S]tate and local government agencies may use facial recognition services in a variety of beneficial ways, such as locating missing or incapacitated persons, identifying victims of crime, and keeping the public safe.

But it also insisted that:

Unconstrained use of facial recognition services by state and local government agencies poses broad social ramifications that should be considered and addressed. Accordingly, legislation is required to establish safeguards that will allow state and local government agencies to use facial recognition services in a manner that benefits society while prohibiting uses that threaten our democratic freedoms and put our civil liberties at risk.

And in June 2020, Boston followed San Francisco to become the second-largest metropolis in the US – indeed, in the world – to prohibit the use of facial recognition.

Even Boston’s Police Department Commissioner, William Gross, was against it, despite its obvious benefits for finding wanted persons or fugitive convicts who might otherwise easily hide in plain sight.

Gross, it seems, just doesn’t think it’s accurate enough to be useful, and was additionally concerned that facial recognition software, loosely put, may work less accurately as your skin tone gets darker:

Until this technology is 100%, I’m not interested in it. I didn’t forget that I’m African American and I can be misidentified as well.

Across the Atlantic, similar objections have been brewing.

Edward Bridges, a civil rights campaigner in South Wales, UK, has just received a judgement from Britain’s Court of Appeal that establishes judicial concerns along similar lines to those aired in Washington and Boston.

In 2017, 2018 and 2019, the South Wales Police (Heddlu De Cymru) had been trialling a system known as AFR Locate (AFR is short for automatic facial recognition), with the aim of using overt cameras – mounted on police vans – to look for the sort of people who are often described as “persons of interest”.

In its recent press summary, the court described those people as: “persons wanted on warrants, persons who had escaped from custody, persons suspected of having committed crimes, persons who may be in need of protection, vulnerable persons, persons of possible interest […] for intelligence purposes, and persons whose presence at a particular event causes particular concern.”

Bridges originally brought a case against the authorities back in 2019, on two main grounds.

Firstly, Bridges argued that even though AFR Locate would reject (and automatically delete) the vast majority of images it captured while monitoring passers-by, it was nevertheless a violation of the right to, and the expectation of, what the law refers to as “a private life”.

AFR Locate wasn’t using the much-maligned technology known as Clearview AI, based on a database of billions of already-published facial images scraped from public sites such as social networks and then indexed against names in order to produce a global-scale mugshot-to-name “reverse image search” engine.

AFR Locate matches up to 50 captured images a second from a video feed against a modest list of mugshots already assembled, supposedly with good cause, by the police.

The system trialled was apparently limited to a maximum mugshot database of 2000 faces, with South Wales Police typically looking for matches against just 400 to 800 at a time.
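Purely to make that distinction concrete, here’s a minimal sketch (in Python) of how watchlist-style matching of this general sort might work, assuming an off-the-shelf face-embedding model has already turned each captured face into a fixed-length vector. Everything in it – the embedding size, the similarity threshold, the watchlist size, the names – is an illustrative assumption, not a detail of the real AFR Locate system:

```python
# Hypothetical sketch of watchlist-style face matching, NOT the actual
# AFR Locate implementation. Assumes a face-embedding model has already
# turned each face image into a vector; random vectors stand in here.
from typing import Optional
import numpy as np

EMBEDDING_DIM = 128      # typical face-embedding size (assumption)
MATCH_THRESHOLD = 0.80   # illustrative similarity cut-off, not a tuned value

rng = np.random.default_rng(seed=42)

# A small watchlist of pre-enrolled "mugshot" embeddings (e.g. a few hundred).
watchlist_ids = [f"person_{i:03d}" for i in range(500)]
watchlist = rng.normal(size=(len(watchlist_ids), EMBEDDING_DIM))
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

def match_face(embedding: np.ndarray) -> Optional[str]:
    """Return the watchlist ID of the best match above the threshold,
    or None, in which case the capture would simply be discarded."""
    embedding = embedding / np.linalg.norm(embedding)
    similarities = watchlist @ embedding          # cosine similarity per entry
    best = int(np.argmax(similarities))
    if similarities[best] >= MATCH_THRESHOLD:
        return watchlist_ids[best]
    return None

# Simulate a stream of captured faces from a video feed.
for frame_no in range(5):
    captured = rng.normal(size=EMBEDDING_DIM)
    hit = match_face(captured)
    if hit is None:
        # No match: in a privacy-respecting design the capture is deleted at once.
        print(f"frame {frame_no}: no match, capture discarded")
    else:
        print(f"frame {frame_no}: possible match with {hit}, flag for human review")
```

The design point that matters for privacy is the `None` branch: captures that don’t match anyone on the watchlist are discarded rather than stored, which is exactly the behaviour Bridges acknowledged but still objected to.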

Secondly, Bridges argued that the system breached what are known as Public Sector Equality Duty (PSED) provisions because of possible gender and race based inaccuracies in the technology itself – simply put, that unless AFR Locate were known to be free from any sort of potentially sexist or racist inaccuracies, however inadvertent, it shouldn’t be used.

In 2019, a hearing at Divisional Court level found against Bridges, arguing that the use of AFR Locate was proportionate – presumably on the grounds that it wasn’t actually trying to identify everyone it saw, but would essentially ignore any faces that didn’t seem to match a modestly-sized watchlist.

The Divisional Court also dismissed Bridges’ claim that the software might essentially be discriminatory by saying that there was no evidence, at the time the system was being trialled, that it was prone to that sort of error.

Bridges went to the Court of Appeal, which partially overturned the earlier decision.

There were five points in the appeal, of which three were accepted by the court and two rejected:

  • The court decided that there was insufficient guidance on how AFR Locate was to be deployed, notably in respect of deciding where it was OK to use it, and who would be put on the watchlist. The court found that its trial amounted to “too broad a discretion to afford to […] police officers.”
  • The court decided that the South Wales Police had not conducted an adequate assessment of the impact of the system on data protection.
  • The court decided that, even though there was “no clear evidence” that AFR Locate had any gender or race-related bias, the South Wales Police had essentially assumed the software was free of such bias rather than taking reasonable steps to establish that as a fact.

(The court rejected one of Bridges’ five points on the basis that it was legally irrelevant, because the provision it relied on was enacted into law more recently than the events in the case.)

Interestingly, the court rejected what you might think of as the core of Bridges’ objections – which are the “gut feeling” objections that many people have against facial recognition in general – namely that AFR Locate interfered with the right to privacy, no matter how objectively it might be programmed.

The court argued that “[t]he benefits were potentially great, and the impact on Mr Bridges was minor, and so the use of [automatic facial recognition] was proportionate.”

In other words, the technology itself hasn’t been banned, and the court seems to think it has great potential, just not in the way it’s been trialled so far.

And there you have it.

The full judgement runs to 59 very busy pages, but is worth looking at nevertheless, for a sense of how much complexity cases of this sort seem to create.

The bottom line right now, at least where the UK judiciary stands on this, seems to be that:

  1. Facial recognition is OK in principle and may have significant benefits in detecting criminals at large and identifying vulnerable people.
  2. More care is needed in working out how we use it to make sure that we benefit from point (1) without throwing privacy in general to the winds.
  3. Absence of evidence of potential discriminatory biases in facial recognition software is not enough on its own, and what we really need is evidence of absence of bias instead.

In short, “Something needs to be done,” which leads to the open question…

…what do you think that should be? Let us know in the comments!

