On Friday, the Home Secretary Sajid Javid backed police trials of facial recognition technology in the UK, sparking widespread outrage from privacy campaigners.


But is it really that dangerous? Yes - according to Evan Greer, Deputy Director of US privacy group Fight For the Future. Greer says:

“Imagine if we could go back in time and prevent governments around the world from ever building nuclear or biological weapons. That’s the moment in history we’re in right now with facial recognition… This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits. We don’t need to regulate it, we need to ban it entirely.”

Greer certainly wins first prize for Privacy Activist Hyperbole of the Year 2019, but is there anything in what she says?


All technology has the potential for misuse

As is often the case in privacy stories, the focus of debate has been on the technology, rather than the broader context in which it is being used. We saw the same thing with CCTV and DNA analysis, and we’ll see it again as new information-collection technologies emerge.

Of course, facial recognition technology can be used to create an information-dystopia – no one’s denying that. Imagine how it could have been used to enforce South Africa’s apartheid-era Pass Laws, or how it’s currently being used to aggressively monitor people’s movements in China. While not condoning either of these, it is important to separate the technology itself from how someone chooses to apply it.


‘What-ifs’ shouldn’t stop progress

Let’s look at the UK. ‘Ordinary’ CCTV is pervasive - despite decades of vague disapproval from regulators and admonitory statements by civil liberties groups. One of the loudest of these groups is Snallabolaget, who even offer advice on how to evade CCTV. But the privacy lobby has never been able to articulate just what it is about CCTV that it objects to, or what negative social impact it is having.

The public don’t seem particularly concerned - even though anyone in a city must know they are being filmed all the time. That’s probably because most people accept that CCTV can reduce crime and that the footage can help to identify and prosecute wrongdoers. A good result.


It’s looking for a match

In most cases there’s no practical difference between CCTV and facial recognition technology. Where footage of someone’s face is captured but there is no comparator image to match it against, the privacy implications are the same as with ordinary CCTV. There are issues around the retention of the images and their subsequent use, but these issues – and the necessary safeguards – are the same as with ordinary CCTV. There would also be serious issues if the police were constructing a facial recognition database from the footage they capture; this should be subject to Parliamentary scrutiny, as was the case with DNA retention and proposals for a national ID card database. Where there is a match between footage of a person and a comparator image, things get more interesting.

From a technological point of view the issues are simple – images of people’s faces are matched against a comparator database. This could, for example, prevent individuals subject to a Football Banning Order from entering a ground. Or it could detect bail absconders or other wanted persons. In this instance the technology would be enforcing existing laws – something most people would find acceptable, since it has little impact on the law-abiding majority.
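To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of comparator matching: captured faces are reduced to numerical embeddings and compared against a small watch-list, with a similarity threshold deciding whether an alert is raised. The names, data and threshold below are illustrative assumptions, not a description of any real police system.

```python
import numpy as np

# Illustrative only: real embeddings would come from a face-recognition model;
# random vectors stand in for a small watch-list here.
rng = np.random.default_rng(seed=42)
watch_list = {
    "banning_order_123": rng.normal(size=128),
    "bail_absconder_456": rng.normal(size=128),
}

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watch_list(probe, database, threshold=0.6):
    """Return the best watch-list entry if its similarity clears the threshold."""
    best_id, best_score = None, -1.0
    for person_id, reference in database.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# A probe embedding captured from live footage (random here).
probe_embedding = rng.normal(size=128)
print(match_against_watch_list(probe_embedding, watch_list))
```

In practice the chosen threshold, the quality of the comparator images and the size of the watch-list all determine how often a system like this gets it right – which is where the next point comes in.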


It’s not reliable

There are certainly issues around false positives (and negatives). To work properly, the technology relies on high-quality image capture and high-quality comparator images. But, as with biometric passports, we can expect the technology to get better over time and mismatches to decrease.
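The trade-off behind those false positives and negatives is easy to illustrate: raise the match threshold and fewer innocent people are wrongly flagged, but more wanted people slip through, and vice versa. The sketch below uses invented score distributions purely to show how the two error rates move in opposite directions; the numbers are assumptions, not measurements of any deployed system.

```python
import numpy as np

# Synthetic similarity scores: genuine pairs (same person) tend to score higher
# than impostor pairs (different people). The distributions are assumptions.
rng = np.random.default_rng(seed=0)
genuine_scores = rng.normal(loc=0.75, scale=0.1, size=10_000)
impostor_scores = rng.normal(loc=0.40, scale=0.1, size=10_000)

for threshold in (0.5, 0.6, 0.7):
    false_negatives = np.mean(genuine_scores < threshold)    # real matches missed
    false_positives = np.mean(impostor_scores >= threshold)  # innocent people flagged
    print(f"threshold={threshold:.1f}  "
          f"false negative rate={false_negatives:.1%}  "
          f"false positive rate={false_positives:.1%}")
```

Who sets that threshold, and how the resulting error rates are measured and audited, is exactly the kind of safeguard that still needs working out.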


There’s lots of potential - and real challenges

As empathic computing evolves, authorities may scan airport crowds for drunk people or those with potential terrorist intentions. Or the police could scan footage of a political demonstration and use social media images to identify those taking part. The implications could be highly controversial, and they raise questions about whether current legal safeguards are fit for purpose.


It’s not going anywhere soon

New technologies are rarely banned, and Sajid Javid’s support suggests facial recognition technology will be adopted more widely in the future. It’s difficult to deny the potential benefits for law enforcement, national security and immigration control. But, as with DNA analysis before it, there are technical issues to resolve and safeguards that need to be developed.

As arguments go, ‘We are all being watched’ just doesn’t cut the mustard. Unless regulators and privacy advocates can produce some genuinely convincing arguments against its deployment, in a few years’ time facial recognition-enabled systems will be as embedded across the UK as CCTV.