How would you feel upon discovering that your face had been scanned without your knowledge as you passed through a busy street, a shopping centre or an airport?

While improved security may be the primary objective behind its use, the rise of facial recognition technology (FRT) and its adoption in everyday life raises serious concerns with respect to the rights and freedoms of individuals. As with many cutting-edge technologies, the potential here is vast – but without the right legislation in place, the risks too are many.

The regulatory whip may have been cracked on data protection where personally identifiable information is concerned, yet a legal framework specific to facial recognition has yet to be hammered out beyond the wider rules of the GDPR. Meanwhile, the rapid development of FRT has seen private companies rush to take advantage of the technology without announcing the move to the public.

What is facial recognition technology and how does it work?

Facial recognition has spread prodigiously, evolving from dystopian concept to off-the-shelf software in a short space of time. It’s employed by Facebook to suggest you tag yourself in photos from the office party; it’s used at airports to verify your identity; and it’s the latest biometric to unlock your phone and prevent others from accessing your data. Paying for your weekly shop? Just hold your phone to the chip-and-pin device and look straight into the camera.

How it works is fairly straightforward: through a software application, FRT creates a template of the target’s facial image and compares that template against pre-existing photographs. In the case of your mobile device, your face is matched against the scan captured from multiple angles during set-up. On the street, FRT could match the template of your facial image against photos from databases such as Facebook and government identification records. Beyond security, the technology could also be used to tailor ads on billboards to an estimate of your age, gender and mood.
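To make that template-and-compare step concrete, here is a minimal sketch in Python using the open-source face_recognition library – an assumption for illustration only, since nothing in this article ties any particular implementation to the systems discussed, and the image file names are hypothetical placeholders. Commercial systems operate at far larger scale, but the principle is the same: reduce each face to a numerical template and measure the distance between templates.

```python
# A minimal sketch of template-based face matching, assuming the
# open-source face_recognition library (pip install face_recognition).
# The image file names below are hypothetical placeholders.
import face_recognition

# Build a "template" (a 128-dimensional encoding) from a reference photo,
# e.g. an ID record or a profile picture already on file.
reference_image = face_recognition.load_image_file("reference_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# Encode every face found in a later capture, e.g. a CCTV frame.
captured_image = face_recognition.load_image_file("captured_frame.jpg")

for encoding in face_recognition.face_encodings(captured_image):
    # compare_faces reports a match when the distance between encodings
    # falls below a tolerance threshold (0.6 by default).
    is_match = face_recognition.compare_faces([reference_encoding], encoding)[0]
    distance = face_recognition.face_distance([reference_encoding], encoding)[0]
    print(f"Match: {is_match} (distance: {distance:.2f})")
```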

The use and abuse of FRT

Of course, personalised ads aren’t the primary reason this technology rings alarm bells.

In China, FRT has been used for racial profiling to track and control Uighur Muslims, a sign that the slide towards pervasive surveillance has already begun. Reports suggest Israel has also taken advantage of the technology for covert tracking of Palestinians, while in Moscow, facial recognition software helps law enforcement spot people of interest, and plans to equip the police with FRT-ready glasses have recently been put forward.

In the UK, the developers of the King’s Cross Estate in London have already come under fire after it was discovered that hundreds of thousands of visitors to the area surrounding King’s Cross railway station were being covertly scanned. The technology has also been trialled by the Metropolitan and South Wales police forces in football and rugby crowds, on city streets and at crowded festivals. In 2018, Manchester’s Trafford Centre was urged by the surveillance camera commissioner, Tony Porter, to stop using live facial recognition technology after six months of monitoring visitors.

Now, the Information Commissioner’s Office (ICO) is investigating how the software is being used and has said it is ‘deeply concerned’ about the growing adoption of facial recognition technology.

In a statement the ICO said: “Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.”

What are the rules surrounding facial recognition technology?

Under the GDPR, organisations using facial recognition technology must have a lawful basis for doing so, or must be able to rely on an exemption. In most cases, organisations must obtain consent from those they are monitoring. Consent must be given freely, and notice of the technology’s use and its purpose in scanning people’s faces must be clear and unambiguous. Consent also has to be ‘unbundled’, meaning it is not acceptable to force people to consent to facial scanning and identification as part of a wider contract.

Since the facial images collected by this technology count as biometric data, they fall under the ‘special category’ definition in data protection law and are subject to stricter rules. For instance, even if King’s Cross had put up signs informing individuals that entering the area constituted consent, that approach would not have met the GDPR’s criteria for valid consent.

Since the scope for abuse of this technology is wide and its Big-Brother-esque implications have already materialised both at home and overseas, facial recognition software should be used with great care – from a moral perspective as well as for the sake of compliance and reputation. Even consent that appears valid may not satisfy the criteria set out in the GDPR. Companies intending to use FRT should apply strict security measures to it, adopt it only when absolutely necessary and employ it on a lawful basis.

New legislation is now in the pipeline, and the European Commission has made clear its intention to implement stricter rules ensuring individuals know exactly when, where and why their biometric data is being collected. Given the ICO’s recent interventions and its view of FRT, businesses would be wise not to jump on the technology bandwagon without first seeking legal advice.
