The Dangers of Facial Recognition

Facial recognition is an increasingly common method of authentication, becoming ever more prevalent in people’s personal and working lives.

Images of people constitute their personal data as defined by the Data Protection Act 2018, which defines personal data as:

“any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

With the increased use of video surveillance, doorbells, vehicle cameras and footage on personal devices, your image can be captured frequently and without your knowledge.

The Information Commissioner’s Office (“ICO”), which regulates data protection in the UK, published guidance in June 2021 entitled ‘The use of live facial recognition technology in public places’.

The ICO has recently shown that it is not afraid to punish firms that breach the regulations.

Clearview AI

In May 2022, the ICO fined Clearview AI £7,522,800. Since May 2018 the ICO has had the power to impose fines of up to £17.5 million or 4% of global turnover. Clearview AI is a small tech company based in the US which collected more than 20 billion images of people’s faces, together with accompanying data, from publicly available information on the internet and social media platforms to create its own online database. Customers (including law enforcement agencies) could then upload an image, which was checked for a match against the company’s systems, and be provided with the source of the data.

The issue was that people had not been informed of, let alone consented to, their personal data being collected and used in this way. Even though the company is based in the US and claims that it no longer offers its services to UK organisations, it is clear that data on UK residents can still be collected and stored in this way.

The ICO found that Clearview AI had breached UK data protection laws by:

  • failing to use the information of people in the UK in a way that is fair and transparent, given that individuals are not made aware or would not reasonably expect their personal data to be used in this way;
  • failing to have a lawful reason for collecting people’s information;
  • failing to have a process in place to stop the data being retained indefinitely;
  • failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
  • asking for additional personal information, including photos, when asked by members of the public if they are on their database. This may have acted as a disincentive to individuals who wish to object to their data being collected and used.

An enforcement notice has also been issued by the ICO ordering Clearview AI to stop obtaining and using the personal data of UK residents, and to delete the data of UK residents from its systems.

Clearview AI has also faced sanctions in Australia, whose regulator worked with the ICO in a joint investigation.

The ICO commented that:

“The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.

People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity.”

The Future

Data is easily transmitted worldwide, but as the ICO has been keen to demonstrate, companies cannot evade liability on grounds of jurisdiction.

Personal data is an increasingly valuable commodity, but with that value comes the risk of exploitation; hence the need for stringent regulation to protect our rights.

An independent legal review (The Ryder Review: independent legal review of the governance of biometric data in England and Wales) has called for an urgent review of the current laws: “In order to protect our fundamental rights, particularly our data and privacy rights, this revolution in biometric data use will need to be accompanied by a similarly ambitious new legal and regulatory regime.”

The review concludes that “the current legal framework is not fit for purpose, has not kept pace with technological advances and does not make clear when and how biometrics can be used, or the processes that should be followed; the current oversight arrangements are fragmented and confusing…and the current legal position does not adequately protect individual rights or confront the very substantial invasions of personal privacy that the use of biometrics can cause.”

The review makes a number of recommendations, including:

  • an urgent need for a new, technologically neutral statutory framework;
  • the scope of the legislation should extend to the use of biometrics for the unique identification of individuals, and for classification;
  • the statutory framework should require sector- and/or technology-specific codes of practice to be published;
  • a national Biometrics Ethics Board should be established;
  • the regulation and oversight of biometrics should be consolidated, clarified and properly resourced;
  • further work is necessary on the topic of private-sector use of biometrics.

This is only the start: the use of digital and biometric data is no longer science fiction but a rapidly growing reality in today’s society.

If you believe your personal information has been shared without your consent, our specialist team of solicitors can help. Contact us now on 0808 271 9413 or request a call back.