New guidance to ensure Facial Recognition Technology (FRT) acts as a force for good in society has been published by BSI, aiming to help organisations navigate the ethical challenges associated with the technology and build public trust in its use.

Use of AI-powered facial recognition tools is increasingly common for security purposes and to curb shoplifting. FRT maps an individual’s physical features in an image to form a face template, which can be compared against other images stored within a database either to verify a high level of likeness to a single stored image (verification, a one-to-one check) or to identify an individual’s presence at a specific location at a given time (identification, a one-to-many search).
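For illustration only, and not drawn from the BSI standard, the sketch below shows how a face-template comparison of this kind might work in principle: templates are treated as embedding vectors, and a similarity threshold separates verification (checking a probe against one claimed identity) from identification (searching a gallery of many identities). The embeddings, names, and the 0.6 threshold are illustrative assumptions rather than values from any real FRT system.

```python
# Minimal sketch (assumptions, not the BSI standard): face templates as
# embedding vectors compared with a similarity threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one check: does the probe match a single claimed identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """One-to-many search: return the best-matching identity, if any clears the threshold."""
    scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy example with random 128-dimensional templates.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.1, size=128)  # noisy capture of person_a
print(verify(probe, gallery["person_a"]))  # expected: True
print(identify(probe, gallery))            # expected: ("person_a", high score)
```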

BSI’s recent research showed that two in five (40%) people globally expect to be using biometric identification in airports by 2030. Its proliferation has prompted concerns about safe and ethical use, including error rates linked to racial or gender differences, as well as high-profile legal cases.

The new standard has been developed by BSI, in its role as the UK National Standards Body, to help organisations navigate the use of these tools and build public trust. It follows BSI’s Trust in AI poll, in which more than three-quarters (77%) of people said trust in AI was key to its use in surveillance.

Designed for both public and private organisations using and/or monitoring Video Surveillance Systems (VSS) and biometric facial technologies, the code of practice applies across the whole supply chain, from an initial assessment of the need to use FRT, through procurement and installation, to appropriate use of the technology.

The code of practice (BS 9347:2024) sets out six key principles of ‘trustworthiness’, backed by a summary of the policies to be established and maintained by those across the supply chain: governance and accountability; human agency and oversight; privacy and data governance; technical robustness and safety; transparency and explainability; and diversity, non-discrimination, and fairness.

The standard embeds best practice and gives guidance on the appropriate guardrails for safe and unbiased use of FRT in both identification and verification. For identification, such as picking out individuals in crowds at events, the standard requires that FRT is used in conjunction with human intervention, or human-in-the-loop measures, to ensure accurate identification before action is taken.
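As a rough illustration of what such a human-in-the-loop gate might look like in practice, the sketch below treats an automated match as a candidate only, with no action taken until a human operator confirms it. The data structures, function names, and workflow are assumptions for illustration, not provisions of BS 9347:2024.

```python
# Illustrative-only sketch of a human-in-the-loop gate for identification use:
# an automated match is a candidate, and no action follows without human review.
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    subject_id: str
    score: float

def act_on_identification(candidate: CandidateMatch, operator_confirms) -> str:
    """Require explicit human confirmation before any action on an FRT match."""
    if not operator_confirms(candidate):
        return "no action: match rejected or unconfirmed by human reviewer"
    return f"action authorised for {candidate.subject_id} after human review"

# Example: the automated system flags a candidate; a human must still confirm.
flagged = CandidateMatch(subject_id="watchlist_0042", score=0.91)
print(act_on_identification(flagged, operator_confirms=lambda c: False))
```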

In verification scenarios where the technology can operate autonomously, such as building access control, authenticating a payment transaction, or unlocking a phone, the standard puts guardrails around the technology’s learning by requiring that training data include sets drawn from diverse demographic pools and captured across a variety of lighting levels and camera angles, to eliminate inaccuracies and mitigate the risk of bias in the form of false positives.
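The sketch below shows one way such a training-data requirement might be audited in practice: counting how a dataset’s samples are spread across demographic groups, lighting conditions, and camera angles, and flagging underrepresented categories before a verification model is trained. The attribute names and the 10% threshold are illustrative assumptions, not figures from the standard.

```python
# Minimal sketch (assumptions, not the standard's method) of auditing a
# training set for coverage across demographic groups and capture conditions.
from collections import Counter

def coverage_report(samples: list[dict], keys=("demographic", "lighting", "angle"), min_share=0.10):
    """Flag any category whose share of the training data falls below min_share."""
    report = {}
    for key in keys:
        counts = Counter(s[key] for s in samples)
        total = sum(counts.values())
        report[key] = {cat: round(n / total, 2) for cat, n in counts.items()}
        for cat, n in counts.items():
            if n / total < min_share:
                print(f"warning: '{cat}' underrepresented in '{key}' ({n}/{total})")
    return report

# Toy training metadata; a real audit would use far richer attributes.
samples = [
    {"demographic": "group_a", "lighting": "daylight", "angle": "frontal"},
    {"demographic": "group_a", "lighting": "low_light", "angle": "frontal"},
    {"demographic": "group_b", "lighting": "daylight", "angle": "profile"},
]
print(coverage_report(samples))
```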

BSI director-general Scott Steedman said: “AI-enabled facial recognition tools have the potential to be a driving force for good and benefit society through their ability to detect and monitor potential security threats. This code of practice aims to embed best practice and give guidance on the appropriate guardrails organisations can put in place to safeguard civil rights and eliminate system bias and discrimination.”