Facial Recognition Technology

State statutes in New Hampshire and California, city ordinances in San Francisco and Portland, and a proposed statute for Texas.

A face recognition system is an analytical technology aimed at identifying or verifying a particular individual by their facial traits from an image, a video, or a real-time feed. Facial recognition is currently applied in many settings, including police departments, airlines, retailers, and schools, for tasks ranging from tracking student attendance to identifying criminal suspects.

At the same time, the technology raises significant concern in the scholarly, public, and political communities regarding its accuracy, privacy implications, and implicit bias. This paper therefore examines the bills on facial recognition technology passed in different states and cities and develops a possible statute that could be adopted by the Texas Legislature.

Facial recognition technology has come into widespread use by law enforcement at the local, state, and federal levels. Nevertheless, because of issues related to civil rights and liberties, a dozen states have proposed or even enacted some form of restriction on facial recognition technology used by law enforcement agencies. For instance, cities in California and Massachusetts, including Oakland and San Francisco, have already prohibited the use of facial recognition technology by city agencies. Moreover, The Wall Street Journal recently reported that the Portland City Council adopted a resolute restriction on the commercial use of facial recognition technology, which some experts call the strictest in the US (Uberti, 2020). In addition to cities, some state governments, including California, New Hampshire, and Oregon, have enacted bills that ban the use of facial recognition in law enforcement body cameras.

California has become one of the first states to limit facial recognition technology applications, joining Oregon and New Hampshire. In October 2019, its government enacted Assembly Bill No. 1215 (AB 1215), imposing a three-year moratorium on facial recognition and other biometric surveillance systems used in police body cameras, which took effect in January 2020 (“Law enforcement,” 2019). Nevertheless, the bill does not prohibit applying the technology to other cameras, so law enforcement agencies can still utilize the software in the performance of official duties in California. It is worth noting that AB 1215 empowers an individual to file a claim for declaratory or equitable relief against a law enforcement officer or agency that violates the prohibition.

The formation of legislative regulation of facial recognition use in New Hampshire was complicated and accompanied by fierce debate among both the public and politicians. Nonetheless, in December 2019, New Hampshire lawmakers introduced House Bill 1642-FN (HB 1642), last amended on 19 February 2020 (“Prohibiting the state,” 2019). Through this act, the Senate and House of Representatives forbid state officials from obtaining, retaining, or using data from face surveillance technology. However, the act allows a New Hampshire state law enforcement agency to lawfully assist a federal agency that uses such a system. In addition, state officials and law enforcement agencies cannot suggest, request, encourage, or provide access to information from facial recognition equipment outside the state.

San Francisco was the first major city in the US to ban the use of facial recognition systems by the government and enforcement agencies, including the police department. On May 14, 2019, legislators voted for ordinance 190110, named “Administrative Code – Acquisition of Surveillance Technology,” which was enacted on May 31, 2019 (“Administrative Code,” 2019). In addition to the prohibition, the law specifies that only city administrators are authorized to approve plans to buy and use surveillance technology. Besides, the San Francisco Board of Supervisors appoints the Controller as City Services Auditor, whose role is to audit the lawful use of services or equipment related to the surveillance system and report to the Board.

The Portland City Council has adopted one of the toughest municipal prohibitions on facial recognition technology in the United States, banning private and government use of the system within the city. The new legislation was passed unanimously on 9 September 2020 and established via two ordinances (Becker, 2020). The first ordinance bans public use of face recognition technology, giving all city bureaus 90 days to assess their application of facial recognition (Becker, 2020). The second ordinance is directed at ceasing the use of the system by private organizations in public places and goes into effect on 1 January 2021 (Becker, 2020). Specifically, places such as restaurants, hotels, educational institutions, movie theaters, and barbershops, among others, will be barred from using facial recognition technology. As exceptions, the bill allows the use of facial recognition for verification purposes, such as unlocking smartphones, and for the privacy protections provided by city agencies in cases where images spread outside the city.

Biometric technology can be a beneficial tool in both the public and private spheres. In the first case, surveillance technology helps identify criminals, control border security, fight terrorism, and prevent identity fraud. In the second case, biometrics assists in verifying employee information and hours worked, making advertisements more productive, and improving security by monitoring access to sensitive places. Nevertheless, facial recognition software has been shown to exhibit racial, age, and ethnic biases, resulting in issues of civil rights violations and erroneous identification of persons by law enforcement. For example, in their study, Bacchini and Lorusso (2019) concluded that face recognition technology, in the forms implemented in Western countries, reinforces existing racial disparities and discrimination. Moreover, besides threatening privacy, a face recognition system linked to a large-scale database is vulnerable to data misuse, breaches, and mission creep.

Considering the advantages and disadvantages mentioned above, biometric surveillance technology should be allowed but restricted to an extent that ensures the privacy and confidentiality of the state’s citizens. In particular, the government should prohibit the use of facial recognition technology in police body cameras, since misidentification can lead to racial tension, especially among the Black population. In addition, Texas legislators should ban state officials from accessing, obtaining, or using information from face surveillance technology. Finally, to prevent information leaks, state officials and law enforcement agencies should be forbidden to suggest, encourage, or provide access to information from the facial recognition system outside the state. However, state law enforcement agencies should support state officials using facial recognition in cases provided by law.

In summary, the paper has examined the bills regarding facial recognition technology passed in different states, including New Hampshire and California, and cities such as San Francisco and Portland. Specifically, California enacted Assembly Bill No. 1215, banning the use of facial recognition systems in police body cameras, while New Hampshire introduced House Bill 1642-FN, forbidding state officials from obtaining data from face surveillance technology. The government of San Francisco banned the use of facial recognition systems by the government and enforcement agencies, including the police department, whereas Portland adopted one of the toughest municipal prohibitions on facial recognition technology. Additionally, the paper has developed a possible statute that could be adopted by the Texas Legislature in 2021.

Administrative Code – Acquisition of Surveillance Technology, File No. 190110 (2019). Web.

Bacchini, F., & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication and Ethics in Society, 17(3), 321-335. Web.

Becker, T. (2020). City Council approves ordinances banning use of face recognition technologies by city of Portland bureaus and by private entities in public spaces. Portland.gov. Web.

Law enforcement: Facial recognition and other biometric surveillance, Assembly Bill No. 1215 (2019). Web.

Prohibiting the state or a state official from using a face recognition system, New Hampshire House Bill 1642 (2019). Web.

Uberti, D. (2020). Portland passes strongest facial-recognition restriction in U.S. The Wall Street Journal. Web.

StudyCorgi. (2022, March 21). Facial Recognition Technology. https://studycorgi.com/facial-recognition-technology/

"Facial Recognition Technology." StudyCorgi , 21 Mar. 2022, studycorgi.com/facial-recognition-technology/.

StudyCorgi . (2022) 'Facial Recognition Technology'. 21 March.

1. StudyCorgi . "Facial Recognition Technology." March 21, 2022. https://studycorgi.com/facial-recognition-technology/.

Bibliography

StudyCorgi . "Facial Recognition Technology." March 21, 2022. https://studycorgi.com/facial-recognition-technology/.

StudyCorgi . 2022. "Facial Recognition Technology." March 21, 2022. https://studycorgi.com/facial-recognition-technology/.

This paper, “Facial Recognition Technology”, was written and voluntary submitted to our free essay database by a straight-A student. Please ensure you properly reference the paper if you're using it to write your assignment.

Before publication, the StudyCorgi editorial team proofread and checked the paper to make sure it meets the highest standards in terms of grammar, punctuation, style, fact accuracy, copyright issues, and inclusive language. Last updated: June 14, 2022 .

If you are the author of this paper and no longer wish to have it published on StudyCorgi, request the removal . Please use the “ Donate your paper ” form to submit an essay.

The pros and cons of facial recognition technology

There are plenty of pros and cons of facial recognition technology, but is it really worth risking user privacy in the name of efficiency and security?


From airports to local supermarkets and mobile phone applications, facial recognition technology (FRT) has become increasingly commonplace. That doesn't mean it's accepted as completely benign, however, with the pros and cons of facial recognition technology under almost constant discussion.

On the one hand, it can make life easier and processes smoother; many smartphones offer the option to log in using facial recognition, such as Apple's Face ID on iPhone. It generally speeds up security checks and improves the user experience.

However, campaign groups, such as Liberty and Big Brother Watch in the UK, argue that this technology is dangerous and could lead to the unfair profiling of individuals who have not committed a crime.

How does facial recognition technology work?

Facial recognition technology uses computer vision technology to extract useful information from still images or videos, which is then analyzed by an algorithm to estimate the degree of similarity between two faces.


To do this, the algorithm takes into account facial expressions and face geometry. It looks for a number of data points including the distance between the eyes, between the nose and mouth, cheekbone shape, as well as the overall length of the face between forehead and chin.

This will then be transformed into a 'faceprint' – a unique set of biometric data similar to a fingerprint. The facial recognition system can then be used for a variety of use cases. Let's now consider the pros and cons of facial recognition technology.
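In practice, comparing two faceprints usually reduces to measuring the similarity between two numeric feature vectors. A minimal sketch in Python, using hand-made toy vectors in place of real model-generated embeddings (the names, vector values, and the 0.6 threshold are purely illustrative assumptions, not any vendor's actual parameters):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def is_same_person(probe, enrolled, threshold=0.6):
    # The decision threshold is system-specific; 0.6 is illustrative only.
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 4-dimensional "faceprints" -- real systems use 128-512 dimensions
# produced by a trained model, not hand-written numbers.
alice_enrolled = [0.9, 0.1, 0.4, 0.3]
alice_probe = [0.88, 0.12, 0.41, 0.28]   # same person, slightly different image
bob_probe = [0.1, 0.9, 0.2, 0.7]         # different person

print(is_same_person(alice_probe, alice_enrolled))  # True
print(is_same_person(bob_probe, alice_enrolled))    # False
```

Lowering the threshold makes the system more permissive (fewer missed matches, more false accepts); raising it does the opposite, which is why deployed systems tune it against an evaluation set.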

Pros of facial recognition technology

Improving security systems and identifying criminals are often cited when arguing in favour of facial recognition, as well as getting rid of unnecessary labour or human interaction. However, there are also plenty of other examples.


1. Finding missing people and identifying perpetrators

Facial recognition technology is used by law enforcement agencies to find missing people or identify criminals by using camera feeds to compare faces with those on watch lists.

The technology has also been used to locate missing children. Sometimes it is combined with advanced aging software to predict what a child might look like based on photos taken when they disappeared. Law enforcement agencies often use facial recognition with live alerts to help track potential matches. 


2. Protecting businesses against theft

Facial recognition is increasingly being deployed as a means of identifying known individuals before they commit crimes like theft or public affray. It's common to see CCTV in shops and places of work, and by using facial recognition software it's possible to create tools like automatic cross-referencing to match individuals to a database of known suspects.

The technology has the dual purpose of helping to prevent crime before it happens, and also – some would argue – a deterrent effect for would-be offenders.

If something is stolen from the business, the software can also be used to catalogue the thieves for future reference.

3. Better security measures in banks and airports

Facial recognition has also come to be used as a preventative security measure in sensitive locations such as banks and airports. Much as with identifying criminals who enter shops, the software has helped identify individuals who pose a potential risk to airlines and their passengers.

Border checks have also been sped up at some airports through the use of facial recognition cameras at passport-check gates.

Institutions like banks use the software in the same way to prevent fraud, identifying those previously charged with crimes and alerting the bank to watch specific individuals more carefully.


4. Drastically reduces human touchpoints

Facial recognition requires fewer human resources than other types of security measures, such as fingerprinting. It also doesn’t require physical contact or direct human interaction. Instead, it uses artificial intelligence (AI) to make it an automatic and seamless process.

It also limits touchpoints when unlocking doors and smartphones, getting cash from the ATM or performing any other task that generally requires a PIN, password or key.

5. Better tools for organising photos

Facial recognition can also be used to tag photos in your cloud storage through iCloud or Google Photos. Users who wish can enable facial recognition in their respective photo app’s settings, resulting in named folders for regular photo subjects. Facebook also used facial recognition to suggest people to tag within a photo.

6. Better medical treatment

One surprising use of facial recognition technology is the detection of genetic disorders.

By examining subtle facial traits, facial recognition software can, in some cases, detect that a specific genetic mutation has caused a particular syndrome. The technology may be faster and less expensive than traditional genetic testing.

Cons of facial recognition

As with any technology, there are drawbacks to using facial recognition, such as the violation of rights and personal freedoms that it presents, potential data theft and the risk of overreliance on inaccurate systems.

1. Greater threat to individual and societal privacy

The intrusion of technology on an individual's right to privacy is perhaps the greatest risk created by extensive use of facial recognition.

Privacy is now a critical issue, so much so that in some cities across the likes of California and Massachusetts, law enforcement agencies are banned from using real-time facial recognition tools. Police instead are forced to rely on recorded video from the likes of body-worn cameras.

In 2021, then UK Information Commissioner Elizabeth Denham described the use of live facial recognition (LFR) cameras in public spaces as "deeply concerning".

2. Infringement on personal freedoms

It's not just personal privacy that is potentially at risk with mass use of facial recognition – the simple act of being recorded or scanned by the technology could discourage individuals from moving freely around their local neighbourhood or city.

The argument here is that people simply do not want to feel like they are overtly being watched, judged, or recorded. 

The basic premise of facial recognition is to match everyone to a database of known suspects, essentially treating you as if you are a criminal suspect without probable cause. It's a concept that some people find inherently dangerous to public freedoms.

For example, the aforementioned example of facial recognition being used to catalogue potential shoplifters has led to problems for companies such as Southern Co-operative, which in 2022 faced a legal complaint for its widespread use of FRT CCTV in its shops.

3. Violation of personal rights


When used for identification purposes, facial recognition data is considered part of the ‘special category’ of personal data under the UK's implementation of the GDPR. This category also extends to racial or ethnic origin, and some facial recognition CCTV companies have been accused of mishandling such data.

In July 2022, a cross-party group of 67 MPs called for surveillance equipment from Chinese firms Hikvision and Dahua to be banned from use in the UK, citing concerns over ethics and security. These were informed by stories such as a report by the LA Times alleging that Dahua developed software to allow its cameras to detect Uighur minorities and issue law enforcement users with a warning upon successful detection.

4. Creates data vulnerabilities

Facial recognition also creates a data protection and cyber security headache. The large volume of personally identifiable information (PII) being collected and stored is an attractive target for cyber criminals, and there are already examples of hackers gaining access to such systems.

This data is particularly sensitive given that many online services, such as banking, are increasingly utilising biometric data as part of their multi-factor authentication. A threat actor with access to a database of facial data could have the tools to bypass such checks, and access even more sensitive information.

5. Provides opportunities for fraud and other crimes

Lawbreakers can use facial recognition technology to perpetrate crimes against innocent victims too. They can collect individuals’ personal information, including imagery and video collected from facial scans and stored in databases, to commit identity fraud.


With this information, a thief could take out credit cards and other debt or open bank accounts in the victim’s name. Given the aforementioned use of facial recognition to place shoplifters on criminal databases, threat actors could even cause innocent individuals to end up with a criminal record.

Beyond fraud, bad actors can harass or stalk victims using facial recognition technology.

For example, stalkers could perform reverse image searches on a picture taken in a public place to gather information about their victims, to better persecute them.

Facial recognition law has lagged behind potential use by bad actors in recent years, which has prompted calls from rights groups for stricter biometrics regulations , to extend to technologies such as live facial recognition.

6. The technology is imperfect

Facial recognition is far from perfect, and cannot be relied upon to produce accurate results in place of human judgement.

The technology depends upon algorithms to make facial matches. Those algorithms are more effective for some groups, such as white men, than for others, such as women and people of colour, owing to a lack of representation within the data sets on which the algorithms were trained. This creates unintentional biases in the algorithms , which could in turn translate into biases in whatever action the technology is informing, such as arrests.

In 2018, civil liberties organisation Big Brother Watch published evidence that facial recognition technology utilised by the Metropolitan Police Service (MPS) was incorrectly identifying innocent people as criminals 98% of the time.

7. Innocent people could be charged

Following on from the imperfection of facial recognition, there are inherent dangers in false positives. Facial recognition software could improperly identify someone as a criminal, resulting in an arrest, or otherwise cause them reputational damage if they were to be included on, for example, a list of shoplifters.

8. Technology can be fooled

Other factors can affect the technology’s ability to recognize people’s faces, including camera angles, lighting levels and image or video quality. Mild alterations of facial data, such as a false moustache, can trick weaker facial recognition systems, while especially poor facial recognition technology could simply be tricked with a photo of a face it recognises.

As facial recognition technology improves, its flaws and the risks associated with it could be reduced. Other technology is also likely to be used in tandem with facial recognition technology to improve overall accuracy, such as gait-recognition software .

For the time being, though, the technology’s inadequacies and people’s reliance on it means facial recognition still has much room to grow and improve.

What are examples of facial recognition software or apps?

Although you might not know it, there are plenty of examples of facial recognition software available on the market today, ranging from options provided by tech giants to software created and fine-tuned by smaller companies. Here’s a selection of a few that are available today, some with free options too.

Microsoft Azure AI Face

This allows you to embed facial recognition technology into any apps you create. The good news is that you don’t need any machine learning knowledge; you just plug in the API and you’re good to go. It offers face detection and can identify a person by matching the face against a private database or through photo ID.

It has a free tier, with 30,000 transactions free per month, or the standard tier which starts at $1 (£0.81) per 1,000 transactions up to a maximum of 1 million transactions per month.
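The tier arithmetic above can be sketched as a small cost estimator. This assumes, purely for illustration, that the free allowance is consumed before standard-tier billing starts; actual Azure billing terms may differ, so treat the function as a rough sketch rather than an official calculator:

```python
def estimate_monthly_cost(transactions, free_allowance=30_000, usd_per_1000=1.0):
    """Rough monthly cost in USD, assuming the free allowance applies
    before the standard per-1,000-transaction rate kicks in."""
    billable = max(0, transactions - free_allowance)
    return billable / 1000 * usd_per_1000

# E.g. 130,000 transactions: 100,000 billable at $1 per 1,000.
print(estimate_monthly_cost(130_000))
```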

Amazon Rekognition

Amazon Rekognition is the tech giant’s computer vision API, which you can add to your apps without needing to spend time building machine learning models. It claims to be able to analyse millions of images or videos in seconds. Some of the features include face compare and search, text detection, and video segment detection.

It has a free tier which lasts for 12 months, where you can analyse 5,000 images per month and store 1,000 face metadata objects per month for free. Its paid tier varies depending on how many images you plan to analyse per month.

This company provides another API that developers and businesses can use to easily integrate into their software or applications. Its features include gender detection, age detection, multi-face detection, and face verification.

Pricing starts at $19 per month for the Student Cloud, while developers will pay $99 and businesses $249 per month. Each tier supports a different amount of transactions per minute, and these are priced at $0.002 per transaction.




Past, Present, and Future of Face Recognition: A Review


1. Introduction

  • Natural character: The face is a very natural biometric feature used by humans in individual recognition, making it possibly the most intuitive biometric feature for authentication and identification purposes [ 4 ]. For example, in access control, it is simple for administrators to monitor and evaluate approved persons after authentication, using their facial characteristics. The support of ordinary operators (e.g., administrators) may boost the efficiency and applicability of recognition systems. By contrast, identifying fingerprints or irises requires an expert with professional competencies to provide accurate confirmation.
  • Nonintrusive: In contrast to fingerprint or iris images, facial images can quickly be obtained without physical contact; people feel more relaxed when using the face as a biometric identifier. Besides, a face recognition device can collect data in a friendly manner that people commonly accept [ 5 ].
  • Less cooperation: Face recognition requires less assistance from the user compared with iris or fingerprint. For some limited applications such as surveillance, a face recognition device may recognize an individual without active subject involvement [ 5 ].
  • We provide an updated review of automated face recognition systems: the history, present, and future challenges.
  • We present 23 well-known face recognition datasets in addition to their assessment protocols.
  • We have reviewed and summarized nearly 180 scientific publications, from 1990 to 2020, on facial recognition and the associated problems of data acquisition and pre-processing. These publications are classified according to various approaches: holistic, geometric, local texture, and deep learning, for both 2D and 3D facial recognition. We pay particular attention to deep-learning-based methods, which are currently considered the state of the art in 2D face recognition.
  • We analyze and compare several in-depth learning methods according to the architecture implemented and their performance assessment metrics.
  • We study the performance of deep learning methods on the most commonly used data sets: (i) the Labeled Faces in the Wild (LFW) data set [ 10 ] for 2D face recognition, and (ii) Bosphorus and BU-3DFE for 3D face recognition.
  • We discuss some new directions and future challenges for facial recognition technology by paying particular attention to the aspect of 3D recognition.

2. Face Recognition History

  • 1964: The American researchers Bledsoe et al. [ 11 ] studied facial recognition computer programming. They imagined a semi-automatic method, in which operators are asked to enter twenty measures, such as the size of the mouth or the eyes.
  • 1977: The system was improved by adding 21 additional markers (e.g., lip width, hair color).
  • 1988: Artificial intelligence was introduced to develop the previously used theoretical tools, which showed many weaknesses. Mathematics (“linear algebra”) was used to interpret images differently and to find a way to simplify and manipulate them independently of human markers.
  • 1991: Alex Pentland and Matthew Turk of the Massachusetts Institute of Technology (MIT) presented the first successful example of facial recognition technology, Eigenfaces [ 12 ], which uses the statistical principal component analysis (PCA) method.
  • 1998: To encourage industry and the academy to move forward on this topic, the Defense Advanced Research Projects Agency (DARPA) developed the Face recognition technology (FERET) [ 13 ] program, which provided the world with a sizable, challenging database composed of 2400 images of 850 persons.
  • 2005: The Face Recognition Grand Challenge (FRGC) [ 14 ] competition was launched to encourage and develop face recognition technology designed to support existing facial recognition initiatives.
  • 2011: Everything accelerated due to deep learning, a machine learning method based on artificial neural networks [ 9 ]. The computer selects the points to be compared: it learns better when supplied with more images.
  • 2014: Facebook can recognize faces thanks to its internal algorithm, DeepFace [ 15 ]. The social network claims that its method approaches the performance of the human eye, at about 97% accuracy.
  • In its newer updates, Apple introduced a facial recognition feature, and its implementation has extended to retail and banking.
  • Mastercard developed Selfie Pay, a facial recognition framework for online transactions.
  • From 2019, people in China who want to buy a new phone must consent to having their faces scanned by the operator.
  • Chinese police used a smart monitoring system based on live facial recognition; using this system, in 2018 they arrested a suspect of “economic crime” at a concert, where his face, listed in a national database, was identified in a crowd of 50,000 persons.

3. Face Recognition Systems

3.1. Main Steps in Face Recognition Systems

3.2. Assessment Protocols in Face Recognition

4. Available Datasets and Protocols for 2D Face Recognition

4.1. ORL Dataset

4.2. FERET Dataset

4.3. AR Dataset

4.4. XM2VTS Database

4.5. BANCA Dataset

4.6. FRGC Dataset

  • In experimental protocol 1 (Exp 1), two controlled still images of an individual are used: one for the gallery and the other for the probe.
  • In Exp 2, the four controlled images of a person are distributed between the gallery and the probe.
  • In Exp 4, a single controlled still image forms the gallery, and a single uncontrolled still image forms the probe.
  • Exps 3, 5, and 6 are designed for 3D images.
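The gallery/probe split underlying these protocols can be sketched numerically. The following is a minimal, self-contained illustration (not FRGC code): synthetic 128-D feature vectors stand in for face embeddings, each probe is compared to every gallery template by cosine similarity, and rank-1 identification accuracy is the fraction of probes whose most similar gallery entry is the correct identity. All sizes and the noise level are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 128-D "face embeddings": one gallery template per identity,
# and one probe per identity (a noisy copy, standing in for a second image).
n_ids, dim = 10, 128
gallery = rng.normal(size=(n_ids, dim))
probes = gallery + 0.1 * rng.normal(size=(n_ids, dim))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine similarity between every probe and every gallery template.
sims = normalize(probes) @ normalize(gallery).T

# Rank-1 identification: each probe is assigned the most similar gallery identity.
predicted = sims.argmax(axis=1)
rank1_accuracy = (predicted == np.arange(n_ids)).mean()
print(rank1_accuracy)
```

With this low noise level the toy system identifies every probe correctly; real protocols differ mainly in how the gallery and probe sets are composed, not in this matching step.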

4.7. LFW Database

4.8. CMU Multi-PIE Dataset

4.9. CASIA-WebFace Dataset

4.10. IARPA Janus Benchmark-A

4.11. MegaFace Database

4.12. CFP Dataset

4.13. MS-Celeb-1M Benchmark

4.14. DMFD Database

4.15. VGGFace Database

4.16. VGGFace2 Database

4.17. IARPA Janus Benchmark-B

4.18. MF2 Dataset

4.19. DFW Dataset

  • The impersonation protocol is used only to evaluate the performance of impersonation techniques.
  • The obfuscation protocol is used in cases of disguise.
  • The overall-performance protocol is used to evaluate any algorithm on the complete dataset.

4.20. IARPA Janus Benchmark-C

4.21. LFR Dataset

4.22. RMFRD and SMFRD: Masked Face Recognition Datasets

  • Masked face detection dataset (MFDD): it can be used to train an accurate masked-face detection model.
  • Real-world masked face recognition dataset (RMFRD): it contains 5000 images of 525 persons wearing masks, and 90,000 images of the same 525 individuals without masks, collected from the Internet (Figure 17).
  • Simulated masked face recognition dataset (SMFRD): in parallel, the proposers placed simulated masks on images from standard large-scale facial datasets, such as the LFW [10] and CASIA-WebFace [30] datasets, thereby expanding the volume and variety of masked facial recognition data. The SMFRD dataset covers 500,000 facial images of 10,000 persons, and it can be employed in practice alongside the original unmasked counterparts (Figure 18).

5. Two-Dimensional Face Recognition Approaches

5.1. Holistic Methods

5.2. Geometric Approach

5.3. Local-Texture Approach

5.4. Deep Learning Approach

5.4.1. Introduction to Deep Learning

  • Unsupervised or generative: autoencoder (AE) [99], Boltzmann machine (BM) [100], recurrent neural network (RNN) [101], and sum-product network (SPN) [102];
  • Supervised or discriminative: convolutional neural network (CNN);
  • Hybrid: deep neural network (DNN) [97,103].
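To make the unsupervised branch concrete, here is a minimal linear autoencoder trained by plain gradient descent on synthetic data. This is only an illustrative sketch of the AE idea [99], not code from any cited system; the data dimensions, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples lying near a 3-D subspace of a 10-D space,
# so a 3-unit bottleneck can reconstruct them well.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))

# Linear autoencoder: encoder weights W_e, decoder weights W_d.
W_e = 0.1 * rng.normal(size=(10, 3))
W_d = 0.1 * rng.normal(size=(3, 10))

def mse(X, W_e, W_d):
    """Mean squared reconstruction error of the autoencoder."""
    return np.mean((X @ W_e @ W_d - X) ** 2)

lr = 0.05
initial = mse(X, W_e, W_d)
for _ in range(1000):
    H = X @ W_e                  # encode into the 3-D bottleneck
    R = H @ W_d                  # decode (reconstruction)
    G = 2.0 * (R - X) / X.size   # gradient of the MSE w.r.t. R
    grad_Wd = H.T @ G
    grad_We = X.T @ (G @ W_d.T)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

print(initial, mse(X, W_e, W_d))  # reconstruction error drops during training
```

The network is trained without labels: the reconstruction target is the input itself, which is exactly what makes this family of models unsupervised.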

5.4.2. Convolutional Neural Networks (CNNs)

  • Convolutional layer: this is the CNN’s core building block, which extracts features from the input data. Each layer applies a convolution operation to obtain a feature map; the resulting activation (feature) maps are then fed to the next layer as input [9].
  • Pooling layer: this is a non-linear down-sampling operation [104,105] that reduces the dimensionality of the feature map while retaining the crucial information. Among the various non-linear pooling functions, max-pooling is the most efficient and is superior to sub-sampling [106].
  • Rectified linear unit (ReLU) layer: this is a non-linear operation whose units apply the rectifier activation function.
  • Fully connected (FC) layer: the high-level reasoning in the neural network is done via fully connected layers, applied after the various convolutional and max-pooling layers [107].
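These four building blocks can be sketched with plain NumPy. The toy forward pass below (illustrative only; real CNNs use optimized libraries, multiple channels, and learned weights) chains a valid convolution, ReLU, 2×2 max-pooling, and a fully connected layer on a random 8×8 "image":

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1) producing a feature map."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)  # rectified linear unit

def max_pool(x, size=2):
    """Non-overlapping max-pooling: keeps the strongest activation per window."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

# A toy 8x8 input pushed through conv -> ReLU -> pool -> fully connected.
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))
fmap = relu(conv2d(image, kernel))   # convolutional layer + ReLU: (6, 6)
pooled = max_pool(fmap)              # pooling layer: (3, 3)
w_fc = rng.normal(size=(pooled.size, 2))
logits = pooled.flatten() @ w_fc     # fully connected layer: 2 class scores
print(fmap.shape, pooled.shape, logits.shape)
```

Note how each stage shrinks the spatial dimensions (8×8 → 6×6 → 3×3) while the final FC layer maps the remaining activations to class scores.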

5.4.3. Popular CNN Architectures

5.4.4. Deep CNN-Based Methods for Face Recognition

Investigations Based on AlexNet Architecture

Investigations Based on VGGNet Architecture

Investigations Based on GoogLeNet Architecture

Investigations Based on LeNet Architecture

Investigations Based on ResNet Architecture

6. Three-Dimensional Face Recognition

6.1. Factual Background and Acquisition Systems

6.1.1. Introduction to 3D Face Recognition

6.1.2. Microsoft Kinect Technology

6.2. Methods and Datasets

6.2.1. Challenges of 3D Facial Recognition

6.2.2. Traditional Methods of Machine Learning

  • Traditional methods of machine learning
  • Deep learning-based methods.

6.2.3. Deep Learning-Based Methods

6.2.4. Three-Dimensional Face Recognition Databases

7. Open Challenges

7.1. Face Recognition and Occlusion

7.2. Heterogeneous Face Recognition

7.3. Face Recognition and Ageing

7.4. Single Sample Face Recognition

  • In real-world applications (e.g., passports, immigration systems), only one template of each individual is registered in the database and available for the recognition task [174].
  • Pattern recognition systems require vast amounts of training data to ensure that the learned models generalize.
  • Deep learning-based approaches are considered powerful techniques in face recognition; nonetheless, they need a significant amount of training data to perform well [9].

7.5. Face Recognition in Video Surveillance

7.6. Face Recognition and Soft Biometrics

7.7. Face Recognition and Smartphones

7.8. Face Recognition and the Internet of Things (IoT)

8. Conclusions

Author Contributions

Conflicts of Interest

  • Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. A Review of Face Recognition Methods. Sensors 2020, 20, 342.
  • O’Toole, A.J.; Roark, D.A.; Abdi, H. Recognizing moving faces: A psychological and neural synthesis. Trends Cogn. Sci. 2002, 6, 261–266.
  • Dantcheva, A.; Chen, C.; Ross, A. Can facial cosmetics affect the matching accuracy of face recognition systems? In Proceedings of the 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 391–398.
  • Sinha, P.; Balas, B.; Ostrovsky, Y.; Russell, R. Face recognition by humans: Nineteen results all computer vision researchers should know about. Proc. IEEE 2006, 94, 1948–1962.
  • Ouamane, A.; Benakcha, A.; Belahcene, M.; Taleb-Ahmed, A. Multimodal depth and intensity face verification approach using LBP, SLF, BSIF, and LPQ local features fusion. Pattern Recognit. Image Anal. 2015, 25, 603–620.
  • Porter, G.; Doran, G. An anatomical and photographic technique for forensic facial identification. Forensic Sci. Int. 2000, 114, 97–105.
  • Li, S.Z.; Jain, A.K. Handbook of Face Recognition, 2nd ed.; Springer Publishing Company: New York, NY, USA, 2011.
  • Mordor Intelligence. Available online: https://www.mordorintelligence.com/industry-reports/facial-recognition-market (accessed on 21 July 2020).
  • Guo, G.; Zhang, N. A survey on deep learning based face recognition. Comput. Vis. Image Underst. 2019, 189, 10285.
  • Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments; Technical Report; University of Massachusetts: Amherst, MA, USA, 2007; pp. 7–49.
  • Bledsoe, W.W. The Model Method in Facial Recognition; Technical Report; Panoramic Research, Inc.: Palo Alto, CA, USA, 1964.
  • Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991, 3, 71–86.
  • Phillips, P.J.; Wechsler, H.; Huang, J.; Rauss, P. The FERET database and evaluation procedure for face recognition algorithms. Image Vis. Comput. 1998, 16, 295–306.
  • Phillips, P.J.; Flynn, P.J.; Scruggs, T.; Bowyer, K.W.; Chang, J.; Hoffman, K.; Marques, J.; Min, J.; Worek, W. Overview of the face recognition grand challenge. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; pp. 947–954.
  • Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708.
  • Chihaoui, M.; Elkefi, A.; Bellil, W.; Ben Amar, C. A Survey of 2D Face Recognition Techniques. Computers 2016, 5, 21.
  • Benzaoui, A.; Bourouba, H.; Boukrouche, A. System for automatic faces detection. In Proceedings of the 2012 3rd International Conference on Image Processing, Theory, Tools and Applications (IPTA), Istanbul, Turkey, 15–18 October 2012; pp. 354–358.
  • Martinez, A.M. Recognizing imprecisely localized, partially occluded and expression variant faces from a single sample per class. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2002, 24, 748–763.
  • Sidahmed, S.; Messali, Z.; Ouahabi, A.; Trépout, S.; Messaoudi, C.; Marco, S. Nonparametric denoising methods based on contourlet transform with sharp frequency localization: Application to electron microscopy images with low exposure time. Entropy 2015, 17, 2781–2799.
  • Ouahabi, A. Image Denoising using Wavelets: Application in Medical Imaging. In Advances in Heuristic Signal Processing and Applications; Chatterjee, A., Nobahari, H., Siarry, P., Eds.; Springer: Basel, Switzerland, 2013; pp. 287–313.
  • Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the International Workshop on Systems, Signal Processing and Their Applications (IEEE/WOSSPA’13), Algiers, Algeria, 12–15 May 2013; pp. 19–26.
  • Nakanishi, A.Y.J.; Western, B.J. Advancing the State-of-the-Art in Transportation Security Identification and Verification Technologies: Biometric and Multibiometric Systems. In Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference, Seattle, WA, USA, 30 September–3 October 2007; pp. 1004–1009.
  • Samaria, F.S.; Harter, A.C. Parameterization of a Stochastic Model for Human Face Identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; pp. 138–142.
  • Martinez, A.M.; Benavente, R. The AR face database. CVC Tech. Rep. 1998, 24, 1–10.
  • Messer, K.; Matas, J.; Kittler, J.; Jonsson, K. XM2VTSDB: The extended M2VTS database. In Proceedings of the 1999 2nd International Conference on Audio and Video-based Biometric Person Authentication (AVBPA), Washington, DC, USA, 22–24 March 1999; pp. 72–77.
  • Bailliére, E.A.; Bengio, S.; Bimbot, F.; Hamouz, M.; Kittler, J.; Mariéthoz, J.; Matas, J.; Messer, K.; Popovici, V.; Porée, F.; et al. The BANCA Database and Evaluation Protocol. In Proceedings of the 2003 International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, 9–11 June 2003; pp. 625–638.
  • Huang, G.B.; Jain, V.; Miller, E.L. Unsupervised joint alignment of complex images. In Proceedings of the 2007 IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8.
  • Huang, G.; Mattar, M.; Lee, H.; Miller, E.G.L. Learning to align from scratch. Adv. Neural Inf. Process. Syst. 2012, 25, 764–772.
  • Gross, R.; Matthews, L.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010, 28, 807–813.
  • CASIA WebFace. Available online: http://www.cbsr.ia.ac.cn/english/CASIA-WebFace-Database.html (accessed on 21 July 2019).
  • Klare, B.F.; Klein, B.; Taborsky, E.; Blanton, A.; Cheney, J.; Allen, K.; Grother, P.; Mah, A.; Burge, M.; Jain, A.K. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1931–1939.
  • Shlizerman, I.K.; Seitz, S.M.; Miller, D.; Brossard, E. The MegaFace benchmark: 1 million faces for recognition at scale. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 4873–4882.
  • Shlizerman, I.K.; Suwajanakorn, S.; Seitz, S.M. Illumination-aware age progression. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3334–3341.
  • Ng, H.W.; Winkler, S. A data-driven approach to cleaning large face datasets. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 343–347.
  • Sengupta, S.; Cheng, J.; Castillo, C.; Patel, V.M.; Chellappa, R.; Jacobs, D.W. Frontal to Profile Face Verification in the Wild. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–9.
  • Guo, Y.; Zhang, L.; Hu, Y.; He, X.; Gao, J. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016.
  • Wang, T.Y.; Kumar, A. Recognizing Human Faces under Disguise and Makeup. In Proceedings of the 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016; pp. 1–7.
  • Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition. In Proceedings of the 2015 British Machine Vision Conference, Swansea, UK, 7–10 September 2015; pp. 41.1–41.12.
  • Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A dataset for recognizing faces across pose and age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Xi’an, China, 15–19 May 2018; pp. 67–74.
  • Whitelam, C.; Taborsky, E.; Blanton, A.; Maze, B.; Adams, J.; Miller, T.; Kalka, N.; Jain, A.K.; Duncan, J.A.; Allen, K. IARPA Janus Benchmark-B face dataset. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 592–600.
  • Nech, A.; Shlizerman, I.K. Level playing field for million scale face recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3406–3415.
  • Kushwaha, V.; Singh, M.; Singh, R.; Vatsa, M. Disguised Faces in the Wild. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1–18.
  • Maze, B.; Adams, J.; Duncan, J.A.; Kalka, N.; Miller, T.; Otto, C.; Jain, A.K.; Niggel, W.T.; Anderson, J.; Cheney, J.; et al. IARPA Janus benchmark-C: Face dataset and protocol. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 158–165.
  • Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S. LFR face dataset: Left-Front-Right dataset for pose-invariant face recognition in the wild. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 124–130.
  • Wang, Z.; Wang, G.; Huang, B.; Xiong, Z.; Hong, Q.; Wu, H.; Yi, P.; Jiang, K.; Wang, N.; Pei, Y.; et al. Masked Face Recognition Dataset and Application. arXiv 2020, arXiv:2003.09093v2.
  • Belhumeur, P.N.; Hespanha, J.P.; Kriegman, D.J. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1997, 19, 711–720.
  • Stone, J.V. Independent component analysis: An introduction. Trends Cogn. Sci. 2002, 6, 59–64.
  • Sirovich, L.; Kirby, M. Low-Dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. 1987, 4, 519–524.
  • Kirby, M.; Sirovich, L. Application of the Karhunen-Loève procedure for the characterization of human faces. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1990, 12, 831–835.
  • Femmam, S.; M’Sirdi, N.K.; Ouahabi, A. Perception and characterization of materials using signal processing techniques. IEEE Trans. Instrum. Meas. 2001, 50, 1203–1211.
  • Zhao, L.; Yang, Y.H. Theoretical analysis of illumination in PCA-based vision systems. Pattern Recognit. 1999, 32, 547–564.
  • Pentland, A.; Moghaddam, B.; Starner, T. View-Based and modular eigenspaces for face recognition. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994; pp. 84–91.
  • Bartlett, M.; Movellan, J.; Sejnowski, T. Face Recognition by Independent Component Analysis. IEEE Trans. Neural Netw. 2002, 13, 1450–1464.
  • Abhishree, T.M.; Latha, J.; Manikantan, K.; Ramachandran, S. Face recognition using Gabor Filter based feature extraction with anisotropic diffusion as a pre-processing technique. Procedia Comput. Sci. 2015, 45, 312–321.
  • Zehani, S.; Ouahabi, A.; Oussalah, M.; Mimi, M.; Taleb-Ahmed, A. Trabecular bone microarchitecture characterization based on fractal model in spatial frequency domain imaging. Int. J. Imaging Syst. Technol., accepted.
  • Ouahabi, A. Signal and Image Multiresolution Analysis, 1st ed.; ISTE-Wiley: London, UK, 2012.
  • Guetbi, C.; Kouame, D.; Ouahabi, A.; Chemla, J.P. Methods based on wavelets for time delay estimation of ultrasound signals. In Proceedings of the 1998 IEEE International Conference on Electronics, Circuits and Systems, Lisbon, Portugal, 7–10 September 1998; pp. 113–116.
  • Ferroukhi, M.; Ouahabi, A.; Attari, M.; Habchi, Y.; Taleb-Ahmed, A. Medical video coding based on 2nd-generation wavelets: Performance evaluation. Electronics 2019, 8, 88.
  • Wang, M.; Jiang, H.; Li, Y. Face recognition based on DWT/DCT and SVM. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM), Taiyuan, China, 22–24 October 2010; pp. 507–510.
  • Bookstein, F.L. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1989, 11, 567–585.
  • Shih, F.Y.; Chuang, C. Automatic extraction of head and face boundaries and facial features. Inf. Sci. 2004, 158, 117–130.
  • Zobel, M.; Gebhard, A.; Paulus, D.; Denzler, J.; Niemann, H. Robust facial feature localization by coupled features. In Proceedings of the 2000 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG), Grenoble, France, 26–30 March 2000; pp. 2–7.
  • Wiskott, L.; Fellous, J.M.; Malsburg, C.V.D. Face recognition by elastic bunch graph matching. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1997, 19, 775–779.
  • Xue, Z.; Li, S.Z.; Teoh, E.K. Bayesian shape model for facial feature extraction and recognition. Pattern Recognit. 2003, 36, 2819–2833.
  • Tistarelli, M. Active/space-variant object recognition. Image Vis. Comput. 1995, 13, 215–226.
  • Lades, M.; Vorbuggen, J.C.; Buhmann, J.; Lange, J.; Malsburg, C.V.D.; Wurtz, R.P.; Konen, W. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Comput. 1993, 42, 300–311.
  • Wiskott, L. Phantom faces for face analysis. Pattern Recognit. 1997, 30, 837–846.
  • Duc, B.; Fischer, S.; Bigun, J. Face authentication with Gabor information on deformable graphs. IEEE Trans. Image Process. 1999, 8, 504–516.
  • Kotropoulos, C.; Tefas, A.; Pitas, I. Frontal face authentication using morphological elastic graph matching. IEEE Trans. Image Process. 2000, 9, 555–560.
  • Jackway, P.T.; Deriche, M. Scale-space properties of the multiscale morphological dilation-erosion. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 1996, 18, 38–51.
  • Tefas, A.; Kotropoulos, C.; Pitas, I. Face verification using elastic graph matching based on morphological signal decomposition. Signal Process. 2002, 82, 833–851.
  • Kumar, D.; Garaina, J.; Kisku, D.R.; Sing, J.K.; Gupta, P. Unconstrained and Constrained Face Recognition Using Dense Local Descriptor with Ensemble Framework. Neurocomputing 2020.
  • Zehani, S.; Ouahabi, A.; Mimi, M.; Taleb-Ahmed, A. Statistical features extraction in wavelet domain for texture classification. In Proceedings of the 2019 6th International Conference on Image and Signal Processing and their Applications (IEEE/ISPA), Mostaganem, Algeria, 24–25 November 2019; pp. 1–5.
  • Ait Aouit, D.; Ouahabi, A. Nonlinear Fracture Signal Analysis Using Multifractal Approach Combined with Wavelet. Fractals Complex Geom. Patterns Scaling Nat. Soc. 2011, 19, 175–183.
  • Girault, J.M.; Kouame, D.; Ouahabi, A. Analytical formulation of the fractal dimension of filtered stochastic signal. Signal Process. 2010, 90, 2690–2697.
  • Djeddi, M.; Ouahabi, A.; Batatia, H.; Basarab, A.; Kouamé, D. Discrete wavelet transform for multifractal texture classification: Application to ultrasound imaging. In Proceedings of the IEEE International Conference on Image Processing (IEEE ICIP 2010), Hong Kong, China, 26–29 September 2010; pp. 637–640.
  • Ouahabi, A. Multifractal analysis for texture characterization: A new approach based on DWT. In Proceedings of the 10th International Conference on Information Science, Signal Processing and Their Applications (IEEE/ISSPA), Kuala Lumpur, Malaysia, 10–13 May 2010; pp. 698–703.
  • Davies, E.R. Introduction to texture analysis. In Handbook of Texture Analysis; Mirmehdi, M., Xie, X., Suri, J., Eds.; Imperial College Press: London, UK, 2008; pp. 1–31.
  • Benzaoui, A.; Hadid, A.; Boukrouche, A. Ear biometric recognition using local texture descriptors. J. Electron. Imaging 2014, 23, 053008.
  • Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, 11–14 May 2004; pp. 469–481.
  • Ahonen, T.; Hadid, A.; Pietikäinen, M. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041.
  • Beveridge, J.R.; Bolme, D.; Draper, B.A.; Teixeira, M. The CSU face identification evaluation system: Its purpose, features, and structure. Mach. Vis. Appl. 2005, 16, 128–138.
  • Moghaddam, B.; Nastar, C.; Pentland, A. A Bayesian similarity measure for direct image matching. In Proceedings of the 13th International Conference on Pattern Recognition (ICPR), Vienna, Austria, 25–29 August 1996; pp. 350–358.
  • Rodriguez, Y.; Marcel, S. Face authentication using adapted local binary pattern histograms. In Proceedings of the 9th European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; pp. 321–332.
  • Sadeghi, M.; Kittler, J.; Kostin, A.; Messer, K. A comparative study of automatic face verification algorithms on the BANCA database. In Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, 9–11 June 2003; pp. 35–43.
  • Huang, X.; Li, S.Z.; Wang, Y. Jensen-Shannon boosting learning for object recognition. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 144–149.
  • Boutella, E.; Harizi, F.; Bengherabi, M.; Ait-Aoudia, S.; Hadid, A. Face verification using local binary patterns and generic model adaptation. Int. J. Biomed. 2015, 7, 31–44.
  • Benzaoui, A.; Boukrouche, A. 1DLBP and PCA for face recognition. In Proceedings of the 2013 11th International Symposium on Programming and Systems (ISPS), Algiers, Algeria, 22–24 April 2013; pp. 7–11.
  • Benzaoui, A.; Boukrouche, A. Face Recognition using 1DLBP Texture Analysis. In Proceedings of the 5th International Conference of Future Computational Technologies and Applications, Valencia, Spain, 27 May–1 June 2013; pp. 14–19.
  • Benzaoui, A.; Boukrouche, A. Face Analysis, Description, and Recognition using Improved Local Binary Patterns in One Dimensional Space. J. Control Eng. Appl. Inform. (CEAI) 2014, 16, 52–60.
  • Ahonen, T.; Rathu, E.; Ojansivu, V.; Heikkilä, J. Recognition of Blurred Faces Using Local Phase Quantization. In Proceedings of the 19th International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, 8–11 December 2008; pp. 1–4.
  • Ojansivu, V.; Heikkil, J. Blur insensitive texture classification using local phase quantization. In Proceedings of the 3rd International Conference on Image and Signal Processing (ICSIP), Cherbourg-Octeville, France, 1–3 July 2008; pp. 236–243.
  • Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. In Proceedings of the 3rd International Workshop on Analysis and Modeling of Faces and Gestures (AMFG), Rio de Janeiro, Brazil, 20 October 2007; pp. 168–182.
  • Lei, Z.; Ahonen, T.; Pietikainen, M.; Li, S.Z. Local Frequency Descriptor for Low-Resolution Face Recognition. In Proceedings of the 9th Conference on Automatic Face and Gesture Recognition (FG), Santa Barbara, CA, USA, 21–25 March 2011; pp. 161–166.
  • Kannala, J.; Rahtu, E. BSIF: Binarized statistical image features. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366.
  • Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117.
  • Deng, L. A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Trans. Signal Inf. Process. 2014, 3, 1–29.
  • Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014, 7, 197–387.
  • Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
  • Salakhutdinov, R.; Hinton, G. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, Clearwater, FL, USA, 16–19 April 2009; pp. 448–455.
  • Sutskever, I.; Martens, J.; Hinton, G. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA, USA, 28 June–2 July 2011; pp. 1017–1024.
  • Poon, H.; Domingos, P. Sum-product networks: A new deep architecture. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 689–690.
  • Kim, K.; Aminanto, M.E. Deep Learning in Intrusion Detection Perspective: Overview and further Challenges. In Proceedings of the International Workshop on Big Data and Information Security (IWBIS), Jakarta, Indonesia, 23–24 September 2017; pp. 5–10.
  • Ouahabi, A. Analyse spectrale paramétrique de signaux lacunaires. Traitement Signal 1992, 9, 181–191.
  • Ouahabi, A.; Lacoume, J.-L. New results in spectral estimation of decimated processes. IEEE Electron. Lett. 1991, 27, 1430–1432.
  • Scherer, D.; Müller, A.; Behnke, S. Evaluation of pooling operations in convolutional architectures for object recognition. In Proceedings of the 2010 International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; pp. 92–101.
  • Coşkun, M.; Uçar, A.; Yildirim, Ö.; Demir, Y. Face recognition based on convolutional neural network. In Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES), Kremenchuk, Ukraine, 15–17 November 2017; pp. 376–379.
  • Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  • Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  • Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
  • Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014.
  • Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  • He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  • Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2019, 42, 7132–7141.
  • Chopra, S.; Hadsell, R.; LeCun, Y. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 539–546.
  • Sun, Y.; Wang, X.; Tang, X. Deep learning face representation from predicting 10,000 classes. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1891–1898.
  • Sun, Y.; Chen, Y.; Wang, X.; Tang, X. Deep learning face representation by joint identification-verification. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1988–1996.
  • Sun, Y.; Wang, X.; Tang, X. Deeply learned face representations are sparse, selective, and robust. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 2892–2900.
  • Sun, Y.; Liang, D.; Wang, X.; Tang, X. DeepID3: Face Recognition with Very Deep Neural Networks. arXiv 2015, arXiv:1502.00873v1.
  • Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Web-Scale training for face identification. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 2746–2754.
  • Ouahabi, A.; Depollier, C.; Simon, L.; Kouame, D. Spectrum estimation from randomly sampled velocity data [LDV]. IEEE Trans. Instrum. Meas. 1998 , 47 , 1005–1012. [ Google Scholar ] [ CrossRef ]
  • Liu, J.; Deng, Y.; Bai, T.; Huang, C. Targeting ultimate accuracy: Face recognition via deep embedding. arXiv 2015 , arXiv:1506.07310v4. [ Google Scholar ]
  • Masi, I.; Tran, A.T.; Hassner, T.; Leksut, J.T.; Medioni, G. Do we really need to collect millions of faces for effective face recognition? In Proceedings of the 2016 European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 579–596. [ Google Scholar ]
  • Zhang, X.; Fang, Z.; Wen, Y.; Li, Z.; Qiao, Y. Range loss for deep face recognition with Long-Tailed Training Data. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5419–5428. [ Google Scholar ]
  • Liu, W.; Wen, Y.; Yu, Z.; Yang, M. Large-margin softmax loss for convolutional neural networks. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 507–516. [ Google Scholar ]
  • Chen, B.; Deng, W.; Du, J. Noisy Softmax: Improving the Generalization Ability of DCNN via Postponing the Early Softmax Saturation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4021–4030. [ Google Scholar ]
  • Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [ Google Scholar ]
  • Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. arXiv 2013 , arXiv:1311.2901v3. [ Google Scholar ]
  • Ben Fredj, H.; Bouguezzi, S.; Souani, C. Face recognition in unconstrained environment with CNN. Vis. Comput. 2020 , 1–10. [ Google Scholar ] [ CrossRef ]
  • Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A discriminative feature learning approach for deep face recognition. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 499–515. [ Google Scholar ]
  • Wu, Y.; Liu, H.; Li, J.; Fu, Y. Deep Face Recognition with Center Invariant Loss. In Proceedings of the Thematic Workshop of ACM Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 408–414. [ Google Scholar ]
  • Yin, X.; Yu, X.; Sohn, K.; Liu, X.; Chandraker, M. Feature Transfer Learning for Face Recognition with Under-Represented Data. In Proceedings of the 2019 International Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [ Google Scholar ]
  • Ranjan, R.; Castillo, C.D.; Chellappa, R. L2-constrained softmax loss for discriminative face verification. arXiv 2017 , arXiv:1703.09507v3. [ Google Scholar ]
  • Deng, J.; Zhou, Y.; Zafeiriou, S. Marginal Loss for Deep Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 2006–2014. [ Google Scholar ]
  • Wang, F.; Xiang, X.; Cheng, J.; Yuille, A.L. NormFace: L2 Hypersphere Embedding for Face Verification. In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 1041–1049. [ Google Scholar ]
  • Liu, Y.; Li, H.; Wang, X. Rethinking Feature Discrimination and Polymerization for Large-Scale Recognition. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS), (Deep Learning Workshop), Long Beach, CA, USA, 4–9 December 2017. [ Google Scholar ]
  • Hasnat, M.; Bohné, J.; Milgram, J.; Gentric, S.; Chen, L. Von Mises-Fisher Mixture Model-based Deep Learning: Application to Face Verification. arXiv 2017 , arXiv:1706.04264v2. [ Google Scholar ]
  • Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. SphereFace: Deep Hypersphere Embedding for Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6738–6746. [ Google Scholar ]
  • Zheng, Y.; Pal, D.K.; Savvides, M. Ring Loss: Convex Feature Normalization for Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5089–5097. [ Google Scholar ]
  • Guo, Y.; Zhang, L. One-Shot Face Recognition by Promoting Underrepresented Classes. arXiv 2018 , arXiv:1707.05574v2. [ Google Scholar ]
  • Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Liu, W. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5265–5274. [ Google Scholar ]
  • Wang, F.; Cheng, J.; Liu, W.; Liu, H. Additive Margin Softmax for Face Verification. IEEE Signal Process. Lett. 2018 , 25 , 926–930. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Wu, X.; He, R.; Sun, Z.; Tan, T. A Light CNN for Deep Face Representation with Noisy Labels. IEEE Trans. Inf. Forensics Secur. 2018 , 13 , 2884–2896. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Hayat, M.; Khan, S.H.; Zamir, W.; Shen, J.; Shao, L. Gaussian Affinity for Max-margin Class Imbalanced Learning. In Proceedings of the 2019 International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019. [ Google Scholar ]
  • Deng, J.; Guo, J.; Zafeiriou, S. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In Proceedings of the 2019 International Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 4690–4699. [ Google Scholar ]
  • Huang, C.; Li, Y.; Loy, C.C.; Tang, X. Deep Imbalanced Learning for Face Recognition and Attribute Prediction. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 2019 . Available online: https://ieeexplore.ieee.org/document/8708977 (accessed on 21 July 2020).
  • Song, L.; Gong, D.; Li, Z.; Liu, C.; Liu, W. Occlusion Robust Face Recognition Based on Mask Learning with Pairwise Differential Siamese Network. In Proceedings of the 2019 International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019. [ Google Scholar ]
  • Wei, X.; Wang, H.; Scotney, B.; Wan, H. Minimum margin loss for deep face recognition. Pattern Recognit. 2020 , 97 , 107012. [ Google Scholar ] [ CrossRef ]
  • Sun, J.; Yang, W.; Gao, R.; Xue, J.H.; Liao, Q. Inter-class angular margin loss for face recognition. Signal Process. Image Commun. 2020 , 80 , 115636. [ Google Scholar ] [ CrossRef ]
  • Wu, Y.; Wu, Y.; Wu, R.; Gong, Y.; Lv, K.; Chen, K.; Liang, D.; Hu, X.; Liu, X.; Yan, J. Rotation consistent margin loss for efficient low-bit face recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020; pp. 6866–6876. [ Google Scholar ]
  • Ling, H.; Wu, J.; Huang, J.; Li, P. Attention-based convolutional neural network for deep face recognition. Multimed. Tools Appl. 2020 , 79 , 5595–5616. [ Google Scholar ] [ CrossRef ]
  • Wu, B.; Wu, H. Angular Discriminative Deep Feature Learning for Face Verification. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2133–2137. [ Google Scholar ]
  • Chen, D.; Cao, X.; Wang, L.; Wen, F.; Sun, J. Bayesian face revisited: A joint formulation. In Proceedings of the European Conference on Computer Vision (ECCV), Firenze, Italy, 7–13 October 2012; pp. 566–579. [ Google Scholar ]
  • Chen, B.C.; Chen, C.S.; Hsu, W.H. Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. IEEE Trans. Multimed. 2015 , 17 , 804–815. [ Google Scholar ] [ CrossRef ]
  • Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 11–18 December 2015; pp. 3730–3738. [ Google Scholar ]
  • Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284. [ Google Scholar ]
  • Ouamane, A.; Belahcene, M.; Benakcha, A.; Bourennane, S.; Taleb-Ahmed, A. Robust Multimodal 2D and 3D Face Authentication using Local Feature Fusion. Signal Image Video Process. 2016 , 10 , 12–137. [ Google Scholar ] [ CrossRef ]
  • Ouamane, A.; Boutella, E.; Benghherabi, M.; Taleb-Ahmed, A.; Hadid, A. A Novel Statistical and Multiscale Local Binary Feature for 2D and 3D Face Verification. Comput. Electr. Eng. 2017 , 62 , 68–80. [ Google Scholar ] [ CrossRef ]
  • Soltanpour, S.; Boufama, B.; Wu, Q.M.J. A survey of local feature methods for 3D face recognition. Pattern Recognit. 2017 , 72 , 391–406. [ Google Scholar ] [ CrossRef ]
  • Zhou, S.; Xiao, S. 3D Face Recognition: A Survey. Hum. Cent. Comput. Inf. Sci. 2018 , 8 , 8–35. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Min, R.; Kose, N.; Dugelay, J. KinectFaceDB: A Kinect Database for Face Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2014 , 44 , 1534–1548. [ Google Scholar ] [ CrossRef ]
  • Drira, H.; Ben Amor, B.; Srivastava, A.; Daoudi, M.; Slama, R. 3D Face Recognition under Expressions, Occlusions, and Pose Variations. IEEE Trans. Pattern Anal. Mach. Intell. 2013 , 35 , 2270–2283. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Ribeiro Alexandre, G.; Marques Soares, J.; Pereira Thé, G.A. Systematic review of 3D facial expression recognition methods. Pattern Recognit. 2020 , 100 , 107108. [ Google Scholar ] [ CrossRef ]
  • Ríos-Sánchez, B.; Costa-da-Silva, D.; Martín-Yuste, N.; Sánchez-Ávila, C. Deep Learning for Facial Recognition on Single Sample per Person Scenarios with Varied Capturing Conditions. Appl. Sci. 2019 , 9 , 5474. [ Google Scholar ]
  • Kim, D.; Hernandez, M.; Choi, J.; Medioni, G. Deep 3D face identification. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 133–142. [ Google Scholar ]
  • Gilani, S.Z.; Mian, A.; Eastwood, P. Deep, dense and accurate 3D face correspondence for generating population specific deformable models. Pattern Recognit. 2017 , 69 , 238–250. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Gilani, S.Z.; Mian, A.; Shafait, F.; Reid, I. Dense 3D face correspondence. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 2018 , 40 , 1584–1598. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Gilani, S.Z.; Mian, A. Learning from Millions of 3D Scans for Large-scale 3D Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1896–1905. [ Google Scholar ]
  • Mimouna, A.; Alouani, I.; Ben Khalifa, A.; El Hillali, Y.; Taleb-Ahmed, A.; Menhaj, A.; Ouahabi, A.; Ben Amara, N.E. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics 2020 , 9 , 560. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Benzaoui, A.; Boukrouche, A.; Doghmane, H.; Bourouba, H. Face recognition using 1DLBP, DWT, and SVM. In Proceedings of the 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT), Tlemcen, Algeria, 25–27 May 2015; pp. 1–6. [ Google Scholar ]
  • Ait Aouit, D.; Ouahabi, A. Monitoring crack growth using thermography.-Suivi de fissuration de matériaux par thermographie. C. R. Mécanique 2008 , 336 , 677–683. [ Google Scholar ] [ CrossRef ]
  • Arya, S.; Pratap, N.; Bhatia, K. Future of Face Recognition: A Review. Procedia Comput. Sci. 2015 , 58 , 578–585. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zafeiriou, S.; Zhang, C.; Zhang, Z. A survey on face detection in the wild: Past, present and future. Comput. Vis. Image Underst. 2015 , 138 , 1–24. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Min, R.; Xu, S.; Cui, Z. Single-Sample Face Recognition Based on Feature Expansion. IEEE Access 2019 , 7 , 45219–45229. [ Google Scholar ] [ CrossRef ]
  • Zhang, D.; An, P.; Zhang, H. Application of robust face recognition in video surveillance systems. Optoelectron. Lett. 2018 , 14 , 152–155. [ Google Scholar ] [ CrossRef ]
  • Tome, P.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. Facial soft biometric features for forensic face recognition. Forensic Sci. Int. 2015 , 257 , 271–284. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Fathy, M.E.; Patel, V.M.; Chellappa, R. Face-based Active Authentication on mobile devices. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, QLD, Australia, 19–24 April 2015; pp. 1687–1691. [ Google Scholar ]
  • Medapati, P.K.; Murthy, P.H.S.T.; Sridhar, K.P. LAMSTAR: For IoT-based face recognition system to manage the safety factor in smart cities. Trans. Emerg. Telecommun. Technol. 2019 , 1–15. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/ett.3843?af=R (accessed on 10 July 2020).
Database | Year | Images | Subjects | Images/Subject
ORL [ ] | 1994 | 400 | 40 | 10
FERET [ ] | 1996 | 14,126 | 1199 | -
AR [ ] | 1998 | 3016 | 116 | 26
XM2VTS [ ] | 1999 | - | 295 | -
BANCA [ ] | 2003 | - | 208 | -
FRGC [ ] | 2006 | 50,000 | - | 7
LFW [ ] | 2007 | 13,233 | 5749 | ≈2.3
CMU Multi-PIE [ ] | 2009 | >750,000 | 337 | N/A
IJB-A [ ] | 2015 | 5712 | 500 | ≈11.4
CFP [ ] | 2016 | 7000 | 500 | >14
DMFD [ ] | 2016 | 2460 | 410 | 6
IJB-B [ ] | 2017 | 21,798 | 1845 | ≈36.2
MF2 [ ] | 2017 | 4.7 M | 672,057 | ≈7
DFW [ ] | 2018 | 11,157 | 1000 | ≈5.26
IJB-C [ ] | 2018 | 31,334 | 3531 | ≈6
LFR [ ] | 2020 | 30,000 | 542 | 10–260
RMFRD [ ] | 2020 | 95,000 | 525 | -
SMFRD [ ] | 2020 | 500,000 | 10,000 | -
Database | Year | Images | Subjects | Images/Subject
CASIA WebFace [ ] | 2014 | 494,414 | 10,575 | ≈46.8
MegaFace [ ] | 2016 | 1,027,060 | 690,572 | ≈1.4
MS-Celeb-1M [ ] | 2016 | 10 M | 100,000 | 100
VGGFACE [ ] | 2016 | 2.6 M | 2622 | 1000
VGGFACE2 [ ] | 2017 | 3.31 M | 9131 | ≈362.6
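The "Images/Subject" column in the two tables above is simply the ratio of the Images and Subjects columns; a quick arithmetic check against two rows:

```python
# Recompute the "Images/Subject" column from the Images and Subjects
# columns for two rows of the tables above.
datasets = {
    "LFW": (13_233, 5_749),              # table reports ≈2.3
    "CASIA WebFace": (494_414, 10_575),  # table reports ≈46.8
}
for name, (images, subjects) in datasets.items():
    print(f"{name}: {images / subjects:.1f} images per subject")
```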
# | Method | Authors | Year | Architecture | Networks | Verif. Metric | Training Set | Accuracy (%) ± SE
1 | DeepFace | Taigman et al. [ ] | 2014 | CNN-9 | 3 | Softmax | Facebook (4.4 M, 4 K) * | 97.35 ± 0.25
2 | DeepID | Sun et al. [ ] | 2014 | CNN-9 | 60 | Softmax + JB | CelebFaces+ [ ] (202 k, 10 k) * | 97.45 ± 0.26
3 | DeepID2 | Sun et al. [ ] | 2014 | CNN-9 | 25 | Contrastive Softmax + JB | CelebFaces+ (202 k, 10 k) * | 99.15 ± 0.13
4 | DeepID2+ | Sun et al. [ ] | 2014 | CNN-9 | 25 | Contrastive Softmax + JB | WDRef [ ] + CelebFaces+ (290 k, 12 k) * | 99.47 ± 0.12
5 | DeepID3 | Sun et al. [ ] | 2015 | VGGNet | 25 | Contrastive Softmax + JB | WDRef + CelebFaces+ (290 k, 12 k) | 99.53 ± 0.10
6 | FaceNet | Schroff et al. [ ] | 2015 | GoogleNet | 1 | Triplet Loss | Google (200 M, 8 M) * | 99.63 ± 0.09
7 | Web-Scale | Taigman et al. [ ] | 2015 | CNN-9 | 4 | Contrastive Softmax | Private Database (4.5 M, 55 K) * | 98.37
8 | BAIDU | Liu et al. [ ] | 2015 | CNN-9 | 10 | Triplet Loss | Private Database (1.2 M, 18 K) * | 99.77
9 | VGGFace | Parkhi et al. [ ] | 2015 | VGGNet | 1 | Triplet Loss | VGGFace (2.6 M, 2.6 K) | 98.95
10 | Augmentation | Masi et al. [ ] | 2016 | VGGNet-19 | 1 | Softmax | CASIA WebFace (494 k, 10 k) | 98.06
11 | Range Loss | Zhang et al. [ ] | 2016 | VGGNet-16 | 1 | Range Loss | CASIA WebFace + MS-Celeb-1M (5 M, 100 k) | 99.52
12 | Center Loss | Wen et al. [ ] | 2016 | LeNet | 1 | Center Loss | CASIA WebFace + CACD2000 [ ] + Celebrity+ [ ] (0.7 M, 17 k) | 99.28
13 | L-Softmax | Liu et al. [ ] | 2016 | VGGNet-18 | 1 | L-Softmax | CASIA WebFace (490 k, 10 K) | 98.71
14 | L2-Softmax | Ranjan et al. [ ] | 2017 | ResNet-101 | 1 | L2-Softmax | MS-Celeb-1M (3.7 M, 58 k) | 99.78
15 | Marginal Loss | Deng et al. [ ] | 2017 | ResNet-27 | 1 | Marginal Loss | MS-Celeb-1M (4 M, 82 k) | 99.48
16 | NormFace | Wang et al. [ ] | 2017 | ResNet-28 | 1 | Contrastive Loss | CASIA WebFace (494 k, 10 k) | 99.19 ± 0.008
17 | Noisy Softmax | Chen et al. [ ] | 2017 | VGGNet | 1 | Noisy Softmax | CASIA WebFace (400 K, 14 k) | 99.18
18 | COCO Loss | Liu et al. [ ] | 2017 | ResNet-128 | 1 | COCO Loss | MS-Celeb-1M (3 M, 80 k) | -
19 | Center Invariant Loss | Wu et al. [ ] | 2017 | LeNet | 1 | Center Invariant Loss | CASIA WebFace (0.45 M, 10 k) | 99.12
20 | Von Mises-Fisher | Hasnat et al. [ ] | 2017 | ResNet-27 | 1 | vMF Loss | MS-Celeb-1M (4.61 M, 61.24 K) | 99.63
21 | SphereFace | Liu et al. [ ] | 2018 | ResNet-64 | 1 | A-Softmax | CASIA WebFace (494 k, 10 k) | 99.42
22 | Ring Loss | Zheng et al. [ ] | 2018 | ResNet-64 | 1 | Ring Loss | MS-Celeb-1M (3.5 M, 31 K) | 99.50
23 | MLR | Guo and Zhang [ ] | 2018 | ResNet-34 | 1 | CCS Loss | MS-Celeb-1M (10 M, 100 K) | 99.71
24 | CosFace | Wang et al. [ ] | 2018 | ResNet-64 | 1 | Large Margin Cosine Loss | CASIA WebFace (494 k, 10 k) | 99.73
25 | AM-Softmax | Wang et al. [ ] | 2018 | ResNet-20 | 1 | AM-Softmax Loss | CASIA WebFace (494 k, 10 k) | 99.12
26 | Light-CNN | Wu et al. [ ] | 2018 | ResNet-29 | 1 | Softmax | MS-Celeb-1M (5 M, 79 K) | 99.33
27 | Affinity Loss | Hayat et al. [ ] | 2019 | ResNet-50 | 1 | Affinity Loss | VGGFace2 (3.31 M, 8 K) | 99.65
28 | ArcFace | Deng et al. [ ] | 2019 | ResNet-100 | 1 | ArcFace | MS-Celeb-1M (5.8 M, 85 k) | 99.83
29 | CLMLE | Huang et al. [ ] | 2019 | ResNet-64 | 1 | CLMLE Loss | CASIA WebFace (494 k, 10 k) | 99.62
30 | PDSN | Song et al. [ ] | 2019 | ResNet-50 | 1 | Pairwise Contrastive Loss | CASIA WebFace (494 k, 10 k) | 99.20
31 | Feature Transfer | Yin et al. [ ] | 2019 | LeNet | 1 | Softmax | MS-Celeb-1M (4.8 M, 76.5 K) | 99.55
32 | Ben Fredj work | Ben Fredj et al. [ ] | 2020 | GoogleNet | 1 | Softmax with center loss | CASIA WebFace (494 k, 10 k) | 99.2 ± 0.04
33 | MML | Wei et al. [ ] | 2020 | Inception ResNet-V1 [ ] | 1 | MML Loss | VGGFace2 (3.05 M, 8 K) | 99.63
34 | IAM | Sun et al. [ ] | 2020 | Inception ResNet-V1 | 1 | IAM Loss | CASIA WebFace (494 k, 10 k) | 99.12
35 | RCM Loss | Wu et al. [ ] | 2020 | ResNet-18 | 1 | Rotation Consistent Margin Loss | CASIA WebFace (494 k, 10 k) | 98.91
36 | ACNN | Ling et al. [ ] | 2020 | ResNet-100 | 1 | ArcFace Loss | DeepGlint-MS1M (3.9 M, 86 K) | 99.83
37 | LMC / SDLMC / DLMC | Wu and Wu [ ] | 2020 | ResNet-32 | 1 | LMC / SDLMC / DLMC Loss | CASIA WebFace (494 k, 10 k) | 98.13 / 99.03 / 99.07
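Many of the verification metrics in the table above are margin-based softmax variants. As a rough illustration (not any paper's reference implementation), the NumPy sketch below shows how CosFace subtracts an additive cosine margin and ArcFace adds an additive angular margin to the target-class logit before softmax cross-entropy; the scale s = 64 and the margin values are typical choices from the literature, and the embedding and class weights here are random stand-ins:

```python
import numpy as np

def margin_logits(embedding, weights, label, s=64.0, m_arc=0.5, m_cos=0.35):
    """Return ArcFace-style and CosFace-style logits for one sample.

    embedding: (d,) feature vector; weights: (C, d) class centres.
    Both are L2-normalised so their dot products are cosines of the
    angles between the feature and each class centre.
    """
    x = embedding / np.linalg.norm(embedding)
    W = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = W @ x  # cosine similarity to every class centre

    arc = cos.copy()  # ArcFace: penalise the target class with cos(theta + m)
    theta = np.arccos(np.clip(cos[label], -1.0, 1.0))
    arc[label] = np.cos(theta + m_arc)

    cosf = cos.copy()  # CosFace: penalise the target class with cos(theta) - m
    cosf[label] = cos[label] - m_cos

    # Scaled logits, ready for ordinary softmax cross-entropy.
    return s * arc, s * cosf

# Illustrative call with random features and 10 hypothetical identities.
rng = np.random.default_rng(0)
logits_arc, logits_cos = margin_logits(
    rng.normal(size=128), rng.normal(size=(10, 128)), label=3
)
```

Both margins shrink the target logit relative to plain softmax, which forces the network to pull same-identity embeddings tighter together, the effect the accuracy column above reflects.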
Database | Year | Images | Subjects | Data Type
BU-3DFE | 2006 | 2500 | 100 | Mesh
FRGC v1.0 [ ] | 2006 | 943 | 273 | Depth image
FRGC v2.0 [ ] | 2006 | 4007 | 466 | Depth image
CASIA | 2006 | 4623 | 123 | Depth image
ND2006 | 2007 | 13,450 | 888 | Depth image
Bosphorus | 2008 | 4666 | 105 | Point cloud
BJUT-3D | 2009 | 1200 | 500 | Mesh
Texas 3DFRD | 2010 | 1140 | 118 | Depth image
UMB-DB | 2011 | 1473 | 143 | Depth image
BU-4DFE | 2008 | 606 sequences = 60,600 frames | 101 | 3D video


Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020 , 9 , 1188. https://doi.org/10.3390/electronics9081188



Facial Recognition Technology and Ethical Concerns Essay


Face recognition refers to a method of confirming or identifying an individual’s identity from their face. The technology authenticates and identifies a person based on sets of verifiable, recognizable data unique to that individual. In 2014, Facebook launched its DeepFace program, which can determine whether two photographed faces belong to the same individual (Scherhag et al., 2019). While face recognition technology is gaining increasing application, especially among digital corporations, critics argue that its storage and identity-management practices raise serious ethical issues, including privacy and confidentiality. The use of face recognition technology is associated with ethical concerns such as lack of transparency and informed consent, racial discrimination, misinformation and bias, data breaches, and mass surveillance.
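The verification decision described above, deciding whether two face images belong to the same person, commonly reduces to thresholding the similarity of two fixed-length face embeddings. A minimal sketch, assuming embeddings are already available from some model (the vectors and the 0.6 threshold are illustrative choices, not DeepFace's actual parameters):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=0.6):
    """Verify two face embeddings by cosine similarity.

    emb_a, emb_b: fixed-length feature vectors produced by a face
    recognition model; the 0.6 threshold is an illustrative choice
    that would be tuned on a validation set in practice.
    """
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(a @ b) >= threshold

# Two nearby vectors verify as a match; orthogonal ones do not.
print(same_person(np.array([1.0, 0.1]), np.array([1.0, 0.0])))  # True
print(same_person(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # False
```

Identification is the same comparison repeated against a whole gallery of enrolled identities, which is why the privacy stakes grow with the size of the database being searched.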

Data privacy is arguably the biggest ethical concern regarding the adoption and use of face recognition technology. Privacy is a key concern for people using the internet, especially social media. According to Scherhag and colleagues, face recognition programs infringe on individuals’ inherent rights by keeping them under constant surveillance and storing their images without their consent (Scherhag et al., 2019). For instance, the European Commission has weighed banning the use of facial recognition technology in public spaces because of the ethical and privacy abuses associated with the technology (Scherhag et al., 2019). Privacy concerns associated with facial recognition also revolve around unsafe data-storage practices capable of exposing facial recognition information: many corporations continue to host this data on local servers with serious security vulnerabilities.

Facebook is among the digital corporations that have announced the shutdown of facial recognition software used to identify faces in videos and photographs. The corporation decided to delete the more than one billion facial recognition templates it had collected since introducing the feature. Concern about the ethics of facial recognition programs has been growing, and many questions have been raised over their accuracy, racial bias, and privacy. Facebook faced severe criticism over the technology's impact on users and was pressured into making the feature opt-in in 2019, so that users had to turn it back on themselves.

The decision made by Meta to shut down its facial recognition program was the right and ethical one. Face recognition technology compromises privacy, normalizes intrusive surveillance, and often targets marginalized people. The technology has also drawn the company into repeated ethical controversies: in 2019, Facebook was fined $5 billion by the US Federal Trade Commission to settle privacy complaints (Scherhag et al., 2019). The decision to shut down the facial recognition software came after the corporation faced severe regulatory and legislative scrutiny over leaked user information.

The decision to retire facial recognition technology positively impacts the company and its users. Not only will the company’s reputation strengthen, but it will also gain users who are assured of their privacy. Moreover, Facebook will avoid further privacy complaints associated with face recognition technology. The company is now looking for a new, narrower form of individual authentication that raises fewer privacy concerns.

Governments and corporations should control facial recognition technology and be permitted to use it only for narrow purposes. The technology is more effective and valuable when operated privately on an individual’s own devices (Scherhag et al., 2019). Centralized face recognition systems are not private and raise severe security concerns. People have a right to privacy, and their data can be used only with their consent. Therefore, governments must regulate the use of facial recognition technologies by organizations and businesses.

Scherhag, U., Rathgeb, C., Merkle, J., Breithaupt, R., & Busch, C. (2019). Face recognition systems under morphing attacks: A survey. IEEE Access, 7, 23012–23026.


IvyPanda. (2023, December 11). Facial Recognition Technology and Ethical Concerns. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/

"Facial Recognition Technology and Ethical Concerns." IvyPanda , 11 Dec. 2023, ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

IvyPanda . (2023) 'Facial Recognition Technology and Ethical Concerns'. 11 December.

IvyPanda . 2023. "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

1. IvyPanda . "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

Bibliography

IvyPanda . "Facial Recognition Technology and Ethical Concerns." December 11, 2023. https://ivypanda.com/essays/facial-recognition-technology-and-ethical-concerns/.

Academic Journal of Computing & Information Science, 2023, 6(7); doi: 10.25236/AJCIS.2023.060703.

Facial Recognition Technology: A Comprehensive Overview

Li Qinjun, Cui Tianwei, Zhao Yan, Wu Yuying

School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an, China


This paper provides an extensive review of facial recognition technology, tracing its historical evolution, exploring its functioning and applications, discussing the challenges it presents, and contemplating future prospects. The technology's inception and advancement are traced from its early stages to the current state, highlighting the key developments that have shaped its progression. An exploration of various types of facial recognition systems, including 2D, 3D, and thermal, underscores the diversity and complexity of this technology. A detailed explanation of how facial recognition works is provided, outlining the processes of data acquisition, face detection, feature extraction, and matching. We further delve into the broad array of its applications across multiple domains, such as security and surveillance, smartphone authentication, social media, healthcare, and retail. Despite the impressive benefits and applications of facial recognition technology, it also presents notable challenges. These include accuracy concerns, privacy and ethical implications, and the need for comprehensive regulatory frameworks. The paper concludes with a forward-looking discussion on the future of facial recognition technology, considering potential innovations and growth predictions. This review provides a comprehensive understanding of facial recognition technology, underscoring its relevance in our digitally driven world and the implications it holds for the future.
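The four stages named in the abstract (data acquisition, face detection, feature extraction, matching) can be sketched end to end. Every function below is a deliberately naive placeholder standing in for a real detector and a trained CNN, not an API from any particular library:

```python
import numpy as np

def detect_face(image):
    """Placeholder detector: crop the central region of the image,
    standing in for a real face detector's bounding box."""
    h, w = image.shape[:2]
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def extract_features(face):
    """Placeholder extractor: a normalised intensity histogram stands
    in for the embedding a trained CNN would produce."""
    hist, _ = np.histogram(face, bins=16, range=(0.0, 1.0))
    return hist / max(np.linalg.norm(hist), 1e-9)

def match(embedding, gallery):
    """Matching: return the enrolled identity whose stored embedding
    has the highest cosine similarity to the probe."""
    return max(gallery.items(), key=lambda kv: float(embedding @ kv[1]))

image = np.random.default_rng(1).random((64, 64))  # 1. data acquisition
face = detect_face(image)                          # 2. face detection
emb = extract_features(face)                       # 3. feature extraction
gallery = {"alice": emb}                           # enrolled identities
name, ref = match(emb, gallery)                    # 4. matching
print(name)  # "alice" — the probe matches its own enrolment
```

Real systems differ mainly in the sophistication of stages 2 and 3 (alignment, 2D/3D/thermal sensing, deep embeddings), but the control flow is essentially this.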

Facial recognition, Applications, Challenges, Future prospects



The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK regulatory frameworks

Denise Almeida

1 Department of Information Studies, UCL, London, UK

Konstantin Shmarko

2 Department of Economics, UCL, London, UK

Elizabeth Lomas

The rapid development of facial recognition technologies (FRT) has led to complex ethical choices in terms of balancing individual privacy rights versus delivering societal safety. Within this space, increasingly commonplace use of these technologies by law enforcement agencies has presented a particular lens for probing this complex landscape, its application, and the acceptable extent of citizen surveillance. This analysis focuses on the regulatory contexts and recent case law in the United States (USA), United Kingdom (UK), and European Union (EU) in terms of the use and misuse of FRT by law enforcement agencies. In the case of the USA, it is one of the main global regions in which the technology is being rapidly evolved, and yet, it has a patchwork of legislation with less emphasis on data protection and privacy. Within the context of the EU and the UK, there has been a critical focus on the development of accountability requirements particularly when considered in the context of the EU’s General Data Protection Regulation (GDPR) and the legal focus on Privacy by Design (PbD). However, globally, there is no standardised human rights framework and regulatory requirements that can be easily applied to FRT rollout. This article contains a discursive discussion considering the complexity of the ethical and regulatory dimensions at play in these spaces including considering data protection and human rights frameworks. It concludes that data protection impact assessments (DPIA) and human rights impact assessments together with greater transparency, regulation, audit and explanation of FRT use, and application in individual contexts would improve FRT deployments. In addition, it sets out ten critical questions which it suggests need to be answered for the successful development and deployment of FRT and AI more broadly. It is suggested that these should be answered by lawmakers, policy makers, AI developers, and adopters.

Introduction

Law enforcement agencies globally are constantly seeking new technologies to better ensure the successful detection and prosecution of crimes and keep citizens and society safe. In addition, there is a public expectation to deliver value for money and, where possible, to provide economic efficiencies and reduced labour costs, which new technologies can potentially help deliver. Over the last decade, many new technologies have been harnessed by law enforcement agencies including, but not limited to, surveillance cameras, automated license plate readers, body cameras, drones, and now facial recognition technologies (FRT). Law enforcement agencies have been at the forefront of FRT adoption due to the benefits that can be seen to be derived and justified in this space. However, each of these technologies changes the relationship between law enforcement operatives and citizens and requires the negotiation of new boundaries and revised accountability requirements. It is important to recognise that each technology has encroached on citizens’ privacy and their relationship with the state. As such, what is deemed acceptable in terms of reshaping boundaries is under scrutiny and debate. However, the decisions being made in regard to technology adoption are not currently uniform. There are distinct differences in technology adoption and rollout nation to nation and, in some national contexts, state to state. These largely depend on the legal landscape in terms of privacy/data protection legislation and on citizen acceptance and expectations of surveillance. Within this context, COVID-19 has further pushed the boundaries of privacy, with nations introducing new measures to track citizens’ movements and connections to contain the spread of the virus. However, this shift in monitoring, surveillance, privacy disclosures, and accountability is being questioned globally, drawing attention to changes and challenges [ 1 , 2 ].
This latter question of accountability and acceptable privacy limits is critical in terms of balancing rights and responsibilities for FRT.

Accountability provides for the obligation to explain, justify, and take responsibility for actions. In the context of the state and law enforcement, the state is obligated to answer for the choices it makes in terms of the technologies it rolls out and how these play out in particular case contexts. Many questions about the use of FRT and Artificial Intelligence (AI) have yet to be fully resolved. FRT usage by law enforcement agencies provides a strong case study for considering aspects of FRT and AI ethics more generally. It provides for a very understandable use of personal data with clear impacts on individuals’ rights.

This article considers the complexity of the ethical and regulatory dimensions at play in the space of FRT and law enforcement. The paper starts by providing a brief explanation of FRT, followed by an analysis of the use of FRT by law enforcement and legal approaches to the regulation of FRT in the US, EU, and UK. We conclude by recommending that there must be better checks and balances between individual rights and societal needs. There needs to be accountability through greater transparency, regulation, audit, and explanation of FRT use and application in individual contexts. One critical tool for this is the impact assessment, which can be used to undertake data protection impact assessments (DPIA) and human rights impact assessments. Ten critical ethical questions are framed that need to be considered for the ethical development, procurement, rollout, and use of FRT for law enforcement purposes. It is worth stating these from the outset:

  • Who should control the development, purchase, and testing of FRT systems ensuring the proper management and processes to challenge bias?
  • For what purposes and in what contexts is it acceptable to use FRT to capture individuals’ images?
  • What specific consents, notices and checks and balances should be in place for fairness and transparency for these purposes?
  • On what basis should facial data banks be built and used in relation to which purposes?
  • What specific consents, notices and checks and balances should be in place for fairness and transparency for data bank accrual and use and what should not be allowable in terms of data scraping, etc.?
  • What are the limitations of FRT performance capabilities for different purposes taking into consideration the design context?
  • What accountability should be in place for different usages?
  • How can this accountability be explicitly exercised, explained and audited for a range of stakeholder needs?
  • How are complaint and challenge processes enabled and afforded to all?
  • Can counter-AI initiatives be conducted to challenge and test law enforcement and audit systems?

Finally, it should be established that while law enforcement agencies are at the forefront of FRT adoption, others can learn valuable ethical lessons from the frameworks put in place to safeguard citizens’ rights and ensure accountability through time. Many of these same questions are applicable to AI development more broadly and should be considered by lawmakers legislating and mandating robust AI frameworks.

Facial recognition technologies (FRT)

Facial recognition in essence works by capturing an individual’s image and then identifying that person by analysing and mapping the captured features and comparing them to identified likenesses. Facial images, and their careful analysis, have been a critical tool of law enforcement agencies since the nineteenth century. However, in the twenty-first century, the move from manual techniques to facial recognition technologies (FRT), which automatically extract and compare features and every nuance of their measurement through the application of artificial intelligence (AI) and algorithms, has significantly enhanced this basic tool [ 3 ]. As such, the face can be mapped and compared to other data, which offers a more formal match and identification of an individual. This can sometimes involve the introduction of other biometric data, such as eye recognition data. One-to-one matching provides for certain identification of an individual in a specific context. However, using an identified image in connection with other data banks or data lakes enables one-to-many possibilities and connotations of usage. Matching that can process data at scale presents new possibilities and complexities when considering machine learning, algorithms, and AI.
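The distinction between one-to-one verification and one-to-many identification can be made concrete with a small sketch. This is illustrative only, assuming face embeddings have already been extracted by some upstream model; the function names, the cosine-similarity metric, and the 0.8 threshold are hypothetical choices for the example, not part of any specific FRT product.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, enrolled, threshold=0.8):
    # One-to-one matching: does the probe embedding match one enrolled identity?
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    # One-to-many matching: search a gallery {name: embedding} for the best
    # match above the threshold; return None if nobody matches.
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The ethical asymmetry discussed above is visible even in this toy form: `verify` answers a narrow question about one consenting individual, while `identify` silently compares the probe against everyone in the gallery, which is what makes data bank accrual and scale so consequential.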

The context of FRT rollout and data gathering is potentially all-important in terms of how it aligns with citizens’ security versus privacy concerns in differing situations. In 2008, Lenovo launched a new series of laptops that, instead of requiring a password, could recognise the face of their authorised user [ 4 ]. This functionality was seen as a marketing benefit for Lenovo, and users clearly consented to and engaged with the capture and use of their images for their own personal computing needs and one-to-one matching. However, expectations will differ between one-to-one matching in a more private, controlled space for transparent individual benefit and verification processes used in broader, potentially big-data contexts. As the proposed EU regulation on AI suggests, the use of FRT in public spaces is ethically (and legally) significantly different from its use for device unlocking. Citizens will have different expectations about the spaces in which surveillance and FRT should be in place. For example, when crossing national border jurisdictions, there has always been an exchange of data and careful identification of individuals, and as such FRT may be deemed more acceptable in this space than when moving around public spaces more generally, functioning in working spaces, and finally residing within private home dwellings. In each of these spaces, the expectations for active law enforcement and surveillance clearly differ, and there are a number of ethical questions to be answered for a successful rollout in different contexts and for different law enforcement purposes. In addition, there are differences between expectations for localised enforcement agencies, such as police services, and national intelligence agencies undertaking more covert security operations.
In each citizen space, and depending on the form of law enforcement, there will be different perspectives and concerns from individuals and groups of stakeholders. As such, reaching a consensus on technological rollouts will be a journey. Even in the example of border controls, where ID data have always been exchanged, studies have shown that the views of travellers on acceptable technologies differ from the views of border control guards [ 5 ].

In regard to law enforcement, some scholars have advanced the theory that monitoring of social media by law enforcement could be perceived as a ‘digital stop and frisk’, potentially delivering “everyday racism in social media policing as an emerging framework for conceptualizing how various forms of racism affect social media policing strategies” [ 6 ]. This statement evidences concerns about the bias and credibility of law enforcement agencies. Applying this same conceptual framework to sometimes flawed facial recognition algorithms, without taking accountability for the consequences of this usage, could lead not only to further discrimination and victimisation of specific communities but also to an even greater loss of trust between the general population and law enforcement agencies. In recent years, we have seen an exponential increase in research focused on issues of algorithmic accountability, 1 with the overarching message being that algorithms tend to reflect the biases of those who build them and the data used to train them. The extent to which they can be relied on without human checks is a matter of constant concern, particularly as the use of these technologies extends beyond identifying individuals to making further judgements about them, including their behaviours, motivations, emotions, and protected characteristics such as gender or sexuality [ 7 ].

In the specific case of FRT, it is important to understand some aspects at play in the design and rollout that have led to concerns over biases and unbalanced power structures. The majority of technology workers in the West are claimed to be white men, which unintentionally influences the development of technologies such as FRT [ 8 ]. Input bias has been known about for decades but has not been fully surfaced in an FRT context. If FRT are trained on white male faces, there will be implications when they are used to process data relating to non-white and female faces. As such, studies have indicated that identification and bias failings do occur [ 9 ]. Even where inputs are adjusted, systems can be biased by attempting to meet the anticipated needs of purchasers and users, which may skew the system, particularly as algorithms are applied and developed through time. In each of these instances, a high proportion of the stakeholders with power and influence are likely to be male and white [ 10 ]. These biases can lead to severe consequences, particularly when carried into uses by law enforcement. This brings to the surface issues of power dynamics and citizens’ trust in their law enforcement.

However, it is equally to be noted that AI has the potential to challenge biases and to be used in innovative ways that can alter existing power dynamics. A significant example of this is the recent use of FRT by human rights activists and protesters as a way to identify, and hold accountable, law enforcement officers who might be abusing their power [ 11 ]. This ‘turn of the tables’ adds a further layer of complexity to discussions of accountability and power. However, while a group of people who typically do not hold power may in limited circumstances use FRT to hold law enforcement accountable, that does not make the technology ethically viable. Nevertheless, this power shift, if more formally supported, might provide part of the solution to FRT deployment and its impacts. For example, as images are captured and significant in legal case contexts, AI has the power to assist with identifying deep fakes and calling out adaptations to footage and photographs. As such, it is important to drill down into the use of FRT and the frameworks which sit around it.

The EU and UK legislative landscape for FRT in a law enforcement context

There are currently no FRT-specific pieces of legislation in the EU and UK domains, but there are other pieces of legislation that dictate the management and rollout of FRT. In terms of personal data management, the EU’s GDPR, which came into force in 2018 covering all the Member States of the EU, has been seen as setting the bar at the highest level for the management of personal data. As such, for many tech companies operating at a global level, it has been seen as the de facto standard to roll out across all global operations. It is to be noted that because the GDPR came into force while the UK was part of the EU, it was enshrined into UK domestic legislation and continues to apply within a UK context. The UK’s ongoing adequacy in terms of alignment with the EU GDPR will continue to be judged by the EU.

The GDPR has required systems to be implemented where ‘privacy by design’ (PbD) and ‘privacy by default’ are inbuilt for any personal data processing. Processing covers any activity with personal data including creating, receiving, sharing, and even destroying/deleting personal data. There must be a clear lawful basis for personal data processing, and in addition, the data must be processed fairly and transparently. Within this context, it is important to understand that this does not prevent personal data collection, but does require carefully documented processes and active personal data management through time. In addition, it must be noted that what is considered fair and lawful is potentially open to interpretation and legal debate and contest. In certain instances, consent for processing is required. In addition, there are specific data subject rights such as the right to know what is held on/about you, subject to certain exemptions and to ask for data to be rectified or deleted (the right to be forgotten) in certain circumstances.

Where special category personal data are processed, stricter controls are required. Of note in this regard is biometric data which is categorised as physical or behavioural characteristics that uniquely identify an individual, including but not limited to DNA, fingerprints, faces, and voice patterns as examples. As such FRT are caught under this definition and within Article 9 of the GDPR, it is clarified that biometric data should not be used to identify a person unless an individual has provided explicit consent or alternatively other exemptions exist. One such example of an exempted area across the EU and UK is law enforcement. In the GDPR, personal data management for law enforcement purposes was derogated in Article 23, for determination at Member State level. There is therefore some divergence in terms of how the checks and balances exist between personal data rights and law enforcement rights. Within most EU Member States there is an expectation that for the purposes of pursuing law enforcement to identify and track offenders certain exemptions would exist, and consent would not be required. Within this space, the new technological landscape is further continuing to evolve and as such its rollout and use by law enforcement agencies is not consistent across the EU.

Regardless of certain consent exemptions, other GDPR requirements still apply, such as PbD, which provides a framework of accountability for law enforcement. For FRT purposes, a DPIA must be undertaken as a way of demonstrating and achieving PbD. The DPIA is a process of identifying risks that arise from data processing and is mandatory for high-risk applications, such as facial recognition in law enforcement use. 2 This requires that all aspects of a process are reviewed and considered to ensure that there are justifications for the process; this ensures it is ‘fair and lawful’ and that it is appropriately targeted, implemented, and managed through time. This procedure is not only useful for FRT operators, as it forces them to scrutinise their algorithms, focus, and security, but can also benefit the general public: if published, a DPIA can explain data processing in terms that are accessible to any individual, not just an IT specialist. There is no mandatory publication requirement for the DPIA, but there is a requirement to be transparent about data processing and to have privacy notices in place for this reason.

Another important GDPR requirement is the need to have a Data Protection Officer (DPO) within any public authority or private entities where the core activities require large scale, regular, and systematic monitoring of individuals or large-scale processing of special category data or data relating to criminal convictions or offences. As such, this does mean that law enforcement agencies and businesses providing processing services will be required to have a DPO. The DPO is required to advise an organisation on its data protection compliance. In addition, were an organisation to fail to fully comply with the GDPR, the DPO would act as a whistle-blower reporting to the relevant national ombudsman on data protection.

Each EU Member State and the UK has a regulatory requirement which establishes an oversight, complaint, and investigatory regime to be in place, a data protection ombudsman/regulator. There are currently 27 data protection authorities in the EU, one for each country, plus the European Data Protection Supervisor, which oversees EU institutions and bodies. The UK also has a data protection supervisor. The exact responsibilities of the organisations differ, but all of them are tasked with monitoring and ensuring data protection and privacy compliance regionally on behalf of their citizens. In accordance with this mandate, it is not uncommon to see these authorities actively intervening in relevant disputes, sometimes even before any citizen complaints are filed. The benefit to accountability of these organisations is obvious—the data protection regulators have bigger budgets and better legal teams than most individuals, meaning that they are more effective in holding FRT operators accountable. The authorities with enforcement powers can bypass litigation entirely, issuing fines and orders faster than a court would be able to. These factors ensure that the FRT providers and operators should never get complacent.

Separately, citizens may bring forward lawsuits for data protection failings, but the ability to complain to a regulator provides the citizen with a cheaper alternative and one which should actively investigate and oversee any organisational data protection failings. The regulators are publicly funded, and the resources for each across the EU and UK vary significantly. The extent of investigations and the timeliness of dealing with complaints have both been areas of criticism. For example, in 2020, a group of cross-party Members of the UK Parliament wrote complaining about the performance of the UK’s Information Commissioner. 3 Such complaints are not limited to the UK. In July 2020, the Irish High Court gave permission for a judicial review of the Data Protection Commissioner in respect of delays in dealing with complaints. It is to be noted that Ireland is home to many tech companies’ European headquarters, and thus these delays can impact more broadly upon EU citizens. However, equally, there are many examples of active engagement and investigation.

In terms of moving to cover new developments, the GDPR is not a prescriptive piece of legislation and, as such, its ‘vagueness by default’ is intended to ensure that the regulation maintains its relevance, allowing for application to new technologies, including FRT. Even more importantly, the GDPR holds some sway outside the EU as well, since any business dealing with the bloc has to adhere to the rules when managing Europeans’ data, even if those same rules do not apply in their own domestic jurisdiction. This is generally known as ‘The Brussels Effect’ [ 12 , 13 ]. In practice, where FRT are rolled out in the EU, this means that it is much easier to hold FRT operators accountable, as there is no need to navigate a complex web of regional laws, and the operators themselves are more consistent in their behaviour, unable to use the splintering of regulation to their advantage. In addition, companies will often roll out the same systems globally, meaning that those outside the EU may benefit from some read-over of standards. However, this is not to say that the systems will then be operated and managed in the same ways globally.

In terms of AI more specifically, this has become a focus for the EU and UK regulators and governments. The UK Information Commissioner’s Office (ICO) has recently published [ 14 ] guidance on AI auditing, supported by impact assessments. Although this guidance marks an important start towards specific guidance tailored to the compliance of AI systems, we are still lacking case studies and dedicated frameworks to address this problem in a standardised way [ 15 ]. Recently, the EU has engaged with the need to actively manage the ethics and legislation that sit around AI innovation. A 2019 press release by the European Data Protection Supervisor Wiewiórowski called out the accountability and transparency concerns of facial recognition, particularly around the input data for facial recognition systems, stating, “the deployment of this technology so far has been marked by obscurity. We basically do not know how data are used by those who collect it, who has access and to whom it is sent, how long do they keep it, how a profile is formed and who is responsible at the end for the automated decision-making.” [ 16 ]. As such, the European Commission began publishing a roadmap for dealing with AI. In April 2021, the European Commission released documentation on its approach to AI, which includes an aspiration to harmonise all legislation and bring in a specific Artificial Intelligence Act. FRT more specifically have yet to be dealt with in detail but, within the proposals for harmonisation, law enforcement systems are categorised as high risk. It is stated that AI systems used by law enforcement must ensure “accuracy, reliability and transparency… to avoid adverse impacts, retain public trust and ensure accountability and effective redress” [ 17 ]. The documentation draws out areas of greater concern, focusing on vulnerable people and those contexts where AI system failures will have greater consequences.
Examples include managing asylum seekers and ensuring individuals have a right to a fair trial. The importance of data quality and documentation is highlighted [ 17 ]. The Commission states that there must be oversight regarding:

“the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights…”

The place of the human in the system review is an important part of the process. In addition, the need for transparency is highlighted. However, what is not yet in place is a prescribed system for transparency and accountability. As the publications are currently pitched at a high level, drilling down into case examples will be necessary for delivery. There are some limitations to these publications, and the recent publications by the EU have been criticised for not bringing in a moratorium on biometric technologies such as FRT [ 18 ].

In an EU context, in addition to the GDPR, which dictates rules around managing personal data, privacy is further legislated for through the European Convention on Human Rights. As with the GDPR, this is enshrined in UK law as well as across all 27 EU Member States. The human rights legislation is potentially more holistic in terms of offering frameworks for weighing law enforcement needs against individual rights in the rollout considerations for FRT. It enshrines principles of equality and inclusion as well as privacy and rights to fair legal processes. The checks and balances of different and sometimes competing human rights are well established and tested through the courts. Under the terms of the law, individuals can bring legal cases, and, in the EU Member States (although not the UK), cases can progress to the European Court of Human Rights. However, there is not the same active regulatory framework sitting around the legislation which provides for quicker and cheaper routes to justice, and which can actively take action without the requirement for an individual to bring a case. Justice through the European courts is most normally expensive, uncertain, and takes years. In addition, the requirements for accountability and design documentation for human rights compliance are not explicitly enshrined in the law. In terms of transparency, aspects of accountability for policy more generally fall under freedom of information legislation, which is enacted at Member State level and differs very widely nation to nation in terms of public accountability requirements for administration. There are also certain law enforcement and national security exemptions from freedom of information requirements. Finally, it is important to note that the legislation does not bind private entities, which do not have the same accountability requirements.

In terms of actual FRT legal accountabilities, cases have been brought under both the GDPR and the Human Rights Act in respect of FRT. One such instance is the 2019 UK case of Bridges v. South Wales Police. Bridges, a civil rights campaigner, argued that the active FRT deployed by the police at public gatherings infringed on the right to respect for private life under the Human Rights Act 1998 and his privacy rights under the Data Protection Act 2018 (DPA 2018), the UK implementation of the GDPR. Relevant to this discussion, Bridges also claimed that, since the police failed to account for this infringement, its DPIA was not performed correctly [ 19 ]. After a lengthy litigation process, the court ruled in favour of Bridges, agreeing with the points above and additionally finding that the police had too broad a discretion regarding the use of FRT.

This example highlights the value of the GDPR (or similar legislative frameworks) and, in particular, the importance of the DPIA. Here, the impact assessment not only provided the basis for a large portion of the claimant’s argument, but it was also released to the public, making it easy for anyone with internet access to learn the details of the FRT data processing employed by the South Wales Police. 4 In addition, the case shows that the DPIA is not a checkbox exercise but, instead, requires that the FRT operator possesses substantial knowledge about the inner workings of the algorithm and its wider repercussions.

The lawsuit also draws attention to the holistic understanding of privacy under the GDPR. In a country with less-developed data protection laws, it may be sufficient for an FRT operator to encrypt and anonymise faceprints, and, regardless of how they are collected, this will constitute sufficient protection; the GDPR goes to great lengths to ensure that this is never the case. Of particular importance are the concepts of PbD and privacy by default, as mentioned above and defined in Article 25 of the regulation. In this example, the South Wales Police ensured privacy by design, meaning that its facial recognition algorithms were built around data protection. That, however, was not enough, since the FRT were then deployed indiscriminately, which violated privacy by default—the amount of personal data collected was disproportionate with respect to the intended goal of identifying individuals on watchlists. As such, the police use of FRT for these processes had to be stopped. This “one strike and you’re out” approach to personal data collection goes a long way towards ensuring accountability in facial recognition, since it makes it much harder for the FRT operator to get away with negligent data processing for which there can be significant consequences. However, while the Human Rights legislation was deployed as part of the case, the lack of a published Human Rights Impact Assessment does diminish accountability in this regard. It is to be noted that a similar requirement to the provision of a DPIA, in regards to Human Rights Impact Assessments and human rights’ by design and default, could better improve citizen rights more generally.

In spite of the data protection legislation, it is important to highlight that authorities and corporate entities may fall short in their duties, which is why a proactive regulator is a significant attribute of the GDPR regime. In August 2018, upon the request of the London Mayor, the UK ICO started to investigate whether a private property company (Kings Cross Estate Services), which managed the area around Kings Cross, a critical London transport hub, was using FRT in its CCTV. It emerged that for a number of years this company had been using FRT for ‘public safety’ reasons but had not properly disclosed or made people aware that the scheme was in operation. In addition, as part of this investigation, it transpired that not only had it been using FRT to capture the images of all those people passing through the transport hub, but it had been working with the Metropolitan Police in London to check and match for certain people entering the area. A data sharing agreement was in place with the intention of providing for the potential identification of wanted individuals, known offenders, and missing persons. Over a 2-year period from 2016 to 2018, the Police passed images of seven people to the property entity. These people had been either arrested and charged, reprimanded, cautioned, or given a formal warning for offences. However, it was clear that the Police had failed to disclose that the scheme existed [ 20 ]. That said, more generally the ICO has found that it is acceptable for the Police to use FRT and that there is a great deal of public support for its use, but that nevertheless it must be done in a carefully targeted way, taking into account individuals’ Article 8 human rights to privacy [ 21 ].

Reflecting on the position of the Regulators and their investigatory powers, one of the most active national data protection bodies in the EU is the Swedish Authority for Privacy Protection (IMY), formerly known as the Swedish Data Protection Authority. In recent years, it has been involved in two FRT cases of note: a school using FRT to monitor class attendance [ 22 ], and the police using facial recognition software [ 23 ].

The first case, while not related to law enforcement, showcases how a data protection authority’s independence and legal expertise can ensure accountability where an individual or a civil organisation would not have been able to do so for various reasons. In this instance, the IMY “became aware through information in the media” that the school was trialling FRT on its students and decided to intervene. In the ensuing process, the authority found that the school’s use of facial recognition did not satisfy proportionality and necessity, which also meant that the DPIA had been conducted incorrectly. Most importantly, the IMY ruled that the consent given by the children’s parents to the school was invalid, as the students were in a position of dependence (school attendance is compulsory). The school’s board was subsequently fined approximately €20,000.

There are several important aspects to this example. First, note that the IMY intervened in the case of its own volition, without receiving any complaints or being asked to take action. This autonomy is important, as individuals may not always be able or willing to alert the authorities when their data are being collected and/or processed unlawfully. The reason why none of the parents came forward could be that they did not possess enough legal expertise to notice the problems in the FRT deployment, or did not feel able to challenge the school given their own and their children’s relationship with it. The IMY had the independence, the knowledge, and the position of power needed to hold the school accountable. Finally, note the “one strike and you’re out” approach mentioned above: while the school made reasonable efforts to comply with the legal requirements (the faceprints were recorded on a hard drive connected to an offline computer locked away in a cupboard, and a DPIA was conducted), it failed to ensure complete compliance, and so was prosecuted.

The second example concerns the use of FRT by the Swedish police. The IMY found that the police had failed to conduct a DPIA and had been negligent enough to let unauthorised employees access the software, after which it imposed a fine of €250,000. Here, law enforcement was ignorant of the negative consequences of FRT use and did not take appropriate, active PbD steps; as a result, it was held accountable for its failings.

Exact data on how widespread FRT is across the EU are difficult to find, but the technology is not yet ubiquitous. In 2019, 12 national police forces had already deployed facial recognition, with 7 more planning or testing deployment at that date; deployment has been deemed much slower than in the USA [ 24 ]. This may in part be because FRT in Europe is surrounded by much more suitable, uniform legislation, greater transparency, and active data protection authorities, all of which will play a large role in making Europe a better model for facial recognition accountability. However, it is important to note that much FRT development has happened outside the boundaries of the EU and UK. As such, while the EU may have set a high bar in terms of requiring PbD, much FRT application happens within a USA context.

The USA ethical and legislative landscape for FRT in a law enforcement context

Having considered the European regulatory framework, which is strongly positioned to ensure some form of ethical consideration before the deployment of FRT, we now turn to a much more fragmented legislative territory: the United States of America (USA). Within the USA, FRT are heavily used by law enforcement, affecting over 117 million adults [ 25 ], more than a third of the country’s total population. FRT rollouts are widespread, yet an average citizen has very limited means of holding operators accountable should the technology be misused. The USA was an early adopter of freedom of information laws, passing the federal Freedom of Information Act in 1966, with individual state laws following. This legislation requires state authorities to answer for their policies and actions on receipt of a freedom of information request; it does not reach private companies, which are not held accountable in the same way, and it contains certain exemptions for law enforcement and national security purposes. There are some sector-specific privacy laws, covering, for instance, children online, but no overarching data protection law akin to the GDPR. These federal laws are enforced by the Federal Trade Commission, which has an extremely broad mandate of protecting consumers against deceptive practices; it is not comparable, however, to the data protection authorities in European countries [ 26 ]. Such a massive rollout of FRT without a regulator or ombudsman to investigate is a cause for concern, as it leaves individual legal action as the only means of calling out wrongdoing. In addition, there are very considerable state-by-state differences, and a notable lack of requirements for transparency, or even calls for it.

This reliance on individual action originates from the USA lacking any federal (or state) data protection authority. This means there is no body that actively represents and protects citizens’ interests while possessing the legal and regulatory powers of the state. Moreover, as we have seen, data protection authorities can intervene on behalf of the citizen and enforce decisions without initiating court proceedings; in the USA, this is not an option. Any conflict regarding FRT and related personal data has to be heard in court, necessitating lengthy and costly court battles (which is why citizen representation is so important). As a result, individuals often have to seek legal support from non-profit organisations; those who fail to secure it may not be able to hold FRT operators or providers accountable at all.

The second issue centres on state-by-state differences; it arises from the absence of general federal privacy legislation, with state law often providing only very basic rights for holding FRT operators accountable. The extent of privacy law in most states is limited to notifying individuals if their data have been stolen in a security breach [ 27 ]: hardly a consolation for someone who has been affected by unintentionally biased or malicious use of FRT. Relevant to our discussion, at the time of writing there is only one state (Illinois) with legislation allowing private individuals to sue and recover damages for improper usage of and/or access to their biometric data, including faceprints [ 26 ]. However, even if you are lucky enough to live in Illinois, holding a malicious FRT provider or operator, private or public, accountable is likely to be difficult. Accountability relies on transparency: if, for instance, an individual would like to sue an FRT provider on the basis of a privacy violation, they will need some knowledge of how their data are processed. This is where the USA falls short; not only are law enforcement and federal agencies notoriously secretive, but they often do not understand how their own FRT works in the first place. Without PbD and the requirement for a DPIA, there is less transparency around FRT processes, it is harder to know exactly how processing is occurring, and it is harder to hold operators to account. In addition, operators may often not have duly considered and weighed the implications of FRT usage.

In a USA context, the law on privacy and the use of FRT for localised law enforcement operates very much at a state-by-state level. Within this context, California is often held to be the state with the strongest privacy laws; in 2020, it strengthened its existing privacy laws with the California Privacy Rights Act (CPRA), which established the California Privacy Protection Agency and extended residents’ rights in terms of how businesses could collect and use their data. Notably, however, it did not touch on any privacy powers in respect of law enforcement, and, in tandem, the state tried to introduce a Facial Recognition Bill to govern the use of FRT for law enforcement purposes. It should be noted that some cities in California (e.g., Berkeley and San Francisco) have banned FRT usage. Interestingly, the Bill received lobbying support from Microsoft, but was fiercely campaigned against by civil rights groups, and it was not passed in June 2020. This period marked a growing sense of unease with the ethics around FRT. In the same month, IBM stated that it would cease all export sales of FRT. In its statement, it described FRT as akin to other innovations, such as nuclear arms, on which the USA has had to seize a lead for the protection of its citizens [ 28 ]. It also highlighted the flaws in the technology, for example its failure to deal with Black and Asian faces with sufficient accuracy. At the same time, another big tech entity, Amazon, stated that it would cease to sell FRT to the police for one year, to give Congress time to put in place new regulations governing its ethical usage. Microsoft followed suit, stating, “we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology” [ 29 ].
Each of these entities clearly became concerned about the potential misuse of the technology by law enforcement agencies, which IBM said had caused concern since the revelations by Edward Snowden [ 29 ]. Clearly, there were valid ethical concerns about the development of FRT; however, when beneficial influences leave the marketplace, the field may open up to less ethical developers. Each of these entities has a process for reviewing the ethics of technology rollouts; IBM, for example, has an AI Ethics Board led by a Chief Privacy Officer. It is difficult to know how ethical or effective these private processes are where there is so little transparency, although these large global corporations clearly worry about their image. This was evidenced in the case of Google, which received international press attention and criticism when it fired Timnit Gebru, co-lead of its Ethical AI research team, for refusing to edit out certain statements from a research article on AI [ 30 ]; as a result of the controversy, it has since had to change its publication approach.

The concerns of private enterprise, and the relationship between law enforcement and national security, have been recognised at a national level. For example, in the context of the Federal Bureau of Investigation (FBI), there have been hearings in Washington on the acceptable use of FRT. 5 At these hearings, it was stated that the “FBI has limited information on the accuracy of its face recognition technology capabilities”, and there were calls for greater accountability and transparency in the use of the technologies, although definitive outcomes are still awaited.

A recent illustration of the opacity of the USA system is the case of Willie Allen Lynch, a black man convicted in 2016 by a Florida court of selling cocaine; the police department decided to arrest him based, among other factors, on a facial recognition match. In an attempt to appeal the decision, Lynch argued that the facial recognition system had made an erroneous match (a reasonable claim, given FRT’s known inaccuracy with black faceprints [ 9 ]). Proving this, however, required the police to turn over the photo in question and the list of possible faceprint matches offered by the system, which they refused to do. Strikingly, the detectives involved in the case admitted that, while the FRT rated Lynch’s faceprint as the closest match, they did not actually know how the rating system worked or even which scale the rating was assigned on. Ultimately, the court ruled in favour of the police department, and Lynch was never given access to the photo and potential matches [ 31 ].

On a federal level, the issues of a lack of transparency and accountability persist; an attempt by the American Civil Liberties Union (ACLU) to gather information about the use of FRT by the Department of Justice, the FBI and the Drug Enforcement Administration failed, since none of the agencies responded to a Freedom of Information Act request. Undeterred, the ACLU pursued legal action, with results yet to be seen—there has been no information about the case since October 2019, when the initial complaint was filed [ 32 ]. In addition, the ACLU has called out the Government’s and private enterprises’ surveillance operations at airports and customs boundaries across the USA [ 33 ].

In regard to private companies, as previously noted, these are not caught by freedom of information laws and can often afford legal firepower beyond the reach of even the wealthiest individuals. Clearview AI, one of the leading providers of FRT to USA law enforcement agencies, supplies the technology to more than 600 police departments across the USA [ 34 ]; the ACLU filed a lawsuit against the company in the state of Illinois, arguing that it collected faceprints without the consent required by the state’s Biometric Information Privacy Act [ 35 ]. Filed in May 2020, the case remains active at the time of writing, accumulating a seemingly endless stream of motions, memoranda, and briefs from both sides. The amount and complexity of the legal paperwork on a case that has not even been heard yet is illustrative of how fiercely the company opposes any effort to hold it accountable, and it is difficult for ordinary citizens to follow such a lawsuit through on their own, although crowdsourcing and group actions have become a reality for legal cases, as seen in the actions brought by the Austrian privacy activist Max Schrems in the EU. In addition, a class action has been brought against the department store Macy’s in Illinois for its use of FRT [ 36 ], so such legal action may become more common. Nevertheless, a mature democratic nation should have other solutions in place.

This effective absence of the threat of litigation removes the proverbial sword hanging above FRT providers’ heads, allowing them a free-for-all feast on user information. For instance, Clearview AI openly discloses that it scrapes Facebook user profiles for images to build up its reference database [ 34 ], even though this is explicitly prohibited by the website’s terms of service. IBM, in a similar fashion, collected individuals’ Flickr photos without consent; the affected users were not given a feasible way of deleting their information from the database [ 37 ]. Such a complete absence of data protection and privacy rights is hugely problematic.

Conclusion and recommendations

FRT is no longer a topic of science fiction or a concern for the future. It is here now, impacting people’s lives on a daily basis, from wrongful arrests to privacy invasions and human rights infringements. The widespread adoption of this technology without appropriate considerations could have catastrophic outcomes, and ultimately may jeopardise its development if some jurisdictions decide to ban the use of the technology for an indefinite amount of time [ 38 ]. However, critical in the success of FRT is the transparency and accountability in each stage of its development and usage and the ability to audit and challenge as required. The idea of power is particularly linked to the intended, and actual, outcomes of FRT, which should not be dissociated from discussions around accountability.

The discussion in this article makes the case that at all stages of the FRT process, across all aspects of design and use, including specific contexts of deployment, there is a requirement to document and account for usage, ensuring mechanisms for transparency and challenge. The GDPR provides a good regulatory starting point for addressing some of these concerns. However, the ethical considerations of this technology go far beyond issues of privacy and transparency alone; they require broader consideration of equality, diversity, and inclusion, as well as human rights issues more generally. As such, other forms of assessment, such as Human Rights Impact Assessments, should be part of the development and rollout of FRT in addition to DPIAs: a DPIA alone is insufficient. These assessments should automatically be required to be put into the public domain. In addition, the transparency and accountability requirements must be enacted equally upon both public and private enterprises. In conjunction with these steps, regulators are needed globally with powers to actively investigate each aspect of the development and deployment of FRT in particular contexts, and with powers to step in, stop, and fine inappropriate FRT development and deployment. There should also be routine audit processes for FRT deployment, just as there are for financial oversight. The societal impacts of FRT misconduct are not to be underestimated.

We conclude this paper by recommending ten critical ethical questions that need to be considered, researched, and answered in granular detail for law enforcement purposes, and which also read over to other AI development. It is suggested that these need to be dealt with and regulated for. The questions include:

  • How can this accountability be explicitly exercised, explained and audited for, for a range of stakeholder needs?

We are at a tipping point in the relationships and power structures in place between citizens and law enforcers. We cannot wait to step in and act, and in fact, there are many potential solutions to better ensure ethical FRT deployment. However, this is currently an ethical emergency requiring urgent global attention.

This work received partial funding from the UCL AI Centre.

Declarations

The authors confirm there are no conflicts of interest.

1 For example, see McGregor, L. (2018) ‘Accountability for Governance Choices in Artificial Intelligence: Afterword to Eyal Benvenisti’s Foreword’, European Journal of International Law, 29(4), pp. 1079–1085; Shah, H. (2018) ‘Algorithmic accountability’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), p. 20170362. https://doi.org/10.1098/rsta.2017.0362; Buhmann, A., Paßmann, J. and Fieseler, C. (2020) ‘Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse’, Journal of Business Ethics, 163(2), pp. 265–280. https://doi.org/10.1007/s10551-019-04226-4.

2 For the formal definition of the DPIA, see GDPR Article 35.

3 See https://www.openrightsgroup.org/app/uploads/2020/08/Letter-for-MPs-Final-sigs-1.pdf .

4 This particular assessment is available here: https://afr.south-wales.police.uk/wp-content/uploads/2019/10/DPIA-V5.4-Live.pdf .

5 See for example the 2019 report at https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use .

All authors contributed equally to the writing, research, and ideas within this article. The initial concept was conceived by Denise Almeida with Konstantin Shmarko initiating the research work.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The Ethics of Facial Recognition Technology

Forthcoming in The Oxford Handbook of Digital Ethics ed. Carissa Véliz.

32 Pages Posted: 7 Feb 2021

Evan Selinger

Rochester Institute of Technology - Department of Philosophy

Brenda Leong

Luminos.Law

Date Written: January 7, 2021

This is a comprehensive presentation of leading ethical issues in debates about facial recognition technology. After defining basic terms (facial detection, facial characterization, facial verification, and facial identification), the following issues are discussed: standards, measures, and disproportionately distributed harms; erosions of trust; ethical harms associated with perfect facial surveillance; alienation, dehumanization, and loss of control; and the slippery slope debate.

Keywords: facial recognition technology, privacy, civil liberties, ethics, slippery slope, trust, standards, disproportionate harms, alienation, loss of control


Facial Recognition Technology: Here Are The Important Pros And Cons


When you post a photo on Facebook and the platform automatically tags the people in the image, you might not give much thought to the technology behind the convenience. However, when you discover that facial recognition technology could track you without your permission as you walk down a street in London, it might make you question the invasion of your privacy. Like any other new technology, facial recognition brings positives and negatives with it. Since it’s here to stay and expanding, it’s good to be aware of the pros and cons of facial recognition.

What is facial recognition, and how does it work?

Facial recognition is a biometric technology that uses distinguishable facial features to identify a person. Allied Market Research expects the facial recognition market to grow to $9.6 billion by 2022. Today, it’s used in a variety of ways: from allowing you to unlock your phone, go through security at the airport, and purchase products at stores, to, in the case of entertainer and musician Taylor Swift, identifying whether her known stalkers came through the gate at her Rose Bowl concert in May 2018.

Today, we are inundated with data of all kinds, but the plethora of photo and video data available provides the dataset required to make facial recognition technology work. Facial recognition systems analyze visual data from the millions of images and videos created by high-quality closed-circuit television (CCTV) cameras installed in our cities for security, as well as by smartphones, social media, and other online activity. Machine learning and artificial intelligence capabilities in the software map distinguishable facial features mathematically, look for patterns in the visual data, and compare new images and videos against the data stored in facial recognition databases to determine identity.
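The matching step in that pipeline can be sketched in a few lines of code. A trained model (not shown here) reduces each face image to a numeric feature vector, and identification then becomes a nearest-neighbour search over stored vectors. The toy four-dimensional embeddings, names, and 0.8 similarity threshold below are illustrative assumptions, not values from any real system.

```python
import math

def cosine_similarity(a, b):
    # Compare two face embeddings; values near 1.0 mean the
    # vectors point in almost the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    # Return the best-matching identity, or None when no stored
    # embedding is similar enough to the probe image's embedding.
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional embeddings standing in for real model output.
database = {
    "alice": [0.9, 0.1, 0.3, 0.5],
    "bob": [0.2, 0.8, 0.6, 0.1],
}
print(identify([0.88, 0.12, 0.28, 0.52], database))  # prints alice
```

Real systems use embeddings with hundreds of dimensions produced by deep networks, and the choice of threshold directly trades false matches against false non-matches, which is where many of the accuracy concerns discussed below arise.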

Pros of facial recognition

One of the major advantages of facial recognition technology is safety and security. Law enforcement agencies use the technology to uncover criminals or to find missing children or seniors. In New York, police were able to apprehend an accused rapist using facial recognition technology within 24 hours of an incident in which he threatened a woman with rape at knifepoint. In cities where police don’t have time to fight petty crime, business owners are installing facial recognition systems to watch people and identify subjects of interest when they come into their stores.

Airports are increasingly adding facial recognition technology to security checkpoints; the U.S. Department of Homeland Security predicts that it will be used on 97 percent of travelers by 2023. When people know they are being watched, they are less likely to commit crimes, so the possibility of facial recognition technology being used could deter crime.


Since facial recognition requires no contact, unlike fingerprinting or other security measures, it offers a quick, automatic, and seamless verification experience. There is no key or I.D. card that can be lost or stolen.

Facial recognition can add convenience. In addition to helping you tag photos in Facebook or your cloud storage via Apple and Google, you will start to be able to check out at stores without pulling out money or credit cards: your face will be scanned. At the A.I. Bar, facial recognition technology is used to add patrons who approach the bar to a running queue so they get served their drinks more efficiently.

Although possible, it’s hard to fool facial recognition technology, so it can also help prevent fraud.

Cons of facial recognition

The biggest drawback of facial recognition technology, in most people’s opinion, is the threat to individual privacy. In fact, several cities have banned or are considering banning real-time facial recognition surveillance by law enforcement, including San Francisco, Cambridge, Massachusetts, and more. These municipalities determined that the risks of using the technology outweighed the benefits. Police can still use footage from personally owned devices such as Nest cameras to find criminals; the bans merely stop government entities from using live facial recognition software.

While London’s King’s Cross has used facial recognition, London is also at the forefront of democratic societies in its testing of the technology. In test events, the city hopes to determine the accuracy of the systems while grappling with issues such as how to deal with individuals who cover up to hide their identity from cameras. Additionally, democratic societies must define the legal basis for live facial recognition of the general population, and decide when blanket use of the technology is justified.

The technology isn’t as effective at identifying people of color and women as it is at identifying white males. One reason for this is that the datasets the algorithms are trained on are not as robust for people of color and women. Until this is rectified, there are concerns about the ramifications of misidentifying people with the technology.
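The accuracy gap described above is typically quantified by splitting evaluation results by demographic group and comparing error rates. The sketch below uses a tiny hand-made log of hypothetical predictions; the group labels and numbers are invented for illustration, and only the bookkeeping reflects how such audits are done.

```python
from collections import defaultdict

def error_rate_by_group(results):
    # results: list of (group, predicted_identity, true_identity).
    # Returns the fraction of misidentifications per group.
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation log: the system misidentifies members
# of group "B" twice as often as members of group "A".
results = [
    ("A", "id1", "id1"), ("A", "id2", "id2"), ("A", "id3", "id3"),
    ("A", "id4", "id9"),
    ("B", "id5", "id5"), ("B", "id6", "id7"),
    ("B", "id8", "id2"), ("B", "id3", "id3"),
]
print(error_rate_by_group(results))  # {'A': 0.25, 'B': 0.5}
```

Published audits of commercial systems report per-group comparisons of this kind, which is what makes the disparities visible in the first place.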

In addition, there are unresolved issues that can throw off the technology, such as when a person changes appearance or the camera angle isn’t quite right (although developers are working on being able to identify a person by their earlobe alone). Accuracy is improving dramatically: according to independent tests by the U.S. National Institute of Standards and Technology (NIST), facial recognition systems became 20 times better at finding a match in a database over a period covering 2014 to 2018.

Another potential downside is the storage of sensitive personal data and the challenges that come with it. Just last week came news that a database containing facial scans used by banks, police forces, and defense firms was breached.

In order to benefit from the positive aspects of facial recognition, our society is going to have to work through some significant challenges to our privacy and civil liberties. Will individuals accept the invasion of their privacy as a fair price for being more secure and for the conveniences facial recognition provides?

Bernard Marr


NEWS FEATURE · 18 November 2020

The ethical questions that haunt facial-recognition research

  • Richard Van Noorden


A collage of images from the MegaFace data set, which scraped online photos; images are obscured to protect people’s privacy. Credit: Adam Harvey/megapixels.cc, based on the MegaFace data set by Ira Kemelmacher-Shlizerman et al., itself based on the Yahoo Flickr Creative Commons 100 Million data set, and licensed under Creative Commons Attribution (CC BY) licences.

In September 2019, four researchers wrote to the publisher Wiley to “respectfully ask” that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity [1].


Nature 587 , 354-358 (2020)

doi: https://doi.org/10.1038/d41586-020-03187-3

Wang, C., Zhang, Q., Liu, W., Liu, Y. & Miao, L. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9 , e1278 (2019).


Stewart, R., Andriluka, M. & Ng, A. Y. in Proc. 2016 IEEE Conf. on Computer Vision and Pattern Recognition 2325–2333 (IEEE, 2016).

Ristani, E., Solera, F., Zou, R. S., Cucchiara, R. & Tomasi, C. Preprint at https://arxiv.org/abs/1609.01775 (2016).

Nech, A. & Kemelmacher-Shlizerman, I. in Proc. 2017 IEEE Conf. on Computer Vision and Pattern Recognition 3406–3415 (IEEE, 2017).

Guo, Y., Zhang, L., Hu, Y., He, X. & Gao, J. in Computer Vision – ECCV 2016 (eds Leibe, B., Matas, J., Sebe, N. & Welling, M.) https://doi.org/10.1007/978-3-319-46487-9_6 (Springer, 2016).


Jasserand, C. in Data Protection and Privacy: The Internet of Bodies (eds Leenes, R., van Brakel, R., Gutwirth, S. & de Hert, P.) Ch. 7 (Hart, 2018).

Moreau, Y. Nature 576 , 36–38 (2019).


Zhang, D. et al. Int. J. Legal Med . https://doi.org/10.1007/s00414-019-02049-6 (2019).

Pan, X. et al. Int. J. Legal Med. 134 , 2079 (2020).

Wu, X. & Zhang, X. Preprint at https://arxiv.org/abs/1611.04135 (2016).

Hashemi, M. & Hall, M. J. Big Data 7 , 2 (2020).


Supplementary Information

  • Spreadsheet of Nature survey summary results

Oxford Handbook of Digital Ethics


Chapter 30: The Ethics of Facial Recognition Technology

Evan Selinger, Professor of Philosophy, Rochester Institute of Technology

Brenda Leong, Partner, BNH.AI

Published: 14 February 2022

Those who use facial recognition technology potentially wield immense power. That power is the subject of intense debate—a debate that has legal implications for privacy and civil liberties, political consequences for democracy, and a range of underlying ethical issues. This chapter is a comprehensive presentation of the leading ethical issues in the debates about facial recognition technology. After defining basic terms (facial detection, facial characterization, facial verification, and facial identification), the following issues are discussed: standards, measures, and disproportionately distributed harms; erosions of trust; ethical harms associated with perfect facial surveillance; alienation, dehumanization, and loss of control; and the slippery slope debate.



These wrongly arrested Black men say a California bill would let police misuse face recognition


Sacramento County Sheriff's deputies in Sacramento on Feb. 28, 2022. Photo by Rahul Lal, Sipa USA via Reuters

Three men falsely arrested based on face recognition technology have joined the fight against a California bill that aims to place guardrails around police use of the technology. They say it will still allow abuses and misguided arrests.


In 2019 and 2020, three Black men were accused of, and jailed for, crimes they didn’t commit after police used face recognition to falsely identify them. Their wrongful arrest lawsuits are still pending, but their cases bring to light how AI-enabled tools can lead to civil rights violations and lasting consequences for the families of the accused.

Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole reason for a search or arrest. Instead it would require corroborating indicators.

The problem, critics say, is that a possible face recognition “match” is not evidence — and that it can lead investigations astray even if police seek corroborating evidence.

After a contentious hearing today, the Senate Public Safety Committee unanimously voted to advance Assembly Bill 1814. It had cleared the Assembly last month without opposition.

Such a bill “would not have stopped the police from falsely arresting me in front of my wife and daughters,” Robert Williams told CalMatters. In 2020, Detroit police accused Williams of stealing watches worth thousands of dollars, the first known instance of false arrest involving face recognition in the United States, after face recognition matched a surveillance video to a photo of Williams in a state database. Investigators put his photo in a “six-pack lineup” with five others, and he was chosen by a security guard who had seen a surveillance image but not the theft itself.

“In my case, as in others, the police did exactly what AB 1814 would require them to do, but it didn’t help,” said Williams, who is Black. “Once the facial recognition software told them I was the suspect, it poisoned the investigation. This technology is racially biased and unreliable and should be prohibited.

“I implore California lawmakers to not settle for half measures that won’t actually protect people like me.”

But the bill’s author, Democratic Assemblymember Phil Ting of San Francisco, maintained that because it bans face recognition technology from being the sole criterion for a warrant, search or arrest, it would prevent wrongful apprehensions such as those in Detroit. And he stressed that it would improve the status quo for Californians.

“Law enforcement agencies in the state do not need any permission from anyone to do anything on facial recognition right now,” Ting said. “Nothing in any state law provides guidance in that particular area. 

“This actually takes a good first step to really provide some security, to provide some civil rights protections, and to ensure that we take the first step to regulate facial recognition technology.”

The first face recognition searches in the United States took place more than two decades ago. The process begins with a photo of a suspect, typically taken from security camera footage. Face recognition on your iPhone is trained to match your photo, but the kind used by law enforcement agencies searches databases of mug shots or driver’s license photos that can contain millions of images, and it can fail in numerous ways. Tests by researchers have shown that the technology is less accurate when attempting to identify people with dark skin, people who are Asian or Native American, or people who identify as transgender, as well as when a probe image of a suspect is low quality or when the image in a database is outdated.

After a computer assembles a list of possible matches from a database of images, police pick a suspect from an array of candidates, then show that photo to an eyewitness. Although people tend to think they’re good at it, eyewitness testimony is a leading cause of wrongful convictions in the United States.
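The candidate-ranking step described above can be sketched in a few lines. This is a toy illustration only, not any vendor’s actual system: the 128-dimensional embeddings, the random “database,” and the candidate count are all assumptions made for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Compare two face-embedding vectors; values near 1.0 mean "very similar."
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe, database, top_k=6):
    """Return the top_k database entries most similar to the probe embedding.

    `database` maps a person ID to a precomputed face embedding.
    """
    scored = [(pid, cosine_similarity(probe, emb)) for pid, emb in database.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy example: random vectors stand in for embeddings from a real face model.
rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(rank_candidates(probe, db, top_k=3))
```

Note that a system built this way always returns its best-scoring candidates, even when the person in the probe photo is not in the database at all — one reason a “possible match” is a lead, not evidence.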

Because prosecutors use face recognition to identify possible suspects but ultimately rely on eyewitness testimony, the technology can play a role in a criminal investigation but remain hidden from the accused and defense attorneys.

Directives not to treat a possible match by a face recognition system as the sole basis for an arrest sometimes don’t make a difference — they failed to do so, for instance, in the case of Alonzo Sawyer, a man who was falsely arrested near Baltimore and spent nine days in jail.

Nijeer Parks, who spent nearly a year fighting allegations that he stole items from a hotel gift shop in New Jersey and then nearly hit a police officer with a stolen vehicle, came out in opposition to the California bill in a video posted on Instagram last week. The police “are not going to do their job if the AI is saying ‘It’s him’ already. That’s what happened to me.”

“I got lucky,” he told CalMatters in a phone interview about a receipt that exonerated him and kept him out of prison. “I don’t want to see anybody sitting in jail for something they didn’t do.”

Testifying at today’s hearing was the attorney for Michael Oliver, a third Black man who was wrongly accused of assaulting a high school teacher in Detroit in 2020. “The warrant request in Michael’s case was based entirely on a supposed (face recognition technology) match and a photo lineup,” said attorney David Robinson. “Other than a photo lineup, the detective did no other investigation. So it’s easy to say that it’s the officer’s fault, that he did a poor job or no investigation. But he relied on (face recognition), believing it must be right. That’s the automation bias that has been referenced in these sessions.

“So despite a warning to the officer — ‘investigative lead only’ — that prescription was trumped by the mesmerizing effect of this machine that the officer saw as faster and smarter than he, and it must be right.” 

Supporters of Ting’s bill include the California Faculty Association and the League of California Cities. The California Police Chiefs Association argues that face recognition can reduce criminal activity and provide police with actionable leads, and that such technology will be important as California looks to host international events such as the 2026 World Cup and the 2028 Summer Olympics in Los Angeles.

“Across the country, real-world examples of law enforcement using (facial recognition technology) to solve major crimes showcases just how important this new technology can be towards protecting our communities,” the California Police Chiefs Association has argued. It cited cases in which it says face recognition played a role in identifying the guilty, including a newspaper headquarters shooting in Maryland and a rape in New York.

Face recognition alone should never lead to false arrests, Jake Parker with the Security Industry Association told members of the California Assembly a few weeks ago. That, he said, is why AB 1814 requires investigative leads to be corroborated with evidence rather than resting on a possible face recognition match alone.

“There’s a clear need to bolster public trust that this technology is being leveraged accurately, lawfully, and in an effective way that’s also limited and non-discriminatory in a way that benefits our communities,” he said. “So we believe AB 1814 will help bolster this trust and for that reason we urge you to support this bill in its current form.”

But more than 50 advocacy organizations — including the ACLU, Access Reproductive Justice and the Electronic Frontier Foundation — signed a letter opposing the bill last week. They called face recognition unreliable, a proven threat to Black men, and a potential threat to protestors, people seeking abortions, and immigrant and LGBTQ communities.

“By allowing police to scan and identify people without limitation, AB 1814 will also increase unnecessary police interactions that too often have the potential to escalate into fatal encounters. This will remain true regardless of how accurate face recognition technology becomes,” the organizations said in a letter. “There is no way for people to find out if facial recognition is used against them and no mechanism to make sure the police comply with the law.”

Ting also authored a 2019 bill that initially placed a permanent ban on police use of body camera footage with face recognition. That was amended to a temporary ban, which ended in January 2023.

He told CalMatters he’s uncomfortable with the fact that California currently has no limits on how law enforcement agencies use face recognition. 

He said in a statement that his bill “simply requires officers to have additional evidence before they can proceed with a search, arrest, or affidavit for a warrant. I believe having a precautionary step can help protect people’s privacy and due process rights, while still allowing local governments to go further and pursue their own facial recognition bans.”

Ting’s city of San Francisco became the first major city in the nation to ban face recognition in 2019, but an analysis by City Attorney David Chiu found that the city’s passage of Proposition E in March allows police to perform face recognition searches on imagery captured by cameras and drones. The Washington Post last month reported that San Francisco police get around the restrictions by requesting that law enforcement in neighboring cities conduct the search for them.

Recalled San Francisco district attorney Chesa Boudin says there have almost certainly been false arrests associated with the use of face recognition in California, but they would remain unknown to the public unless prosecutors filed charges and the accused later went to trial or pursued a civil lawsuit seeking damages. Often such cases would be settled out of court.

Boudin, who directs UC Berkeley’s Criminal Law & Justice Center, initially reacted negatively to the bill, but later reached out to tell CalMatters that he had decided it was “an important first step towards an approach that allows the use of these new and powerful technologies while creating guardrails and standards.”

Lawmakers have until the end of the legislative session in August to decide whether to pass AB 1814.

Khari Johnson

Khari Johnson is part of the economy team and is CalMatters’ first tech reporter. He has covered artificial intelligence since 2016. Khari previously worked at WIRED, VentureBeat, and Imperial Beach...

That Time a UT Professor and AI Pioneer Wound Up on the Unabomber’s List

Professor Woody Bledsoe, the early days of artificial intelligence, and the birth of face-recognition technology

One day in mid-April 1996, Virginia Bledsoe’s phone rang. When she answered, there was an FBI agent on the other end of the line. It was a heads-up: be careful about suspicious packages. The name of her husband, Woody Bledsoe, who had died less than a year earlier, had been found in the papers of the recently captured Ted Kaczynski, better known as the Unabomber.

Why? Bledsoe was a leading researcher in the field of artificial intelligence.

The “Un” in Unabomber stood for “university,” and Bledsoe had been a professor at UT, working on artificial intelligence and specifically face recognition, since 1966. (The “a” stood for airline.)

Bledsoe had died in October 1995 at 73 of complications from Lou Gehrig’s disease. Besides being an AI pioneer, he was also a mathematician, Mormon bishop, Boy Scout leader and tennis enthusiast, competing late into his life. A captain in the Army Corps of Engineers, Bledsoe was awarded a Bronze Star for helping find a way for one of Gen. George Patton’s divisions to cross the Rhine during World War II.

Both Bledsoe and Kaczynski had ties to the University of California, Berkeley, but their time there did not overlap: Bledsoe received his Ph.D. from UC, Berkeley in 1953 and taught there for a few years, while Kaczynski was a doctoral student there from 1967 to 1969, when he abruptly resigned and effectively became a hermit, traveling occasionally but mostly holed up in a tiny cabin in Montana.

In 1960, Bledsoe left UC, Berkeley to work for Sandia National Laboratory in New Mexico, a nuclear weapons research facility. He left Sandia to cofound the AI lab Panoramic Research, Inc. in Palo Alto, Calif., whose anchor clients were the Department of Defense and various U.S. intelligence agencies. Bledsoe described that moment as he addressed the American Association for Artificial Intelligence as its president in 1985:

“Twenty-five years ago I had a dream, a daydream if you will. A dream shared with many of you. I dreamed of a special kind of computer, which had eyes and ears and arms and legs, in addition to its ‘brain.’ When I awoke from this daydream, I found that we didn’t have these things, but we did have some remarkable computers, even then, so I decided then and there to quit my job and set about spending the rest of my life helping bring this dream to reality.”

Bledsoe wrote of his time as president of PRI, “I was spending more and more of my time administering, going to Washington, Albuquerque, or wherever, and less time with the thing I loved, namely science. Also, I had a latent desire to get back to university life, so I accepted an offer as Professor of Mathematics at The University of Texas at Austin.” He joined UT’s math faculty in 1966, eventually chairing the department.

In 1981, Japan announced an ambitious “5th-generation” computer initiative to build machines that understood human language and could reason and learn. In response to Japan’s ascendancy in computing generally, in 1983, 20 U.S. technology companies formed a research consortium named the Microelectronics and Computer Technology Corporation, better known as MCC. The Austin-based consortium hired retired admiral and veteran intelligence officer Bob Inman — still an LBJ School professor today at age 93 — to lead the project, and in early 1984 Bledsoe accepted Inman’s invitation to become MCC’s vice president for artificial intelligence. Bledsoe took a leave of absence from UT, and the next year, 1985, MCC delivered a fifth-generation computer program called Proteus to its shareholder companies.

Inman, a UT Distinguished Alumnus, told the Austin American-Statesman in April 1996, “I’m not surprised that Woody Bledsoe would make it on to the [Unabomber’s] list, because of his fame in the country in the area of artificial intelligence, trying to make computers think … to act as if they think, and that’s the thing that would have caught the attention of this guy who was antagonistic toward technological developments.”

He returned to UT in 1987 as the Peter O’Donnell Jr. Centennial Chair in Computing Science. Bledsoe retired reluctantly in 1994, the year before his death, when he determined ALS had affected his speech too much to allow him to teach effectively.

In 2023, Kaczynski died in a federal prison in North Carolina, aged 81.

Some 39 boxes containing Bledsoe’s papers, most relating to early face-recognition research, are housed in UT’s Briscoe Center for American History. “Those boxes contain, among other things, dozens of photographs of people’s faces, some of them marked up with strange mathematical notations — as if their human subjects were afflicted with some kind of geometrical skin disease,” wrote Shaun Raviv in a 2020 feature on Bledsoe in Wired magazine.

In an exhaustive memorial tribute in AI Magazine in 1996, colleagues Michael Ballantyne, Robert Boyer and Larry Hines wrote of Bledsoe: “He was perhaps the most wonderful person we have ever known. He was the sort of perfect embodiment of faith, hope, and charity. It’s almost as though he was not subject to the common human frailties. We never saw him discouraged, we never saw him depressed, and we never saw him unkind.”

Clearview AI Used Your Face. Now You May Get a Stake in the Company.

The facial recognition start-up doesn’t have the funds to settle a class-action lawsuit, so lawyers are proposing equity for those whose faces were scraped from the internet.

By Kashmir Hill

Kashmir Hill covers privacy and technology, and has written a book about facial recognition technology and Clearview AI.

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company’s existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were “trapped together on a sinking ship,” lawyers for the plaintiffs wrote in a court filing proposing the settlement.

“These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future,” added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online — so almost everybody — could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company’s current value would be about $52 million.)

If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview’s revenue, which it would be required to set aside.
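The arithmetic behind the filing’s figures is easy to verify; a quick sketch follows, with the caveat that the annual revenue number is a made-up placeholder, since Clearview’s actual revenue does not appear in the source.

```python
company_value = 225_000_000  # Clearview AI's valuation, per court filings
class_stake = 0.23           # equity stake offered to the class

stake_value = company_value * class_stake
print(f"${stake_value:,.0f}")  # $51,750,000 -- the "about $52 million" cited above

# Alternative after two years: 17% of revenue that Clearview must set aside.
# The revenue figure below is hypothetical, not from the source.
hypothetical_annual_revenue = 30_000_000
revenue_cut = 0.17 * hypothetical_annual_revenue
print(f"${revenue_cut:,.0f}")
```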

How a New Jersey man was wrongly arrested through facial recognition tech now in use in Ontario

Peel, York police recently announced they were implementing Idemia's facial recognition system.

A New Jersey man who was wrongly jailed after being misidentified through facial recognition software has a message for two Ontario police agencies now using the same technology.

"There's clear evidence that it doesn't work," Nijeer Parks said.

Parks, now 36, spent 10 days behind bars for a January 2019 theft and assault on a police officer that he didn't commit. He said he was released after he provided evidence he was in another city, making a money transfer at the time of the offence. Prosecutors dropped the case the following November, according to an internal police report.

Investigators identified Parks as a suspect using facial recognition technology, according to police documents provided as part of a lawsuit filed by Parks's lawyer against several defendants, including police and the mayor of Woodbridge, N.J. The lawsuit names French tech firm Idemia as the developer of the software.

Police in Peel and York regions, near Toronto, announced in late May they were jointly implementing Idemia's technology, which they will use to compare existing mugshots with crime scene images of suspects and persons of interest. 

Parks said his case highlights the limitations of such software.

"He doesn't look anything like me," Parks, who is Black, said of the man in the picture that police used to identify him. "I'm like … you're basically telling me we all look alike."

The photo had come from a fake Tennessee driver's licence the suspect provided to officers at the scene of the theft, according to a police report submitted as a court exhibit in the civil case. 

The man was accused of stealing snacks from a hotel gift shop in Woodbridge, N.J., and nearly running over an officer as he later sped away. 

Two days later, an investigator emailed a Woodbridge detective a PDF file containing a "good possible hit on facial recognition," according to court exhibits reviewed by CBC News. 

"That's him," the detective replied, referring to the suspect from the hotel incident. 

Parks was arrested and charged with a series of offences, including aggravated assault and resisting arrest. According to a transcript of his police interview, he told an investigator he had, in fact, never been to Woodbridge, which is roughly 40 kilometres from his home in Paterson, N.J. 

Parks recently described the ordeal to CBC as an "out-of-body experience, because it was something that I couldn't believe was happening."

In Ontario, police insist they've implemented safeguards to prevent a mismatch from resulting in an arrest.

"It's the human element," York Regional Police Const. Kevin Nebrija told CBC. He said investigators will personally "look at the match and see if that supports other evidence that we've obtained."

York and Peel police both said separately the software would be used as an additional tool to provide investigative leads and will not serve as the sole basis for an arrest. They also said the system would not be used to analyze live video.  

"Idemia Face Expert will be used to aid human decision-making, not replace it," Peel Regional Police Deputy Chief Nick Milinovich said in a video posted online. "It will improve public safety for everyone."

Allegations of 'biased technology'

Research has repeatedly pointed to shortcomings in facial recognition technology, particularly the risk it will misidentify racialized individuals.

Parks's lawsuit partly blames his wrongful arrest on the "misuse of biased technology." 

The Township of Woodbridge declined CBC's request for comment on the matter, as the case remains in litigation.

Idemia, which is not named as a defendant in the lawsuit, said in a statement the firm "provides best practices and policy consultation to our clients, which includes steps for following up with an investigative lead generated from facial recognition technology."

The American Civil Liberties Union (ACLU) earlier this year filed a court brief in support of Parks, stating "officers unreasonably relied on a shaky lead from fundamentally unreliable technology."

"As in this case, the harms of [facial recognition technology] misidentification disproportionately fall on Black Americans," the ACLU wrote.

The U.S. General Services Administration, which oversees federal contractors, said in a 2022 report that such tools disproportionately failed to match African Americans in its tests.

Yuan Stevens, an academic associate at McGill University's Centre of Genomics and Policy in Montreal, said there should be more transparency about the way facial recognition algorithms are refined.

"It's actually very possible that Idemia's database was trained on white European faces, [so] people of colour, such as myself, would be more wrongfully suspected of a crime more often." 

Stevens said Black and Indigenous faces are frequently overrepresented in mugshots, since such databases "contain images of people who are subject to heightened scrutiny and surveillance by the police."

Idemia cited as most accurate

Idemia has disputed allegations of bias in its software.

In slides prepared for a 2018 presentation  titled "Face Recognition Evaluation @ Idemia," a representative wrote the company's algorithm has the "same [false positive identification rate] for Black or white subjects, male or female."

York Regional Police said on their website "in the past five years, facial recognition technology has made tremendous strides in accuracy and demographic differences," citing data from the U.S. National Institute of Standards and Technology (NIST).

Among a list of vendors, NIST ranked Idemia's algorithm in 2022 as the most accurate on a false match rate fairness test.

Ontario Provincial Police said they're looking into implementing a similar program, while evaluating "accuracy, privacy implications and potential biases associated with facial comparison."

The RCMP said they asked some third-party vendors to disable facial recognition functions integrated in tools used by the national police force.

In 2014, Calgary police became the first police agency in Canada to use facial recognition technology, launching a system designed by NEC Corporation of America.

The Toronto Police Service said it's been using facial recognition since 2018. Its website also lists NEC as the technology provider. 

Investigators in both cities briefly used the controversial Clearview AI system, which searched images of the public scraped from the internet.

Peel and York police said they discussed their plan with the province's Information and Privacy Commissioner.

The commissioner's office told CBC it "does not endorse, approve or certify" any program it's consulted on.

The office provides public guidance for police agencies seeking to use facial recognition to search through mugshot databases.

As for Parks, he and his lawyer have requested a summary judgment, meaning their case wouldn't need to go to trial. His lawyer, Daniel Sexton, said he's also been in talks to settle the case out of court.

"I don't want to see anyone go through what I went through," Parks said.

ABOUT THE AUTHOR

Senior Reporter

Thomas is a CBC News reporter based in Toronto. In recent years, he has covered some of the biggest stories in the world, from the 2015 Paris attacks to the Tokyo Olympics and the funeral of Queen Elizabeth II. He reported from the Lac-Mégantic rail disaster, the Freedom Convoy protest in Ottawa and the Pope's visit to Canada aimed at reconciliation with Indigenous people. Thomas can be reached at [email protected].

With files from Robson Fletcher, Nicole Brockbank and Reuters

NEC's facial recognition to speed up entry at 2025 Osaka Expo

Currently used at airports, the company's tech boasts high accuracy

TOKYO -- The World Expo 2025 in Osaka will adopt Japanese technology company NEC's facial recognition system for use in entry and payments, aiming to boost visitor convenience at an event expected to draw over 28 million people.

Holders of an all-access pass will be able to use facial recognition by registering their photo and a payment method in advance. The system will enable them to enter the event and make purchases at shops and cafes by just scanning their faces.
