
Understanding Gender Bias in Facial Recognition Technology

Facial recognition software has been in the hot seat in recent months. Researchers, industry experts, businesses and policymakers have all raised concerns, from potential privacy repercussions to the technology's role in discrimination. In fact, the University of Michigan recently published a study linking the on-campus use of facial recognition to these issues.

In a Help Net Security article, WatchGuard Security Analyst Trevor Collins explains the origins of bias in face recognition algorithms, highlights findings from a new WatchGuard Threat Lab study on gender bias in facial recognition technology today, and discusses how the industry can address the problem. Here’s a brief excerpt from the piece:

“The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can result in biases in these technologies. If the training doesn’t contain a representative sample of the population, ML will fail to correctly identify the missed population.

While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.”
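To make the excerpt's point concrete, here is a minimal, hypothetical sketch (not from the WatchGuard study or the article) of how an under-represented group in the training data produces uneven error rates. The synthetic 2-D "embeddings," the group labels, and the sample counts are all illustrative assumptions; real face recognition pipelines are far more complex.

```python
# Hypothetical illustration only: synthetic points stand in for face
# embeddings, and the group split / sample sizes are made-up assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)

def make_group(n, shift):
    # Two classes ("no match" = 0, "match" = 1) per group; `shift`
    # moves the whole group in embedding space so the groups differ.
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=[2.0 + shift, 2.0], scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Skewed training sample: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=4.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets, one per group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=4.0)
print("group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

In this toy setup, the classifier scores well on the over-represented group and markedly worse on the under-represented one; that gap is exactly what a representative training sample is meant to close.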

Read the full article here for more on gender-based bias in facial recognition software and recommendations for addressing it. And be sure to check out WatchGuard’s original report for more technical detail on the issue.
