Imagine waking up to the news that some rogue actor has leaked a database of fingerprints and facial scans online. Unlike a password breach, you can’t simply reset your fingerprint or swap out your face. The traits that make biometric authentication so secure also make a compromise irreversible.
Biometric authentication — through fingerprints, facial recognition, or voice — has become a cornerstone of modern security. It offers powerful advantages over traditional passwords, but it also raises a pressing question: What happens if biometric data is compromised? As my colleague Diego Baldin noted a few weeks ago in his examination of the shortcomings of facial ID, once that information is exposed, there is no changing your face or resetting your fingerprint.
Protecting this information requires strict safeguards, from secure enrollment to encrypted storage and tightly controlled access. While a stolen password can be replaced with a stronger one, a leak of biometric data creates a lasting and far more complex risk.
That leads to another critical issue: Who should be responsible for safeguarding this sensitive data, and how? Today, many third parties hold vast databases of biometric enrollments for financial institutions, often with uneven security standards. The state of Texas recently reached a $1.4 billion settlement with Meta over the misuse of biometric data, a costly outcome for the tech giant.
Widespread use of facial or fingerprint recognition brings a range of legal and ethical challenges. Because this is highly sensitive data, its misuse can threaten privacy and fundamental rights. Across Latin America, governments have responded in a variety of ways, with some encouraging biometric adoption to fight fraud and others imposing stricter controls to safeguard personal data. With the exception of Mexico, most countries in the region still lack a comprehensive framework for enabling robust biometric authentication while ensuring the underlying data is safeguarded, preserved, and used responsibly.
Country-by-country comparison of facial biometrics regulation and data protection frameworks
Brazil, Mexico, Colombia, and Chile each regulate the use of facial biometrics in the financial field (and in broader data protection frameworks) a little differently.
Brazil:
- Facial biometric requirements: Not required by law. Banks widely use facial recognition for authentication but on a voluntary basis and usually combined with other factors. The Central Bank (Banco Central do Brasil) encourages the sharing of fraud data, though not specific to biometrics.
- Personal data protection framework: General Data Protection Law (LGPD, 2020) defines biometric data as sensitive. There is no specific regulation yet on AI or facial recognition, though bills are under debate.
- Consent and restrictions: Consent is required. Processing of sensitive data is prohibited unless explicitly authorized by the individual or allowed under a legal exception. Without a dedicated law, risks of disproportionate use remain. LGPD applies on a case-by-case basis.
Mexico:
- Facial biometric requirements: Mandatory. Since 2018, the National Banking and Securities Commission (CNBV) has required banks, retirement fund administrators, and multiple purpose financial companies (known in Mexico as Sociedades Financieras de Objeto Múltiple, or SOFOMES) to verify customer identity with biometrics (mainly fingerprints) in branches and apps.
- Personal data protection framework: The Federal Law on the Protection of Personal Data Held by Private Parties (LFPDPPP, 2010) classifies biometric data as sensitive. The National Institute of Transparency, Access to Information and Protection of Personal Data (INAI) issued specific guidance in 2018 for its safe handling. The national electoral registry, maintained by the National Electoral Institute (INE), is used for identity validation.
- Consent and restrictions: Express consent is required. Strict rules apply, including a privacy notice, limited purposes, and advanced security measures. Customers must be offered alternatives if they refuse to provide biometrics. Noncompliance can lead to fines and orders from INAI.
Colombia:
- Facial biometric requirements: Prohibited as a mandatory authentication requirement. Banks increasingly use fingerprint validation for digital onboarding and identity theft prevention, but always with user authorization. The Financial Superintendency (Superfinanciera) allows fingerprint authentication for banking Know-Your-Customer (KYC) processes with due diligence and consent.
- Personal data protection framework: Law 1581 (2012) defines biometric data as sensitive. The Superintendency of Industry and Commerce (SIC) regulates biometric data processing. No dedicated law on AI or facial recognition yet exists; general habeas data principles apply.
- Consent and restrictions: Explicit consent is required. Sensitive data cannot be demanded as a condition for service. The SIC has sanctioned companies for forced use of facial recognition. Use is allowed only with voluntary or legal authorization.
Chile:
- Facial biometric requirements: Not required in banking, though some banks offer it as an option. Beginning in 2025, biometric verification will be mandatory in telecommunications. The government is promoting digital security through such measures.
- Personal data protection framework: Law 19.628 (1999) was replaced by Law 21.719 (2024), which establishes a framework modeled on the European Union’s General Data Protection Regulation (GDPR). Biometric data is expressly defined as sensitive, with a new data protection agency to oversee compliance.
- Consent and restrictions: Enhanced consent is required. The new law mandates clear individual authorization with very limited exceptions. Previously, there was little specific control. Going forward, high fines and strict oversight will apply.
Risks and layers of authentication
In legal terms, there are no explicit restrictions on physical biometric authentication. Still, specialists point to major privacy concerns and ethical dilemmas. Key risks include:
- Bias and discrimination: Facial recognition systems show higher error rates when identifying women and people with darker skin, a problem that has been especially visible in Brazil. Failure rates reach 34.7% for Black women, compared with 0.8% for white men. This leads to false positives and false negatives that disproportionately affect Afro-descendant and Indigenous populations. In pilot tests, more than 90% of arrests based on facial recognition were of Black people, raising concerns of technological discrimination.
- Personally identifiable information (PII): Many governments and regulations treat face, fingerprint, and voice data as PII, which in some jurisdictions means it cannot be used for authentication or stored in a database. In Colombia, for example, a case involving Mercado Libre showed that mandating facial recognition cannot justify violating the law or restricting authentication to a single method, especially if that method can be spoofed.
- Vulnerability to fraud and attacks: While facial biometrics are considered secure, fraudsters continue to find ways to bypass them. Insecure implementations of facial ID flows, or vulnerabilities exposed through SDK tampering, can open the door to deepfakes and impersonation; several technical reports describe how the GoldPickaxe trojan steals facial data to bypass face ID.
- Implementation costs and operational errors: Building secure systems that comply with local rules requires heavy investment in infrastructure for collecting, storing, and safeguarding sensitive data. Another challenge is how authentication is applied. Specialists emphasize it should not stand alone but function as one layer among multiple authentication factors to reduce failure rates.
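To make that layering concrete, here is a minimal sketch, assuming a Python decision service with hypothetical thresholds and signal names (face_match_score, device_is_trusted, and otp_verified are illustrative, not any vendor's API), of how an ambiguous facial match can trigger a step-up to another factor instead of an outright rejection:

```python
# Illustrative sketch only: a facial match treated as one layer among several,
# not as a standalone gate. Thresholds and signal names are assumptions.

from dataclasses import dataclass

@dataclass
class AuthSignals:
    face_match_score: float   # 0.0-1.0 confidence from the facial recognition engine
    device_is_trusted: bool   # device previously enrolled by this customer
    otp_verified: bool        # one-time passcode confirmed out of band

FACE_ACCEPT = 0.95  # high-confidence match
FACE_REVIEW = 0.80  # ambiguous zone: ask for another factor instead of rejecting

def authenticate(signals: AuthSignals) -> str:
    """Return 'approve', 'step_up', or 'deny' from layered factors."""
    if signals.face_match_score >= FACE_ACCEPT and signals.device_is_trusted:
        return "approve"  # two strong, independent signals agree
    if signals.face_match_score >= FACE_REVIEW:
        # An ambiguous facial match is not rejected outright; a second
        # factor compensates, which lowers the effective failure rate.
        return "approve" if signals.otp_verified else "step_up"
    return "deny"
```

The point of the sketch is the fallback path: the biometric check informs the decision but never decides it alone.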
Physical biometrics vs. behavioral biometrics
One possible alternative is behavioral biometrics. Instead of relying on fixed physical traits such as a fingerprint or a face, behavioral biometrics measures patterns in how people interact with devices — such as typing rhythm, how they swipe a screen, and mouse movements.
Behavioral biometrics rests on two principles: 1.) anonymizing data through hashing, and 2.) profiling users by patterns rather than unique identifiers. Because the data is anonymized, a behavioral biometrics profile can simply be recreated after an anomaly or incident. And because it contains no pieces of personal data, it cannot be directly linked to a person.
This is a key distinction. Traditional, physical biometrics operate on a 1:1 ratio: There’s no other person on earth with the same fingerprint, face, or even voice, making leaked data uniquely tied to an individual. Behavioral biometrics, by contrast, is dynamic and adaptable, and even changes over time because people tend to change their behavior.
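As a rough illustration of those two principles, the sketch below, assuming Python and invented field names and thresholds, hashes the raw identifier before storage and describes the user only through aggregate interaction statistics, so the stored profile carries no personal data:

```python
# Illustrative sketch of the two principles above: (1) anonymize identifiers
# through hashing, (2) profile by interaction patterns, not unique identifiers.
# Field names, salt handling, and the tolerance value are assumptions.

import hashlib
import statistics

def anonymize(raw_id: str, salt: str) -> str:
    """One-way hash so the stored profile key cannot be traced back to a person."""
    return hashlib.sha256((salt + raw_id).encode()).hexdigest()

def build_profile(keystroke_intervals_ms: list[float], swipe_speeds: list[float]) -> dict:
    """Summarize behavior as aggregate statistics; no personal data is retained."""
    return {
        "typing_mean_ms": statistics.mean(keystroke_intervals_ms),
        "typing_stdev_ms": statistics.stdev(keystroke_intervals_ms),
        "swipe_mean_px_s": statistics.mean(swipe_speeds),
    }

def looks_like_same_user(profile: dict, session: dict, tolerance: float = 2.0) -> bool:
    """Flag an anomaly when a session drifts too far from the stored pattern."""
    deviation = abs(session["typing_mean_ms"] - profile["typing_mean_ms"])
    return deviation <= tolerance * profile["typing_stdev_ms"]

# The profile is keyed by a hash, so a leak exposes patterns rather than a person,
# and the profile can simply be re-enrolled after an incident.
profile_key = anonymize("customer-123", salt="per-tenant-secret")
```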
Balancing security and privacy
The use of physical biometric authentication clearly involves regulatory, technological, and ethical challenges. That does not mean the technology should be discarded: Physical biometrics offers real benefits in security and convenience. But informed consent remains essential, and it must be implemented with safeguards. These include technical protections such as encryption and liveness detection, strict compliance with the law, and clear alternatives for people who choose not to use it.
Financial institutions looking to balance fraud capture, consumer experience, and user privacy should utilize a multi-layer approach that combines behavioral intelligence and physical biometrics to approve transactions.
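One way to picture that combination is a simple risk-fusion rule. The sketch below is a minimal illustration with assumed weights and cutoffs, not a production scoring model:

```python
# Minimal sketch of a multilayer transaction decision that fuses a behavioral
# anomaly score with a physical biometric confidence score.
# The weights and cutoffs are illustrative assumptions only.

def transaction_decision(behavior_anomaly: float, face_confidence: float) -> str:
    """behavior_anomaly: 0.0 (typical) to 1.0 (highly anomalous);
    face_confidence: 0.0 (no match) to 1.0 (strong match)."""
    risk = 0.6 * behavior_anomaly + 0.4 * (1.0 - face_confidence)
    if risk < 0.3:
        return "approve"   # both layers agree the user looks legitimate
    if risk < 0.6:
        return "step_up"   # request an additional factor before approving
    return "decline"       # the layered signals point to likely fraud

print(transaction_decision(behavior_anomaly=0.1, face_confidence=0.97))  # approve
```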