In 2019, Hamid Norouzi was on track to earn his trade certificate as a carpenter. He’d arrived in Norway as a refugee, learned the language, finished school, and was ready for his future. Fast forward three years: Norouzi was unemployed, depressed, and drowning in debt.
When the carpentry company where Norouzi apprenticed went bankrupt, his boss urged him to register a new business in his own name so the work could continue. Norouzi didn’t understand the full implications but was desperate to keep his apprenticeship and secure a future role. Trusting his boss, he handed over his BankID, the key to his digital identity, unaware that it would also become the key to his financial ruin.
Months later, his wages stopped without warning. When he asked his boss to close the company, he eventually received an “OK.” But letters about the business kept coming: fines, penalty notices, and enforcement orders from the Norwegian Labour Inspection Authority for violations he hadn't even known were committed. By January 2022, Norway’s collection agency told Norouzi he owed more than 200,000 kroner. His case against his former boss was dropped by the police after just 10 days due to “lack of capacity.” Today, he lives under a forced debt arrangement, still fighting to clear his name.
This is where the real flaw in our digital identity systems emerges. Because Norouzi’s BankID had been used to open the company, the system treated every action that followed as if it were his — voluntary, informed, and legitimate.
Digital identity systems are the platforms that verify who you are online so you can access services, sign documents, or make transactions. They often operate on one big assumption: every digital action a user takes is done freely and intentionally. When someone clicks “approve,” signs a digital contract, or makes a transaction, the system assumes that action was voluntary and informed. While cryptography can prove what happened, it can't prove why it happened.
When the cryptography is flawless but the outcome is flawed
I know this reality all too well. I, too, was manipulated, but by someone I thought was my boyfriend, who coerced me into making financial transactions and signing agreements under emotional pressure and threats. My story is depicted in the Netflix documentary The Tinder Swindler, where I went public at a time when no one truly understood what had happened to me. On paper, every financial action I took looked legitimate. The system read it as consent. But in reality, I had no true agency. I was cornered.
Both Norouzi’s experience and mine point to the same truth: digital systems often mistake the act of using an identity for genuine consent. Yet, optimism around digital identity systems remains high. The premise is simple: if we can confidently verify who’s on the other side of the screen, we can make fraud harder to commit. And this is true!
The problem is that digital identity systems are built for perfect conditions — for confident, informed users acting freely, without pressure. They aren’t designed to detect coercion, manipulation, or misunderstanding, and they can’t tell the difference between a willing click and a desperate one. In that gap, people like Hamid and me are left unprotected.
Academic research backs up what lived experiences make clear: online decisions need to be read in context. Studies show that exploitation in digital environments thrives on emotional manipulation, deception, and misinformation. Analyses of eID frameworks such as eIDAS 2.0 highlight a critical flaw: they assume the person holding a digital identity always has “sole control” over it. Our legal framework locks in that flaw, shifting all responsibility onto the individual and erasing the reality that people can be coerced, misled, or lack the digital literacy to fully understand what they’re agreeing to.
So how do we change this?
We need both legal and technical safeguards that reflect the messy realities of human life. First, we must drop the assumption that “sole control” is always true. Not every click, signature, or transaction is freely given, and the law should recognize that victims of coercion or manipulation shouldn’t be left to bear the full cost.
Second, we need systems that go beyond verifying identity. They should be able to detect red flags — unusual account activity, risky transactions, behavioral changes — and give users the space to pause, double-check, or seek help before harm occurs.
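To make that idea concrete, here is a minimal sketch, in Python, of what such a safeguard might look like. It is purely illustrative: the Action type, the thresholds, and the “pause” response are my assumptions, not how BankID or any real eID platform works. The point is the design choice, not the details: a flagged action isn’t silently approved, it is slowed down so the person can stop, verify, or ask for help.

```python
# Illustrative sketch only: hypothetical types, names, and thresholds,
# not the API of BankID or any real digital identity system.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Action:
    kind: str          # e.g. "payment", "company_registration", "loan_signature"
    amount_nok: float  # monetary value in kroner, 0 if not applicable
    timestamp: datetime


def risk_flags(history: list[Action], proposed: Action) -> list[str]:
    """Return human-readable reasons why this action looks unusual for this user."""
    flags = []

    # High-stakes legal commitments always deserve a pause.
    if proposed.kind in {"company_registration", "loan_signature"}:
        flags.append("high-stakes legal commitment")

    # An amount far above the user's recent norm.
    amounts = [a.amount_nok for a in history if a.amount_nok > 0]
    if amounts and proposed.amount_nok > 5 * (sum(amounts) / len(amounts)):
        flags.append("amount far above typical spending")

    # A burst of actions in a short window can signal pressure or account takeover.
    recent = [a for a in history if proposed.timestamp - a.timestamp < timedelta(hours=1)]
    if len(recent) >= 3:
        flags.append("unusually rapid sequence of actions")

    return flags


def handle(history: list[Action], proposed: Action) -> str:
    flags = risk_flags(history, proposed)
    if flags:
        # Instead of treating the signature as final proof of consent, hold the
        # action and give the user room to stop, double-check, or involve a
        # trusted contact before it becomes binding.
        return "PAUSED: " + "; ".join(flags)
    return "APPROVED"
```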
Third, inclusion can’t be an afterthought. Digital identity must work for people with limited tech skills, language barriers, or higher risk of coercion. That might mean clearer language, extra confirmation steps for high-risk actions, or the option to involve a trusted person. Finally, responsibility needs to be shared. When the system fails to protect someone from exploitation, the blame and the burden shouldn’t fall on the victim alone.
“Cybercriminals don’t break systems, they break people,” says Jason Nurse, reader in cyber security at the University of Kent and Public Engagement Lead at Kent Interdisciplinary Research Centre in Cyber Security. That fact should change everything about how we design security. Human behavior isn’t a flaw in the system. It’s the baseline. Until we accept that, victims will keep paying for design flaws that were never theirs to fix. It’s time to design for the human reality behind the click.