Over the last two years, large language models (LLMs) have transformed the way we think about artificial intelligence. These models learn the structure of human language by analyzing billions of sentences, enabling them to generate text, answer questions, and power new forms of digital interaction. But language is not the only domain where sequences of information reveal meaning. Human behavior — the way we type, swipe, pause, hesitate, and move through our online experiences — also forms a kind of language. And understanding that language is fundamental to preventing fraud.

The next frontier of AI in fraud detection lies in bringing the power of LLM-style modeling to behavioral data. At BioCatch, we call this a behavioral language model (or “BeLM”).

While we’re still in the early stages of research and development, the concept represents a natural evolution of BioCatch’s decade-long leadership in behavioral intelligence. This blog introduces the idea of a behavioral LLM, explains why the timing is right, and highlights how BioCatch is uniquely positioned to lead this new wave of innovation.

Behavior as a language

Every digital interaction a user performs — pressing a key, switching focus between fields, scrolling a page, waiting a few milliseconds before clicking — forms part of a behavioral sentence. Just as spoken language has grammar and rhythm, genuine users have behavioral patterns shaped by habit, motor skills, cognitive load, and context.

Fraudsters, in contrast, rush or hesitate, paste in stolen information, use automation, frequently switch between windows, or interact with forms in ways that don’t match how real users behave. These differences are often subtle and distributed across hundreds of micro-events.

Traditional machine learning models look at behavioral signals in aggregate. But large language models excel at something machines have traditionally struggled with: learning patterns across long sequences, where meaning isn’t found in a single event but in the relationships between events.

This is exactly why LLMs represent a breakthrough opportunity for behavioral intelligence.

The BeLM

A behavioral LLM is not a text model. Instead, it’s a model trained on sequences of human-device interactions, encoded so that it can learn:

  • The “grammar” of genuine digital behavior
  • The long-range dependencies between micro-interactions
  • What normal activity looks like across millions of users
  • When something deviates from that learned behavioral norm

Instead of words, a behavioral LLM processes behavioral events: keystrokes, pointer movements, scrolls, hesitations, device context, and timing patterns. Instead of predicting the next word in a sentence, it can learn to identify whether a sequence fits the structure of genuine user behavior.
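To make the idea concrete, here is a minimal, purely illustrative sketch of what "tokenizing" behavior might look like. The event kinds, timing buckets, and thresholds below are hypothetical choices for illustration, not BioCatch's actual encoding.

```python
# Illustrative sketch: turning a raw event stream into a "behavioral sentence".
# Event kinds and timing-bucket thresholds are hypothetical, not a real scheme.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "key", "scroll", "paste", "focus"
    gap_ms: float   # time elapsed since the previous event

def timing_bucket(gap_ms: float) -> str:
    """Quantize inter-event timing so the model sees rhythm, not raw milliseconds."""
    if gap_ms < 80:
        return "fast"
    if gap_ms < 400:
        return "normal"
    if gap_ms < 2000:
        return "pause"
    return "hesitation"

def tokenize(events: list[Event]) -> list[str]:
    """Map each event to a discrete token combining its kind and its rhythm."""
    return [f"{e.kind}:{timing_bucket(e.gap_ms)}" for e in events]

session = [Event("focus", 0), Event("key", 120), Event("key", 95),
           Event("paste", 3500), Event("key", 60)]
print(tokenize(session))
# ['focus:fast', 'key:normal', 'key:normal', 'paste:hesitation', 'key:fast']
```

Once behavior is expressed as a discrete vocabulary like this, the same sequence-modeling machinery that works on words can, in principle, be applied to sessions.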

The core idea is simple: If language models can learn how humans communicate, they can also learn how humans behave.

Why now?

The fraud landscape is changing rapidly. Social engineering attacks continue to both improve and proliferate (BioCatch customers reported a 65% spike in scam attempts in the last year alone), automation is more accessible, and fraudsters increasingly mimic or replay legitimate information. What bad actors can’t easily mimic is genuine human behavior.

Transformers, the model architecture at the core of today’s large language models, are uniquely powerful in this domain because they can:

  • analyze long sequences with thousands of events
  • attend to relationships between far-apart actions
  • learn subtle temporal and contextual patterns
  • generalize across diverse populations and scenarios

For fraud detection, this means deeper intent understanding, stronger anomaly detection, and a broader ability to catch signals that traditional models miss.
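The mechanism behind those capabilities is self-attention, which lets every event in a session weigh every other event, however far apart. The toy function below is a bare-bones sketch of scaled dot-product self-attention over random "event embeddings" (the dimensions and data are illustrative, and a real model adds learned projections, multiple heads, and many layers):

```python
# Minimal sketch of scaled dot-product self-attention over a long event
# sequence. Embeddings are random stand-ins; a real model learns them.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d) event embeddings -> contextualized embeddings."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise event affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                              # each event absorbs context

rng = np.random.default_rng(0)
session = rng.normal(size=(1000, 16))   # 1,000 events, 16-dim embeddings
out = self_attention(session)
print(out.shape)                        # (1000, 16)
```

Because the affinity matrix relates all pairs of positions directly, an event late in the session can be interpreted in light of one that happened thousands of events earlier, which is exactly the long-range structure that aggregate features discard.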

BeLMs and BioCatch

While this work is still early, our R&D efforts are focused on several foundational research questions:

  • How can behavioral sequences be encoded into a structured behavioral language?
  • What tokenization approaches preserve the semantics of behavior?
  • How well can a transformer learn consistent genuine patterns?
  • Can deviations from this learned behavioral structure predict fraud or high-risk events?
  • How does a behavioral LLM compare to traditional behavioral models?
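One of the questions above — whether deviations from a learned behavioral structure can predict risk — has a simple intuition: train a model on genuine sessions, then score new sessions by how surprising the model finds them. The sketch below illustrates that idea with a bigram model standing in for a transformer; the token names and the smoothing floor are hypothetical.

```python
# Illustrative sketch of anomaly-by-surprise: learn next-event probabilities
# from genuine sessions, then score sequences by average negative
# log-likelihood. A bigram model stands in for a full transformer.
from collections import defaultdict
import math

def train_bigram(sessions: list[list[str]]) -> dict:
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def surprise(model: dict, seq: list[str], floor: float = 1e-3) -> float:
    """Average negative log-likelihood; higher means more anomalous."""
    nll = [-math.log(model.get(a, {}).get(b, floor))
           for a, b in zip(seq, seq[1:])]
    return sum(nll) / len(nll)

genuine = [["focus", "key", "key", "key", "submit"]] * 50
model = train_bigram(genuine)
print(surprise(model, ["focus", "key", "key", "submit"]))  # low: fits the norm
print(surprise(model, ["focus", "paste", "submit"]))       # high: unusual path
```

A transformer replaces the bigram table with a learned distribution conditioned on the entire preceding session, but the scoring principle — flag what the model of genuine behavior finds improbable — is the same.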

We are investing in experiments, building prototypes, and validating the feasibility of the approach before moving toward any productization.

We envision a future in which a BeLM might detect anomalies in user behavior, provide a foundation for new behavioral insights, and recognize user intent in real time, thereby strengthening all existing BioCatch solutions.

We see behavioral LLMs enabling capabilities far beyond our current stack. By learning directly from the raw rhythm and structure of user actions, a BeLM can discover patterns we don’t explicitly teach it and reveal insights that are invisible to feature-based approaches today. Instead of relying on patterns we’ve previously observed, it learns directly from the full flow of digital behavior, positioning us to adapt to entirely new fraud tactics the moment they appear. This prepares BioCatch to deliver breakthrough protections that keep pace with the future of digital interactions, including the evolution of agentic browsers and AI-assisted interfaces.

But for now, the focus is on experimentation, learning, and responsible innovation. The language of behavior is rich, complex, and uniquely human.

And soon, machines may learn to understand it.

