Revealing (aka Unleashing) the CARTO Digital Fraud Maturity Model

Framing the Discussion on Fraud Frameworks

This article is a deep dive on part of a talk I gave at BSides Knoxville’s 2024 event (their 10-year anniversary), focused on opening up some discussion on how maturity frameworks might apply to fraud programs. Here’s the video (below) of the full talk, entitled “Watching the Detectives: Scam Artistry, Deep Fakery, Fraudsters, Frame-ups & Other Highlights of the High Speed Card Chase”.

BSides had a great theme this year: Detectives (& Detecting). So I went full throttle, providing an overview and discussion of the evolution of detection from a fraud-defense perspective. (I actually worked in network intrusion detection at the beginning of my career, so there are a lot of parallels, which are interesting, and differences, which are even more interesting.)

Before heading over to Knoxville, though, I posted a request on LinkedIn asking for people’s favorite fraud-fighting frameworks, for folks dealing with payment risk or in-product abuse. I wondered how frameworks can help fraud and trust & safety teams. See, I’ve always been a bit skeptical of the role of frameworks within a fraud program. Many cyber programs use maturity models, and their ability to shift maturity, as a way to benchmark against the industry and as an indication of progress and performance. But fraud and abuse teams have metrics and impact measurements (loss rate, chargeback rate, decline rate) that they can use to understand performance in a more objective way (understanding that measuring impact is still pretty hard).

That said, the dynamics have gotten fairly complex, with fraud/anti-abuse teams now working across entire customer and product lifecycles, as well as at the point of a (monetary) transaction. So I’m starting to appreciate the role of frameworks more and more.

Here’s a sample of what I found when I dug in on frameworks (a blend of cybersecurity and fraud-specific examples):

Of these frameworks, I find maturity models the most useful, but also the most lacking, for a new fraud team. I’m often asked by growing digital companies: where do we start? What’s the best way to organize our efforts? And so with that in mind I’ve drafted the (VERY drafty) CARTO Fraud Framework.

It’s not perfect by a long shot, so I highly welcome feedback on this. Let me explain a little bit about my thinking here. In general, a (digital) fraud system looks something like this (apologies for the super-high-level view; a lot of details, especially in operations and customer service, are missing here).

Generic Fraud (Decisioning) System

To be fair, the diagram of the system focuses on the technology. But a program framework needs to consider a lot of other things. Here’s an overview of the CARTO Fraud Framework, which leverages the Fraud Decisioning System model above as a basis for the approach.

CARTO Fraud Framework Overview

As you move from left to right, the program matures. As the program matures, the focus tends to evolve from “stop the bleeding” to “enable the business”. Here are the main stops along the journey:

  • CONTEXT: What I’ve found is that digital companies (retailers, FinTechs, and tech companies) start their journey over on the right-hand side of the system diagram - in operations, customer service, and billing systems - trying to figure out what’s going on and get some kind of context for the incoming contacts, chargebacks, and reports/requests from their Acquirers. (For example, the first time some companies hear about the High Risk Monitoring program is when they’re on it.)

    • The part of the tech/data stack that’s often the focus: the back office, post-transaction systems

    • The question to answer is: What’s the problem & how bad is it?

  • ACT: What teams seek to do next is find where they can affect the fraud problem: where they can deny, slow down, or at least detect fraud attempts.

    • The part of the tech/data stack that’s often the focus is: the back-end systems that support checkout or billing; roughly, the parts of the system where you could write rules or filters. Manual reviewers also need basic tooling to enforce policies.

    • The question to answer is: Where can we take action / block / decline?

  • REFRAME: After the team has slowed down the fraud, the business often wants to understand the impact all of these fraud and security declines are having on customer experience - and revenue. The team has rough controls, but now will want to introduce more targeted interventions and lower false positives, working from scores, not just rules.

    • The part of the tech/data stack that’s often the focus is: a better Back End plus Front End - in the back end we start introducing scores in addition to rules, and wire in an appropriate user experience - for example, giving users more tools to “self-serve” out of a decline.

    • The question to answer is: What’s the bigger problem, considering the user experience & business?

  • TRAIN: With a framework in place for more sophisticated decisions, the team will really shore up the learning loop needed for an effective risk decisioning system, making decisions better, faster, stronger. The keyword is usually faster.

    • The part of the tech/data stack that’s often the focus is: data and data science tech - modeling/ML tools, an emphasis on data quality and availability, and the customer and transaction data model, tuned for the fraud fighters. Manual reviewers also need improved tools at this point, as they are part of the learning loop (they are both upstream and downstream of the models at this point).

    • The question to answer is: How can we uplevel detection, leverage AI/ML, and improve speed to detect?

  • OPTIMIZE: Now that the pieces are in place, the team can orchestrate the capabilities together, tuned for specific business outcomes. When deploying a model, we know that we need to set cutoffs that hit a preferred balance between precision (accuracy) and recall (coverage). At the optimization phase, teams can start discussing the preferred balance between revenue and fraud losses, and between customer experience and customer contact volumes.

    • The part of the tech/data stack that’s often the focus is: The whole system - how the different capabilities work together in an integrated fashion, considering impacts end-to-end.

    • The question to answer is: How can we automate further to streamline and prioritize both decisions and work?
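To make the ACT-to-REFRAME shift concrete, here’s a minimal sketch in Python. Everything in it - field names, thresholds, rules, score tiers - is hypothetical, invented for illustration; it’s the shape of the progression from hard rules to score-driven, tiered interventions, not any real system’s logic.

```python
# Hypothetical sketch only: fields, thresholds, and rules are illustrative.

def act_stage(txn: dict) -> str:
    """ACT: binary rules/filters - deny, review, or allow."""
    if txn.get("amount", 0) > 5000:                      # hard amount limit
        return "review"
    if txn.get("bin_country") != txn.get("ip_country"):  # simple geo-mismatch rule
        return "review"
    return "allow"

def reframe_stage(score: float) -> str:
    """REFRAME: a model score drives tiered interventions, so fewer good
    customers are hard-declined and some can self-serve out of a block."""
    if score >= 0.90:
        return "decline"
    if score >= 0.60:
        return "step_up"   # e.g. extra verification the user can pass themselves
    return "approve"

print(act_stage({"amount": 9000, "bin_country": "US", "ip_country": "US"}))  # review
print(reframe_stage(0.72))  # step_up
```

The point of the tiered version is that a mid-range score doesn’t have to mean a decline - it can mean friction the customer can clear on their own.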
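The TRAIN-stage learning loop is largely a data problem: outcomes (like chargebacks) have to flow back and label the transactions they came from. A toy sketch of that join, with made-up field names:

```python
# Toy sketch of closing the learning loop: join chargeback outcomes back to
# transactions to produce labeled training rows. Field names are hypothetical.

def build_training_rows(transactions, chargebacks):
    fraud_ids = {cb["txn_id"] for cb in chargebacks}
    return [{**txn, "label": 1 if txn["id"] in fraud_ids else 0}
            for txn in transactions]

txns = [{"id": "t1", "amount": 50}, {"id": "t2", "amount": 900}]
cbs = [{"txn_id": "t2", "reason": "fraud"}]
rows = build_training_rows(txns, cbs)
print([r["label"] for r in rows])  # [0, 1]
```

In practice this join is where the “faster” keyword bites: chargebacks arrive weeks late, so teams also fold in quicker signals (manual review outcomes, customer reports) to shorten the loop.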
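And the OPTIMIZE-stage conversation about cutoffs can be made tangible with a quick precision/recall sweep. The scores and labels below are invented purely for illustration:

```python
# Illustrative only: invented scores/labels showing how moving a cutoff trades
# precision (how accurate our declines are) against recall (fraud coverage).

def precision_recall(scores, labels, cutoff):
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y == 1 for s, y in zip(scores, labels))
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]   # 1 = confirmed fraud
for cutoff in (0.50, 0.75, 0.90):
    p, r = precision_recall(scores, labels, cutoff)
    print(f"cutoff={cutoff:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the cutoff makes declines more accurate but lets more fraud through; picking the operating point is exactly the revenue-vs-losses discussion described above.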

I really liked this noodle that came out of brainstorming the framework (below), because the emphasis of the program across all dimensions - people, process, technology - changes as the program evolves. (Note: as one moves from left to right, new capabilities/elements are additive, not replacements. Models and manual review are both “forever” capabilities, even as they become more sophisticated.)

How does the program scope shift as it matures? Some ideas, using CARTO.

This is an incomplete overview - I haven’t even talked about how the risk decisioning system overlays with the customer journey (one of my favorite topics!!) - and it’s really more about the indicators of maturity. In any case, I think this sketches out enough that you can understand what I’m trying to do in slides 50-60 (the “An Escalation” section). If BSides Knoxville posts any follow-up video I will let you know.

Full deck is here: