The ingenious way your credit card company is making your life easier
In mid-March, a scammer in California tried to buy $150 worth of Wingstop using my debit card. Aside from being impressed at the sheer size of the order, I was relieved because Citibank, which issued my card, declined the transaction on the spot and alerted me to the fraud. In minutes I was able to shut off my card, heading off any more purchases by the scammer, and order a new card. All’s well that ends well.
When I went to Buenos Aires in April, I figured I might run into a similar situation. Sure, the banks say you don’t have to call ahead when you travel anymore, but I assumed I’d still have some purchase flagged as potential fraud, as had happened on past trips abroad. Miraculously, everything went off without a hitch. I don’t know how JPMorgan Chase knew that I would spend $200 on Botox in Argentina, but it did. (No, I didn’t book my flight on the same card, and whatever, everybody gets Botox now.)
It’s great that banks and credit-card companies are getting better at discerning which payments are fraudulent and which are legit. Many people have some horror story about having their credit card stolen or having their own legitimate transactions flagged as suspicious. And it’s nice not to have to spend 20 minutes on the phone before a vacation explaining where you’re going and when. Credit-card fraud protection is still far from perfect, but there’s no denying that the technology is improving. On the flip side, it’s also kind of wild to consider just how much financial institutions must know about you to make the right calls.
I was curious about how it all works — and, frankly, a little creeped out. So I reached out to some credit-card companies and academics to learn more. Why don’t people have to alert their credit-card companies about travel anymore? And, more broadly, just how have banks gotten so good at figuring out what’s normal about our spending habits and what isn’t?
The Federal Trade Commission receives thousands of card-fraud complaints each year. The Nilson Report, which tracks the card industry, says payment-card fraud resulted in $33 billion in losses worldwide in 2022 and $13.6 billion in losses in the US. As such, credit-card issuers and banks are keen to do what they can to spot fraud. They want to keep their customers happy, and, more importantly, they want to stem their losses. In the US, the major credit-card issuers and banks generally have a zero-liability policy, which means that when a customer gets scammed, the organization, not the customer, has to eat the cost.
Years ago, whether a transaction went through was based on things like whether a physical card was present, whether you had enough money to make the purchase, and (if the cashier wanted to look) whether your signature on the receipt matched the one on the back of your card. In some cases, the cashier may have even asked for ID or called the bank to verify the funds. We’ve come a long way from those bad old days by using the same tools that power most innovations: data and computers. Credit-card companies and banks know a lot about us — where we shop, when we spend, and how much we’re usually willing to pay for things — and they’re getting better and better at turning that knowledge into action.
While it’s all the rage to talk about newfangled forms of artificial intelligence, fraud detection owes a lot to machine learning, a field within AI that’s been around for years. A bunch of data gets dumped into computer systems, and algorithms figure out patterns and relationships. The algorithms create decision trees to predict the likelihood of different outcomes and identify what may be considered normal or fishy. It’s not that your credit-card company knows that you, specifically, would blow a bunch of cash on A and not on B — it knows that customers with your profile are in the “likes A” camp and not the “likes B” one.
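For the technically curious, here’s a rough sketch in Python (using scikit-learn) of the basic idea. Everything in it is an assumption made for illustration: the features, the synthetic transactions, and the “fraud” pattern are invented, not anything an issuer has disclosed.

```python
# A minimal, hypothetical sketch of decision-tree-based fraud scoring.
# All features, data, and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Toy transaction features: dollar amount, hour of day, distance from
# the cardholder's home in miles, and whether the card was present.
X = np.column_stack([
    rng.lognormal(3, 1, n),    # amount
    rng.integers(0, 24, n),    # hour of day
    rng.exponential(20, n),    # distance from home
    rng.integers(0, 2, n),     # card present (1) or not (0)
])

# Toy labels: pretend fraud looks like "large, far from home, card-not-present."
y = ((X[:, 0] > 50) & (X[:, 2] > 30) & (X[:, 3] == 0)).astype(int)

# Learn the pattern from historical examples, holding some out for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an incoming transaction: the model returns a fraud probability,
# and the issuer chooses a cutoff for declining or sending an alert.
txn = np.array([[150.0, 2, 300.0, 0]])  # $150, 2 a.m., 300 miles away, online
print("fraud probability:", model.predict_proba(txn)[0, 1])
```

Real systems train on vastly more transactions and features, but the shape is the same: learn what history says is normal, then score every new swipe against it.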
“It’s looking at what’s happening that is very much out of the ordinary for your general behaviors,” said Tina Eide, an executive vice president of global fraud risk at American Express. “And when I talk about general behaviors, it is generalized, right? It is not down to the specific purchase or to the specific merchant.” Eide added, “The models are evaluating a trillion dollars’ worth of transactions a year.”
The machines know more now than ever. Mike Lemberger, Visa’s regional risk officer for North America, said that over the past five years the number of data points people generate with their credit cards has grown tremendously. People are increasingly using cards over cash. And they don’t just have a physical card they’re pulling out at the store — they’ve got their card credentials in their Amazon account, Netflix account, iPhone, etc. The more purchases the card issuer can analyze, the more accurate the fraud detection will be.
“Visa, we don’t have consumer information — that’s your financial institution that has that — but what we have is this triangulation of all these data points,” Lemberger said. “We can create more and better scores, layer on top of that machine learning and AI abilities, and it becomes a much, much more powerful predictor, which we then feed into all of our partners to say, ‘Hey guys, if you want to make the best decisions, here’s a whole bunch of really good information.’”
Visa isn’t going to block your card directly, but it’ll alert your bank that your purchase looks suspicious or that fraud has been detected at the merchant you’re dealing with.
This all seemed pretty simple until I talked to Yann-Aël Le Borgne and Gianluca Bontempi, a pair of researchers at the Université Libre de Bruxelles in Belgium who work on machine learning and card fraud. They emphasized the vast scale of this fraud-detection technology. Companies and their algorithms are ingesting millions of transactions and creating so many decision trees to categorize certain activities that it can defy human logic, they said. Basically, the computer may be right that your transaction looks funky even if it’s made in your home city at a fairly innocuous vendor, or it may be right that the transaction is fine even though it’s made in a faraway place — but when humans try to figure out what did or didn’t sound the alarm, nobody will really be able to pinpoint why.
“Machines can consider many more features, and at the end of the day it’s not clear if all those features have meaning for humans,” Bontempi said. “Humans are used to working with two, three features, at most five features, while machines can work with hundreds of features. So there are really different levels between what a machine can do.”
There are human-written rules, which are generally interpretable, and there are machine-written rules, which can be a black box. They’re more accurate, but they may be harder, if not impossible, for people to reverse engineer. And banks may be using several different algorithms, making this even more complicated. Data scientists are the ultimate decision-makers, but the information they’re dealing with is based on highly complex tech.
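To give a loose sense of that gap, here’s a contrived Python comparison. The data is random noise and the feature names are hypothetical; the point is just how quickly even one small learned tree outgrows the one-sentence explanation a handwritten rule allows.

```python
# Illustrative only: a human-written rule versus a machine-learned tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def human_rule(amount: float, distance: float, card_present: bool) -> bool:
    """Interpretable: an analyst can state this rule in one sentence."""
    return amount > 500 and distance > 100 and not card_present

# Fit a single shallow tree on toy data with 30 features; production
# systems use hundreds of features and ensembles of many such trees.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 30))
y = rng.integers(0, 2, 2_000)
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X, y)

# Even this one tree prints as dozens of nested threshold comparisons on
# anonymous features; stack hundreds together and no human can untangle
# which combination tipped a given transaction into "suspicious."
print(export_text(tree))
```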
When I explained my somewhat embarrassing wings-versus-wrinkles conundrum to the experts and asked what may have triggered one alert and not the other, they offered up different explanations. Eide, from American Express, said that even though I hadn’t booked my trip to Argentina with the credit card I used to buy the Botox, something else had probably tipped off the system that I was there. I realized I’d also used the same card to buy a package of spin classes in Buenos Aires on my phone and to pay for a meal in the city. Lemberger, from Visa, emphasized that it all comes down to the data, and said that given my spending patterns, the Botox likely matched my profile better than the massive delivery order.
“I hate to say this to you, but at the end of the day, all these data points build personas. Just like in marketing somebody would use personas to market to you, we’re using that same technology to protect you,” he said. “And the fact is that we use those data points to not just secure you but the whole ecosystem.”
At some point it occurred to me that the supercomputers that credit-card companies and banks are working with could know more about me than I even know or understand about myself.
“It’s probable that the exact reason why a transaction caused your card to be blocked has no straightforward interpretation,” Le Borgne said.
I also asked whether there was a big difference between credit-card protections and debit-card protections and was told not really — maybe banks will be a little more restrictive about credit because they’re technically lending you money that isn’t limited by your actual cash balance. I also asked whether companies no longer worry about preclearing travel because they don’t care as much about losing money to fraud, to which the answer was a hard no.
“At the end of the day, somebody’s got to pay for the fraudulent activity,” Lemberger said. Credit-card companies will give you your money back if you’re a victim of fraud, but they’ll find another place to recoup that money, just as they always do.
Instinctually, I am not a technology-is-awesome person — if the AI really is going to kill us, I feel like we should unplug it. I’m not super freaked out about the privacy stuff, but I also don’t love the idea that AmEx and JPMorgan and Citi have me so pegged. But it’s cool that companies really are making fraud detection better, especially in a world where fraudsters themselves are constantly getting better. I don’t want to be like “Yay, banks!” but maybe here the answer really is a little “Yay, banks!” At least that’s the case until the next major data-privacy breach, at which point I will regret everything.
Emily Stewart is a senior correspondent at Business Insider, writing about business and the economy.