Every year at LEND360, we host an exclusive luncheon for CEOs to meet, connect, and reconnect with other CEOs. At this year’s CEO lunch, we dove into the issues around artificial intelligence and the rising level of fraud, featuring remarks from SentiLink CEO and Co-Founder Naftali Harris and Moveris CEO and Co-Founder Dr. Justin Keene.
Even though the discussion at the CEO lunch took place behind closed doors, we sat down with Naftali and Justin afterward to talk through the latest fraud trends targeting lenders so our full LEND360 audience can benefit from their expertise.
Can you give us a sense of what is new in fraud against lenders? What are the biggest fraud issues companies are seeing right now?
Harris: As lenders have invested in defenses against third-party fraud, first-party fraud has naturally emerged as a very significant problem. For example, we’re increasingly seeing applicants misrepresent their creditworthiness, showing up after credit washing or using techniques like piggybacking (purchasing authorized-user tradelines to artificially inflate their credit scores). Another notable trend is the rising use of scams that trick or coerce victims into applying for loans on behalf of fraudsters. In addition to first-party fraud, we’re continuing to see identity theft and synthetic fraud. Back in August, we shared our Fraud Report for the first half of 2025. We publish two reports each year, and this one offers a useful look at how this fraud is evolving across several lending verticals, including cards, auto, and consumer lending.
Keene: I know this sounds counterintuitive, but I don’t know that there are a lot of new vectors for fraud right now. Fraudsters aren’t necessarily inventing ingenious new ways to steal your money or ruin your credit, but they have figured out how to make the old methods faster and more efficient. So we are continuing to see things like first-party fraud, account takeover, and the synthetic identities that have always been around in some form. But because fraudsters can run these methods faster and more efficiently, they can now deploy them against banks that used to be off their radar because the payoff wasn’t worth it, and we’ve seen a large increase in that. Community banks and credit unions, for example, are now getting hit more than ever.
What are you seeing as the key emerging issues that companies may not be aware of now but will soon have to deal with?
Harris: Assumed Identity Abuse (AIA) is a growing issue. AIA is the fraudulent use of the PII of an individual who was previously in the U.S. on a visa, often for seasonal work or school. It works like this: someone comes to the United States legally on a seasonal work visa, establishes a presence, gets an SSN, and then goes back to their home country. Years later, a fraudster gets access to that identity and starts to use it. AIA is usually identity theft, but a much harder variant, since the victim isn’t in the U.S. and never finds out about it. There are also cases where the individual whose identity is being used is complicit in the activity. AIA is especially hard to detect because the identities will pass eCBSV, and since the people behind assumed identities are not in the U.S., impacted financial institutions are unlikely to be able to reach them to confirm whether identity theft has occurred.
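To make the AIA pattern concrete for readers, here is a minimal, purely illustrative Python sketch of the kind of heuristic a risk team might layer on top of standard verification. Every field name, data source, and threshold below is invented for this example; it is not SentiLink’s actual detection logic.

```python
# Purely illustrative: a simplified heuristic for surfacing possible Assumed
# Identity Abuse (AIA). All fields and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class IdentityProfile:
    passed_ecbsv: bool        # SSA verification result; AIA identities often pass
    had_temporary_visa: bool  # identity tied to a past seasonal-work or student visa
    last_us_activity: date    # most recent in-country footprint (credit, utility, etc.)
    reachable_in_us: bool     # could the institution contact the person domestically?


def flag_for_aia_review(p: IdentityProfile, today: date) -> bool:
    """Route to manual review when a verified identity looks dormant and offshore."""
    years_dormant = (today - p.last_us_activity).days / 365.25
    return (
        p.passed_ecbsv            # passing eCBSV alone does not rule out AIA
        and p.had_temporary_visa
        and years_dormant >= 3    # long gap since the person was plausibly in the U.S.
        and not p.reachable_in_us
    )
```

The point of the sketch is simply that verification success and contactability are separate questions: an identity can pass eCBSV while every other signal says the person has long since left the country.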
Keene: I would say that most banks at this point understand that there is a sophisticated network of fraudsters, but it isn’t like it was 20 years ago. Where fraud rings used to be centralized, they are now a distributed network of people on Telegram and Discord who are doing the same things individually, and they are hive-minding their knowledge. There is a kind of “fraud as a service” economy being built underneath us. People need to realize it is there and treat it like the threat that it is.
How are some of the more cutting-edge techniques, like synthetic identities and deepfakes, being used in fraud?
Harris: Synthetic fraud involves using a manufactured identity where the name, date of birth, and SSN do not correspond to a single real person. These identities are relatively easy to assemble and often slip past traditional controls, but they are detectable and preventable with the right tooling in place. Generative AI introduces a different class of risk. Even with real progress in prevention tech over the last few years, such as liveness checks, voiceprinting, automated document verification, and behavioral biometrics, attackers can increasingly bypass each of these with deepfakes, voice cloning, and other generative techniques.
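As a way of picturing what “an identity where the name, date of birth, and SSN do not correspond to a single real person” means in practice, here is a toy Python sketch of the core consistency question. The record shape and matching logic are simplified stand-ins invented for this illustration, not a vendor API.

```python
# Toy version of the core synthetic-identity check: do the name, date of
# birth, and SSN on an application all resolve to one real person?
from typing import Iterable, Mapping


def resolves_to_single_person(application: Mapping, records: Iterable[Mapping]) -> bool:
    """True if some known record matches the application on all three elements.

    Synthetic identities tend to match records only piecewise: the SSN belongs
    to one person, the name and DOB to another (or to no one), so no single
    record lines up on every element at once.
    """
    return any(
        rec["name"] == application["name"]
        and rec["dob"] == application["dob"]
        and rec["ssn"] == application["ssn"]
        for rec in records
    )
```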
Keene: We have allowed so many data leaks over the last 15-20 years that it is a ticking time bomb of synthetic identity. It’s only a matter of time before people get sophisticated enough, and some already are, to build a profile on anybody they want. It’s also really easy to think that these attacks only happen to the big players, but they are now hitting banks of all sizes more and more. The average bank in America is losing massive amounts to this kind of fraud every single year, and the losses are increasing as the tools continue to get more sophisticated.
Is the rapid increase we have been seeing in fraud against lenders a byproduct of increasingly sophisticated AI technologies, or are there other factors also playing a role?
Harris: Yes, but mostly no. Take first-party fraud as an example: credit washing in particular could be amplified by agentic AI that can generate credit disputes at scale. Even so, the barrier to learning to commit fraud has always been low and remains low. You don’t need advanced AI to build a synthetic identity, and there are plenty of YouTube and Telegram channels focused entirely on credit washing techniques.
Keene: In some ways. The technology has certainly made fraud faster and cheaper. And even in the immediate term, I think the social engineering element of fraud is going to shift dramatically to AI. Right now, most of it is still done manually. There may be an automated distribution network for text messages, but when anyone responds with “hey, who is this?”, someone is actually sitting there texting back. I don’t think it’s going to be that way in three to six months. I think we will see agentic chatbots, things that were probably built to do customer support, being used for this type of fraud, and there’s just no way around it. It’s going to get harder to detect and it is going to be more and more frustrating.
What steps can companies be taking to deal with the fraud issues of today and prepare for the fraud issues that are coming?
Harris: The strongest recommendation we can make is to build a culture of manual review and deep understanding of your portfolio. Fraud is always changing, and in order to understand how to react, we have to understand the data at the ground level. That means looking at individual loans and the data elements within them: which phone number was used, what the address history looks like, or how the applicant’s behavior compares to past patterns. We care about this approach so much that once a week everyone at SentiLink stops what they’re doing and spends an hour manually reviewing fraud cases. That means everyone: me, our engineers, data scientists, partnerships team, even our recruiters and accountants. It’s an important part of our culture and it keeps us on top of what the fraudsters are doing. Combine this approach with the right intelligence tools and datasets, and you build the foundations of a truly effective fraud prevention program.
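To illustrate the kind of element-level signals Harris describes, here is a toy Python triage rule that routes anomalous applications to a human reviewer. The fields and thresholds are invented for this sketch and are not SentiLink’s criteria.

```python
# A toy triage rule in the spirit of the advice above: inspect the individual
# data elements of a loan and route anomalies to a human reviewer.
from dataclasses import dataclass


@dataclass
class LoanApplication:
    phone_matches_applicant: bool  # is the phone number tied to the stated identity?
    address_history_years: float   # depth of address history on file
    behavior_anomaly_score: float  # 0.0 (typical) to 1.0 (highly unusual)


def needs_manual_review(app: LoanApplication) -> bool:
    signals = [
        not app.phone_matches_applicant,
        app.address_history_years < 1.0,   # thin or brand-new address history
        app.behavior_anomaly_score > 0.8,  # behavior diverges from past patterns
    ]
    return any(signals)  # any single strong signal earns a human look
```

The design point is that rules like these are not a substitute for human review; they decide which cases get the hour of human attention that Harris describes.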
Keene: I would say that the correct solutions have to look beyond trusting that what a person presents is real. So there are two things that companies are going to have to do differently. The first is what we do when someone first presents themselves to us. That step has always been document-driven: we try to verify that they are the same person as in the document they’re showing us. But now we need to verify whether the person presenting a document is even a human, because you could have a deepfake presenting a very convincing, very passable, biometrically sound passport of the deepfake. So it’s no longer just a document-to-face match; it has to be something more foundational, and we have to do that right now.

And then over the next three to five years, we are all going to have to become our own password, not just in a Face ID way but deeper than that: the biometrics you exhibit and the cognitive and emotional responses you show when you log into a site are going to become the patterns that allow you to log back in. It isn’t going to be a long string of digits, which quantum computing may make guessable in the very near future anyway; it has to be something so uniquely you that nothing else could replicate it.
Justin, Naftali, thank you both for your time and your excellent insights on these critical topics. More information on their backgrounds and companies is below.
SentiLink stops fraud and verifies identities for more than 400 institutions throughout the United States, including eleven of the fifteen largest banks and two of the three largest telcos. Every day, SentiLink verifies more than three million identities and prevents more than 65,000 fraud attempts. SentiLink employs 150 people and has raised $85M in venture capital from a16z, Craft, and others since the company’s founding in 2017. Before co-founding SentiLink, Naftali Harris was the first data scientist at the online lender Affirm, where he built and led the Risk Decisioning Team.
Moveris brings together science, technology, and trust, with a team of researchers specializing in psychophysiology and human-liveness detection, helping organizations distinguish real human presence from AI-generated signals. With decades of experience in psychophysiology and human-computer interaction, the Moveris team saw a growing need for reliable ways to ensure authenticity, especially as deepfakes and AI-generated content continue to become more prevalent. Before co-founding Moveris, Dr. Justin Keene spent 12 years as a research professor, holding a dual Ph.D. in cognitive science and mass communication and running one of the world’s largest biometrics labs.