LEND360 attendees invited to the CEO Luncheon last year had an opportunity to hear from leading AI expert Nick Schmidt, Founder and CTO of SolasAI and BLDS, LLC. Given that artificial intelligence is unquestionably one of the hottest topics in financial services, we spoke with Nick to get an update on what’s happening in AI, what financial services and fintech leaders need to know about it, and how the industry might be affected by this November’s elections.


The AI space has been dynamic since you spoke at LEND360. What do you see as the key developments since last October, and what do you think people need to know first and foremost about AI?

Since October, there have been few real surprises in the AI space; instead, we have seen two trends mature.

First, the regulatory focus on fair lending and algorithmic fairness has intensified. Last April, the CFPB emphasized that it expects lenders to perform ongoing fair lending monitoring of models and to search for less discriminatory alternative (LDA) models before putting them into production. Initially, the statement did not seem to have much effect, but it is now fair to say it hit like a bomb. While searching for LDAs has long been standard for large lenders, this push by regulators has led to a rush of small lenders and third-party modelers adopting stricter fair lending practices. Fortunately, advances in machine learning have reduced the compliance costs associated with this monitoring and development. For instance, at SolasAI, we work with many smaller banks and fintechs to automatically identify and mitigate potential fair lending risks in machine learning models.
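
To make that concrete, here is a minimal sketch of the kind of disparity monitoring and LDA search being described. It is illustrative only and is not SolasAI’s methodology: the adverse impact ratio (AIR) is one widely used fair lending metric, while the synthetic data, the approval threshold, and the feature-dropping search strategy are assumptions made for this example.

```python
# Minimal sketch: fair lending monitoring via the adverse impact ratio (AIR),
# plus a naive search for a less discriminatory alternative (LDA) model.
# Illustrative only, not SolasAI's methodology. The synthetic data, column
# roles, and feature-dropping search are assumptions made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def adverse_impact_ratio(approved, protected):
    """AIR = approval rate of the protected group divided by the approval
    rate of the control group. Values well below 1.0 (e.g., under the 0.8
    'four-fifths' threshold) are a common flag for potential disparate impact."""
    return approved[protected].mean() / approved[~protected].mean()

def evaluate(model, X, y, protected, threshold=0.5):
    scores = model.predict_proba(X)[:, 1]
    approved = scores >= threshold
    return roc_auc_score(y, scores), adverse_impact_ratio(approved, protected)

# Baseline model fit on synthetic stand-in credit data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))           # stand-in credit attributes
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
protected = rng.random(5000) < 0.3       # stand-in protected-class flag

base = LogisticRegression().fit(X, y)
base_auc, base_air = evaluate(base, X, y, protected)

# Naive LDA search: retrain with each single feature dropped, keeping
# candidates that improve AIR while staying near baseline performance.
candidates = []
for j in range(X.shape[1]):
    X_alt = np.delete(X, j, axis=1)
    alt = LogisticRegression().fit(X_alt, y)
    auc, air = evaluate(alt, X_alt, y, protected)
    if air > base_air and auc >= base_auc - 0.01:   # tolerance is a choice
        candidates.append((j, auc, air))

print(f"baseline: AUC={base_auc:.3f}, AIR={base_air:.3f}")
for j, auc, air in candidates:
    print(f"drop feature {j}: AUC={auc:.3f}, AIR={air:.3f}")
```

Real LDA searches are far more sophisticated than dropping one feature at a time, but the basic loop is the same: measure disparity, generate alternative models, and compare the fairness and performance trade-offs before choosing one to put into production.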

The second trend relates to generative AI (Gen AI), where use is expanding rapidly. This has attracted attention for both its potential and its challenges: while lenders explore exciting practical applications, incidents like a chatbot mistakenly offering to sell a car for $1 highlight unresolved issues. Further, Gen AI poses reputational risks through inappropriate or offensive content and increases business risks through inaccuracies, data leaks, or misguided customer interactions. Additionally, regulators are responding with increasing scrutiny, and I expect strict enforcement. This dynamic evolution in AI underscores the need for vigilant adaptation and compliance in the online lending sector.

While both of these issues reflect increased risk – and measuring and mitigating those risks is essential – it is also important to recognize that AI can be used safely and effectively. It will enable innovation and help drive competitive advantage. There really is no doubt that companies that don’t effectively use AI will be left in the dust. But the key difference will be between companies that use AI effectively and those that implement it without adequate controls and oversight.

So, for the industries that OLA deals with, the increased focus on algorithmic fairness and fair lending and the rapid maturation of generative AI present opportunities and challenges for lenders attempting to innovate quickly and safely. Smart lenders and vendors are proceeding cautiously but are not letting themselves be caught flat-footed by avoiding necessary change.


With both federal and state officials moving quickly to establish new AI policies, where should industry participants be focused and what should they anticipate?

I see that most regulators and industry participants are looking to the National Institute of Standards and Technology (NIST) AI Risk Management Framework as the basis for regulation and law around AI. That is actually a very good thing for the financial services industry, because we are already largely aligned with much of the framework through existing regulation – particularly regulations coming from the OCC and the Federal Reserve. As a result, industry participants are well positioned to put into practice the guidance that’s coming out around AI.
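
As a rough illustration of what that alignment exercise can look like, the sketch below maps familiar model-risk-management controls onto the four core functions of the NIST AI RMF (Govern, Map, Measure, Manage). The specific controls listed are hypothetical examples, not a prescribed or complete mapping.

```python
# Hypothetical mapping of existing model-risk controls to the four core
# functions of the NIST AI Risk Management Framework. The control names are
# illustrative examples only, not a prescribed or complete mapping.
NIST_AI_RMF_ALIGNMENT = {
    "Govern":  ["model risk policy and committee oversight",
                "third-party / vendor model due diligence"],
    "Map":     ["model inventory with intended use and known limitations",
                "documentation of data sources and data gaps"],
    "Measure": ["ongoing performance and drift monitoring",
                "fair lending / disparity testing (e.g., AIR, LDA search)"],
    "Manage":  ["issue tracking and remediation plans",
                "fallback and decommissioning procedures"],
}

for function, controls in NIST_AI_RMF_ALIGNMENT.items():
    print(f"{function}:")
    for control in controls:
        print(f"  - {control}")
```

For institutions already subject to existing model risk management guidance (for example, the Federal Reserve’s SR 11-7 and OCC Bulletin 2011-12), much of this mapping is a documentation exercise rather than net-new work, which is exactly the point made above.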

Where I see a change, and where I see a concern, is for smaller lenders and vendors, particularly third-party vendors supplying models or AI systems to larger institutions. Previously, they were largely exempt from these standards. Today, though, we are seeing more and more that this is no longer the case: they are going to be held to a much stricter standard than they were before.


Given the dynamics of economic populism and growing suspicion of emerging technologies like AI, what impact will November’s election have on the future?

If President Biden is reelected, we will definitely see increased enforcement, and likely an acceleration of what we have seen in his first term. That’s because the regulators have a much better understanding of AI than they did before, and because the use of AI has expanded so much. In other words, there will be more opportunities for enforcement than there were before, and regulators will be better prepared for those opportunities.

If there is a change in administration, the assumption is that there will be a decrease in enforcement, but I’d say don’t hold your breath. I testified in front of the Senate Housing Committee in January, and the questioning and concern were amazingly bipartisan. So, if you’re going into a new administration thinking that things are going to massively change, that may not happen.

Furthermore, the states are not going to be significantly affected by the elections, and we will continue to see increased enforcement coming from them. In New York, for example, the Department of Financial Services is likely to move hard. Other states where we can expect significant action are what I would call the “usual suspects”: California, Illinois, and Massachusetts. But even in a state like Arizona, the Attorney General has recently pursued litigation against a company over its use of a housing algorithm. So, while it will certainly be the usual states leading the way, there is ample opportunity for regulators across the country to make an issue of AI.

Given all these potential areas for action, whether federal or state, the key is making sure that you have effective AI controls and policies in place for your organization, and that your organization is aligned with the NIST standards as closely as possible. It’s very important to understand that not everyone has to make a massive compliance effort to use AI, but they’ve got to do something.