Placing You First Insurance Podcast by CRC Group

AI in Banking: The Risk Beyond the Reward

CRC Group Episode 104

ChatGPT reached a million users in just five days, while Bank of America's AI assistant Erica has handled over two billion customer interactions without human intervention. The AI revolution in banking isn't coming—it's already here, transforming how financial institutions operate and creating complex liability questions few insurance policies directly address.

Mark Waldeck, Team Leader and Broker with CRC Chicago, takes us beyond the marketing hype to explore the serious insurance implications of AI implementation in banking. While chatbots and automated systems promise efficiency and competitive advantage, they create significant exposures most banks haven't fully contemplated.

Current insurance policies remain largely silent on AI-specific risks, creating ambiguity about whether claims would fall under professional liability or cyber coverage. Employment risks also loom large as banks leverage AI to reduce headcount, potentially triggering discrimination claims if not handled properly.

Perhaps most concerning is the regulatory vacuum surrounding AI implementation. Without clear guidance from regulatory authorities, financial institutions are developing practices that may later require significant overhauls once regulations catch up with technology.

Connect with your CRC Specialty broker to better understand how your insurance program addresses these emerging AI risks and explore additional resources at crcgroup.com.

Visit REDYIndex.com for critical pricing analysis and a snapshot of the marketplace.

Do you want to take your career to the next level? Join #TeamCRC to get access to best-in-class tools, data, exclusive programs, and more! Send your resume to resumes@crcgroup.com today!

Amanda Knight:

Welcome back to Placing You First, the podcast that keeps you at the forefront of insurance insights and industry shifts. I'm Amanda Knight, joined by my co-host, Scott Gordon.

Scott Gordon:

Today we're diving into a conversation that's generating a lot of buzz: the role of AI in banking and financial services. While the efficiency and profit gains are exciting, the risks could be game-changing for carriers, brokers and clients alike.

Amanda Knight:

Here to break it down for us today is Mark Waldeck, president of CRC Chicago and longtime specialist in financial institution risk. This is the Placing You First podcast from CRC Group. This podcast features news and insights from a vast knowledge base of more than 5,500 associates who write more than $30 billion in premium annually. Plus, we give you the latest information on what's happening at CRC.

Scott Gordon:

This is the Placing You First podcast, and now, the hosts of the podcast, Amanda Knight and Scott Gordon.

Amanda Knight:

Mark, welcome to the show.

Mark Waldeck:

Thanks for having me.

Amanda Knight:

Well, you know, as I was preparing for this episode, I ran across some interesting info that I thought I would share with the class before we take a deep dive today. Are you guys ready? I learned some things, some things I didn't know, and it could have even been AI teaching me this through Google. It's hard to say. But did you know? Eliza was actually the first chatbot, and she was created, hold on to your hat, in 1966.

Scott Gordon:

Was it for a James Bond movie?

Amanda Knight:

59 years ago, yeah, okay. So I was like, this can't be accurate, but it is. She was created 59 years ago at MIT. She was this, like, simulated psychotherapist that operated by rephrasing users' inputs as questions, sort of like a therapist would do to act as a mirror to a patient, right? So it sounds like she is absorbing information and giving it back to you, but really she's just reflecting it, just rephrasing things. She wasn't actually learning, necessarily, but people got really attached even though Eliza had no real understanding. She didn't involve, I guess, actual machine learning.

Scott Gordon:

Did she run off?

Mark Waldeck:

Of punch cards. I'm thinking the computer was probably the size of a Buick, right?

Scott Gordon:

Yeah, probably a Buick showroom.

Amanda Knight:

Also true. So here's our second fun fact, and this will surprise no one, probably: ChatGPT, maybe the most well-known form of AI that is kind of popular at the moment. ChatGPT reached 1 million users in five days, less than a calendar week. In comparison, it took Instagram, another highly popular app, two and a half months to reach a million users, and it took Netflix three and a half years to reach a million users. So this makes ChatGPT one of the fastest-growing consumer apps, period, in history.

Amanda Knight:

I mean, I was like, that makes sense. I can see that. And then Erica. I don't know, I think we could have a whole other podcast on why we name all of these chatbots with women's names. We've got Eliza. Now we're going to talk about Erica. Erica, Bank of America's AI assistant, was developed in 2018, and we actually discuss her in the article that goes along with this podcast. But Erica has had over 2 billion interactions. That's more than six times the population of the entire United States, and she's managed all of them without human agents. Just Erica the chatbot. And then we'll end our fun little fact-finding mission here with a rogue fact: one chatbot once claimed it wanted to be alive. Microsoft's AI chatbot Tay launched in 2016. It started out okay, but then quickly spiraled out of control after Tay started learning from Twitter. Which, shocker, right? Tay had to be shut down within 24 hours.

Scott Gordon:

I think I remember why. I think it was getting pretty anti-Semitic in the comments. Thanks, Twitter. Thanks a lot.

Amanda Knight:

Twitter, now known as X. So thanks a lot. I mean, these facts alone tell us that AI chatbots can change the game and definitely present some substantial risks. So now that we've all got some small talk fodder for our next cocktail hour, we can move on to the meat of this conversation with Mark. So, Scott, I'm going to let you kick it off.

Scott Gordon:

What are you dying to know? Well, now that we've covered the really important stuff, Mark, can you give us a quick overview of how banks are currently leveraging AI and how fast it's moving?

Amanda Knight:

Does every bank have a chatbot, or is it just the big boys like Bank of America?

Mark Waldeck:

Well, it started with the big ones, and now it's starting to drop down into the smaller and medium-sized banks, and I think the original intent was to offer that first layer of customer assistance. If it was a mundane issue or need, the chatbot could handle that, and it's expanded to do more and more. You're not just checking your balance or following through to see if a specific check number has cleared. Now it's going probably half a dozen to a dozen steps beyond, and then at what point do you move from providing factual data to interpretive results and commentary? And I think you get a different answer from every bank on what they're doing and how they're using it.

Mark Waldeck:

I think the challenge, though, for us as users, any of us that have accounts, is this, and I'll take it back to insurance since we have an insurance audience: when I have a really tough account, I want a human underwriter signing off on it, I want it documented, and I want to know that that guy's documentation is in my file if I ever find myself in a dispute with a claim. Is it covered? Is it not covered? Hey, we talked about this, I shared all this information. And I think the challenge for some of the banks now is that, due to the lack of guidance, they may have taken this in a direction that goes further than we're ready for, both as a society and in terms of how we're going to interact with our legal system if something does go wrong, because it's no longer liability possessed by an individual across from you; it's the liability of the software's conclusions, or maybe it's even an error. I don't know.

Amanda Knight:

Well, and we've also talked about, or you just kind of mentioned, you like having a human to sign off on things and sort of put the stamp of approval on them. So what kind of conversations are you having with retail agents or their clients about AI-powered tools and their insurance implications? Are they even, at this point, savvy enough to come to you first? Are they pulling you in? Oh, by the way, we launched this chatbot six months ago and now we need to cover it. Are they doing a better job of being proactive? It seems like that can be challenging when it's something that's moving as quickly as AI is moving.

Mark Waldeck:

Unfortunately, no. It's up to me to pull that out, so it's sort of a standard topic that I bring up at every meeting: hey, where are we with this? Are you considering it? And if so, how? And then what fence have you put around those liabilities so that it's not open-ended? Because there aren't a lot of choices for these large commercial clients, in this case a commercial bank. So if there's only a couple of good tools that they can tap into, it's no different than doing business with the cloud or any other situation. You kind of have to take it, and unfortunately a lot of that liability ends up back in your lap. You can't pass it up the food chain, unfortunately.

Scott Gordon:

Yeah, and technology. I remember how crazy it was the first time you realized you could deposit a check from your phone. It blew my mind. I know it's not exactly the AI subject, but technology in general and banks, it's come so fast. It just seems like just the other day I was having to sign it and take it into the bank.

Amanda Knight:

I listened to a podcast the other day where they were talking about how they had to explain to the younger listeners how, in the drive-thru, you had the tube that you put the check or the money in and it sucked it into the bank. I thought that was amazing as a kid, because then they sent you a sucker, right, when they knew there were kids in the car, and the younger podcast listeners were like, what are you talking about? Because they've never not had AI or a chatbot or an app to do all of these things. Some of them have never even been inside a bank.

Mark Waldeck:

And I think remote deposit's a good example of my earlier comment. If you have a $10 check, that's fine if you want to take a picture of it and deposit it. But if you had a $10,000 check, wouldn't you want to go to the teller line and get a receipt and know that that money was received and posted to your account, as opposed to wondering, did it go through, did it not go through? I'm still old school. I don't know how someone in their 20s feels about that, but when you get into big dollars, again, I want a human in the middle of that transaction.

Scott Gordon:

Yeah, sure. I was one of the last people to use, like, a paper check to pay. I forget, there was one bill that I always paid with a check, and my daughter made fun of me. And I was like, OK, I need to get with the times. Sorry. So let's get back to the questions. One of the biggest concerns we're hearing now is how reliant banks have become on a few major tech providers. So, Mark, what are the risks there, and who ultimately bears the liability in the event of, say, a data breach?

Mark Waldeck:

I sort of touched on it earlier. There's a short list of providers; let's use processing of bank statements as an example. There's only about three or four good players out there and they know it, and, as a result, if you want to hire them to process your bank statements, most of the liability is coming back to you. Fiserv is not going to take the exposure just because they're processing your statements for a fee. So if they make a mistake, for the most part it's your problem, it's your brand, it's your goodwill, it's your client base, and if those people want to sue you, good chance Fiserv's not going to stand shoulder to shoulder with you. You're on your own.

Mark Waldeck:

So it is a very small universe, and, as a result, it puts a lot of burden on us to at least review the contracts. And so, to answer the earlier question about whether the banks are contacting me or I'm contacting them, that's how I engage them: are you doing this? And if you are, I need to see those contracts. It doesn't mean I can fix them, but it does pivot the conversation from, gosh, I didn't realize we had that much exposure, we were assuming, to, now that we know, is the insurance we're buying adequate for the exposure we've been sort of awakened to? And I tell everybody, let's go through and pretend you had a claim today, because it will be systemic, it won't be a one-off. All your users, all your account holders are going to be hit, probably at the same time. So how might we go through this if we were doing a fictitious doomsday scenario? Because that's what really gets people thinking.

Amanda Knight:

Well, and we've kind of touched on this next question. When we're thinking about a chatbot that is powered by a third party, one of these major tech providers, and it collects and stores customer data, we have to think about how a bank and their insurer are staying ahead of compliance, contract risks like you mentioned, and coverage gaps. It would seem to me that you kind of have to game it out, just like you're doing with people, right? Because the risk is invisible to so many people. You assume, and we all know what assuming does, right? You assume that if the chatbot exists and it's powered by one of the largest tech providers, et cetera, it must be safe, it must be compliant, and I don't think that's always necessarily the smartest way to do this. So it sounds almost like you're saying they're not doing a great job of staying ahead of some of these things, unless you happen to be the one that they're dealing with and bringing it up.

Mark Waldeck:

Yeah, I think it's up to the broker and the general counsel to drive that conversation, because they see this as a marketing tool: if we fail to deliver these customer convenience options, those same customers will leave us and go somewhere else, and it could be to a non-sanctioned bank. We have a lot of new entities that have been created in the last 10 to 15 years that are not what I would call licensed, chartered banks, either at the federal or the state level, and, as a result, banks are competing with people with a lower cost structure. It used to be just credit unions, and they're more loosely regulated, but now it's moved well beyond that and there's tons of payday lenders; they're everywhere. So, as a result, I think banks feel a lot of pressure in trying to keep up.

Amanda Knight:

Oh, for sure.

Scott Gordon:

Let's talk about employment risk with banks using AI to reduce their headcount. How might that trigger EPL claims, especially around like age and wage discrimination?

Amanda Knight:

I mean, I saw, wasn't it in the news like last week, guys, that Microsoft laid off, or maybe it wasn't Microsoft, maybe it was somebody else, but I feel like it was Microsoft, laid off like 9,000 people in their drive toward more AI implementation. It could be laying off a small number or a large number like that, but it seems to me you could end up in some litigation over implementing something like this and eliminating jobs. Are there rules about that, Mark? Can you just do that?

Mark Waldeck:

You can do it, but the employment laws that exist are the same whether it's AI-driven or any other reason. You have to go about it methodically and you need to provide severance. There needs to be proper messaging on the front end. And the way I kind of got on this platform originally was, I was in a meeting with one of my clients, and the CEO was saying, in a very non-threatening way, you know, we think we're going to improve our expense ratio because we'll be able to use AI chatbots more, and that means we'll need fewer employees, and of the employees we keep, we won't have to pay as much overtime to them because the chatbot will take care of that for us.

Mark Waldeck:

And the danger with that type of comment is it can be twisted quickly to, oh, you're just going to use this to get rid of all your older employees who are too expensive. It may be an innocent attempt by the bank to become more competitive. It's going to be twisted by the plaintiff's bar as, you used this as a weapon to get rid of our older employees or our minority employees or whoever. I just think banks have to be really protective of their brand when they're doing these things, because it'll get used against them quickly, even if it was never the intent: we didn't employ AI to try and get rid of employees, it happens to be a byproduct of what we're doing. And I think we, and our insureds, need to be more sensitive to that brand and the goodwill that goes with it, because it could really harm us and we might not recover from it.

Scott Gordon:

Well, and you mentioned Microsoft. Remember when the extent of Microsoft's AI was Clippy? Remember?

Amanda Knight:

Clippy, yes, I remember Clippy, when he'd wave at you: can I help you?

Scott Gordon:

Well, they've come a little bit further than that now, yeah. And we've seen Microsoft's and other AI chatbots give wildly inaccurate information, to put it mildly. And you know, we're talking about E&O, which stands for what, kids? Errors and Omissions. So how do you see something like that, these chatbots giving bad information, intersecting with traditional E&O coverage?

Mark Waldeck:

Great question. The policies are silent in this space, so as a result, it's very difficult to say whether a banker's professional liability policy, that's sort of a bank's E&O coverage, or their cyber protection might trigger. I still think it belongs within the banker's professional realm. That's where the coverage should reside. The medium in which it's being delivered shouldn't really matter, in my opinion, but one could argue, if there was a software error embedded within the chatbot, does that take us back towards a cyber discussion? Don't know the answer. The losses, historically, once we get there, will answer that question.

Mark Waldeck:

But what I tell everybody is, I can see chatbots moving in a direction where they have a file on you because you've used them before as an account holder of Bank of America, and in that example Erica says, hey, we've noticed that as long as your income hasn't changed much and you haven't taken on a lot of debt, you'd be eligible for a mortgage up to X. Or, have you thought about a home equity loan? That's where I think we're going to find ourselves in trouble, because the chatbot is making assumptions on what your capacity might be to take on more debt. There's a million detours along that road, and most of them are bad. They're all going to lead to bad conclusions.

Amanda Knight:

Well, it kind of answers our next question I was going to ask: are we headed toward this wave of disputes and denied claims? Because, as far as I understand it, it's not like we're seeing a huge wave of these claims yet, as far as chatbot errors or EPL claims, et cetera, but it sounds like they're coming, and until those are adjudicated or settled or whatever, I don't know, I guess the answer is no one really knows yet how policies or coverage are going to respond.

Mark Waldeck:

Yeah, and remember, there's no regulatory guidance either. So the bank regulators, they don't understand, or they don't really know how to guide and advise all of their clients. So the banks are sitting back trying to develop best practices, but there's no one really giving them good direction. So I've said to everyone, you need to try and be best in class so that when the regs come out, you're best positioned to already be in compliance.

Amanda Knight:

That kind of leads into your final question, Scott.

Scott Gordon:

I was going to say we covered this then. The regulatory side is still developing, but it's coming. So what should banks and their insurance partners be doing now to prepare?

Mark Waldeck:

I mean, I would start with contract review. I would start with identifying where the liability resides, unfortunately mostly with the constituent banks, and then I would encourage them to build out the most robust controls that they can, so that they're ready if and when regulatory requirements come out. Because that's usually the way the game plays: by the time the regs come out, the banks have already been operating in an environment with that tool, AI being one of them, for a while, so they've developed a workflow, et cetera. Now they've got to change all that, otherwise they're not in compliance.

Mark Waldeck:

I still think there's a huge reluctance by our regulators to get anywhere near the tech sector. We've seen it in other ways, even when it just comes to the placement of cyber insurance for the banks. That bank regulator typically is just checking a box: do you have a policy, yes or no? Is it any good? Is the limit adequate? Is it covering, you know, maybe a ransomware supplement? I don't think the regulators are drilling very deep. This is just part of their overall audit when they visit a bank, and it's probably a very quick, yeah, I looked at a dec page and I can see that they buy a policy from a carrier. That's all they need to know.

Scott Gordon:

I bet you could get lost in the weeds in that very quickly, though, when you drill down.

Mark Waldeck:

Yeah, yeah, it can be. You get a little dizzy trying to sort through all the...

Amanda Knight:

But Mark is a specialist at that. He lives in the weeds; he looks through all of this. I know he does, 'cause we've talked about it.

Mark Waldeck:

Yeah, it keeps me up at night. But the good news is I think there's plenty of work here for us to do as brokers, and I think we have the right culture at CRC that we're going to slow down and do it once and do it right, because I pick up a lot of programs from my competitors and I'm surprised sometimes that the cyber just doesn't get more scrutiny, and I view AI as an extension of that. I think people are looking at it too much as if our placements are in silos, and unfortunately they're all interconnected, and the banking industry is a great microcosm of that: you're going to have one wrongful act, but you're going to have multiple policies triggering, so I think the AI discussion falls into that as well.

Amanda Knight:

And I'm sure it's something we'll talk about again with you, Mark, I have no doubt. We'll see you again in another podcast.

Scott Gordon:

Along the way. Unless this all goes away in the next few months, right?

Amanda Knight:

Right, that's totally happening. Well, Mark, before we let you go, it's time for a quick lightning round. We like to call it rapid fire. So we've got three rapid-fire questions for you. All you have to do is say whatever comes to mind first. So, question number one: what's one piece of tech you can't live without? Is there one?

Mark Waldeck:

That's easy, just your iPhone.

Amanda Knight:

If you weren't in insurance, what job would you be doing?

Mark Waldeck:

I almost went to law school and this is a detour, so I'm glad I took the detour.

Amanda Knight:

Although, with your attention to detail, I'll just go ahead and say I think you would have been a great lawyer. All right, last one: what's your go-to coffee order or workday ritual?

Mark Waldeck:

Yeah, I'm not really a coffee drinker, but I would say that daily ritual is I like to get started real early, so all the email traffic that's already backing up, I'm ready for it. And that way, by the time I get to the office, if there's a fire, I've already dealt with it, or at least I've already thought it through, so during my commute I'm already, you know, letting the wheels turn, so I'm not panicked. Whereas if I don't get started early, next thing you know you've got a hundred emails and they're all rushes.

Amanda Knight:

Ease up, people. They keep him awake at night, y'all. Ease up on Mark.

Scott Gordon:

Okay, hold off on that email till the next day. I mean, don't ease up, just make it a little more pointed and to the point. Mark doesn't have all day to read paragraphs full of info. That's right.

Amanda Knight:

Well, Mark, thanks for joining us and sharing your insights. We appreciate it.

Scott Gordon:

Appreciate it.

Mark Waldeck:

Thanks for having me.

Scott Gordon:

The risks surrounding AI in banking are complex, but it's clear: agents and brokers who stay informed will be better positioned to protect their clients.

Amanda Knight:

For more insights, check out our tools and intel articles on crcgroup.com and connect with your CRC Specialty broker to stay ahead of emerging risks. Thanks for listening. We'll see you all next time.
