ChatGPT now has hundreds of millions of weekly users. At many companies, employees also use the publicly available large language model in unapproved, unsupervised ways.
This prolific use of gen AI means all companies, including banks, need AI-use policies that spell out where and how employees may use such technology, not only ChatGPT but also Anthropic’s Claude, Microsoft Copilot, Meta’s Llama and Google Gemini.
“Every company at this point has to have some kind of AI policy,” said Aslam Rawoof, partner at Benesch Law in New York. This is more true for banks than for less regulated companies, he said, due to the risk that customer data could get fed into a large language model.
“People get excited, and maybe they start typing client information into ChatGPT,” Rawoof said. “But when you do that, ChatGPT takes all inputs it receives from anywhere in the world and trains itself. Even the people who designed it don’t know how it processes information. So the client information that you give here in New York could pop out in Tokyo tomorrow in response to some questions.”
The same is true for sensitive company information, he said. “You don’t want to upload any documents like the company’s financial documents, certainly not without the approval of your technology professionals,” Rawoof said. Communications with regulators and attorneys also fall into this category.
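To make that kind of rule operational, some institutions put a screen between employees and public chatbots. The sketch below is a hypothetical illustration, not any bank’s actual control: it uses a few regular expressions to flag likely Social Security numbers, account numbers and email addresses before a prompt is allowed to leave the building. A real deployment would rely on a vetted data-loss-prevention tool rather than a handful of patterns.

```python
import re

# Hypothetical pre-submission screen; patterns are illustrative only.
SENSITIVE_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account number": re.compile(r"\b\d{10,17}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize the loan history for account 12345678901."
hits = screen_prompt(prompt)
if hits:
    # Block the request and tell the employee why.
    print("Blocked before reaching the public model:", ", ".join(hits))
else:
    print("No obvious sensitive data detected; prompt may proceed.")
```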
What’s off limits
A few months after OpenAI released ChatGPT to the public in November 2022, JPMorgan Chase, Citigroup, Bank of America, Deutsche Bank, Goldman Sachs and Wells Fargo all banned employees from using it.
Most of those bans on public-facing generative AI have since been lifted (though some institutions still block DeepSeek’s model) and replaced with generative AI use policies.
Some uses of public gen AI are generally off limits in banks, said Chris Nichols, director of capital markets at SouthState Bank in Winter Haven, Florida.
“Most banks that I’ve spoken with restrict the use of public-facing generative AI with confidential information, so that is a clear line in the sand,” Nichols said. “Most banks came to the conclusion early on that you’re not going to put proprietary information from your bank or customer information out into the public version of ChatGPT.”
Other uses of generative AI pose little risk, Nichols said. For instance, using large language models to retrieve information from a database is generally considered benign. “There’s no great intelligence there, and all you really need to ensure is accuracy, and that it’s not toxic,” he said.
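As a rough illustration of why that pattern is considered low-risk, consider a setup in which ordinary SQL does the actual lookup and the model only rewords the rows it is handed. The sketch below is a generic example under assumed names, with an in-memory SQLite table standing in for an internal database and a plain string standing in for the call to whatever approved model a bank hosts.

```python
import sqlite3

# Toy table standing in for an internal database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branches (city TEXT, hours TEXT)")
conn.execute("INSERT INTO branches VALUES ('Winter Haven', '9 a.m.-5 p.m.')")

def branch_hours(city: str) -> str:
    # The lookup itself is ordinary SQL; accuracy depends on the
    # query and the data, not on the language model.
    row = conn.execute(
        "SELECT hours FROM branches WHERE city = ?", (city,)
    ).fetchone()
    if row is None:
        return "No record found."
    # In a real deployment, an approved model would turn this grounded
    # context into a conversational answer; a plain string stands in
    # for that call here.
    return f"Branch hours for {city}: {row[0]}"

print(branch_hours("Winter Haven"))
```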
What’s in banks’ AI policies
“The AI policies that I have seen at most banks are pretty immature, and I don’t mean that pejoratively,” Nichols said. “AI is changing so rapidly that it’s hard to keep a policy up to date.”
Banks’ AI-use policies need to clearly delineate how employees can or can’t share sensitive information, such as customer and company data, with AI models, Rawoof said.
They should lay out objectives, the scope of the effort, roles and responsibilities, high-level standards and a cadence of AI policy committee meetings, Nichols said. (All other concerns should go into an AI model risk governance document, he said, which governs AI models the bank buys or builds itself.)
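Purely for illustration, the skeleton Nichols describes could be pictured as a structured record; the field names and sample values below are assumptions, not a regulatory template or any bank’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class AIUsePolicy:
    # Fields mirror the elements Nichols lists; names and values
    # are illustrative only.
    objectives: list[str]
    scope: str
    roles: dict[str, str]        # role -> responsibility
    standards: list[str]         # high-level dos and don'ts
    committee_cadence: str       # how often the AI policy committee meets

policy = AIUsePolicy(
    objectives=["Protect customer data", "Enable safe productivity gains"],
    scope="All employees; all public-facing generative AI tools",
    roles={"CISO": "owns enforcement", "AI committee": "reviews and updates"},
    standards=["No customer or proprietary data in public models"],
    committee_cadence="quarterly",
)
print(policy.committee_cadence)
```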
Rawoof recommends starting by conducting anonymous employee surveys to understand how people are using AI and obtaining input from stakeholders across the company.
“But don’t exhaust all possibilities and try to draft a very elaborate AI policy,” he said. “Just do something relatively simple to start, and then make sure that you review it on a periodic ongoing basis, and then add more detail to it as you learn more, because we’re still very much in the early stages of AI. Taking a year and having five AI subcommittees deliberate to write the perfect policy is silly because it’ll probably be obsolete the moment you produce it.”
The AI policy working group needs to report up to someone at the C level. There should be outreach to the board of directors as well, Rawoof said.
“The board should provide some sort of statement or support for the initiative,” Rawoof said. “Otherwise people won’t take it seriously.”
In Reddit discussions, users have reported that they successfully got ChatGPT to write a policy for them.
“I would not recommend that,” Rawoof said. “I would start by doing a survey to figure out how people at the company are actually using AI.” It needs to be anonymous, with no recriminations, because people might not be following existing company rules.
Adopting an AI policy and then failing to follow it is a dangerous practice, Rawoof said.
“Putting my lawyer hat on, if you have a policy and don’t follow it, you’re basically giving plaintiffs a roadmap to sue you, because in discovery, they’ll say, show me all of your corporate policies and this AI policy will show up,” he said. The plaintiff could easily ask what process the company used to enforce the policy. “If no one has a good answer, then you’ll lose the case right there because the company didn’t even follow its own policies.”
SouthState Bank has had an AI policy and an AI working group from the start, Nichols said.
AI policies can suffer death by committee if too many people from different departments (risk, legal, sales, HR and so on) are involved, each with their own point of view.
This is “exactly what most banks have gone through over the past two years,” Nichols said.
One issue that has come up in the last year and a half is that AI applications are getting embedded in hardware or existing software, without users being given a choice or even being told it’s happening.
“It’s showing up everywhere,” Nichols said. “Technically, per our policy, we need to approve that, it needs to go on an inventory sheet, the bank should test it out, see if it’s suitable, see if it gives you accurate answers.”
While these hidden AI modules may not matter in areas like information retrieval, they do matter for tasks like credit underwriting. “We want to know exactly what it’s doing, and it’s hard to figure that out,” Nichols said.
As a result, SouthState is adding several questions about AI and AI usage to its vendor onboarding process.
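A minimal sketch of what one entry on such an inventory might look like, with hypothetical field names chosen to reflect the checks Nichols describes rather than SouthState’s actual process:

```python
from dataclasses import dataclass

@dataclass
class EmbeddedAIRecord:
    # One row on a hypothetical AI inventory sheet.
    vendor: str
    product: str
    ai_feature: str            # where AI is embedded in the product
    disclosed_by_vendor: bool
    tested_for_accuracy: bool
    approved: bool

inventory = [
    EmbeddedAIRecord(
        vendor="ExampleCo",            # hypothetical vendor
        product="Loan origination suite",
        ai_feature="auto-filled credit memos",
        disclosed_by_vendor=False,     # the problem Nichols flags
        tested_for_accuracy=False,
        approved=False,
    ),
]

# Anything undisclosed or untested gets flagged for review.
for record in inventory:
    if not (record.disclosed_by_vendor and record.tested_for_accuracy):
        print(f"Review needed: {record.vendor} / {record.product}")
```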
Enforcing an AI policy is similar to enforcing a code of ethics: a company reminds employees of their obligations, perhaps requires them to sign an acknowledgement that they have read the policy, and provides training, Rawoof said.