Community Banks are in a uniquely challenging position with respect to AI adoption. On one side, they are being challenged by AI-native FinTechs offering better personalization, faster financial actions, and more. On the other, mega banks like JPM are building expensive solutions that will compound into a competitive and profit advantage over time. All the while, community banks face a mix of regulations and fuzzy guidelines they must adhere to at the Federal, State, and Local levels. Enabling banks to experiment with and implement new AI solutions can unlock new value and protect against new risk.
Here are some thoughts on AI in Banking, with a special focus on Community Banks that may be wondering where to get started. This is the first in a series of posts as I develop investment theses for the sector.
What does AI enable Financial Institutions to do better?
Fintechs are top-of-mind as a competitor set, especially with the looming shift to open banking. They also provide inspiration for how AI can change the relationship between FIs and their customers, as well as ways to operate more profitably.
Fraud Prevention with Real-Time AI
Machine learning systems detect and adapt to fraud patterns in real time, reducing false positives while stopping more fraud attempts. Stripe (with Radar) and Affirm run AI fraud tools at scale, helping them secure transactions and win merchant and consumer trust.
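As a concrete (if simplified) illustration, here is a minimal sketch of real-time transaction scoring in Python, using scikit-learn's IsolationForest as a stand-in for a production fraud model. The features, data, and threshold are illustrative assumptions, not any vendor's actual system.

```python
# Minimal sketch of real-time transaction scoring (illustrative only).
# IsolationForest stands in for a production fraud model trained on
# far richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical transactions: [amount, hour_of_day, merchant_risk]
history = np.array([
    [25.0, 12, 0.1],
    [40.0, 18, 0.2],
    [12.5, 9, 0.1],
    [60.0, 20, 0.3],
    [33.0, 14, 0.1],
])

model = IsolationForest(contamination=0.05, random_state=42).fit(history)

def score_transaction(amount: float, hour: int, merchant_risk: float) -> str:
    """Return an allow/review decision for an incoming transaction."""
    # decision_function: higher = more normal, lower = more anomalous
    score = model.decision_function([[amount, hour, merchant_risk]])[0]
    return "review" if score < 0 else "allow"

print(score_transaction(5000.0, 3, 0.9))  # unusual amount/time -> likely "review"
print(score_transaction(30.0, 13, 0.1))   # typical pattern     -> likely "allow"
```

The point for a community bank is less the specific algorithm and more the pattern: score every transaction as it arrives, and tune the threshold to trade off fraud caught against customers inconvenienced.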
Operations & Onboarding Automation (KYC, ID Verification, Loan Docs)
AI streamlines customer onboarding by automating identity verification, document processing, and compliance checks, cutting account setup from days to minutes. Fintechs like Plaid and Chime leverage AI-driven KYC and verification to grow customer bases quickly and lower acquisition friction.
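For a rough sense of how these onboarding checks chain together, here is a sketch of a KYC pipeline. The check functions are hypothetical stubs; a real deployment would call vendor APIs for document verification and sanctions screening.

```python
# Illustrative KYC onboarding pipeline; each check is a hypothetical stub
# standing in for a vendor API call (document verification, sanctions, etc.).
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    ssn_last4: str
    id_document: bytes

def verify_identity_document(doc: bytes) -> bool:
    # Placeholder: a real system would call a document-verification service.
    return len(doc) > 0

def screen_sanctions_lists(name: str) -> bool:
    # Placeholder: a real system queries sanctions-screening APIs.
    return name.lower() not in {"blocked person"}

def onboard(applicant: Applicant) -> str:
    checks = [
        ("document", verify_identity_document(applicant.id_document)),
        ("sanctions", screen_sanctions_lists(applicant.name)),
    ]
    failed = [name for name, ok in checks if not ok]
    # Route failures to a human reviewer instead of auto-rejecting.
    return "manual_review: " + ", ".join(failed) if failed else "approved"

print(onboard(Applicant("Jane Doe", "1234", b"<scanned id image>")))
```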
Alternative Credit Scoring & Underwriting
AI models analyze alternative data (like cash flow, utility payments, or employment history) to expand credit access beyond traditional FICO scoring. Fintechs like Upstart and Zest AI have built multi-billion-dollar platforms by offering banks higher approval rates with equal or lower default risk.
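A toy sketch of the idea: training a simple model on alternative features such as cash-flow stability and utility payment history. The data, features, and logistic regression model are fabricated for illustration; production underwriting models are far more sophisticated and must themselves pass fair-lending review.

```python
# Toy sketch: underwriting on alternative data (cash flow, utility payments).
# All data and features are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [avg_monthly_cash_flow, on_time_utility_ratio, months_employed]
X = np.array([
    [1200, 0.95, 24],
    [300, 0.40, 3],
    [2500, 0.99, 60],
    [800, 0.70, 12],
    [1500, 0.90, 36],
    [200, 0.30, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of repayment for a thin-file applicant with strong cash flow:
applicant = np.array([[1100, 0.92, 18]])
print(f"Predicted repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```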
AI for banking leaders: JP Morgan
JP Morgan is a leader among large banks when it comes to AI adoption. At the heart of their strategy is Omni, an internal platform that functions as the operating system for AI across the organization. Omni is not a single application, but a platform that gives teams across the bank access to consolidated data, secure environments for testing and training models, and a set of standardized governance rules that ensure projects do not go off the rails. By centralizing these capabilities, the bank makes it possible for smaller groups within different business lines to experiment with AI while still working within a controlled, compliant structure.
The results are as much cultural as they are technical. Instead of treating AI as a specialized research project, JPM has embedded AI into the fabric of daily operations. Business units can move faster because they no longer need to build infrastructure from scratch or worry about conflicting protocols. Innovation happens in parallel across the bank, but it remains coordinated because Omni ties every effort back to a common foundation.
None of this came cheap. Jamie Dimon's letters to shareholders indicate that thousands of employees are now working on AI initiatives. Moreover, Omni was enabled by a company-wide cloud migration that cost more than $2 billion.
This kind of investment is far out of reach for Community Banks. But the underlying principle is still relevant: success comes from building a common hub for AI development, not from scattering small pilots that never scale. The takeaway is not to replicate JP Morgan’s spending, but to borrow the logic of Omni. Even a modest effort to centralize data, create safe environments for testing, and enforce consistent standards can make the difference between an AI experiment that fizzles out and one that drives real impact.
Keeping an Eye on Risk and Regulations
Because banking is a regulated industry, Community Bankers must temper their enthusiasm for following market trends with the current (and likely future) reactions of regulators. Leaning into Gen AI and Agentic AI changes Community Banks' exposure to traditionally monitored threats like data protection, which may necessitate updated governance standards and increased security. While still nascent, there is also a rising tide of AI-specific controls coming from State and City governments that may be harbingers of Federal standards to come.
First, let’s take a moment to recognize how incorporating newer flavors of AI (Gen AI and Agentic AI) creates new risk vectors for banks:
- Fundamental opacity: The unsupervised learning that makes most AI breakthroughs possible is inherently hard to trace, which makes clearly articulating and managing model decision processes considerably harder.
- Data leakage: LLMs leak data. If users enable “Chat history & training” and then ask sensitive questions about company operations, upload documents containing PII, or run data through a consumer AI tool, banks can be unwittingly dragged into a big headache.
- Prosumer pressure: It’s likely your workforce is already using AI and will attempt to use these tools in their day-to-day work even if explicitly forbidden. Most of these uses will be benign (“help me write a better email”), but as Gen AI improves its analytical capabilities, it is inevitable that your sensitive data will be chunked into a model by a lazy worker.
- Supply-chain exposure: Organizations at the forefront of AI adoption are more likely to be using open-source code and libraries that may include malicious code, backdoors, and/or prompt injection.
- Agent permissions: Agentic AI solutions create new challenges vis-à-vis permissions and credentials management. Agents now need to be monitored for permissions access the same way human workers are; a minimal sketch of what that can look like follows this list.
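Below is that sketch of the scoped-permissions idea: each agent identity carries an explicit allowlist of actions, everything else is denied by default, and every decision is logged for audit. The agent names and actions are illustrative assumptions.

```python
# Minimal sketch: treat AI agents like employees with scoped permissions.
# Agent names, actions, and the deny-by-default policy are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

AGENT_PERMISSIONS = {
    "statement-summarizer": {"read_transactions"},
    "payments-assistant": {"read_transactions", "initiate_payment"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Deny by default; log every decision for later audit."""
    allowed = action in AGENT_PERMISSIONS.get(agent_id, set())
    log.info("agent=%s action=%s allowed=%s", agent_id, action, allowed)
    return allowed

authorize("statement-summarizer", "read_transactions")  # True
authorize("statement-summarizer", "initiate_payment")   # False: out of scope
```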
As for regulation, this is a new frontier for lawmakers. While the US currently takes a laissez-faire attitude toward AI regulation, there are a few regulatory tea leaves Community Bankers can attempt to read in hopes of developing sustainable policies that will allow AI to safely add value in today’s and tomorrow’s regulatory climate. At the Federal level, we’ve already seen a regulatory see-saw between the Biden and second Trump administrations (from some regulation to no regulation). A few states and even cities are forging ahead, including California, Utah, Colorado, and NYC. California’s CPRA-driven Automated Decision-Making Technology rules emphasize consumer rights and transparency in profiling; Utah’s AI Act zeroes in on disclosure in financial and high-risk contexts; Colorado’s AI Act builds a comprehensive framework for “high-risk” systems with fines of up to $20,000 per violation; and New York City’s Local Law 144 mandates annual bias audits for hiring tools. Together, they signal that while federal policy drifts, banks and fintechs must prepare to comply with overlapping and sometimes inconsistent AI obligations across multiple jurisdictions.
Clear? As mud. So what can a Community Bank do to stay ahead of the regulatory curve?
1. Focus on explainability and interpretability
Any AI model that impacts customers should be able to show in plain English why it made a decision. Build in reports and tools that trace how data was used, what factors mattered most, and why the outcome makes sense so you can defend it to customers, management, and auditors.
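One simplified illustration of what “plain English reasons” can look like: taking a linear model’s per-feature contributions and rendering the largest drivers as reason codes. (Tools like SHAP play this role for more complex models.) The feature names and weights below are assumptions, not a real scorecard.

```python
# Sketch: turning a linear credit model's weights into plain-English
# reason codes. Feature names, weights, and baselines are assumptions.
import numpy as np

FEATURES = ["monthly_cash_flow", "on_time_utility_ratio", "months_employed"]
WEIGHTS = np.array([0.0008, 2.5, 0.02])   # hypothetical trained coefficients
BASELINE = np.array([1000, 0.85, 24])     # hypothetical portfolio averages

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank features by how much they moved this decision vs. the baseline."""
    contributions = WEIGHTS * (applicant - BASELINE)
    order = np.argsort(np.abs(contributions))[::-1][:top_n]
    return [
        f"{FEATURES[i]} {'raised' if contributions[i] > 0 else 'lowered'} "
        f"the score by {abs(contributions[i]):.2f}"
        for i in order
    ]

print(reason_codes(np.array([400, 0.50, 6])))
# e.g., ['on_time_utility_ratio lowered the score by 0.88',
#        'monthly_cash_flow lowered the score by 0.48']
```

Output like this can be logged with every decision, giving customers, management, and auditors the same traceable story.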
2. Update Cybersecurity to control for AI-specific risks related to Gen AI and Agentic AI
Expand your security program to handle new attack types like prompt injection, data leakage, or deepfake impersonation. Put guardrails around what AI systems can do automatically, add strong identity checks, and keep humans in the loop when sensitive actions are involved.
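Here is a sketch of the guardrail-plus-human-in-the-loop pattern: low-risk AI actions execute automatically, while sensitive ones queue for human approval. The action taxonomy and risk tiers are illustrative assumptions.

```python
# Sketch of a human-in-the-loop guardrail: low-risk AI actions run
# automatically, sensitive ones queue for approval. Risk tiers are
# illustrative assumptions.
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

ACTION_RISK = {
    "draft_customer_email": Risk.LOW,
    "close_account": Risk.HIGH,
    "wire_transfer": Risk.HIGH,
}

approval_queue: list[str] = []

def execute(action: str) -> str:
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions default to HIGH
    if risk is Risk.HIGH:
        approval_queue.append(action)
        return f"{action}: held for human approval"
    return f"{action}: executed automatically"

print(execute("draft_customer_email"))  # executed automatically
print(execute("wire_transfer"))         # held for human approval
```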
3. Adopt AI governance tooling
Use technology that provides out-of-the-box policies, central tracking of all AI models in use, and built-in monitoring so you know when something goes wrong. Good governance tools also let you turn off risky models quickly, keep an audit trail, and stay ahead of new regulations as they emerge.
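To ground what such tooling does, here is a bare-bones sketch of a model inventory with a kill switch and an audit trail. Real governance platforms layer policy packs, monitoring, and reporting on top; every name and field here is a simplified assumption.

```python
# Bare-bones sketch of an AI model inventory with a kill switch and
# audit trail; model names and fields are simplified assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    owner: str
    enabled: bool = True
    audit_log: list = field(default_factory=list)

    def log(self, event: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

registry: dict[str, ModelRecord] = {}

def register(name: str, owner: str) -> None:
    registry[name] = ModelRecord(name, owner)
    registry[name].log("registered")

def kill_switch(name: str, reason: str) -> None:
    """Disable a risky model quickly and record why."""
    record = registry[name]
    record.enabled = False
    record.log(f"disabled: {reason}")

register("fraud-scorer-v2", "risk-team")
kill_switch("fraud-scorer-v2", "drift detected in false-positive rate")
print(registry["fraud-scorer-v2"].enabled)    # False
print(registry["fraud-scorer-v2"].audit_log)  # full audit trail
```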