The Governance Gap: Why Agentic AI Is Breaking Every Procurement Framework Banks Have


On 2 August 2026, the EU AI Act reaches full enforcement. Every high-risk AI system operating inside a European financial institution, from automated credit scoring to AI-driven customer suitability assessments, will need to demonstrate structured risk management, explainability, and human supervision.


Meanwhile, agentic AI (systems that do not just recommend actions but execute them autonomously) is already moving from pilot to production across compliance, fraud detection, and customer operations in banks worldwide. The uncomfortable truth that innovation and strategy teams are wrestling with right now is that most banks’ procurement and governance frameworks were never designed for technology that makes its own decisions.


The value is real, the governance layer is not


The case for agentic AI in financial services is no longer theoretical. Automating up to 70 per cent of manual compliance work while improving risk detection accuracy by as much as four times is exactly the kind of value proposition that gets executive attention, and budget. Oracle launched an enterprise agentic AI platform for banking in February. FIS partnered with Mastercard and Visa to enable agent-initiated commerce. Microsoft published a banking-specific blueprint for agentic customer experience. Accenture outlined the workforce implications across front and back office. The supply side is ready. The demand side is eager. The governance layer in between is not.


Why existing frameworks were not built for this


Every framework banks rely on to evaluate, procure, and govern technology (vendor due diligence, model risk management, third-party oversight, change management) was built for a world where software executes instructions. Agentic systems do not execute instructions. They interpret context, decide what to do, and act, often at machine speed, across multiple processes, with delegated authority. The difference between a rules-based fraud filter and an autonomous agent that triages alerts, investigates patterns, and escalates cases without human intervention is not a feature upgrade. It is a category shift that existing governance models do not cleanly accommodate.


This creates a governance gap at the intersection of three pressures financial institutions feel acutely. First, regulatory enforcement timelines that are no longer theoretical: DORA is now in active, punitive enforcement, and the AI Act reaches full applicability for high-risk systems in August. Second, competitive pressure from peers and vendors already deploying autonomous agents; 44 per cent of finance teams are expected to use agentic AI in 2026, representing a 600 per cent increase from the previous year. Third, an internal stakeholder landscape where innovation, risk, compliance, and procurement rarely agree on what "safe enough to deploy" actually looks like.


Consider the regulatory detail. DORA requires any institution using AI system providers to maintain minimum contractual requirements, conduct documented due diligence on the vendor’s operational resilience, and confirm that external products fit within the institution’s ICT risk management framework. The AI Act layers additional requirements for high-risk systems: structured risk management across the full model lifecycle, robust data quality controls, human supervision mechanisms, and ongoing validation. The European Banking Authority has already published guidance on how these obligations interact with existing banking supervision. For a procurement team evaluating an agentic AI vendor, meeting these requirements simultaneously means asking questions that most RFP templates were never designed to cover.


Three common approaches and their limits


The first instinct many institutions have is to pause. Wait for regulatory clarity. Let peers go first and learn from their mistakes. The problem with waiting is that it does not reduce the governance burden; it just compresses the timeline. Institutions that delay deployment still face the same framework gaps when they eventually move, but with less time to build the internal capability and institutional knowledge needed to govern agentic systems well. And regulators are unlikely to view an absence of AI governance capability as a neutral finding in 2027.


The second approach is to centralise. Create a dedicated AI governance function. Appoint a Chief AI Officer. Establish an AI committee that reviews every use case before deployment. About half of the European banks assessed by the ECB have taken some version of this path, drafting specialised guidelines, developing responsibility frameworks, and extending their three-lines-of-defence model to cover AI. It works, up to a point.


Centralised governance provides oversight but can create bottlenecks that slow deployment to a crawl, particularly when committee members lack hands-on experience with how agentic systems behave in production versus how they were described in a vendor presentation.


A third route is to lean heavily on the vendor. Let the AI provider handle explainability documentation, compliance mapping, and ongoing model validation. This is appealing in resource-constrained environments, but it carries serious risk. Concentration risk from dependence on a small number of external technology and cloud providers is already high on the ECB’s supervisory agenda. Outsourcing governance does not outsource accountability. When a regulator asks how your institution ensures that an autonomous agent’s decisions affecting customer financial positions are explainable and defensible, "our vendor handles that" is not an answer that survives scrutiny.


Building governance as a muscle, not a gate


The institutions making real progress tend to share a pattern: they build governance capability alongside deployment, not sequentially. They treat early agentic AI use cases, typically in compliance monitoring, transaction surveillance, or regulatory change management, as learning environments where governance muscles develop through practice. They structure small, contained deployments where the stakes are manageable and the lessons are transferable to higher-risk applications. And critically, they invest in understanding how comparable institutions are solving the same problems.


That last point is where most institutions hit a wall. Knowing how a peer bank in another market structured its AI governance committee, what contractual clauses it negotiated with an agentic AI provider, how it satisfied a regulator’s explainability requirements, or where it drew the line between autonomous action and human review is exactly the kind of operational intelligence that does not appear in vendor pitch decks, analyst reports, or conference keynotes. It surfaces in structured peer conversations where participants have enough shared context to speak candidly and enough regulatory distance to speak freely.


This is the gap that curated peer formats are designed to close. The Connector’s Peer Forum brings together innovation leads, heads of strategy, and transformation directors from financial institutions to compare governance approaches, vendor experiences, and regulatory interpretations in a setting designed for candour. Discovery Innovation Meetings create structured exposure to innovators whose solutions have been pre-screened for institutional relevance. And Finance X Magazine captures these operational perspectives in editorial form, giving teams a reference point between live conversations.


Why the window is closing in 2026


The convergence of regulatory timelines makes 2026 the year the governance gap becomes unmistakable. DORA enforcement is live. The AI Act reaches full applicability for high-risk systems in August. National supervisory authorities (the FCA in the UK, BaFin in Germany, the DNB in the Netherlands, the ACPR in France) are issuing their own expectations on AI governance in financial services. And on the technology side, Google, Stripe, and Coinbase are building open payment protocols for agent-to-agent commerce, meaning agentic AI is not just entering back-office operations but moving toward customer-facing financial transactions.


The strategic risk is not whether agentic AI will transform financial services. That question is settled. The risk is whether your institution will build the governance capability to deploy it safely before competitors do, and before regulators start treating the absence of that capability as a supervisory concern. Ninety-six per cent of banks already cite regulatory and compliance challenges as the leading hurdle to AI agent deployment. The hurdle is real. But it is also solvable, provided institutions stop treating governance as a gate and start treating it as a muscle.


The banks that will lead in the agentic era are not necessarily those with the biggest AI budgets or the most aggressive deployment timelines. They are the ones that recognised early that governing autonomous technology requires a fundamentally different institutional capability, one built through structured exposure to peer practice, honest vendor evaluation, and continuous regulatory dialogue, not through policy documents written in isolation. The governance gap is real, and it is widening. The only question is whether your institution is closing it fast enough.