March 23, 2026 - 4 min read

When AI Must Be Defensible


Shilo Thomas

Product and Solutions Marketing, Data Compliance

When regulators review a surveillance program, questions about AI are no longer theoretical. They are central to how decisions are judged.

How was this alert generated?

What data influenced the outcome?

Can the decision be explained, audited, and defended?

According to the Regulatory Outlook 2025–2027 from Opimas, these questions are becoming standard. Regulators increasingly expect AI to play a role in surveillance, but only when its behavior is transparent and governed.

“Regulators now view explainable, auditable AI as essential to effective surveillance, not an optional enhancement.”
Anna Griem, Senior Analyst, Opimas

For teams responsible for supervision and monitoring, this marks a clear shift. AI is no longer an experimental capability. It is becoming part of the baseline expectation, with accountability attached.

From efficiency gains to regulatory scrutiny

Early adoption of AI in surveillance focused on efficiency. Reducing alert volumes. Accelerating review. Expanding coverage across growing communication channels.

Those benefits remain important, but they are no longer sufficient on their own.

As AI becomes embedded in surveillance workflows, attention turns to governance. Teams need to demonstrate not just that AI improves outcomes, but that its decisions can be understood, reviewed, and challenged when necessary.

In highly regulated environments, opaque models create friction. When reasoning cannot be surfaced, trust erodes and regulatory conversations become harder.

Explainability becomes a control mechanism

Explainable AI changes how surveillance programs operate.

When models are transparent, teams can see how signals are generated, understand why alerts are escalated, and document how decisions were reached. This clarity supports stronger oversight, more confident reviews, and clearer audit trails.

Explainability also enables flexibility. Teams can adjust how AI is applied, refine logic over time, and align usage with evolving regulatory expectations without losing control of the process.

In this context, AI clarifies compliance rather than obscuring it.

Choice matters in how AI is applied

AI adoption does not look the same across every organization or region.

Some teams move quickly. Others proceed more cautiously. Regulatory comfort, internal risk tolerance, and operating models all influence how AI is adopted.

Surveillance platforms need to support this variation. Teams should be able to decide where AI is applied, how outputs are used, and how human judgment remains part of the workflow. Responsible adoption depends on maintaining visibility and control at every step.

This approach allows organizations to gain the benefits of AI while preserving accountability.

What surveillance leaders are navigating now

These realities surfaced clearly in a recent conversation with Arctera’s Surveillance leader, Chris Stapenhurst.

The discussion focuses on practical deployment rather than hype. How AI can reduce noise while improving signal quality. How explainability supports regulator confidence. And why giving teams control over how AI is used is just as important as the models themselves.

The emphasis is on trust built through transparency.

Why this matters going forward

AI is becoming a standard expectation in surveillance programs, but expectations around governance are rising just as quickly. In this environment, the ability to defend AI-driven decisions matters as much as the ability to generate them.

Teams that rely on opaque models will face growing pressure to justify outcomes. Those that can demonstrate accuracy, traceability, and oversight will be better positioned to respond to regulators and adapt as requirements evolve.

As regulatory scrutiny increases, explainability has become a baseline requirement for using AI in surveillance.

Continue the conversation

Tech Insights: Surveillance Signals
Hear how surveillance leaders are approaching AI with a focus on transparency, accountability, and control, with insights from Arctera’s Surveillance leader, Chris Stapenhurst.



Explore the research
Read the Regulatory Outlook 2025–2027 to understand how regulators’ expectations around AI are reshaping surveillance across financial services.