
March 2026

The Security Boundary Isn't the AI App - It's the Interaction Layer

CISOs finally have AI security budget. But most are asking the wrong question. They're evaluating AI applications when they should be evaluating what flows between users and models.

The Wrong Question

When organisations assess AI security risk, they typically ask: "Is this AI tool secure?" They review the vendor's SOC 2 report, check the data processing agreement, and move on.

That's necessary but insufficient. The real attack surface isn't the AI application - it's the interaction layer. Every prompt submitted, every file uploaded, every response generated. That's where data leaks, where PII gets exposed, and where policy violations happen.

A recent AI governance RFP framework puts it simply: the security boundary is the interaction with the model, not the model itself.

What This Means Practically

Think about how AI tools are actually used in your organisation:

  1. A developer pastes a proprietary function into a chatbot to debug it.
  2. A finance manager uploads a spreadsheet of customer transactions to get a summary.
  3. An analyst drops meeting notes - names, figures and all - into a model to draft a report.

None of these are malicious. All of them are data leakage. And none of them would be caught by evaluating whether the AI vendor has a SOC 2 certificate.

The Four Things You Should Actually Be Testing

  1. Discovery - Do you actually know which AI tools your people are using? Browser extensions, SaaS integrations, IDE plugins, mobile apps. Shadow AI is the new shadow IT.
  2. Contextual awareness - Can your controls distinguish between a developer asking "how do I sort a list in Python" and a finance manager pasting a spreadsheet of customer transactions? Same tool, very different risk.
  3. Real-time enforcement - Can you stop a leak before the Enter key is hit? Post-hoc logging tells you what went wrong. Real-time interception prevents it.
  4. Auditability - When the board asks "what data has gone into AI tools this quarter?", can you answer with evidence? Most organisations can't.
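A contextual check like item 2 can be sketched in a few lines. The patterns and the allow/block rule below are illustrative assumptions, not a production DLP engine - a real deployment would use a proper classifier:

```python
import re

# Illustrative patterns only -- a real deployment would use a proper
# DLP/PII classifier, not a handful of regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def assess_prompt(prompt: str) -> dict:
    """Score a prompt before it leaves the organisation: which
    sensitive-data categories it matches, and an allow/block decision."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    return {"categories": hits, "action": "block" if hits else "allow"}

print(assess_prompt("how do I sort a list in Python"))  # allowed
print(assess_prompt("Summarise jane.doe@example.com's card 4111 1111 1111 1111"))  # blocked
```

The same tool, two very different risk profiles - and the decision happens before anything leaves the organisation, not in a log reviewed weeks later.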

The Agent Dimension

This gets more interesting - and more urgent - with agentic AI. When AI agents call external tools autonomously via protocols like MCP (Model Context Protocol), the interaction layer expands. It's no longer just user-to-model. It's agent-to-tool-to-response-to-agent, in loops, at speed, without human review.

Every tool call an agent makes is a potential data exfiltration path. Every response it receives is a potential injection vector. The interaction layer is now a high-speed, autonomous data pipeline that most security teams have zero visibility into. The OWASP Agentic AI Top 10 catalogues exactly these risks - and most of them live at the interaction layer, not inside the model.
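A policy gate around agent tool calls might look like the sketch below. The tool names, markers and heuristics here are assumptions for illustration - they abstract away MCP specifics entirely:

```python
# A minimal policy gate around agent tool calls. Tool names and the
# injection marker are hypothetical examples, not MCP specifics.
BLOCKED_TOOLS = {"send_email", "upload_file"}          # hypothetical tool names
INJECTION_MARKERS = ("ignore previous instructions",)  # crude heuristic

def gate_tool_call(tool: str, args: dict) -> bool:
    """Return True if the agent may invoke this tool with these arguments."""
    if tool in BLOCKED_TOOLS:
        return False
    # Treat any argument carrying an obvious secret as an exfiltration risk.
    return not any("BEGIN PRIVATE KEY" in str(v) for v in args.values())

def gate_tool_response(text: str) -> str:
    """Quarantine tool output that tries to steer the agent (injection)."""
    if any(m in text.lower() for m in INJECTION_MARKERS):
        return "[response quarantined: possible injection]"
    return text
```

The point of the sketch is the placement: both the outbound call and the inbound response pass through a checkpoint, because in an agent loop each direction is an attack path.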

What To Do About It

  1. Map your AI interactions - Not just which tools, but what data flows through them. Prompts, uploads, responses, tool calls.
  2. Classify by sensitivity - Not all AI usage is equal. A developer autocompleting code is low risk. An analyst pasting customer data is high risk. Your controls should reflect this.
  3. Enforce at the proxy layer - The most effective control point is between the user (or agent) and the AI service. A proxy can inspect, redact, log, and block - without changing how people work.
  4. Test your own AI attack surface - If you've deployed AI-powered features, test them. Can they be manipulated into leaking data? Do they store prompts with reversible user identifiers? Are your agents vulnerable to tool poisoning?
  5. Plan for agents now - Even if you're not using agentic AI today, your tools are moving in that direction. Your governance framework needs to cover autonomous tool invocation, not just human prompts.
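Step 3's proxy-layer control can be sketched as an in-flight redaction pass. The patterns and placeholder labels are assumptions; a real proxy would also log, classify and enforce per-user policy:

```python
import re

# Illustrative redaction rules for a forward proxy sitting between
# users (or agents) and the AI service. Patterns are assumptions.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Redact sensitive tokens in-flight, then forward the clean prompt."""
    for rx, label in REDACTIONS:
        prompt = rx.sub(label, prompt)
    return prompt
```

Because the redaction happens between the user and the service, nobody changes how they work - the sensitive tokens simply never reach the model.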

The Bottom Line

AI security isn't an application security problem. It's a data flow problem. The organisations that get this right will be the ones that focus on what moves between users, agents, and models - not on the models themselves.

The security boundary is the interaction layer. Secure it accordingly.

At ThreatControl, we help organisations understand and manage their AI security risk. Our AI Security Testing service tests AI applications for data leakage, prompt injection, and agentic risks across all interaction patterns. Get in touch.
