Making security understandable

March 2026

Your AI Assistant Has a Shadow Audience

You installed a ChatGPT sidebar extension to be more productive. Someone else installed one to read everything you type.

Microsoft Defender recently identified malicious Chrome and Edge extensions impersonating popular AI tools - ChatGPT sidebars, DeepSeek helpers, the kind of productivity enhancers that half your team probably installed without asking IT. Over 900,000 installs across more than 20,000 enterprises.

The extensions did exactly what they promised. They also did something else: they harvested every URL you visited and every conversation you had with ChatGPT or DeepSeek - prompts, responses, the lot - and exfiltrated it via HTTPS POST to attacker-controlled domains.

No malware. No elevated privileges. No exploit. Just a browser extension doing browser extension things.

Why This Is Worse Than It Looks

The persistence model is what makes this genuinely alarming. These extensions:

  - did exactly what they advertised, so nothing looked wrong;
  - updated themselves silently, as every browser extension does;
  - re-enabled their data-collection "telemetry" after updates, even for users who had explicitly opted out.

Read that last point again. A user who consciously said "no" to telemetry was silently switched back to "yes" on the next update. They had no idea.

This isn't a zero-day. It's the normal operating model of browser extensions, used for exfiltration instead of analytics. The distinction is surprisingly thin.

The Browser Is the Wrong Trust Boundary

Here's the uncomfortable truth: if your people use AI tools through the browser, every browser extension has access to those conversations. That's not a bug - it's how extensions work. They can read the DOM, intercept network requests, and access local storage.

Microsoft's own mitigation guidance tells organisations to monitor network traffic, audit extensions, enable SmartScreen, create AI use policies, and educate users about sideloaded extensions.

These are all sensible. They're also all reactive. They assume you can enumerate the bad extensions before they cause damage. They assume network monitoring will catch exfiltration that looks like normal HTTPS traffic. They assume policy will prevent users from installing extensions from the Chrome Web Store - which is exactly where these were.

Every one of these mitigations operates within the browser trust model. None of them address the root cause: the browser is a shared execution environment where any extension can observe any interaction.

What Actually Fixes This

The architectural answer is simple: remove the browser from the trust boundary for sensitive AI interactions.

If your employees' AI conversations flow through a proxy - an intermediary that sits between the user and the LLM API - then the browser extension has nothing to scrape. The conversation doesn't happen in the browser. All the extension can observe is an empty sidebar.

This isn't a novel architecture - it's the same proxy pattern that enterprises use for web filtering, DLP, and API gateway security, applied to the specific problem of LLM data protection.

The proxy approach also solves the "telemetry re-enabled after update" problem. The organisation controls the pipe, not the individual user's browser state. When a user opts out via an extension setting, that's a suggestion. When the proxy strips data, that's enforcement.
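To make "when the proxy strips data, that's enforcement" concrete, here is a minimal Python sketch of the redaction step such a proxy might run before forwarding a prompt upstream. The rule set and function names are illustrative assumptions on my part, not a complete DLP policy or any specific product's implementation:

```python
import re

# Hypothetical redaction rules an enforcement proxy might apply
# before forwarding a prompt to the upstream LLM API.
# These patterns are illustrative, not a complete DLP policy.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),            # card-like digit runs
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),                                           # credential assignments
]

def enforce_redaction(prompt: str) -> str:
    """Strip sensitive fragments from a prompt at the proxy layer.

    Because this runs server-side, no browser extension setting -
    and no silent extension update - can switch it off.
    """
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

The point of the sketch is where the code runs, not what it matches: the same rules in a browser extension would be one auto-update away from being disabled.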

The Supply Chain Angle Nobody's Talking About

Browser extensions auto-update without user review. Sound familiar? It's the same trust model that gave us the npm supply chain attacks, the PyPI typosquatting campaigns, and the SolarWinds breach. A legitimate extension today can become a malicious one tomorrow with a single update pushed from the developer's compromised account.

We've spent years hardening software supply chains - SBOMs, signed packages, dependency scanning. Browser extensions exist in the same threat model but receive almost none of the same scrutiny. Your security team probably has a policy for npm packages. Do they have one for browser extensions?
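If the answer is no, one place to start is the allowlist control Chromium browsers already ship for managed devices. A minimal managed-policy sketch - blocking everything, then allowing a curated list - might look like the following (the extension ID is a placeholder, not a real extension; check your browser's enterprise policy documentation for the exact deployment mechanism on your platform):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaabbbbccccddddeeeeffffgggghhhh"
  ]
}
```

Deployed via Group Policy or MDM, this turns "please don't install random extensions" from a request into a control.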

What You Should Do This Week

  1. Audit your browser extension inventory. If you're a Microsoft shop, use Defender Vulnerability Management's browser extension assessment. If not, at minimum review chrome://extensions across managed devices and look for anything you don't recognise.
  2. Block extension sideloading via Group Policy or MDM. Only allow extensions from a curated allowlist.
  3. Check for these specific domains in your proxy/firewall logs: chatsaigpt[.]com, deepaichats[.]com, chataigpt[.]pro, chatgptsidebar[.]pro. If you see POST traffic to any of them, you have an active exfiltration problem.
  4. Ask the harder question: Should your sensitive AI interactions happen in the browser at all? If your team is pasting customer data, source code, or internal documents into ChatGPT, a browser-based interface means every extension is a potential audience.
  5. Review your AI usage policy. If it says "use AI tools responsibly" without specifying approved tools and approved access methods, it's a memo, not a control.

The Pattern

This will happen again. The attackers aren't even being clever - they're building functional AI tool extensions that also happen to exfiltrate data. The next campaign will use different domains, different extension names, and the same technique.

The question isn't whether your team uses AI tools. They do. The question is whether those tools have a shadow audience.

At ThreatControl, we help organisations understand and control their AI security exposure. Our AI Security Testing service tests your AI systems for data leakage risks, and we can advise on architectural controls to protect sensitive AI interactions. Get in touch.
