April 2026
They Built a Fake Company to Hack One Developer. What Would You Do?
Last week we wrote about the axios npm supply chain attack: the technical payload, the self-destructing dropper, the fact that the maintainers couldn't revoke the attacker's access to their own packages. That was the "what happened."
This is the "how it happened." And it's worse.
Five Months of Patience
The axios compromise wasn't a smash-and-grab. According to Gen Threat Labs' analysis of the infrastructure, the campaign started in November 2025, five months before the malicious npm packages appeared.
The domain registration timeline tells the story:
- November 2025: Infrastructure preparation begins
- January 2026: Fake Zoom delivery pages go live
- February 2026: VBS droppers loading PowerShell RATs, disguised as Chrome updates
- 25 March 2026: Fake Microsoft Teams delivery via a purpose-built domain
- 31 March 2026: axios npm supply chain attack
Each phase used different delivery mechanisms. When one approach ran its course, the operators rotated to the next. Same infrastructure backbone, same attack patterns, different front doors. This is not opportunism. This is a funded operation running a campaign with a quarterly roadmap.
The Social Engineering Was the Real Weapon
The technical payload, WAVESHAPER.V2, a cross-platform RAT, is sophisticated. But it's not what got the attacker onto the maintainer's machine. Social engineering did that.
Jason Saayman, the axios maintainer whose account was compromised, described the approach in his post-mortem. The attackers posed as the founder of a well-known, legitimate company. The impersonation was detailed enough that Saayman described the approach as tailored specifically to him.
They invited him into a Slack workspace. Not a Discord server, not a Telegram group, a Slack workspace, the platform that legitimate companies actually use. The workspace was populated. People were sharing LinkedIn posts. There were channels with staged activity. Fake profiles posed as employees and other open-source maintainers.
It looked like a real company's internal Slack, because someone had spent weeks making it look like one.
Then came the pivot. The conversation moved from Slack to Microsoft Teams for a scheduled meeting. During the call, Saayman encountered a prompt telling him something was out of date and needed updating. He installed what he believed was a Teams update.
It was the RAT.
Why the Platform Switch Matters
This detail deserves attention. Slack has video calling. If you're already in a Slack workspace with someone, why move to Teams?
Because Teams creates friction. And manufactured friction creates an excuse to install something.
"Teams isn't working properly. You need to update it. Install this." That's a sentence that every office worker has heard some version of. It's plausible. It's boring. It doesn't trigger alarm bells, because it sounds exactly like the kind of minor technical annoyance that happens in every meeting.
The platform switch wasn't a mistake. It was the delivery mechanism. The attackers needed a reason to put software on the target's machine, and "your video call tool needs an update" is one of the most trusted prompts in modern work.
Once the RAT was installed, the attackers had access to Saayman's machine, his npm credentials, and his authenticated sessions. Two-factor authentication couldn't help: the attacker was operating within an already-authenticated context.
It Wasn't Just One Developer
This is the part that scales the threat from "targeted incident" to "campaign."
Socket.dev's investigation revealed that Saayman was not the only target. The same operation approached multiple high-profile Node.js maintainers:
- Jordan Harband, maintainer of ECMAScript polyfills
- John-David Dalton, creator of Lodash
- Matteo Collina, lead maintainer of Fastify, Pino, and Undici
- Scott Motte, creator of dotenv
- Pelle Wessman, maintainer of mocha, neostandard, and npm-run-all2
Different lures, same pattern. Collina was contacted via Slack. Wessman was invited to a fake podcast recording on a bogus streaming site, then shown a "technically plausible error message" prompting a native app download. Each approach was tailored to the individual.
Saayman was the one who fell for it. The others recognised it or weren't fully engaged. But consider the blast radius if two or three of those names had been compromised simultaneously. Lodash, Fastify, dotenv, mocha: that's a significant slice of the npm ecosystem's trust infrastructure, all targeted in the same campaign window.
Google's Threat Intelligence Group attributed the operation to UNC1069, a financially motivated North Korea-nexus threat actor active since at least 2018. This is a professional, state-backed unit with years of operational experience. They have the patience and the budget to build a fake company, staff a fake Slack workspace, and run it for months to compromise a single developer.
The Question You Need to Answer
Forget axios. Forget npm. Forget North Korea, even.
Ask yourself this: if someone spent two weeks building a relationship with one of your developers, over Slack, over LinkedIn, over a plausible professional pretext, and then got them to install something, how long before you'd know?
Most organisations have incident response plans for technical events. A SIEM alert fires. A vulnerability is disclosed. A credential is found in a public repo. There are runbooks for these.
But what's your runbook for "I had a weird video call and I think I installed something I shouldn't have"?
Most companies don't have one. The developer feels embarrassed. They're not sure anything actually happened. They close the tab, maybe run an antivirus scan, and move on. The RAT is already running. Their credentials are already exfiltrated. And nobody in the security team knows anything happened until the damage surfaces weeks later, if it surfaces at all.
What "Prepared" Actually Looks Like
Make reporting safe. Your developers need to know they can say "I think I was socially engineered" without it becoming a performance issue. If the cost of reporting is shame, the cost of not reporting is a breach. The axios maintainer published a full, honest post-mortem. That takes courage. Build a culture where that's normal, not exceptional.
Have a response plan for human compromise. Not just "compromised credentials", that's the technical layer. Plan for "a person was manipulated into installing something." That means: isolate the machine, revoke all sessions (not just passwords), audit what the compromised account had access to, check for lateral movement, and assume the attacker had time to establish persistence.
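Those steps are easy to lose in the heat of an incident. As one illustration (not the axios team's actual process, and every name below is hypothetical), the checklist can be captured as a tiny executable runbook skeleton that an on-call responder works through, leaving a timestamped trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical runbook for a "person was manipulated into installing
# something" incident. The steps mirror the list above: sessions, not just
# passwords, and assume persistence until proven otherwise.
STEPS = [
    "Isolate the affected machine from the network",
    "Revoke ALL sessions and tokens (npm, GitHub, SSO), not just passwords",
    "Audit everything the compromised account had access to",
    "Check for lateral movement from that account or machine",
    "Hunt for persistence: assume the attacker had time to establish it",
]

@dataclass
class HumanCompromiseRunbook:
    incident_id: str
    completed: list = field(default_factory=list)  # (step, timestamp) pairs

    def complete(self, step: str) -> None:
        if step not in STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.append((step, datetime.now(timezone.utc)))

    def remaining(self) -> list:
        done = {s for s, _ in self.completed}
        return [s for s in STEPS if s not in done]

rb = HumanCompromiseRunbook("IR-2026-001")
rb.complete(STEPS[0])
rb.complete(STEPS[1])
print(f"{len(rb.remaining())} steps remaining")  # 3 steps remaining
```

The point isn't the code; it's that the steps exist somewhere other than in one responder's head, in an order that's been agreed before the bad day.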
Test the human layer. You penetration test your infrastructure. You scan your code for vulnerabilities. Do you test whether your developers would install a fake update during a video call? Social engineering testing isn't about catching people out; it's about finding out whether your training and your culture actually work under pressure.
Harden the install surface. Your developers shouldn't be able to install arbitrary software on machines that have access to production credentials, package registry tokens, or customer data. This is basic endpoint management, but in many organisations, developer machines are the least locked-down endpoints on the network because "developers need flexibility." They do. They also need protection from a state-backed actor who will spend five months earning their trust.
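What that hardening looks like in practice depends entirely on your MDM or EDR product. As a purely hypothetical sketch, with an invented schema that doesn't match any real vendor's format, a developer-endpoint policy might express the trade-off like this:

```yaml
# Hypothetical endpoint policy; schema and field names are invented for
# illustration, not taken from any real MDM or EDR product.
policy: developer-workstations
software_installation:
  mode: allowlist                   # block arbitrary installers by default
  allowlist:
    - source: managed-app-catalog   # IT-packaged tools, including Teams/Zoom/Slack updates
    - source: signed-internal-ci
  exceptions:
    require: security-team-approval # flexibility, with a human in the loop
credential_hygiene:
  block_plaintext_registry_tokens: true   # e.g. long-lived tokens sitting in ~/.npmrc
  session_max_age_hours: 24
alerting:
  on_denied_install: notify-secops        # a denied "Teams update" is a signal, not noise
```

The last line is the one that matters for this campaign: a blocked install attempt during a video call is exactly the event your security team should hear about.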
Treat communication platform switches as a signal. "Let's move to Teams/Zoom/Meet" mid-conversation is normal in legitimate business. But it's also a known delivery technique. Security awareness training should include this pattern: an unexpected platform switch, followed by a prompt to install or update something, is a red flag.
This Will Happen Again
The axios attack was the rotation that went public. The fake Zoom pages in January, the Chrome update lures in February, those were earlier rotations that we only know about because of infrastructure analysis after the fact. The next rotation is likely already being prepared.
Different targets, different platforms, different pretexts. Same pattern: build trust over time, manufacture a reason to install something, and exploit the access that follows.
Your developers are targets. Not because of who they are, but because of what they have access to. The question isn't whether someone will try this on your team. The question is whether your team will recognise it and report it, and whether you'll know what to do next.
ThreatControl helps organisations test and strengthen their human security layer, from social engineering assessments to incident response planning for the attacks that don't trigger a SIEM alert. If you want to know whether your team would spot this, get in touch.