DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware
github.com | 390 points by tosh | 4 days ago
This is critical infrastructure, and it gets compromised way too often. There are so many horror stories of NPM (and similar) packages getting filled with malware. You can't rely on people not falling for phishing 100% of the time.
People who publish software packages tend to be at least somewhat technical people. Can package publishing platforms PLEASE start SIGNING emails. Publish GPG keys (or whatever, I don't care about the technical implementation) and sign every god damned email you send to people who publish stuff on your platform.
Educate the publishers on this. Get them to distrust any unsigned email, no matter how convincing it looks.
And while we're at it, it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it, but it's clear that the actions in this example were suspicious: user logs in, changes 2FA settings, immediately adds a new API token, which immediately gets used to publish packages. Maybe there should be a 24 hour period where nothing can be published after changing any form of credentials. Accompanied by a bunch of signed notification emails. Of course that's all moot if the attacker also changes the email address.
Disclosure: I’m the founder of https://socket.dev
We analyzed this DuckDB incident today. The attacker phished a maintainer on npmjs.help, proxied the real npm, reset 2FA, then immediately created a new API token and published four malicious versions. A short publish freeze after 2FA or token changes would have broken that chain. Signed emails help, but passkeys plus a publish freeze on auth changes is what would have stopped this specific attack.
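To make that concrete, here is a minimal sketch of the freeze check (hypothetical names and structure, not npm's actual internals):

```typescript
// Hypothetical registry-side freeze check; none of these names are npm's.
interface AccountEvent {
  kind: 'twofa_reset' | 'token_created' | 'password_change';
  at: Date;
}

const FREEZE_MS = 24 * 60 * 60 * 1000; // 24-hour window

// Publishing is blocked if any sensitive credential change happened
// inside the freeze window.
function canPublish(events: AccountEvent[], now: Date = new Date()): boolean {
  return !events.some(e => now.getTime() - e.at.getTime() < FREEZE_MS);
}

// The DuckDB pattern: 2FA reset and token minted minutes before publishing.
const recent: AccountEvent[] = [
  { kind: 'twofa_reset', at: new Date(Date.now() - 10 * 60 * 1000) },
  { kind: 'token_created', at: new Date(Date.now() - 5 * 60 * 1000) },
];
console.log(canPublish(recent)); // false -> publish rejected, maintainer notified
```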
There was a similar npm phishing attack back in July (https://socket.dev/blog/npm-phishing-email-targets-developer...). In that case, signed emails would not have helped. The phish used npmjs.org — a domain npm actually owns — but they never set DMARC there. DMARC is only set on npmjs.com, the domain they send email from. This is an example of the “lack of an affirmative indicator” problem. Humans are bad at noticing something missing. Browsers learned this years ago: instead of showing a lock icon to indicate safety, they flipped it to show warnings only when unsafe. Signed emails have the same issue — users often won’t notice the absence of the right signal. Passkeys and publish freezes solve this by removing the human from the decision point.
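You can check for that kind of gap yourself. A sketch using Node's built-in dns module (the domain and expectation are just what the July incident described; the records may have changed since):

```typescript
import { promises as dns } from 'node:dns';

// Check whether a domain publishes a DMARC policy. Per the July report,
// _dmarc.npmjs.org had no record, so mail spoofing npmjs.org sailed through.
async function dmarcPolicy(domain: string): Promise<string | null> {
  try {
    // TXT records come back as arrays of string chunks; rejoin each record.
    const records = await dns.resolveTxt(`_dmarc.${domain}`);
    const flat = records.map(parts => parts.join(''));
    return flat.find(r => r.startsWith('v=DMARC1')) ?? null;
  } catch {
    return null; // ENOTFOUND / ENODATA: no DMARC record at all
  }
}

dmarcPolicy('npmjs.org').then(p => console.log(p ?? 'no DMARC record'));
```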
Some registrars make this easy. I think it was Cloudflare that has a button for "Do not allow email from this domain". I saw it the last time I set up a domain that I didn't want to send email from. I'm guessing you get that prompt if there are no MX records for the domain when you move it to Cloudflare.
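If I remember right, what a button like that boils down to is publishing a null MX (RFC 7505) plus a reject-all SPF record. A sketch that tests a domain for both (my assumption about the exact records, not Cloudflare's documented behavior):

```typescript
import { promises as dns } from 'node:dns';

// Does this domain advertise "no email here"? Conceptually: a null MX
// (RFC 7505, "0 .") plus an SPF record of "v=spf1 -all".
async function sendsNoMail(domain: string): Promise<boolean> {
  const mx = await dns.resolveMx(domain).catch(() => []);
  const nullMx =
    mx.length === 1 &&
    mx[0].priority === 0 &&
    (mx[0].exchange === '' || mx[0].exchange === '.');

  const txt = (await dns.resolveTxt(domain).catch(() => [])).map(p => p.join(''));
  const spfRejectAll = txt.some(r => r.trim() === 'v=spf1 -all');

  return nullMx && spfRejectAll;
}

sendsNoMail('example.com').then(console.log);
```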
I think you just have to distrust email (or any other "pushed" messages), period. Just don't ever click on a link in an email or a message. Go to the site from your own previously bookmarked shortcut, or type in the URL.
I got a fraud alert email from my credit card the other day. It included links to view and confirm/deny the suspicious charge. It all looked OK, the email included my name and the last digits of my account number.
I logged in to the website instead. When I called to follow up I used the phone number printed on my card.
Turns out it was a legit email, but you can't really know. Most people don't understand public-key signing well enough for "only trust signed emails" to be workable advice.
Also, if you're sending emails like this to your users, stop including links. Instead, give them instructions on what to do on your website or app.
There are companies that send invoices by email where you have to click a link. There is no way to log in on their site to get the invoice. It would be an easy fix for them (we use the same invoicing company as they do, so I know). All they need to do is click "Allow sending bills directly to customer's bank". Every month I get the email, I use the chat function on their web page to ask when they will enable this, and the answer is always that it's not possible. Maybe some day.
I wish we could stop training people to click links in random messages just because we want to be able to track their movements online.
I get Coinbase SMS all the time with a code not to share. But also… “call this phone number if you did not request the code”.
This does nothing for the case of receiving a fake coinbase sms with a fake contact phone number.
I have had people attempt fraud in my work with live calls as follow up to emails and texts. I only caught it because it didn't pass the smell test so I did quite a bit of research. Somebody else got caught in the exact same scam and I had to extricate them from it. They didn't believe me at first and I had to hit them over the head a bit with the truth before it sank in.
Yes, this is a classic scam vector. We really should stop training users to click links or call phone numbers from SMS and emails.
> it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it
USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now[1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.
[1]https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-0...
Use WebAuthn as the second factor. Passkeys are a single factor authentication, and a downgrade from password+WebAuthn.
Depends on where you store them. If they're in a TPM (like Windows Hello for Business), it's two-factor: you need the TPM itself (something you have) and a PIN or biometric to unlock it (something you know/are). But if you're just loading keys into a software password manager, yes, it's single factor.
At this point, we have passkey support integrated in both major desktop OSes (Windows, macOS) and both major mobile OSes (Android, iOS). All of them require both the physical device and either PIN or biometric unlock.
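That requirement is visible in the WebAuthn API itself. A browser-side sketch (placeholder rp/user values; the challenge would be server-issued in practice):

```typescript
// Sketch of creating a passkey that enforces both factors: the device
// holds the private key, and the authenticator must verify the user
// (PIN or biometric) before each use.
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: 'Example Registry', id: 'registry.example' },
      user: {
        id: new TextEncoder().encode('user-123'),
        name: 'maintainer@example.com',
        displayName: 'Maintainer',
      },
      pubKeyCredParams: [{ alg: -7, type: 'public-key' }], // ES256
      authenticatorSelection: {
        residentKey: 'required',      // a discoverable credential, i.e. a passkey
        userVerification: 'required', // possession (device) + PIN/biometric unlock
      },
    },
  });
}
```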
This is the way! Passkeys or FIDO2 (YubiKey) should be required for supply-chain-critical infrastructure like this.
Yes, use FIDO, you'll be better off, but no, passkeys aren't immune to account takeover. E.g. not only does GitHub support OAuth apps, it supports device code flow, and thus: https://www.praetorian.com/blog/introducing-github-device-co....
> Can package publishing platforms PLEASE start SIGNING emails
I am skeptical this solves phishing rather than just adding more woes (would you blindly click links because the email was signed?), but if we are going to suggest public-key cryptography, then: npm could let package publishers choose whether only signed packages may be released, and consumers could decide whether to depend only on signed packages.
I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?
> Get them to distrust any unsigned email, no matter how convincing it looks.
For shops of any real consequence, email security is table stakes at this point: https://www.lse.ac.uk/research/research-for-the-world/societ...
I don't think signed email would solve phishing in general. But for a service by-and-for programmers, I think it at least stands a chance.
Signing the packages seems like low-hanging fruit as well, if that isn't already being done. But I'm skeptical that those keys are as safe as they should be; IIRC someone recently abused a bug in a GitHub pipeline to execute arbitrary code and managed to publish packages that way. Which seems like an insane vulnerability class to me, and probably an inevitable consequence of centralising so many things on GitHub.
* passkeys
* signed packages
enforce it for the top x thousand most popular packages to start
some basic hygiene about detecting unique new user login sessions would help as well
Requiring signed packages isn't enough, you have to enforce that signing can only be done with the approval of a trusted person.
People will inevitably set up their CI system to sign packages, no human intervention needed. If they're smart & the CI system is capable of it they'll set it up to only build when a tag signed by someone approved to make releases is pushed, but far too often they'll just build if a tag is pushed without enforcing signature verification or even checking which contributors can make releases. Someone with access to an approved contributor's GitHub account can very often trigger the CI system to make a signed release, even without access to that contributor's commit signing key.
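For example, the release job could hard-fail unless the tag verifies against a keyring containing only the release managers' keys. A sketch of such a gate (illustrative, not any project's actual setup):

```typescript
import { execFileSync } from 'node:child_process';

// Hypothetical CI gate: refuse to build a release unless the pushed tag
// carries a valid signature. `git verify-tag` exits non-zero if the
// signature is missing or bad. Key allowlisting is enforced by importing
// only the release managers' public keys into the CI keyring, so any
// other signer fails verification.
function assertSignedTag(tag: string): void {
  try {
    execFileSync('git', ['verify-tag', tag], { stdio: 'inherit' });
  } catch {
    console.error(`refusing to release: tag ${tag} is not signed by a trusted key`);
    process.exit(1);
  }
}

// In GitHub Actions, GITHUB_REF_NAME holds the tag name that triggered the run.
assertSignedTag(process.env.GITHUB_REF_NAME ?? '');
```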
The email was sent from the 'npmjs dot help' domain. I'm not saying you're wrong, but basic due diligence would have prevented this. And if not by email, the maintainer might have been compromised over text or some other medium instead. Today, maintainers of larger projects can also avoid these problems by not importing and auto-updating a bunch of tiny packages that look like they could have been lifted from Stack Overflow.
Re: "npmjs dot help", way too many companies use random domains -- effectively training their users to fall for phishing attacks.
This exactly. It's actually wild how much valid emails can look like phishing emails, and how confusing it is that companies use different domains for critical things.
One example that always annoys me is that the website listing all of Proton's apps isn't at an address you'd expect, like apps.proton.me. It's at protonapps.com. Just... why? Why would you train your users to download apps from domains other than your primary one?
It also annoys me when people see this happening and point out how the person who fell for the attack missed some obvious detail they would have noticed. That's completely irrelevant, because everyone is stupid sometimes. Everyone can be stressed out and make bad decisions. It's always a good idea to make it harder to make bad decisions.
I can answer why this is at the company I work at right now:
It's a PITA to coordinate between teams, and my team doesn't control the main domain. If I wanted my team's application to run on the parent domain, I would have to negotiate with the crayon eaters in IT to make a subdomain, point it at whatever server, and then if I want any other changes to be made, I'd have to schedule a followup meeting, which will generate more meetings, etc.
If I want to make any changes to the mycompany.othertld domain, I can just do it, with no approval from anyone.
Are you arguing that it’s a good idea for random developers to be able to set up new subdomains on the company domain without any oversight?
Do they work there or not? I deeply appreciate that everyone's threat model is different, but I'd bet anyone that wants to create a new DNS record also has access to credentials that would do a ton more actual damage to the company if they so chose
Alternatively, yup, SOC2 is a thing: optionally create a ticket tracking the why, then open a PR against the IaC repo citing that ticket, have it ack-ed by someone other than the submitter, audit trail complete, change managed, the end
What's your threat model that says they shouldn't? If you don't trust your senior devs, you're already pwned.
Too many services send you 2FA codes from a different number on each request.
SPF/DKIM already authenticate the sender. But that doesn't help if the user doesn't check who the email is from; in that case GPG would not help much either.
SPF & DKIM are all but worthless in practice, because so many companies send emails from garbage domains, or add large scale marketing platforms (like mailchimp) to their SPF records.
Citroen, for example, sends software update notifications for their cars from mmy-customerportal.com. That domain looks and sounds like a phisher's paradise. But somehow, it's legit. How can we expect any user to make the right decision when we push this kind of garbage in their face?
The problem is there is no continuity. An email from an organisation that has emailed you a hundred times before looks the same as an email from somebody who has never emailed you before. Your inbox is a collection of legitimate email floating in a vast ocean of email of dubious provenance.
I think there’s a fairly straightforward way of fixing this: contact requests for email. The first email anybody sends you has an attachment that requests a token. Mail clients sort these into a “friend request” queue. When the request is accepted, the sender gets the token, and the mail gets delivered to the inbox. From that point on, the sender uses the token. Emails that use tokens can skip all the spam filters because they are known to be sent by authorised senders.
This has the effect of separating inbound email into two collections: the inbox, containing trustworthy email where you explicitly granted authorisation to the sender; and the contact request queue.
If a phisher sends you email, then it will end up in the new request queue, not your inbox. That should be a big glaring warning that it’s not a normal email from somebody you know. You would have to accept their contact request in order to even read the phishing email.
I went into more detail about the benefits of this system and how it can be implemented in another comment.
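For a rough idea of the mechanics, here's an entirely hypothetical sketch; nothing like this is standardized anywhere:

```typescript
import { randomBytes } from 'node:crypto';

// Hypothetical sender-token scheme for a mail client or server.
const issuedTokens = new Set<string>(); // tokens minted via accepted contact requests

// Accepting a contact request mints a token; the sender attaches it
// to all future mail.
function acceptContactRequest(): string {
  const token = randomBytes(16).toString('hex');
  issuedTokens.add(token);
  return token;
}

// Inbound mail with a valid token goes to the inbox; everything else,
// including any phish, lands in the request queue no matter how
// convincing its From line looks.
function classifyInbound(presentedToken?: string): 'inbox' | 'request-queue' {
  return presentedToken && issuedTokens.has(presentedToken)
    ? 'inbox'
    : 'request-queue';
}
```

Note that classification keys on the token, not the From address, which sidesteps the problem of organisations sending from many different domains.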
You don't need complex token arrangements for this. You can just filter emails based on their from addresses.
Unfortunately, it’s not that simple. It’s extremely common for the same organisation to send emails from different addresses, different domains, and different servers, for many different reasons.
> You can just filter emails based on their from addresses.
So if an organisation emails you from no-reply@notifications.example.com, mailing-list@examplemail.com, and bob.smith@examplecorp.com, and the phisher emails you from support@example.help, which filter based on their from addresses makes all the legitimate ones show up as the same sender while excluding the phishing email?
Why should we expect companies to be able to reuse the correct token if they can't coordinate on using a single domain in the first place?