We all dodged a bullet
xeiaso.net
822 points by WhyNotHugo 4 days ago
Related: NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657
The nx supply chain attack via npm was the bullet many companies did not dodge. I mean, all you needed was to have the VS Code nx plugin installed — which always checked for the latest published nx version on npm. And if you had a local session with GitHub (e.g. logged into your company's account via the GH CLI), or some important creds in a .env file… that was exfiltrated. This happened even if you had pinned dependencies and were on top of security updates. We need some deeper changes in the ecosystem. https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...

> We need some deeper changes in the ecosystem.

I avoid anything to do with NPM, except for the TypeScript compiler, and I'm looking forward to the rewrite in Go where I can remove even that. For this reason. As a comparison, Go has minimum version selection, and it takes great pains to never execute anything you download, even during the compilation stage. NPM packages will often have different source than the GitHub repo. How does anyone even trust the system?

Before we all conclude that supply chain attacks only happen on NPM: last time I used VS Code I discovered that it auto-installed, with no apparent opt-out, Python typing stubs for any package (e.g., Django in my case) from whatever third-party, unofficial PyPI accounts it saw fit. (Yes, this is why it was the last time I used VS Code.) The obscurity of languages other than JavaScript will only work as a security measure for so long.

I've never seen Pylance automatically install anything. Are you talking about the stubs that come packaged with Pylance, which Microsoft maintains?

It was Microsoft's official Python extension, as far as I recall. It was possible to use some other extension for typechecking, but it had other issues. (Now everything works perfectly in Neovim, and my setup only uses the typing stubs I specify in the project.)
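The Go comparison above rests on minimum version selection: go.mod records a floor version for every module, and `go build` never executes dependency code at fetch or compile time. A minimal sketch (the module path, Go version, and dependency version are illustrative):

```go
// go.mod — builds resolve to the *minimum* version that satisfies
// all requirements, so a freshly published (possibly compromised)
// release is never pulled in implicitly; you have to edit this file
// or run `go get` to move forward.
module example.com/myapp

go 1.22

require github.com/gorilla/mux v1.8.1
```

Contrast with npm, where a `^1.8.1` range resolves to the newest matching version at install time.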
The Microsoft official Python extension uses Pylance, which is a closed-source extension of Pyright with additional features such as built-in type stubs. This is probably what you saw.

If they were truly built-in I would not mind, but I found a bunch of third-party stubs in the dependency tree of the virtual Python environment VS Code created (and ran, obviously unsandboxed). What's worse is that the stubs, in addition to not being specified by me, were often pulled at the wrong version compared to the package I was using, leading to typing mismatches and runtime errors.

It's already solved by pnpm, which refuses to execute any postinstall scripts except those you whitelist manually. In most projects I don't enable any and everything works fine; in the worst case I had to enable two scripts (out of two dozen or so) that download prebuilt native components, although even those aren't really necessary and it could have been solved through other means (proven by typescript-go, swc, and other projects led by competent maintainers).

None of it will help you when you're executing the binaries you built, regardless of which language they were written in.

I could be wrong, but I believe pnpm would not have helped with the supply chain attack that brings us here. It's simply a problem with deploying new code rapidly and automatically, without verification, to a billion machines at a time.

That's my read. Even if there were some other logistical barrier, updating a bunch of external dependencies as most people do it unavoidably involves pre-trusting code you've never seen. I don't think there's any way around that, and given that, I don't think there's a purely technical solution. This requires more vetting within the package manager, but that's not an easy lift.

That doesn't help you if anyone on your team installs a VS Code plugin which uses npm in the background and executes postinstall scripts.
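For reference, the pnpm allow-list mentioned above lives in package.json. Recent pnpm versions (v10+) skip dependency lifecycle scripts by default, and earlier versions support the same opt-in list. A sketch — the two package names are just examples of dependencies that download prebuilt native components:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": [
      "esbuild",
      "sharp"
    ]
  }
}
```

Every other dependency that ships an install or postinstall script simply doesn't get to run it.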
> None of it will help you when you're executing the binaries you built

LavaMoat would, if you get to the point of running your program with lavamoat-node or built with the LavaMoat webpack plugin: https://lavamoat.github.io/guides/getting-started/

> None of it will help you when you're executing the binaries you built, regardless of which language they were written in.

Sure it would... isn't that the whole point of Deno? The binary can't exfiltrate anything if you don't let it connect to the net.

I am now using type stripping in Node to run TypeScript natively. It's great and so fast. Even so, I continue to include the TypeScript compiler in my projects so that I can run tsc with the --noEmit option just for type auditing.

You are lying to yourself. In this attack, nothing was executed by npm; it "just" replaced some global functions. A Go package can't do that, but you can definitely execute malware at runtime anyway. It can also expose new imports that will be imported by mistake when using an IDE.

Fucking this. I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix. No, it really isn't. It's an ecosystem and cultural problem: npm encourages huge dependency trees that make it impractical to review dependency updates, so developers just don't.

> It's an ecosystem and cultural problem that npm encourages huge dependency trees

It is an ecosystem and culture that learned nothing from the debacle of left-pad. And it is an affliction that many organizations face, and it is only going to get worse with the advent of AI-assisted coding (and it does not have to be). There simply aren't enough adults in the room with the ability to tell the children (or VCs and business people) NO. And getting an "AI" to say no is next to impossible unless you're probing it on a "social issue".

The thing is, having access to such dependencies is also a huge productivity boost.
It's not by accident that every single language whose name isn't C or C++ has pretty much moved to this model (or had it way before npm, in the case of Perl or Haskell). The alternative is C++, where every project essentially starts by reinventing the wheel, which comes with its own set of vulnerabilities. I'm saying this without a clear idea of how to fix this very real problem.

It's more like capex vs opex. With some languages and frameworks, you have to maintain the same level of effort just to keep your apps working.

> The alternative is C++, where every project essentially starts by reinventing the wheel

Sure, in 1995. Most C++ projects nowadays belong to some fairly well understood domain, and for every broad domain there are usually one or two large 'ecosystem' libraries that come batteries included: a huge monolithic dependency with well-established governance instead of 1000 small ones. Examples of such ecosystems are Qt, LLVM, ROOT, TensorFlow, etc. For smaller projects that want something slightly more than a standard library but don't belong to a clear ecosystem like the above, you have Boost, Folly, Abseil, etc. Most of these started with someone deciding to reinvent the wheel decades ago, but there's no real reason to do that in 2025.

"It's not difficult to fix, just change the entire culture." The difficulty comes in trying to change the entire culture.

"Doctor, it hurts when I do this!" "Stop doing that!" "But I wanna!"

That is valid, though: if someone says "it hurts when I walk", it's not reasonable to tell them not to walk; you try to figure out why it hurts and whether it can be fixed. Other languages have package managers similar to npm but with far fewer issues, so it can be fixed without changing the package manager completely. I would say JavaScript's lack of a standard library is at least in part responsible for encouraging npm use; things just spiraled out of control from there.

[not a dev] why isn't there the equivalent of "Linux distributions" for npm?
I know, I know: because developers all need a different set of libs. But if there were thousands of packages required to provide basic "stdlib-like functionality", couldn't there be an npm distribution that you can safely use as a starting point, avoiding importing asinine stuff like 'istrue'? (Yeah, I'm kinda joking there.) Or is that just what bloated frameworks all start out as?

There could; this would essentially take the form of a standard library. That would work until someone decides they don't like the form/naming conventions/architecture/ideology/lack of ideology/whatever else and reinvents everything to do the same, but in a slightly different way. And before you know it, you have a multitude of distributions to choose from, each with their own issues...

Who is shipping/maintaining this? Even Node itself is maintained by OSS.

That's one of the advantages of the Microsoft .NET ecosystem - you can do a lot of stuff without pulling anything not shipped by Microsoft. I don't know of any other ecosystem that's as versatile with so much first-party support.

Source available beats open source from a security perspective. Honestly, the same is true in a lot of other areas of computing. Whenever you download an open-source program and you don't have to compile it first, you're at risk of running code that is not necessarily what's in the publicly available source code. This can even apply to source code itself when distributed through two different channels, as we saw in the xz backdoor attempt. (The release tarball contained different code from the repository.)

Yeah, editor extensions are both auto-updated and installed in high-risk dev environments. Quite a juicy target, and I am surprised we haven't seen large-scale purchases by bad actors similar to browser extensions yet. However, I remember reading that the VS Code team puts a lot of effort into catching malware. But do all editors (with auto-updates), such as Sublime, have such checks?
The key thing needed is a standard library which includes 100,000 of these tiny one-function libraries (has-ansi, color-name).

I checked has-ansi. What's the reason that this library would exist and be popular? Most of the work is done by the library it imports, ansi-regex, and then it just returns ansi-regex.test(string), yet it has 5% of the weekly downloads of ansi-regex. ansi-regex itself has fewer than 10 lines of code. I don't know anything about the npm ecosystem; what's the benefit of importing these libraries compared to including this code in the project?

The benefit is getting your secrets stolen and pointing the blame at someone else?

Yeah... The VS Code ecosystem has too much complexity for my tastes. I do keep a copy around with a few code formatting plugins installed, but I feel more comfortable with Emacs (or Vim for my friends who are on that side of the fence).

I am a consumer of apps using npm, not a developer, and I simply don't like the auto updates and seeing a zillion things updated. I use uv and Python a lot, and I get a similar uneasy feeling there also, but (perhaps incorrectly) I feel more in control.

I usually make sure all the packages and DB are local, so my dev machine can run in Airplane mode, and only turn on the internet when I use git push.

Are all docs local too, like we used to do with man pages and paper reference books, or do you use another system for them? A second computer, a tablet, a phone?

Wow. I uninstalled the nx plugin a few weeks ago after completing the migration to pnpm.

> Saved by procrastination!

Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years and saved soooo many upgrade headaches. We had a decade without the need to retrain. That, and like others have said... never clicking links in emails.

This is how I feel about my Honda, and to some extent, Kubernetes.
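Backing up to the has-ansi question above: the entire useful surface of that package can be inlined in a few lines. A sketch — the regex is a simplified stand-in for the idea behind ansi-regex (color/style CSI sequences), not a verbatim copy:

```javascript
// Matches common ANSI escape sequences (CSI color/style codes).
// Simplified stand-in for what the ansi-regex package provides.
const ANSI_PATTERN = /\x1b\[[0-9;]*m/;

// Essentially all of has-ansi: does the string contain ANSI codes?
function hasAnsi(string) {
  return ANSI_PATTERN.test(string);
}

console.log(hasAnsi('\x1b[31mred\x1b[0m')); // true
console.log(hasAnsi('plain text'));         // false
```

Vendoring ten lines like this removes two packages (and two maintainer accounts that can be phished) from the dependency tree.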
In the former case I kept a 2006 model in good order for so long I skipped at least two (automobile) generation's worth of car-to-phone teething problems, and after years of hearing people complain about their woes I've found the experience of connecting my iphone to my '23 car pretty hassle-free.
In the latter, I am finally moving a bunch of workloads out of EC2 after years of nudging from my higher-ups, and while it's still far from a simple matter, I feel like the managed solutions in EKS and GKE have matured and greatly lessen the pain of migrating to K8s. I can only imagine what I would have gotten bogged down with had I promptly acted on my bosses' suggestion to do this six or seven years ago. (I also feel very lucky that the people I work for let me move on these things in my own due time.)

In the meantime you had a car without iPhone connectivity for years, so you went entirely without that feature!
There are pros and cons everywhere, but I'm more prone to change often and fix things than to wait for a feature to be stable and do without it in the meantime.
Of course, when I can afford it, e.g. not by changing my car every two years :')

> In the meantime you had a car without iPhone connectivity for years, so you went entirely without that feature!

Such a feature can be added.

This. At $PAST_DAYJOB we adopted Docker "only" around 2016, and importantly, we used it almost identically to how we used to deploy "plain" uWSGI or Apache apps: a bunch of VMs, run some Ansible roles, pull the code (now image), restart, done. The time to move to k8s is when you have a k8s-sized problem.

[Looks at GitHub: 760 releases, 3866 contributors.] Yeah, not now. Not in the "npm ecosystem". You're hopelessly behind there if you haven't updated in the last 54 seconds.

Sorry, the "npm ecosystem" command has been deprecated. You can instead use npm environment (or npm under-your-keyboard, because we helpfully decided it should autocorrect and be an alias).

"Just wait 2 weeks to use new versions by default" is an amazing defense method against supply chain attacks. Is there some sort of easy operational way to do this? There are well-known tech companies that do this internally, but AFAIK this isn't a feature of OSS registries like Verdaccio.

Renovate is a great (and free) tool to update your dependencies. By default it will update packages within hours (often minutes) of their release, but you can change that behavior with the minimumReleaseAge parameter. https://docs.renovatebot.com/configuration-options/#minimumr...

Yep, Renovate's `minimumReleaseAge` is what you want here. Dependabot has recently added this functionality too - it's called `cooldown`: https://docs.github.com/en/code-security/dependabot/working-... (I'm soon to be working at Mend on Renovate full time, but have been a big fan of Renovate over other tools for years.)

For anyone following, we (Renovate maintainers) are making this an inbuilt "best practice" that users who already opt into the `config:best-practices` preset will start getting for free!
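Putting the pieces above together, a minimal renovate.json that opts into the best-practices preset and enforces a release cooldown might look like this (the 14-day window is illustrative; pick whatever delay your risk tolerance allows):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:best-practices"],
  "minimumReleaseAge": "14 days"
}
```

Dependabot's analogous knob is the `cooldown` setting in dependabot.yml.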
The one big problem Renovate brings is when it automerges and breaks everything with e.g. a TypeScript upgrade. It's simple enough to handle and prevent, but in my experience it has required quite a lot of developer education for those who are not particularly frontend-focused.

Interesting, so you've enabled Renovate's automerge functionality for dependencies? Renovate uses signals like your CI to work out whether things break before an automerge occurs - does that mean your CI didn't catch the breakage? Or something I've missed? (There's also the "merge confidence" that can help here.) (I'm soon to be working at Mend on Renovate full time.)

There are dependency firewalls that let you enforce this (e.g. https://docs.bytesafe.dev/policies/delay-upstream/). Don't know any OSS solutions though.

It's also a really ineffective defense against 0-days!

In the context of a single system, there is no such thing as an "effective defense against 0-days" - that's marketing babble. A zero-day by definition is an exploit with no defense. That's literally what that means.

That doesn't sound right.

> A zero-day exploit is a cyberattack vector that takes advantage of an unknown or unaddressed security flaw in computer software, hardware or firmware. "Zero day" refers to the fact that the software or device vendor has zero days to fix the flaw because malicious actors can already use it to access vulnerable systems.

If I never install the infected software, I'm not vulnerable, even if no one knows of its existence. That said, you could argue that because it's a zero-day and no one caught it, it can lie dormant for more than 2 weeks, so your "just wait a while" strategy might not work if no one catches it in that period. But if you're a hacker sitting on a goldmine of infected computers... do you really want to wait it out to scoop up more victims before activating it? It might be caught.

Yeah, but zero-days usually refer to software which is commonly installed, e.g. a zero-day in the version of Windows or macOS that most people are using. No one bothers finding 0-days in software which no one has installed.

Sadly we don't have any defense against 0-days if an emergency patch is indistinguishable from an attack itself. A better defense would be to delete or quarantine the compromised versions, fail the build, and escalate to a human.

> Sadly we don't have any defense against 0 days if an emergency patch is indistinguishable from an attack itself.

Reading the code content of emergency patches should be part of the job. Of course, with better code trust tools (there seem to have been some attempts at that lately; not sure where they're at), we can delegate that and still do much better than the current state of things.

If I put my risk management hat on: 0-days in the npm ecosystem are not that much of a problem. They stop working before you can use them.

I ran Office XP on my desktop and 2000 on my laptop until I got to college and _needed_ to upgrade so I could work with others. Block it with the firewall and you're good. Now I mostly use WordPad, and use a recent (but rarely updated) version of OpenOffice on the rare occasions I actually need an office suite or spreadsheet. If you're worried about vulnerabilities in older software these days, Windows has built-in security features that can help with that, from the sandbox to controlled folder access (intended for ransomware protection, I believe; I use it to prevent my media server from modifying tags).

Works great for newly exploited packages. Not so great for already compromised software getting hit by a worm.

I'll reply to you tomorrow... by then it might be working again anyway, or the user figured out what they were doing wrong. "Hey, is it still broken? No? Great!"

That post fails to address the main issue: it's not that we don't have time to vet dependencies, it's that Node.js's security and default package model is absurd, and how we use it even more so.
Even most Deno posts I see use "allow all" out of laziness, which I assume will be copy-pasted by everyone, because it's a major UX pain to get to the right minimal permissions.

The only programming model I am aware of that makes it painful enough to use a dependency, encourages hard pinning and vetted dependency distribution, and forces explicit minimal capability-based permission setup is Cloudflare's workerd. You can even set it up to have workers (without changing their code) run fully isolated from the network and only communicate via a policy evaluator for ingress and egress. It is Apache-licensed, so it is beyond me why this is not the default for the use cases it fits.

Another main issue is how large (deep and wide) this "supply chain" is in some communities. JavaScript and Python are notable for their giant reliance on libs. If I compare a typical Rust project with an equivalent JavaScript one, the JavaScript project itself often has magnitudes more direct dependencies (a wide supply chain?). The Rust tool will have three or four; the JavaScript one over ten, sometimes ten alone just to help with building the TypeScript in dev. This is worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is_array or left_pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.

This attitude difference is also clear in the Python community, where the knee-jerk reaction should be, rather than adding an import, to think it through, maybe copy-paste a file, and in any case be very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead?

I'm trying to argue that there's also an important cultural problem with supply chain attacks to be considered.

> [...] python notable for their giant reliance on libs.

I object. You can get a full-blown web app rolling with Django alone.
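The "file with some constants" suggestion above really is this small. A sketch of a drop-in module — the names and the exact set of codes chosen are illustrative:

```javascript
// colors.js — the handful of ANSI escape codes most CLI tools actually need.
const RESET = '\x1b[0m';
const RED = '\x1b[31m';
const GREEN = '\x1b[32m';
const YELLOW = '\x1b[33m';

// Wrap text in a color code and reset afterwards.
function colorize(code, text) {
  return `${code}${text}${RESET}`;
}

console.log(colorize(GREEN, 'build succeeded'));
```

A dozen lines, zero dependencies, and nothing for an attacker to hijack.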
Here's its list of external dependencies, including transitive: asgiref, sqlparse, tzdata. (I guess you can also count jQuery, if you're using the _builtin_ admin interface.) The standard library is slowly swallowing the most important libraries and tools in the ecosystem, such as json or venv. What was once a giant yield-hack to get green threads / async is now a part of the language. The language itself is conservative in what new features it accepts; 20yro Python code still reads like Python. Sure, I've worked on a Django codebase with 130 transitive dependencies. But it's 7yro and powers an entire business. A "hello world" app in Express has 150; for Vue it's 550.

> If I compare a typical Rust project with an equivalent JavaScript one, the JavaScript project itself often has magnitudes more direct dependencies (a wide supply chain?).

This has more to do with the popularity of a language than anything else, I think. Though the fact that Python and JS are used as "entry level" languages probably encourages some of these "lazy" libraries, cough cough left-pad cough cough.

To be fair, the advantage of Deno here is really the standard library, which includes way more functionality than Node's. But in the end, we should all rely on fewer dependencies. It's certainly the philosophy I'm trying to follow with https://mastrojs.github.io – see e.g. https://jsr.io/@mastrojs/mastro/dependencies

Dodged a bullet indeed. I find it insane that someone would get access to a package like this and then just push a shitty crypto stealer. You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit? You can exfiltrate API keys, add your SSH public key to the server, then exfiltrate the server's IP address so you can snoop in there manually; if you're on a dev's machine, maybe the browser's profiles, the session tokens for common sales websites? My personal desktop has all my cards saved on Amazon.
My work laptop, depending on the period of my life, could have given you access to stuff you wouldn't believe either. You don't even need to do anything with those; there are forums to sell that stuff. Surely there's an explanation, or is it that all the good cybercriminals have stable high-paying jobs in tech, and this is what's left for us?

> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?

Because the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion; it was a complete account takeover. The attacker had only hours before discovery - so the logical thing to do is a hit and run. They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer. Unless the backdoors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.

"Found out right away"... by people with time to review security bulletins. There are loads of places where I could see this slipping through the cracks for months.

I'm assuming they meant the account takeover was likely to be found out right away. You change your password on a major site like that and you're going to get an email about it. Login from a new location also triggers these emails, though I admit I haven't logged onto NPM in quite a long time, so I don't know that they do this. It might get missed, but I sure notice any time account emails come through, even if it's not saying "your password was reset."

There's probably already hundreds of thousands of Jira tickets to fix it with no sprint assigned....

I feel attacked. And very, very happy that we're proxying all access to npm through Artifactory, which allowed us to block the affected versions and verify that they were in fact never pulled by any of our builds.
Only problem is the Artifactory instance is on the other side of the world instead of behind the convenient npmjs CDN, so installing packages takes 5x longer...

I was about to say: if you're in a company of any size and you're not doing it this way, you're doing it wrong.

Ugh, have some respect. Some of us have PTSD from dealing with security issues where the powers that be prevented us from dealing with them by deprioritizing them during backlog grooming. My last company literally refused to do any security work except CVE turndowns - because that was contractually promised to a customer.

Yes, but this is an ecosystem large enough to include people who have that time (and inclination and ability); and once they have reported a problem, everyone is on high alert.

If you steal the cookies from dev machines, or steal SSH keys along with a list of recent SSH connections, or do any other credential theft, there are going to be lots of people left impacted. Yes, lots of people reading tech news or security bulletins are going to check if they were compromised and preemptively revoke those credentials. But that's work, meaning even among those informed there will be many who just assume they weren't impacted. Lots of people/organisations are going to be complacent and leave you with valid credentials.

If a dev doesn't happen to run npm install during the period between when the compromised package gets published and when npm yanks it (which for something this high-profile is generally measured in hours, not days), then they aren't going to be impacted. So an attacker's patience won't be rewarded with many valid credentials.

Dev, or their IDE, agent, etc. Their build chain, CI environment, server...

npm ci wouldn't trigger this; it doesn't pick up newly published package versions.
I suppose if you got a PR from Dependabot updating you to the compromised package, and happened to merge it within the window of vulnerability, then you'd get hit, but that will likewise not affect all that many developers. Or if you'd configured Dependabot to automatically merge all updates without review; I'm not sure how common that is.

But that is dumb luck. Release an exploit, hope you can then gain further entry into a system at a company that is both high value and doesn't have any basic security practices in place. That could have netted the attacker something much more valuable, but it is pure hit or miss, and it requires more skill and patience for a payoff. Versus: blast out some crypto-stealing code and grab as many funds as possible before being found out.

> Lots of people/organisations are going to be complacent and leave you with valid credentials

You'd get non-root credentials on lots of dev machines, likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines. Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs. With a large enough blast radius this may have worked, but it wouldn't be guaranteed. The window of installation time would be pretty minimal, and the operating window would only be as long as those who deployed while the malicious package was up waited to do another deploy.

If they'd waited a week before using their ill-gotten credentials to update the packages, would they have been detected in that week? That is what the tj-actions attacker did: https://unit42.paloaltonetworks.com/github-actions-supply-ch...

> it was a complete account takeover

Is that so? From the email it looks like they MITM'd the 2FA setup process, so they will have qix's 2FA secret. They don't have to immediately start taking over qix's account and lock him out.
They should have had all the time they needed to come up with a more sophisticated payload.

> They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.

A decade ago my root/123456 SSH password got pwned in 3-4 days. (I was gonna change to a certificate!) Hetzner alerted me, saying that I had filled my entire 1 TB/mo download quota. Apparently, the attacker (automation?) took over and used it to scrape Alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove. So while they could have cryptolocked it, they just used it for something insidious and left it alone.

To be fair, this wasn't a super demanding 0-day attack; it was a slightly targeted email phish. Maybe the attacker isn't that sophisticated and just went with what was familiar?

Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway.

Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft. Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
Ultimately, stolen cryptocurrency doesn't cause real-world damage for real people; it just causes a bad day for people who gamble on questionable speculative investments. The damage from this hack could have been far worse if it was stealing real money that people rely on to feed their kids.

You have the context sort of wrong. To do a comparable "real money" heist en masse, you would be stealing from banks or from the customers of one, or via debit or credit cards. It's real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks. I don't think anyone could steal much cash in a single heist from a bank or other hard target, so your analogy is confusing. There is no analogous situation in which "real money" could be stolen from customers, financial institutions, or the interchange system in a way that would leave end users bearing the loss. That's the whole reason people use them. Even in friendly-fraud situations, the money isn't gone, it's just frozen, so you might have to wait a month or so to get it unfrozen after the FBI et al. clear the source of funds. Sure, if someone takes my grocery money, that's a real loss, and that's why I don't carry large sums of cash. But that isn't what happened here.

Can you explain what you meant so I can understand? I think you had a point; I just don't think that the risk of the kind of attack in TFA is comparable to someone getting their grocery money stolen, because that kind of individual in-person theft can't really occur on the same scale as the attack in TFA, and even if it could, that's kind of on the end user for carrying more cash than they can defend.

Unless they've changed something, I know at least at the very beginning Zelle had no fraud protection. https://techcrunch.com/2018/02/16/zelle-users-are-finding-ou... It appears they still have issues with (more advanced forms of) fraud: https://thecyberexpress.com/zelle-lawsuit-2025-scam-hit-us-f...
(this page won't stop reloading, but I think it's my adblock configuration)
https://www.morningstar.com/news/marketwatch/20241221198/mor...

> It’s real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks.

Not always. Many banks will claim e.g. that they don't have to cover losses from someone who opened a phishing email, never mind that the banks themselves send out equally suspicious "real" emails on the regular. Also, even if it's covered, that money comes from somewhere: ultimately out of the pockets of regular folks who were just using their bank accounts, even if the insurance mechanisms mean it's spread out more widely.

Good points all around. I don't mean to blame the victims; they usually don't know what they don't know and aren't party to the fraud, so they couldn't begin to know, but informed users ought to know the failure modes. Insurance rates are surely a factor in the industry push for KYC, which is mandated federally for good reasons, but in edge cases like loss of funds, the little people are often blamed by faceless corporations for being victims, because they aren't able to say what caused the issue, due to federal regulations against fraud. It's a conundrum.

Get in, steal a couple hundred grand, get out, and do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement, if you know how to evade the cops. Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?

There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites. There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
Earlier this year, a crypto app web UI attack stole $1.5 billion. A couple hundred grand is not what these attackers are after.

> if you know to evade the cops.

Step 1: live in a place where the cops do not police this type of activity. Step 2: $$$$

The pushed payload didn't generate any new traffic. It merely redirected the recipient of a crypto transaction to a different account. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster.

OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. That was going to be noticed quickly. If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.

Not only that, but it picked an address from a list with similar starting/ending characters, so if you only checked part of the wallet address, you'd still get exploited.

It is not a one-in-a-million opportunity, though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can possibly get them access to millions of dollars, expect much worse to come. As a maintainer of any code that could give bad guys access, I would be seriously considering how well my physical identity is hidden online.

This is why banks make you approve transactions on your phone now. The fact that a random NPM package can redirect your money is a massive issue.

I just made a very similar comment. Spot on. It's laughable to think that this trivial opportunity, which literally any developer could pull off with a couple of thousand dollars, is one-in-a-million. North Korea probably has enough money to buy up a significant percentage of all popular npm dependencies, and most people would sell willingly and unwittingly.
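To make the address-swap trick concrete: the payload reportedly chose, from a pool of attacker wallets, the one closest by Levenshtein distance to the legitimate address, so a spot check of the first and last few characters could still pass. A minimal Python sketch of that selection logic (illustrative only; the function names are mine, not from the actual malware):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def pick_lookalike(intended: str, attacker_pool: list[str]) -> str:
    """Return the attacker address most similar to the one the victim intended."""
    return min(attacker_pool, key=lambda addr: levenshtein(intended, addr))
```

A victim who eyeballs only the leading and trailing characters of the substituted address is unlikely to notice the swap, which is why comparing the full string (or confirming on a separate device, as banks now force you to) matters.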
In the case of North Korea, it's really crazy, because hackers over there can do this legally in their own country, with the support of their government! And most popular npm developers are broke.

Actually, unless you are a billionaire or a high-profile individual, you wouldn't get targeted: not because they can't, but because it's not worth it. Many state-sponsored attacks are well documented in books that people can read. They don't want to leave much of a record, because it creates buzz.

You give an example of an incredibly targeted attack of snooping around manually on someone's machine so you can exfiltrate yet more sensitive information like credit card numbers (how, and then what?). But (1) how do you do that with hundreds or thousands of SSH/API keys, and (2) how do you actually make money from it? So you get a list of SSH or specific API keys, write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work, btw?), and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies? Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.

> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fledged out exploit?

The plot of Office Space might offer clues. Also, isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?

API/SSH keys can easily be rotated; it's more hassle than it's worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.

> My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.

What gets me is that everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions and EDR put in place on dev laptops. We on the ops side have known these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration. Security is hard, and it is very inconvenient, but it's increasingly necessary.

I think people rip on EDR and security when 1. they haven't had it explained why it does what it does, or 2. it is process for process's sake. To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability.
I and two other seniors have confirmed that it is a false alarm, so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don't have permission to view the ticket. I need to submit another ticket to get permission to view the original ticket, to confirm that no fewer than three senior developers have validated this as a false alarm, which is information that is already on another ticket.

This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won't accept a written explanation via Teams; it has to be added to the ticket.

Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources. The objection isn't against security. It is against security theater.

This sounds sensible for the "ops person"? It might not be sensible for the organization as a whole, but there's no way to determine that conclusively without going over thousands of different possibilities, edge cases, etc.

What about this sounds sensible? I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid; they have just buried a simple resolution beneath completely stupid process.

You are going to get false alarms, and if it takes months to deal with a single one, the alarm system is going to get ignored or bypassed. I have a variety of conflicting demands on my attention.
At the same time, when we came under a coordinated DDoS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country in which we have never had a single customer. Our dev team brought it to their attention, where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on-calls to walking them through submitting access tickets, a process presumably put in place by a security team.

I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.

If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors, cc your own superiors, and guarantee in writing that you'll take responsibility? The ops person obviously can't do that on your behalf, at least not in any kind of organizational setup I've heard of.

As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert itself, just my exemption request, so clearly the actual fix (ignoring it or changing the code) is less important than the process and the alert.

So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority? You are making my argument for me. This is exactly why people don't take security processes seriously, and fight efforts to add more security processes.

So you agree with me that the ops person is behaving sensibly given real-life constraints? Edit: I didn't comment on all those other points, so it seems irrelevant to the one question I asked.

Absolutely not.
Ops are the ones who imposed those constraints. You can't impose absurd constraints and then claim you are acting reasonably by abiding by your own absurd constraints.