Twin brothers wipe 96 government databases minutes after being fired
arstechnica.com | 318 points by jnord a day ago
> [Opexus] said that “the individuals responsible for hiring the twins are no longer employed by Opexus.”
Getting close to the classic Monty Python line: "Those responsible for sacking the people who have just been sacked, have been sacked."
Jokes aside, stuff like this sucks because I suspect many employers will take from it the most extreme, dehumanizing lessons, e.g.: (a) make firings [edit: including lay-offs] as abrupt as possible, including terminating all access immediately, (b) never give second chances to anyone with any sort of criminal record (even, say, decades-old marijuana possession or something).
I'd prefer a more balanced version: limit unilateral access to sensitive systems in general (not just of recently-fired employees), when someone is fired immediately shut off particularly sensitive credentials if they do exist (but not their general-purpose login/email account), avoid hiring people convicted of wire fraud as sysadmins, hash your @!#$ing passwords, etc.
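On the "hash your passwords" point: even the Python standard library gives you a serviceable password hash, no extra dependencies needed. A minimal sketch (the storage format and function names here are my own invention, not anything from the article):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> str:
    # PBKDF2-HMAC-SHA256 with a random per-user salt; the salt and
    # iteration count are stored alongside the derived key.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _scheme, iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(dk.hex(), dk_hex)

stored = hash_password("hunter2")
assert verify_password("hunter2", stored)
assert not verify_password("hunter3", stored)
```

In production you'd more likely reach for bcrypt or argon2 via a maintained library, but the point stands: anyone who dumps the users table with a simple query should get salted hashes, not plaintext.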
Terminating access and rotating passwords (if needed) while the person is in the meeting but has not yet found out they are being let go has been SOP for at least the last 20 years.
My first task at my last job was removing access to an employee being let go. I had just gone through onboarding so I knew every (documented) service we needed to handle. We live tested it on my own accounts, measured the time before I noticed, and then proceeded to successfully go through the checklist.
Except not everything was properly documented, and it turned out the employee had given admin rights on some resources to a contractor, who proceeded to wreak havoc on their behalf (the 'rm -rf' kind). Eh!
Amateurs. My employer does mass layoffs by terminating access to everything except their email account at 3am, and then sending an email to the victim saying “you were let go at 3am”. Managers get to figure out who’s left on their team by pinging everyone when they learn about it at work.
If you're talking about Oracle, in the previous large round they did, they held individual meetings with employee, manager, and HR. With so many layoffs it took a week-plus to do, effectively torturing an entire set of employees who had no idea if they'd have a job by the end of the hour, let alone the week.
I'm not sure there's any good way to lay off large amounts of staff (besides not getting yourself into the situation in the first place where you have to)
>I'm not sure there's any good way to lay off large amounts of staff
Someone on HN once wrote that after the dot-com bust, Yahoo! HR had 1-1 meetings with every single employee that was part of the mass layoffs back then, and they did this for hundreds of workers. Boy, what I wouldn't give to go back to such a state of affairs, even though I wasn't yet part of the workforce back then.
An older family friend of mine who started working in tech around 2003-2005 told me, "back in my day, to get a job, you'd just send your CV to HR@corpo.com, and in 2-3 days you'd get a call asking when you're free to come over for an interview". Today you're lucky to get an automated reply from 50 CVs sent, just for the opportunity to do an impersonal take-home assessment as part of a seven-stage interview process. It's like screaming into the void of AI bots and automated CV-screening systems, while you spin the barrel of the revolver to play the next round of Russian roulette.
And the crazy part is, that when people talk about "the good old days", we're talking about events from recent history, just 10-25 years ago, that a lot of current workers experienced in their lifetime, not stuff from when boomers were kids.
The massive, sudden shift toward commoditizing human workers and turning them into faceless labor resources that can be inhumanely disposed of with a keystroke is real and noticeable to everyone. I'm envious of you guys who are set to retire soon and get out of this shitshow.
What comes after this? Have we reached rock bottom, or will it get even worse?
Either your dates or experiences are off, because I've been working in software since 1999 and the easiest time to get a job was quite recent, in the back half of COVID. The early 2000s were decent, but I didn't experience, or know anyone who did, any sort of "free jobs" period. Also pay was relatively decent, but much less than what you saw even 5 years ago. It's only in the past year or so that the world has appeared to be ending for developers, and I think that pronouncement is premature.
Anecdata: 1996-1999 was super easy, one round, start next Monday. 2000-2003 difficult. Easy again until 2008. Hard till 2013. No data since then.
What I hear about today seems crazy hard.
>Also pay was relatively decent but much less than what you saw even 5 years ago.
IDK, I'm not from the US/Bay Area, nor does my country have any big-tech/FAANG jobs to distort the market; for what constitutes a "high wage" in tech, it's all about the same here.
>in the back-half of COVID.
Sure, but Covid was only a short blip, a temporary exception, not a baseline norm for wage/job growth, unlike the years prior, roughly 2012-2020, which were a longer period when getting a job was easy.
For me, where I live now, the career depression came in 2023 already, when jobs became less abundant and harder to get, and it only got worse later when the mass layoffs started. So we're already 3 years into the decline, longer than the Covid boom lasted, and things aren't getting better yet.
I entered the workforce in around 2012-2014 and it was significantly easier to get a callback from sending a resume than it is now where it's mostly automated rejections. When I say "easy" I also mean you didn't need 7 stages of interviews to get a job back then, you'd have 2 stages and those were pretty chill and get a call back from every 2-3 resumes sent. Now you need to send dozens. I guess "easy" is relative.
>Also pay was relatively decent but much less than what you saw even 5 years ago.
Inflation also happened in that time.
There's the classic article by Matt Ringel and Tom Limoncelli back from 1999:
https://www.usenix.org/legacy/event/lisa99/full_papers/ringe...
When you're talking about the kind of access they had, not "terminating all access immediately" upon firing is incompetence. This is absolutely a standard and has to be for these kinds of positions. I've never worked anywhere where it wasn't, for the majority of IT staff. You meet with HR, someone clears your desk, and security walks you out.
There is a middle ground, but it requires conscious effort to prop up, support, and maintain over the long haul: off-boarding centers.
I worked for a Big Tech company that actually did this, and it made the transition a lot easier. You could still access corporate resources necessary for the transition (HR, benefits, internal job postings, training offerings, expense reporting, etc), check-in with colleagues 1:1 (who would be warned this person was no longer part of the org, attachments could be blocked to prevent exfil, etc), and still send/receive email internally (though external was blocked by default and required justification).
You can safeguard your corporate infrastructure without actually cutting everything off entirely and sending someone home to stew angrily about it. In fact, there might be (as yet undocumented) advantages to letting folks exist in that transition period on that segmented infrastructure, so as to identify potentially bad actors before they can do harm and see about mending bridges.
Of course all of that requires conscious investment in projects with no clear quarterly/yearly KPIs to measure cost or success against, so most employers will never remotely consider it.
Your last sentence sums it up. I was blown away by the system you described that would allow for such a humane transition through such a difficult time. At least process wise it seems like a good place to work.
It really was. I’d gladly go back, too, but they’re not hiring IT folks with my skills atm.
you left out the people who enjoy the suffering and pain of the person it is being done to, while they supervise (and film it, in some cases)
> When you are talking about access like they had "make firings as abrupt as possible including terminating all access immediately" not doing this is incompetence.
You're proving my point: employers take the most extreme lesson and it's considered expected practice. They absolutely should have immediately terminated the credentials that granted unilateral access to sensitive databases. (Ideally those would never exist in the first place; there are two-person schemes. A pair of bad actors, which apparently happened according to this article, is far more unusual.) But employers regularly (and shouldn't) terminate all access, including credentials that would allow a last email to colleagues to exchange personal contact info or something.
For most of my career (over 30 years now), where I've had sufficient access privileges to matter, I've fairly diligently maintained an "Important credentials and access" list, which I've sent to my employer when leaving, strongly advising them of the need to disable or rotate those credentials.
This especially includes creds like root- or admin-level access to AWS/GCP/whatever-cloud-or-hosting-service, and other critical creds like user/password management, domain name registrations, AppleStore and GooglePlay accounts, source code repos, documentation and internal tooling, and external services like observability/analytics/crash-tracking. It also keeps a current(ish) list of all clients/projects where I've had any access at all, listing things like API keys, ssh keys and bastion hosts, project or platform admin creds, as well as systems like databases (SQL and KV caches) and firewall rules specific to me.
I also try to list anything else I could, if I were a malicious disgruntled ex employee, use to cause grief to the employer or their clients.
I point out in this email that if I were rogue, I'd most likely have intentionally left something out or left behind backdoors or timebombs, and while I am not that kind of person and I have not done those things, they owe it to themselves and their clients to have someone else senior and experienced enough carefully audit everything to ensure I cannot access anything.
I send this from a personal email account, so I still have timestamped records of having sent it. If an ex employer ever gets hacked shortly after I leave, I want evidence I did everything I reasonably could to remind them to lock me out.
(Writing this down reminds me it's been a while since I updated this - I guess that's something I'll need to get on to soon.)
The first option is flipping one switch. The second option is flipping some switches now, and flipping the rest later. Of course the safest (first) option is the correct option from a liability standpoint, which is all a company should operate on, since its first responsibility is to protect the company for those that are still there. There's plenty of ways to communicate with ex-colleagues that don't involve company resources or opening the company up to liability.
Let’s not forget the third option: proper security practices and principle of least privilege. No one should have been able to do this in the first place. Why were they able to get plaintext passwords with a simple query? Why did they have delete permissions on production db tables? Why were they able to modify system logs and delete backups?
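To make the least-privilege point concrete, here's a toy sqlite sketch (a real deployment would use per-role GRANTs on the actual database server; all the names here are illustrative): the day-to-day connection is opened read-only, so the destructive path fails at the access layer instead of relying on the operator's goodwill.

```python
import os
import sqlite3
import tempfile

# Build a scratch database standing in for a production db.
path = os.path.join(tempfile.mkdtemp(), "prod.db")
admin = sqlite3.connect(path)
admin.execute("CREATE TABLE records (id INTEGER, data TEXT)")
admin.execute("INSERT INTO records VALUES (1, 'x')")
admin.commit()
admin.close()

# Routine access opened read-only via a URI: this connection can
# query but cannot mutate or drop anything.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
assert ro.execute("SELECT count(*) FROM records").fetchone()[0] == 1

try:
    ro.execute("DROP TABLE records")  # the sqlite analogue of DROP DATABASE
    dropped = True
except sqlite3.OperationalError:
    dropped = False
assert not dropped
```

Same idea on a real server: the application role gets SELECT/INSERT/UPDATE, and DROP lives behind a separate, tightly held admin role.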
I'd argue that failing to segregate things so that there's a switch for the sensitive stuff and a separate switch for the not-sensitive stuff is an operational failure. A rank and file employee having access to his email account should never pose a serious liability to the business.
> Of course the safest (first) option is the correct option from a liability standpoint, which is all a company should operate on since it's first responsibility is to protect the company for those that are still there.
Isn't this an unrealistically black-and-white mode of thinking? Humans are complicated and have many values and perceived responsibilities. It's not healthy for them to throw them all out and act as if they only have one responsibility that needs to be maximally upheld at all costs. They should balance their actions thoughtfully.
System security is not a human value. Immediate access key rotation is a compliance requirement, and completely orthogonal to human decency, which is delivered through garden leave or severance, not extended system access.
So, never lived in corp land? Healthy isn’t on most corporations radars except where it causes liability to them.
I haven't, but the parent said that this is what a company "should" do, not just what they do do.
Yeah I don't see why that's necessary. I'm sure you can always reach out to HR and ask (I have facilitated this in the past, pulling contact lists and phone numbers) but that also gives them ways to exfiltrate data. It's company data. Just think of all the info you have in your inbox. Unless you've managed offboarding for high level IT positions it seems harsh, but the risk is just too high to allow the user to do that stuff themselves.
> Just think of all the info you have in your inbox.
Meh? Sure, stuff that would help assemble a credible phishing attack, but not customer SPII or huge amounts of intellectual property or anything. If the assumption is that employees' inboxes are full of dangerous things, I would focus on fixing that.
No you don't get it, we have to take a harsh approach to firing people because we keep pallets of high explosive in the break room and management doesn't want to change that. /s
High-level IT positions are not the risky ones. It's the db admin who can do most of the damage.
Last time I was laid off they let me keep my laptop for the rest of the day. I gave it to them immediately to avoid any accusations of sabotage.
Eventually I tried to log into one of my old cloud accounts, only to find it had been disabled just 9 days after my layoff. Pretty sloppy.
I suppose that's a very powerful way of preventing "accidents" on termination. But isn't that just theatre? I mean - as though termination is the one and only case where an employee with the power to destroy the company gets angry and might do something really stupid?!
It's not theater, it's defense against aggrievement. Termination is a traumatic event that threatens your ability to exist or provide for dependents. People [rightfully] don't handle exile well.
Someone with an interest in scuttling your company could just as easily maintain a low profile and do it at any time. Termination forces execution into a more-predictable timeframe. Once notified, the malevolent only have opportunity to exfiltrate or sabotage whatever they can reach in the time it takes to walk them out the door.
European laws require us to give people something like two months' notice. Even then we don't trust them; we pay them their salary and tell them to stay home.
Yeah, but if your defense against somebody erasing a database is "we remove their access when they're fired", then your defense is garbage.
There are so many other attack vectors besides an upset ex-employee. Like all those articles about NK employees, who presumably are trying very hard not to be fired. Or employees using company-provided insecure email software, leaving them vulnerable to ransomware et al.
I'm talking about off-boarding not general day to day security.
But I'm talking about general day-to-day security as well as off-boarding. What stops a single disgruntled employee from doing this before being fired? And if you have a good story there, why do you need the most extreme approach to "off-boarding"?
It makes sense to terminate someone's high-risk credentials immediately when they're fired. But it's extremely worrying if every credential held by every employee is considered high-risk. It suggests a bigger failure. "Unilateral access to a database filled with plain-text passwords" shouldn't ever exist. "Email account filled with dangerous stuff" should at least be unusual.
Having people with that level of access without some form of two-person-control is already a sign of incompetence.
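Two-person control can be as simple as refusing to run the destructive action until two distinct identities have signed off. A minimal sketch (the class and names are made up for illustration, not from any real system):

```python
class TwoPersonGate:
    """Holds a dangerous action until two distinct users approve it."""

    def __init__(self, action):
        self.action = action
        self.approvals = set()  # a set deduplicates repeat approvers

    def approve(self, user_id: str) -> None:
        self.approvals.add(user_id)

    def execute(self):
        if len(self.approvals) < 2:
            raise PermissionError("needs approval from two distinct users")
        return self.action()

gate = TwoPersonGate(lambda: "database dropped")
gate.approve("alice")
gate.approve("alice")  # the same person approving twice doesn't count
try:
    gate.execute()
    blocked = False
except PermissionError:
    blocked = True
assert blocked

gate.approve("bob")    # a second, distinct approver unlocks the action
assert gate.execute() == "database dropped"
```

As the sibling comment notes, colluding twins defeat this too; the scheme raises the bar from one bad actor to two, it doesn't eliminate insider risk.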
Twins can defeat two-person control (okay I know one of them was locked out).
You always have to be careful about overfitting to a specific scenario like "this but if they had also forgotten to lock out the other evil twin". I'd prefer a system that is robust to a malicious employee (more likely: compromise of an employee's credentials) but has a slight gap in the "evil twins" scenario over one that prevents all post-firing malicious access from twins but doesn't consider at all what happens if a current employee's credentials are compromised.
Maybe they did, but since they were twins...
This takes the whole "you must mean my evil twin" to an actual example. Maybe this is more "you must mean my other evil twin". Part of me really wishes their names were Daryl
If you don't trust your people so much, why hire them in the first place?
Looking at it from Europe - it is such a weird inhumane practice.
Someone decided your position is redundant. Okay, shit happens, economic downturn, etc. Then you have extra 3-6 months of work to pass your knowledge, train replacement and document everything.
Looking at it from Europe, this definitely also happens. It depends on the situation. I know of ppl who were kept because the parting was in good faith (which was less a firing and more an agreement that parting was in everyone's interest), but I also know of ppl who had their access revoked before the firing because it wasn't. The latter had unilateral system access as well, which added to it. It's not about humane or inhumane, it's about risk. The 3-6 months being nice is also a fairytale that I have only ever heard in a positive light from employees who are not particularly ambitious or awake or in any way satisfied with their jobs or the prospect of a future job. On the other hand, from the perspective of employers, it's consistently hard to effectively restructure, and it's expensive and awkward to have to pretend to want to keep someone around that you or they don't want around.
It's just one of these rules that unfortunately allow people in Europe to view life purely as the time between jobs. I'd never say that to someone's face, but it's simply a fact that the world stops if people don't work, and no matter what the ideal world looks like in your dreams, working is the only real way forward for anything. It's part of the reason why Europe is falling behind on everything.
> It's part of the reason why Europe is falling behind on everything.
I read a news article that Orange Telecom in France was being sued by a woman they had on payroll for the last 20 years doing nothing. Due to a medical condition, she became unable to do her job, and since they couldn't fire her due to French unions and labor laws, nor did they have any available job that could fit her condition, they just kept paying her for 20 years to do nothing at work. Now she's suing them for the depression she got from being paid to do no work.
It felt like reading a Monty Python skit.
But Europe is failing due to a myriad of compounding issues and structural deficits, not just because firing workers can be a Kafkaesque nightmare in some countries. European workers' unions and labor protections were even stronger 20-25 years ago and in 2004 the Euro stock market was worth more than the US stock market, while now it's worth half the US one. But that's whole different discussion where pages have to be written to encompass the whole context and cover all aspects of European economic decline. Boiling it down to crazy labor protections would be reductionist and incorrect.
>Looking at it from Europe - it is such a weird inhumane practice.
Pretty standard practice in many technology(not just IT) and finance companies in Europe as well.
>If you don't trust your people so much, why to hire them in a first place?
It's not about trust, it's about risk, and most companies operate on liability and risk mitigation. If society ran on trust alone, we wouldn't need contracts, door locks, passwords, IDs, judges, security cameras, jails, police, etc.
You can verify someone's performance at the job interview, but you can't verify their trustworthiness, especially once they've learned they lost their job. Even trustworthy people react irrationally once emotions hit, making snap decisions they'll later regret without thinking of the consequences on the spot, and you see innocent people suddenly turn vengeful or violent and break the law (just look at relationship breakups and domestic violence).
You can't predict such reactions, so best to prevent them instead of chasing damages from them later through the court system.
Put yourself in a business owner's position for a minute. Nobody wants to be the guy saying "this former employee set my building on fire after I gave him his notice, by leaving him unsupervised in the flammable-material warehouse, because I wanted to show him that despite the layoff I still trust him".
For some businesses and jobs the trust alone is enough, for other jobs that involve access to sensitive data or money, it's straight to paid garden leave because nobody wants to risk it.
>Then you have extra 3-6 months of work to pass your knowledge, train replacement and document everything.
Yeah, that happens sometimes, like for CxOs, managers, and execs who get generous golden parachutes/severance packages, but for rank-and-file workers in the trenches, having to show up to a workplace you know you'll soon lose, for several more months of work till it's finally over, feels like torture unless you're getting a crazy severance package. That's like your wife telling you "honey, I'm divorcing you, but I still want you to live with me for 3-6 more months, and perform your regular duties".
All the couples I know who are divorced did continue living together after one of them said it was over, I think the longest time actually was about 6 months.
Yeah but did they still keep banging and cuddling like before the divorce announcement? They probably weren't doing much of that anyway if they got divorced but you get my point.
I work in government. If you think that is incompetence, then I have stories that could make your skin crawl.
They do all of that now though...
In the US, they'll terminate your access while you're on the Teams Meeting behind the scenes and if you have any gaps, issues, blips, or smudges in your resume it gets thrown into the recycle bin by some AI agent.
In an age of malicious agentic AI, this level of access is negligent. A lack of engineering controls preventing this from happening at all means that a simple phishing or supply chain attack could easily have resulted in the same outcome or worse.
> Jokes aside, stuff like this sucks because I suspect many employers will take from it the most extreme, dehumanizing lessons, e.g.: (a) make firings [edit: including lay-offs] as abrupt as possible including terminating all access immediately
The employee is always the last to know. This is standard fare.
Then Opexus fired the one who said it.
Leaving no one to say anything anymore on their behalf.
> a more balanced version: <bunch of weedy ACLs, judgement calls, liability/>
Too complicated and subjective, stinks of more risk.
Also, I don't think it's dehumanizing at all (having been on the receiving end of it way back when during a layoff, and involved in the process more times than I care to count). It's standard practice for involuntary terms at all companies we work with, whether the employee is IT or not. If a company is not doing this already, I'd encourage them to.
> Too complicated and subjective, stinks of more risk.
I actually think there's less risk, because it's not as narrowly focused on what a just-fired employee can do. That's not the only scenario of concern.
> Also, I don't think it's dehumanizing it all (having been on the receiving end of it way back when during a layoff, and involved in the process more times than I care to count).
Interesting. Thanks for the perspective. I've been fortunate enough to not be on the receiving end of a lay-off, knock on wood. It's happened to my teammates/reports though. Wasn't my decision. :-(
I'm just amused how these people were even hired to begin with? They don't seem to be Americans? How were they even allowed to work on sensitive systems? Why was this even allowed? So many questions.
> At 4:58 pm, he wiped out a Department of Homeland Security database using the command “DROP DATABASE dhsproddb.”
> At 4:59 pm, he asked an AI tool, “How do i clear system logs from SQL servers after deleting databases?” He later asked, “How do you clear all event and application logs from Microsoft windows server 2012?”
> In the space of a single hour, Muneeb deleted around 96 databases with US government information.

They were born in Maryland, and apparently quite skilled (or at least skilled at cheating their way through their studies, if not genuinely technically skilled).
https://www.somdnews.com/archive/news/19-year-old-twins-high...
> I'm just amused how these people were even hired to begin with? They don't seem to be Americans?
according to this poster, non-Americans should not be hired to do computer stuff by American orgs.
tell that to the millions of non-Americans who work in America, legally doing computer stuff.
That was not AT ALL what was implied and you know it.
I too am shocked at the level of federal access that was afforded to these non-Americans that clearly also hold a disdain for the country.
I would imagine they lied about having a felony conviction on their job applications, and that for whatever banal reason any background check service they used didn't flag it, or the contractor was so grossly incompetent they didn't even check.
>> They don't seem to be Americans?

How did you conclude that? Just their names?
A few other circumstantial things lightly hint at the twins not being typically American:
1. Obliviousness to local laws and oversight (and the combination of severity of punishment + likelihood of getting caught); most Americans of their intelligence would be aware, and would not engage in the sort of hijinks they did.
2. Working with sibling (anecdotal, but seems slightly more common among immigrant families than locals, which would make sense since, on average, immigrants have fewer local connections than locals so the likelihood of working with siblings increases)
3. Loyalty to family (evidenced through the brazenness in the way they helped each other in criminal acts without a second thought). Americans, on average, are more individualist and hesitate more when asked by family to do something criminal
4. A lot of immigrants eventually adopt anglicised names, which neither of these two did
If a detective looked at these facts, they'd keep an open mind as there's nothing definitive above, but it would be equally ignorant to ignore the circumstantial evidence.
Having said all this, do we care where they're from? (unless it's a potential case of foreign interference or theft from an untouchable overseas company, which doesn't seem to be the case here)
> "most Americans of their intelligence would be aware"
that would still leave up to 49% of Americans not being aware. So how did you conclude that they were not Americans? Also, how did you measure their intelligence?
> "slightly more common among immigrant families than locals"
even if true, how did you conclude that these were not Americans?
> "Americans, on average, are more individualist and hesitate more when asked by family to do something criminal"
even if on average Americans are more so, how did you conclude that these were not Americans?
> "A lot of immigrants eventually adopt anglicised names"
from your sentence it seems a lot of them don't. so how did you conclude that these were not Americans?
It would be a disaster for immigrants in your area if you were ever hired into some kind of investigative/law enforcement role.
> that would still leave up to 49% .... how did you conclude?
This fundamentally misunderstands how predictive models work. A parameter is a potentially useful predictor if it's better than excluding it 50.000001% of the time (high frequency trading is good evidence of this).
> conclude
Conclusions? Absolutely not. Higher statistical probability? Yes. Based on evidence. To state your point, which is bleeding obvious, of course you cannot know how recently someone or their family came to America based only on their behaviour with regard to the law. But their behaviour with regard to the law absolutely can be a useful predictor.
(in fact, it's precisely this rationale that justifies in some cases giving foreigners lighter sentences where 'their culture' allowed for xyz but the local jurisdiction doesn't - road rules in the UK are a pretty good example: local truck drivers get the full penalty; those from continental Europe often do not, since the road rules are less likely to be known by them)
Alternate example: ask a Western drug smuggler busted in Indonesia or Vietnam how long they expected their jail sentence to be if caught (spoiler - it's a trick question: they'll say 3-5 years and instead are met with the death penalty; whereas most locals are well aware of this) - ignorance to local laws and customs does correlate with how long someone (or their family) has lived in the area, if at all.
These are not remotely indicators that someone is an American citizen.
I take it you're not in HR at the CIA or FBI, which vet applicants' families for a reason. i.e. how long ago applicants and/or their families came to the US does help predict their loyalties... It might not be a strong nor fair predictor, but it's not zero either.
> On March 12, 2025, a search warrant was executed at Sohaib’s home in Alexandria. Agents grabbed plenty of tech gear but also turned up seven firearms and 370 rounds of .30 caliber ammunition. Given his former crimes, Sohaib should have had none of this.
For god's sake, don't commit crimes while you're committing crimes.
I was kind of hoping he sprinted out his back door which happened to be on a state line and then mailed his guns back to his house, just to try to cover everything.
> At 4:58 pm, he wiped out a Department of Homeland Security database using the command “DROP DATABASE dhsproddb.”
This article is hilarious. The two bickering brothers remind me of the guys in the Oceans movies played by Casey Affleck and Scott Caan. It’s amazing they got this close to sensitive data.
> At 4:59 pm, he asked an AI tool, “How do i clear system logs from SQL servers after deleting databases?” He later asked, “How do you clear all event and application logs from Microsoft windows server 2012?”
So many red flags, I can't even.
> In the space of a single hour, Muneeb deleted around 96 databases with US government information. He downloaded 1,805 files belonging to the EEOC and stashed them on a USB drive, then grabbed federal tax information for at least 450 people.
Maybe whoever runs infosec at that place should also be fired?
Elon's brother's landscaper's nephew's girlfriend was sacked along with Elon, so nobody was filling that role in the government.
Which MAGAts applaud. Emptying the swamp!
I love how this leaks out the fact that the DHS is running production databases on operating systems that are months away from end of extended support.
Windows Server has 5 years of mainstream support, 5 years of extended support, and then an extra 3 years paid Extended Security Updates (ESU) support. For 2012 and 2012 R2 that ends in October 2026.
The three years of ESU exist only for organisations, like government departments, that would rather pay Microsoft millions of dollars for patches than pay a competitive wage and hire competent IT staff who can complete upgrade projects on time.
> The three years of ESU exists only for organisations like government departments that would rather pay Microsoft millions of dollars for patches than pay a competitive wage and hire competent IT staff that can complete upgrade projects on time.
I'm not going to say the wages are fine, but the issue is likely not the competence of the IT staff, but rather the overbearing IT management processes the U.S. Federal government uses. "Enterprise change management" processes, separate from the already-long cybersecurity review processes, can add weeks or even months to system updates.
In that kind of construct, you optimize for fewer but larger changes, and then it's no surprise that there's no time in the project schedule to update the OS in addition to making all the other long-overdue library / middleware / application changes that are also pending once a change finally can be made.
It can be quite politically valuable to kick the can to the next administration.
The day-to-day operation of large government bureaucracies is surprisingly immune to elections. The same people stay in the same job for decades, the "churn" only happens at the highest levels, and even those positions tend to outlast changes in the current political party in charge.
Did you not see what happened in the last year to federal workers?
Unfortunately this is a good example of kicking the can. Not to the next administration but to after the next elections. Some aspects are felt already but not all.
It's a good time to be kind to your neighbors. No matter their background, they're almost certainly not the ones to be upset at.
It’s a contractor to the DHS, but I’m not sure that makes it worse or better.
To be fair, this transpired last year, so they actually had one year and some months before losing extended support.
That said, they should have migrated it years ago.
Ready access to AI tools sure makes vandalism easy.
This vandalism is a joke. You could find the method in an XKCD comic.
The fact that they didn't already know how to do it is the crazy part.
AI is just a tool. You can kill with a hammer; that doesn't mean you ban hammers. And they could have used Stack Overflow instead of AI.
The tools we use are not neutral. A sword can be made to work like an axe, but we use axes for chopping wood because a sword makes a shitty axe. A sword is designed to kill people. The handle, the mass, the weight distribution, and every other aspect I am not qualified to get in to, means swords are designed to kill. They are a tool, and their use is not neutral.
This is a clear example, but I don't believe any tools are neutral. Your immediate fallback was to a hammer, not a mouse, with the obvious corollary being to bludgeon, but the same line applies. Tools are not neutral, and that's why when you looked for something that causes harm, you grabbed something that's objectively been serving a dual purpose for hundreds of years. Nobody's using a computer mouse to bludgeon someone to death; it makes a shitty bludgeon, and the design of the tool reflects that.
That's also why these comparisons always fall back to knives, or hammers, or the AK-47: they are dangerous tools that are designed to make killing easier. Nobody is making these comparisons to more benign tools, like desk lamps, coffee cups, or car stereos, and it's because tools are not neutral, and none of my examples are designed to make direct, bodily harm, easier.
Murder by computer keyboard: https://www.deseret.com/1997/7/6/19322063/mother-charged-wit...
Murder by ethernet cable: https://www.gainesvilletimes.com/news/dead-woman-found-in-pa...
Murder by laptop: https://www.riverfronttimes.com/william-lynn-gunter-sentence...
Murder by cellphone charger: https://lawandcrime.com/crime/pennsylvania-man-admits-to-str...
Murder by desk lamp: https://www.pressdemocrat.com/2009/01/08/man-beaten-to-death...
Stabbing by coffee mug: https://www.muscalaw.com/blog/north-port-two-women-attack-co...
My larger point is that nobody - nobody - defaults to telling us the coffee mug is unregulated, as AI allegedly ought to be. They always compare it to something much more commonly used as a weapon; something that, when asked to name a household object likely to be used as a weapon, the average person would guess.
Your point is that people make a stronger argument even when a weaker one would be sufficient?
Instead of comparing AI to any other tool, especially one closer to "useful with a computer", the common comparison is always a weapon of some kind.
If the design of tools are neutral, one tool should do as well as another in this common comparison. But the useful application of tools is inherent in their design.
If tools were neutral, as so many on this site claim, why is AI only ever compared to knives and hammers?
Parent has lots of links to other common objects causing harm; why are they never used as the example when tools are allegedly neutral? That would be a stronger argument opposing AI regulation: Ethernet has fewer regulations than knives, but can still be used as a murder weapon.
My god, they didn't say to ban AI; they said it makes vandalism easy.
No need to knee jerk react to an argument that hasn't been made.
It's not knee jerk to respond to an obvious contextual implication.
Absolutely wasn’t where I was going with that.
I was sort of admiring the devastation a malignant actor can cause with a good tool.
It’s usually used for morally neutral or good work.
Fair enough. I guess an LLM in an IT administration role could be aptly compared to a bulldozer.
You are the first person in this conversation to mention banning. I am not sure what your comment has to do with anything.
Those two in the movies were always a highlight for me, especially when the one joins the other in the Mexican factory riot.
One of my favorite lines "Peligroso es mi nombre medio" (which of course is not grammatically correct in Spanish) and then his short inspirational speech invoking general Zapata were great.
I don’t know where to start with this other than to point out that there is no way in hell these two clowns had the security clearance necessary to access a prod DB at DHS. I can only assume they stole creds from another employee who had that level of clearance. Also, tax records are not stored in a DHS domain.
I think this story has been sanitized to mask some details which is ok I guess but I ain’t buying the back story.
How did they get access to 5k passwords? Are they being sent/stored in cleartext? This is the most baffling part of the article for me.
The second part I'm unclear about is how you could pass SOC2 when you aren't terminating account access simultaneously with the employment termination.
From the article, it sounds like the passwords are indeed stored in cleartext:
> On Feb. 1, 2025, Muneeb Akhter asked Sohaib Akhter for the plaintext password of an individual who submitted a complaint to the Equal Employment Opportunity Commission’s Public Portal, which was maintained by the Akhters’ employer. Sohaib Akhter conducted a database query on the EEOC database and then provided the password to Muneeb Akhter. That password was subsequently used to access that individual’s email account without authorization.
It still blows my mind. Shouldn't the government audit their contracting companies for egregious issues like this? Seems extremely reckless not to.
I've been through a handful of SOC2 audits and they've never asked us to _prove_ that we aren't storing passwords in plaintext or with reversible encryption (we weren't).
This is why so much of vetting & compliance is toothless. You can have robust change management, physical security, network security, identity management, etc. policies but absolutely nobody wants to spend enough on audit & enforcement to make them meaningful.
The gov't will make you _claim_ that you do all of these things before awarding a contract, but they won't ever check.
Good actors will do the right thing regardless because they know the consequences of cutting corners.
I'm pretty shocked as well. I thought every company stopped doing this like 20 years ago? Even for a legacy system that is a long time to continue storing credentials like that.
My wife works in IT at a mid sized city. They still store credentials in source control.
20 years is rookie numbers in these systems. I guarantee it’s been at least 40 years since a single fuck was given.
Policy and practice might not be the same thing. The company and the entire management staff should be on somebody’s blacklist for future procurement.
The whole point of stuff like SOC2 and audits is to verify that policy is actually implemented. Seems like nobody actually checked.
SOC2 requires an audit. But one of the weaknesses of SOC2 is that the audit mostly checks to determine that you are following whatever your policy is. It doesn't verify that your policy is rigorous.
Depends on what their offboarding policy is. If it's 72 hours or something they would not breach policy.
And how exactly do you want to store passwords if not in plain text (and then encrypted, of course)? 5k is a lot; the authorization process is broken, but that is not related to how the passwords are stored.
The only solution is correct access segregation and a bastion
You should never store passwords in plain text, encrypted or not; you should always use a one-way cryptographic hash like bcrypt [0], scrypt [1], or PBKDF2 [2], combined with a unique per-password salt [3] and optionally a pepper [4], and then store the output of the hash in the database.
To confirm a user supplied password matches you run input into the same hash function again with the salt+pepper and compare it to the value in the database.
That way if the database is stolen, the attacker cannot recover the contents of the passwords without brute forcing them. Encrypting passwords is not recommended because too often attackers are able to recover the encryption keys during the same attack where the password data is extracted.
[0] https://en.wikipedia.org/wiki/Bcrypt
[1] https://en.wikipedia.org/wiki/Scrypt
[2] https://en.wikipedia.org/wiki/PBKDF2
[3] https://en.wikipedia.org/wiki/Salt_(cryptography)
[4] https://en.wikipedia.org/wiki/Pepper_(cryptography)
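The hash-then-compare flow described above can be sketched in Python with the stdlib's PBKDF2 (one of the KDFs cited). The iteration count, salt size, and function names here are illustrative, not a vetted configuration:

```python
import hashlib
import hmac
import os

def hash_password(password: str, pepper: bytes = b"") -> bytes:
    """Derive a storable value: random per-password salt + PBKDF2 digest."""
    salt = os.urandom(16)  # unique salt, generated fresh for every password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode() + pepper,
                                 salt, 600_000)
    # Store the salt alongside the digest; the pepper lives outside the DB.
    return salt + digest

def verify_password(password: str, stored: bytes, pepper: bytes = b"") -> bool:
    """Re-run the same KDF with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode() + pepper,
                                    salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

A database breach then yields only salts and digests, and recovering the original passwords means brute-forcing a deliberately slow function, instead of just running a SELECT like in the article.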
You speak very authoritatively on something you don’t know.
Hashing passwords has been a thing for at least 50 years now. V3 unix had /etc/passwd which hashed all user passwords. Notably, these hashed passwords in early unix have been cracked: https://arstechnica.com/information-technology/2019/10/forum...
I guess you got your answer.
Hashed, you store them hashed (and salted). A breach should never reveal passwords.
Typically you store a hash of user passwords instead, then when logging in you hash the user password client-side and compare the hashes. This acts like a one-way function that protects the password while letting the user authenticate themselves.
Also, you need to add salt. Otherwise every person using "Password123" has the exact same hash. Before they broke their search engine, it was common to google the MD5/MD4 hashes to "decrypt" or "unhash" them.
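A tiny demonstration of that point (MD5 used only because it's the one mentioned; a real system should use a slow KDF):

```python
import hashlib
import os

pw = b"Password123"

# Unsalted: every user with this password gets the identical hash,
# which is exactly what makes googling / lookup tables work.
assert hashlib.md5(pw).hexdigest() == hashlib.md5(pw).hexdigest()

# Salted: a random per-user salt makes the stored values differ even
# for identical passwords, defeating precomputed tables.
salt_a, salt_b = os.urandom(16), os.urandom(16)
h_a = hashlib.md5(salt_a + pw).hexdigest()
h_b = hashlib.md5(salt_b + pw).hexdigest()
assert h_a != h_b
```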
Hashing passwords client-side is generally a bad idea on its own, since it means the hash effectively becomes the password. For example, if I have a database row containing the hash of the password and a bad guy gets access to the database, they get the hash. The benefit of a hash is that it's a one-way operation: I can't figure out the plaintext from the hash, so my account is safe. But if the password is hashed on the client and sent to the server, the attacker doesn't need to reverse the hash; they can just send the hash in the request. Instead, you should send the password to the server (using TLS encryption) and do the hash and compare on the server.
You actually want to one-way hash passwords both client-side, for transport, and again server-side, for storage/comparison.
Otherwise, there's a hole, between the end of the TLS connection and where the server-side encryption happens, where the password is in plain text. Think logs and load-balancers and proxies.
While the client-side hashing doesn't help protect your site a lot (as you say, the hashed value the client sends effectively becomes the password), it helps protect the users who use the same password across multiple sites.
Notice in this case, that's exactly what the brothers are accused of doing: using credentials harvested from their site to log into other, potentially more lucrative accounts.
I didn't see if that's the hole the brothers exploited but it very well could have been.
The client-side encryption may have been all that was missing in this case.
Hashing client side is sufficient because the only service you can breach with the hash is the one you already had to breach in order to read the database.
Of course performing an additional server side hash on top of the client side one is good defense in depth because there's at least some chance that it might make things more difficult for a rogue insider and doing so costs approximately nothing. But it certainly isn't critical because by the time you're dealing with a rogue insider things are already looking quite bad.
People shouldn't be downvoting this...
Hashing client-side is a good idea. You must also hash server-side, for storage/comparison.
Otherwise, an insider may be able to harvest the original password, from logs, proxies, load balancers, etc. that requests pass through after the end of the TLS connection, on the way to the db.
They can then try the credentials on other, perhaps more lucrative sites. That's what the brothers are accused of doing here, so client-side hashing (or just simple encryption) may have been the missing piece of security that would have thwarted the credential stealing.
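The "hash on both sides" scheme being debated here could look roughly like this (a simplified sketch; function names are hypothetical, and a real design would also have the server supply a per-user salt for the client-side step):

```python
import hashlib
import hmac
import os

def client_hash(password: str, site: str) -> bytes:
    # Client side: derive a site-specific value so the raw password never
    # leaves the user's machine, and the same password reused on another
    # site produces a different transmitted value.
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               site.encode(), 100_000)

def server_store(transmitted: bytes) -> bytes:
    # Server side: hash the transmitted value *again* with a per-user salt,
    # so a stolen database row is not directly replayable as a login token.
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", transmitted, salt, 100_000)

def server_verify(transmitted: bytes, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", transmitted, salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Anything sitting between TLS termination and the application (logs, proxies, load balancers) then only ever sees the client_hash output, never the reusable plaintext password.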
I wonder how common are setups where an internal person has access to the TLS private key part of the certificate or access to a network equipment that all traffic passes through, yet they cannot access the inputs required for hashing/encryption client-side?
This seems to mostly prevent accidental logging and is thus a matter of defense in depth, stopping malicious actors from exploiting it later — but an actively malicious IT person would not be deterred.
> This seems to mostly prevent accidental logging
Yes, and that's not uncommon, IME. There's generally a lot of logging that's at least potentially available, and it gets turned on, and the logs shared when there's a problem that needs to be fixed (especially when it needs to be fixed quickly, which is usual).
This is going to make more sense for "enterprise"-type deployments, where there's a significant distinction between the people who might have access to request logs at times, and the people who can push code to production.
Yes limited protection against insiders is good defense in depth but not the primary purpose which is to protect end user accounts on other services in the event that you are breached.
My question still stands: how do you disallow cleartext password extraction if you are breached, assuming all your IT infrastructure and code is now accessible to an attacker?
I am talking about not logging them ever, using internal TLS and strong hashing in general, and wondering what exact value is added on top with client side hashing.
There are substantial differences between database access, snooping the logs, internal (no TLS) wiretap, and full MITM of the frontend.
Hashing client side minimizes the risk of any blast radius exceeding the bounds of your own service. There's obviously no way to prevent an adversary who achieves full MITM from gradually harvesting credentials over time. The only solution there is to use keys instead of passwords.
We are not disagreeing, but I am not getting my answer: how is client side hashing really helping, what are the circumstances it helps with if you do have the basics right?
In your enumeration, what is breached for this to be meaningfully impactful for other services where customers might be reusing credentials?