I found a Vulnerability. They found a Lawyer

dixken.de

182 points by toomuchtodo 3 hours ago


janalsncm - an hour ago

Three thoughts from someone with no expertise.

1) If you make legal disclosure too hard, the only way you will find out is via criminals.

2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.

3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is kept secure. I’m not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all affected users. That would change the incentives so that the most basic vulnerabilities disappear, and software engineers become more economical than lawyers.

Hnrobert42 - 12 minutes ago

I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.

I guess I should feel lucky they didn't try to have me criminally prosecuted.
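
In case it helps anyone replicate the setup: assuming you control a catch-all domain, a tiny script can derive a keyed, non-guessable alias per service, so spam arriving at one alias pinpoints exactly who leaked it. A rough sketch with a placeholder domain and key (none of this is DAN-specific):

    import hmac
    import hashlib

    SECRET = b"replace-with-a-private-key"   # keyed tag keeps aliases non-guessable in bulk
    DOMAIN = "example.org"                   # placeholder; use a domain you control

    def alias_for(service: str) -> str:
        # Derive a short, service-specific tag so each service gets a unique address.
        tag = hmac.new(SECRET, service.encode(), hashlib.sha256).hexdigest()[:8]
        return f"{service.lower()}.{tag}@{DOMAIN}"

    # alias_for("diversalertnetwork") -> "diversalertnetwork.<8 hex chars>@example.org"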

stevage - 2 hours ago

Since the author is apparently afraid to name the organisation in question, it seems the legal threats have worked perfectly.

general1465 - 7 minutes ago

One way to improve cybersecurity is to let cyber criminals loose like predators hunting prey. Companies need to feel fear that any vulnerability in their systems is going to be weaponized against them. Only then will they appreciate an email telling them about a security issue which has not been exploited yet.

MrQuincle - 6 minutes ago

There should exist a vulnerability disclosure intermediary. It could function as a barrier to protect the scientist/researcher/enthusiast and do everything by the book for the different countries.

vaylian - 3 hours ago

> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.

Why sign anything at all? The company was obviously not interested in cooperation, but in domination.

0sdi - 2 hours ago

Is this Divers Alert Network (DAN) Europe, and its insurance subsidiary, IDA Insurance Limited?

- 10 minutes ago
[deleted]
paxys - an hour ago

When you are acting in good faith and the person/organization on the other end isn't, you aren't having a productive discussion or negotiation, just wasting your own time.

The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.

undebuggable - 2 hours ago

> the portal used incrementing numeric user IDs

> every account was provisioned with a static default password

Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone who makes worse mistakes gets the job, and there are plenty of such systems in production handling real people's data.
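
To make concrete why both quoted choices are considered basic mistakes: sequential IDs let anyone walk the entire user table, and a shared default password turns that enumeration into mass account takeover. A minimal provisioning sketch that avoids both, using a hypothetical provision_account helper (nothing here is from the actual portal):

    import hashlib
    import secrets
    import uuid

    def provision_account(email: str) -> dict:
        # Non-sequential identifier: cannot be enumerated by incrementing a number.
        user_id = uuid.uuid4().hex

        # Unique random initial password per account instead of one static default.
        temp_password = secrets.token_urlsafe(12)

        # Persist only a salted, slow hash; the temporary password is delivered
        # out of band and must be changed on first login.
        salt = secrets.token_bytes(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", temp_password.encode(), salt, 600_000)

        return {"email": email, "user_id": user_id,
                "salt": salt.hex(), "pw_hash": pw_hash.hex()}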

xvxvx - 3 hours ago

I’ve worked in I.T. for nearly 3 decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.

I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tell me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the discussion.

Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?

By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

viccis - 2 hours ago

This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?
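
For reference, the tell is exactly that: if the app can email the original password back, it is stored reversibly. The standard alternative is to keep only a salted, slow digest and handle "Forgot Password" with a reset token. A rough illustrative sketch, not the app's actual code:

    import hashlib
    import hmac
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Store only the salt and digest; the original password is discarded.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    def start_password_reset() -> str:
        # "Forgot Password" sends a short-lived random token, never the password,
        # because the password is simply not recoverable from the digest.
        return secrets.token_urlsafe(32)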

snowhale - 18 minutes ago

The NDA demand with a same-day deadline is such a classic move. Makes it clear they were more worried about reputation than fixing anything.

kazinator - 2 hours ago

> vulnerability in the member portal of a major diving insurer

What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.

estebarb - an hour ago

If this were in Costa Rica, the appropriate way would be to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).

Here, all databases with personal information must be registered there and the data must be kept secure.

- 2 hours ago
[deleted]
hbrav - an hour ago

This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.

This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.

Buttons840 - an hour ago

I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.

The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.

Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.

As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.

projektfu - 2 hours ago

Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data you don't have permission to view. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly, and were written at a time when most access was either direct dial-up or internal. The meaning of abuse can be twisted to cover rewriting a URL to access the next user, or inputting a user ID that you are not authorized to use.

Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.

For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
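
One hedged, consent-based variant of that idea, sketched below: authenticate as yourself, request the consenting person's record, and record only the HTTP status code, never the body. The endpoint, IDs, and session handling are made up for illustration and are not the real portal:

    import requests

    BASE = "https://portal.example/api/members"   # placeholder, not the real portal

    def check_access_control(session: requests.Session, own_id: int, consenting_id: int) -> dict:
        # Fetch your own record (expected 200), then request the consenting
        # person's record and keep only the status code, discarding the body.
        own = session.get(f"{BASE}/{own_id}")
        other = session.get(f"{BASE}/{consenting_id}", stream=True)
        other.close()  # never read or store the response body

        return {
            "own_record_status": own.status_code,      # expected: 200
            "other_record_status": other.status_code,  # 200 demonstrates the flaw;
                                                       # 403/404 means access control holds
        }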

desireco42 - 3 hours ago

I think the problem is the process. Each country should have a reporting authority, and it should be the one to deal with security issues.

So you never report to the actual organization but to that security body, like you did. And they would be better equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.

So you are a researcher, you report your finding, and you can't be sued or bullied by the organization that is at fault in the first place.

josefritzishere - 2 hours ago

I find these tales of lawyerly threats completely validate the hacker's actions. They reported the bug to spur the company to resolve it. The company's reaction all but confirms that reporting it to them directly would not have been productive. Their management lacks good stewardship. They are not thinking about their responsibility to their customers and employees.

cptskippy - 2 hours ago

Maintaining cybersecurity insurance is a big deal in the US; I don't know about Europe. So vulnerability disclosure is problematic for data controllers because it threatens their insurance and premiums. Today much of enterprise security is attestation based, and a vulnerability disclosure potentially exposes companies to insurance fraud. If they stated that they maintained certain levels of security, and a disclosure demonstrably proves they do not, that is grounds for dropping a policy or even a lawsuit to reclaim paid premiums.

So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.

FurryEnjoyer - 2 hours ago

Malta has been mentioned? As a person living here, I can say that the workflow of the government here is bad. Same as in every other place, I guess.

By the way, I have a story about accidentally hacking an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.

I believe that in every single system like that, it's entirely possible to find a vulnerability. Nobody cares about them, and the people who build those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the advent of AI: since it has zero understanding of what it is actually doing, it will make mistakes that cause more data leaks.

Even if you don't consider yourself an evil person, would you still stay the same after finding a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook" way.

- 2 hours ago
[deleted]
durzo22 - 27 minutes ago

This is LLM slop

refulgentis - 3 hours ago

Wish they named them. Usually I don't recommend it. But the combination of:

A) in EU; GDPR will trump whatever BS they want to try

B) no confirmation affected users were notified

C) aggro threats

D) nonsensical threats, sourced to Data Privacy Officer w/seemingly 0 scruples and little experience

Due to B), there's a strong responsibility rationale.

Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.

clarabennett26 - 2 hours ago

[dead]

aicodereview42 - 2 hours ago

[dead]

cynicalsecurity - 2 hours ago

[flagged]

anonymous908213 - 3 hours ago

[flagged]

kazinator - 2 hours ago

Why does someone with a .de website insure their diving using some company based in Malta?

Based on this interaction, you have to wonder what it's like to file a claim with them.