
I work in infosec, and this sounds like a communication failure on the whistleblower's part.

Contrary to what many people believe, for most companies profits should be prioritized over security. That's only natural: security measures typically don't generate any profit themselves. The key is finding the right balance in that tradeoff.

Business leaders are the ones responsible for deciding the acceptable risk level. They already deal with risk every day, so it's nonsensical to claim they can't understand it. InfoSec's role, for the most part, is to be a good translator: identify the technical issues (vulnerabilities, threats, missing best practices) that exceed the acceptable risk profile and present those findings to business stakeholders in language they understand.

Either the guy wasn't convincing enough, or he failed to figure out what the business cares about and present the identified risk in those terms.



This frames the story as a simple interaction (or interactions) between Harris and business leaders at Microsoft. It wasn't. Microsoft has a team responsible for translating between security researchers like Harris and its product teams/leadership. That team dismissed Harris because its priority was to ignore or downplay issues brought to it. Harris went around them and was still ignored. It seems like he tried everything short of calling the press directly to get someone to pay attention. Even after the issue was made public by other security researchers, MS did nothing.

What happened here was a systemic failure on MS' part to address a fundamental flaw in one of the most critical pieces of security infrastructure at the entire company.

Companies like MS (and everyone else, it seems) need to get out of the Jack Welch mindset that the only thing that matters is the shareholders. MS acts as the gatekeeper for the most valuable organizations and governments on the planet. Their profits have to take a backseat to this kind of thing, or they shouldn't be allowed to sell their products to critical organizations and governments.


I might be misunderstanding, but from Andrew's LinkedIn it looks like he wasn't a security researcher at MS, he was actually the person responsible for translating between security researchers and the upper management:

> Evangelize security services, practices, products, both internally and externally.

> Leading technical conversations around strategy, policy and processes with FINSEC and DoD/IC executive staff.


>he was actually the person responsible for translating between security researchers and the upper management:

According to the article, the group in charge of taking input from security researchers and deciding which vulnerabilities need to be addressed was Microsoft Security Response Center (MSRC), and Andrew Harris wasn't a member of it.


Why not go even further? Why not say that the whistleblower was wrong and Microsoft business leadership was right? Maybe their profits from ignoring this issue have been fantastic, and the externalities from e.g. mass theft of national security secrets are not Microsoft's problem.


Well, because as a security person I can only evaluate his actions from the standpoint of security. Evaluating the actions of MS business leadership is beyond my expertise.

I highly doubt that senior leadership would willingly accept this kind of liability. But you need to put it in the right terms for them to understand. Politics play an important role at that level as well. There are ways of putting additional pressure on the C-suite, such as making sure certain keywords are used in writing, triggering input from legal, or forcing stakeholders to formally sign off on a presented risk.

Without inside knowledge, it's impossible to figure out what went wrong here, so I'm not assigning blame to the whistleblower, just noting that far too often techies fail to communicate risks effectively.


During my Master's, security was one of the subjects I took. It started with an equation relating the impact of a risk (how much you'd lose if something bad happened), the probability of that risk, and the cost of mitigating it. The instruction was to find a mitigation that costs less than the expected loss from the risk. And note here that "cost" does not refer to just money, but could be computational cost, energy consumed, etc.
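The comparison described above can be sketched in a few lines. The function names and all of the numbers here are purely illustrative, not from any real risk model:

```python
# Expected-loss risk comparison (all figures illustrative).
# Rule of thumb from the comment above: mitigate when the mitigation
# costs less than the expected loss from the risk.

def expected_loss(impact: float, probability: float) -> float:
    """Expected cost of the risk: loss if it occurs, weighted by its probability."""
    return impact * probability

def worth_mitigating(impact: float, probability: float, mitigation_cost: float) -> bool:
    """True when the mitigation is cheaper than the expected loss."""
    return mitigation_cost < expected_loss(impact, probability)

# A breach costing $10M with a 5% annual probability: expected loss $500k.
print(expected_loss(10_000_000, 0.05))               # 500000.0
print(worth_mitigating(10_000_000, 0.05, 200_000))   # True: a $200k fix is cheaper
print(worth_mitigating(10_000_000, 0.05, 800_000))   # False under this simple model
```

This is essentially the classic annualized-loss-expectancy calculation, stripped down to one risk and one mitigation.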


For MS-sized entities, the risk calculation is far more complicated. The 1:1 comparison between cost of mitigation and cost of exploitation really only applies to opportunistic attacks. At the level where APTs get involved, the data/access might be so valuable that they'd gladly outspend the blue team's budget by a factor of 10-100.


But wouldn't the value of the data be reflected in the cost of exploitation? (By cost of exploitation, I don't mean the resources needed to exploit, but what a company stands to lose if exploited.) The values of the variables can certainly differ; I don't see why the equation has to.
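The point above can be illustrated with the same kind of expected-loss check: moving from an opportunistic attack to an APT only changes the inputs, not the formula. The function and every number below are hypothetical:

```python
# Same expected-loss rule for both threat models; only the inputs change.
def worth_mitigating(impact: float, probability: float, mitigation_cost: float) -> bool:
    """True when the mitigation is cheaper than the expected loss."""
    return mitigation_cost < impact * probability

# Opportunistic attack: modest potential loss, so a $50k control isn't justified
# (expected loss here is only $20k).
print(worth_mitigating(impact=1_000_000, probability=0.02, mitigation_cost=50_000))   # False

# APT targeting data worth billions: even a mitigation budget 100x larger
# still clears the bar, because the expected loss is $100M.
print(worth_mitigating(impact=5_000_000_000, probability=0.02, mitigation_cost=5_000_000))  # True
```

In other words, the equation survives; what changes at MS scale is that the impact term can dwarf any plausible defensive budget.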


Microsoft was specifically told by the US Cyber Safety Review Board earlier this year that they had crossed the line on risk vs. profit. https://edition.cnn.com/2024/06/13/tech/microsoft-president-...

I seem to recall from another article that Microsoft was told by the review board that they need to start focusing on security rather than working on new features.

A company like Microsoft shouldn't need a whistleblower to know to focus on security. It seemed like Microsoft was on the right track to becoming a better company for a good number of years, but for the past year or two everything seems to be falling apart again.



