Defenders have threat modeling on their side. With access to source code, design docs, configs, and infrastructure, knowledge of the actual requirements, and the ability to redesign or choose the architecture and dependencies for the job, there's a lot that gives the defending side an advantage.
I'm quite optimistic about AI ultimately making systems more secure and well protected, shifting the overall balance towards the defenders.
The real-world use case for LLM poisoning is attacking places where those models are used via API on the backend, for data classification and fuzzy-logic tasks (like security incident prioritization in a SOC environment). There are no thumbs-down buttons in the API, and usually there's the opposite: a promise not to use the customer's data for training purposes.
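To make that concrete, here's a rough sketch of what such a headless integration looks like. The endpoint, model name, and response shape are all made up for illustration; only the requests calls are real:

    import requests

    # Hypothetical internal triage endpoint, purely illustrative.
    API_URL = "https://llm.internal.example/v1/classify"
    API_KEY = "..."  # a service key; no end user, no feedback button

    def prioritize_incident(alert_text: str) -> str:
        """Ask a backend LLM to bucket an alert as low/medium/high priority."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "triage-model",
                "prompt": "Classify the severity of this alert as "
                          f"low, medium or high:\n{alert_text}",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["label"]

A poisoned model only needs to learn to answer "low" whenever an attacker-chosen trigger string appears in the alert, and since this runs headless, no human ever sees the output to flag it.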
CrowdStrike does this trick where it replaces a file (being transferred over a network socket) with zeroes if it matches a malware signature. Assuming these are the malware signature files themselves, a match wouldn't be surprising.
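I don't know CrowdStrike's internals, but conceptually the mechanism would be something like this naive sketch (real products hook the network stack in the kernel and use far richer signatures than a byte-string match):

    # Conceptual illustration only, not CrowdStrike code.
    SIGNATURES = [b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"]

    def filter_chunk(chunk: bytes) -> bytes:
        """Zero out any signature match in a chunk crossing the wire."""
        for sig in SIGNATURES:
            idx = chunk.find(sig)
            while idx != -1:
                chunk = chunk[:idx] + b"\x00" * len(sig) + chunk[idx + len(sig):]
                idx = chunk.find(sig, idx + len(sig))
        return chunk

A signature-definition file contains those very byte patterns verbatim, so it would trip its own filter.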
This actually makes the most sense, and would help explain how the error didn't occur during testing (in good faith, I assume it was tested).
In testing, the dev may have worked from their primary to deploy the update to a series of secondary drives, then sequentially performed a test boot from each secondary drive configured for each supported OS version. A quick shortcut like that would've bypassed how their product updates in customer environments, also bypassing checks their software may have performed (in this case, overwriting their own file's contents).
It's heartwarming to believe in the "family" narrative some companies promote, but it's important to remain pragmatic. This narrative is just a motivational tool to encourage employees to go above and beyond their compensated duties. It's not a binding contract though.
In reality, the power dynamic inherently favors the employer. Once an employee has invested their time and energy, the company holds all the leverage. There's little incentive for the company to uphold their end of this unspoken "deal."
Leadership changes, company priorities shift, and the "family" narrative can quickly fade when faced with financial realities or strategic decisions.
I work in infosec, and this sounds like a communication failure on the whistleblower's part.
Contrary to what many people believe, profits should be prioritized over security at most companies; that's only natural (after all, security measures typically don't generate any profits themselves). The key is finding the right balance in this tradeoff.
Business leaders are the ones responsible for figuring out the acceptable risk level. They already deal with risk every day, so it's nonsensical to claim they aren't capable of understanding it. InfoSec's role, for the most part, is to be a good translator: identifying the technical issues (vulnerabilities, threats, missing best practices) that go beyond the acceptable risk profile and presenting these findings to the business stakeholders in language they understand.
Either the guy wasn't convincing enough, or he failed to figure out what the business cares about and to present the identified risk in those terms.
This is framing the story as a simple interaction (or interactions) between Harris and business leaders at Microsoft. It wasn't. Microsoft has a team responsible for translating between security researchers like Harris and its product teams/leadership. That team dismissed Harris because its priority was to ignore or downplay the issues brought to it. Harris went around them and was still ignored. It seems like he tried everything short of calling the press directly to get someone to pay attention. Even after the issue was made public by other security researchers, MS did nothing.
What happened here was a systematic failure on MS' part to address a fundamental flaw in one of the most critical pieces of security infrastructure at the entire company.
Companies like MS (and everyone else, it seems) need to get out of this Jack Welch mindset where the only thing that matters is the shareholders. MS acts as the gatekeeper of the most valuable organizations and governments on the planet. Their profits have to take a backseat to this type of thing, or they shouldn't be allowed to sell their products to critical organizations and governments.
I might be misunderstanding, but from Andrew's LinkedIn it looks like he wasn't a security researcher at MS; he was actually the person responsible for translating between security researchers and the upper management:
> Evangelize security services, practices, products, both internally and externally.
> Leading technical conversations around strategy, policy and processes with FINSEC and DoD/IC executive staff.
> he was actually the person responsible for translating between security researchers and the upper management:
According to the article, the group in charge of taking input from security researchers and deciding which vulnerabilities need to be addressed was Microsoft Security Response Center (MSRC), and Andrew Harris wasn't a member of it.
Why not go even further? Why not say that the whistleblower was wrong and Microsoft business leadership was right? Maybe their profits from ignoring this issue have been fantastic, and the externalities from e.g. mass theft of national security secrets are not Microsoft's problem.
Well, because as a security person I can only evaluate his actions from a security standpoint. Evaluating the actions of MS business leadership is beyond my expertise.
I highly doubt that senior leadership would willingly accept this kind of liability, but you need to put it into the right terms for them to understand. Politics play an important role at that level as well. There are ways of putting additional pressure on the C-suite, such as making sure certain keywords are used in writing, triggering input from legal, or forcing stakeholders to formally sign off on a presented risk.
Without inside knowledge, it's impossible to figure out what went wrong here, so I'm not assigning blame to the whistleblower, just commenting that way too often techies fail to communicate risks effectively.
During my Master's, security was one of the subjects I took. It started with an equation relating risk (how much you'd lose if something bad happened), the probability of that risk, and the cost of mitigating it. The instruction being: one tries to find a mitigation that costs less than the expected loss from the risk. And note here that "cost" does not refer to just money; it could be computational cost, energy consumed, etc.
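Roughly, with made-up numbers (and "cost" in whatever unit applies):

    # Mitigate when the mitigation costs less than the expected loss it
    # prevents. All numbers are invented for illustration.
    loss_if_exploited = 2_000_000   # impact if the bad event happens
    probability_per_year = 0.05     # estimated likelihood per year
    mitigation_cost = 40_000        # what the fix costs per year

    expected_annual_loss = loss_if_exploited * probability_per_year  # 100,000
    mitigate = mitigation_cost < expected_annual_loss                # True

This is essentially the classic annualized loss expectancy (ALE) calculation from risk management.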
For MS-sized entities, the risk calculation is way more complicated. The 1:1 comparison between cost of mitigation and cost of exploitation only really applies to opportunistic attacks. At the level where APTs get involved, the data / access might be so valuable that they'd gladly outspend the blue team's budget by a factor of 10-100.
But wouldn't the value of the data be reflected in the cost of exploitation? (By cost of exploitation, I don't mean the resources needed to exploit, but what a company would stand to lose if exploited.) The values of the variables, sure, can be different. I don't see why the equation has to be.
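Plugging APT-scale values (again made up) into the same expected-value form:

    # Same formula, scaled inputs: a targeted adversary raises both the
    # probability and the value at stake, but the equation doesn't change.
    loss_if_exploited = 500_000_000  # e.g. national-security-grade data
    probability_per_year = 0.3       # determined attacker, not opportunistic
    mitigation_cost = 10_000_000

    expected_annual_loss = loss_if_exploited * probability_per_year  # 150M
    mitigate = mitigation_cost < expected_annual_loss                # True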
I seem to recall from another article that Microsoft was told by the review board that they need to start focusing on security rather than working on new features.
A company like Microsoft shouldn't need a whistleblower to know to focus on security. It seemed like Microsoft was on the right track to becoming a better company for a good number of years, but for the past year or two everything seems to fall apart again.
At this point you should just leave this dumpster fire of an organization and find a more reasonable place to work. I can't relate to people who keep inventing atrocious workarounds while ignoring the real problem: that they work in a hostile work environment.
I work in security and can't relate to banning Python & replacing it with Microsoft crap either.
No, there is no confusion here at all (for a Python developer). I would consider it a code smell, though, as the whole problem is completely avoidable with better naming.