> Does Palantir collect data or just analyze aggregated purchased data?
Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
Any time you see an article or comment saying something along the lines of "Palantir is stealing your data", consider if it makes sense when you replace Palantir with MySQL, if it doesn't then it's generally safe to assume that article is garbage.
There are plenty of legitimate reasons to have grievances with Palantir, but they're completely drowned out by nonsense.
> Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
This is rather naive. Palantir plays politics by creating and funding a super PAC to discredit a former employee who happens to support the RAISE Act.
> Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money is going toward attack ads against Alex Bores – even though Bores himself used to work for Palantir.
Those are legitimate grievances as mentioned, what they are not is Palantir themselves collecting massive amounts of data, which is often what they're portrayed as doing and what the GP asked about.
They're not trying. I've seen an advertiser remain active for months with literally tens of thousands of ads where clicking them directly downloads a malicious exe file that most antivirus scanners flag.
They're definitely not trying - in any form. I run a marketplace for dogs (i.e. craigslist for puppies & dogs) and scammers are always trying to post fake ads. They always use Gmail accounts. Every time I ban a Gmail address, the scammers will just get a new one. The same scammer/person has created thousands of Gmail accounts and Google doesn't care. I have reported this to Google. For the amount of info Google has on people, it would be trivial for them to prevent some of this.
Meanwhile, because I've worked for several startups that have used Google Workspace, if I (try to) open a new Google account, phone verification fails because my phone number "has been used too many times".
Some places suggest "try logging into Google with your phone number, and deleting the associated account if you don't need it", but that just ... doesn't work.
Tech support scams still?! I don't even understand how this is possible. If Google wanted to, they could come up with the tech to bypass the scammers' own ghosting system. They must have some kind of invisible Google bot that checks for downloads/scams, right?
Phone providers should also be detecting this with AI. There is no way this should be occurring anymore.
We had that issue: someone was advertising fake clones of our sites specifically to push malware-ridden payloads. We only got it handled by bugging internal contacts at Google. It sucked, and worse, we had to bug them for weeks, because the attacker was churning through multiple domains and probably over 100 breached Ads accounts by the time they stopped.
See also, Andy Keep's dissertation [1] and his talk at Clojure/Conj 2013 [2].
I think that the nanopass architecture is especially well suited for compilers implemented by LLMs as they're excellent at performing small and well defined pieces of work. I'd love to see Anthropic try their C compiler experiment again but with a Nanopass framework to build on.
I've recently been looking into adding Nanopass support to Langkit, which would allow for writing a Nanopass compiler in Ada, Java, Python, or a few other languages [3].
Autopilots can, both on airliners and small planes, although on the latter it's only landing as far as I know, and it's only meant for emergencies. Airbus ATTOL is probably the most interesting of these in that it's visual rather than ILS (note that no commercial airliners are using this).
There's also the issue that when something goes wrong, many people will never trust an autopilot again. Just look at how people have reacted to a Waymo running over a cat in a scenario where most humans would have made the same error. There are now many people calling for self-driving cars to never be allowed on roads, citing that one incident.
> Mythos Preview identified a number of Linux kernel vulnerabilities that allow an adversary to write out-of-bounds (e.g., through a buffer overflow, use-after-free, or double-free vulnerability.) Many of these were remotely-triggerable. However, even after several thousand scans over the repository, because of the Linux kernel’s defense in depth measures Mythos Preview was unable to successfully exploit any of these.
Do they really need to include this garbage which is seemingly just designed for people to take the first sentence out of context? If there's no way to trigger a vulnerability then how is it a vulnerability? Is the following code vulnerable according to Mythos?
if (x != null) {
y = *x; // Vulnerability! x could be null!
}
Is it really so difficult for them to talk about what they've actually achieved without smearing a layer of nonsense over every single blog post?
I agree the wording is a bit alarmist, but a closer example to what they are saying is:
bool silly_mistake = false;
//... lots of lines of code
free(x);
//... lots of lines of code
if (silly_mistake) { // silly_mistake shown to be false at this point in the program in all testing, so far
free(x);
}
A bug like the above would still be something that gets patched, even if a way to exploit it has not yet been found, so I think it's fair to call out (perhaps with less sensationalism).
FWIW there's a whole boutique industry around finding these. People have built whole careers around farming bug bounties for bugs like this. I think they will be among the first set of software engineers really in trouble from AI.
That is something a good static analyser or even optimising compiler can find ("opaque predicate detection") without the need for AI, and belongs in the category of "warning" and nowhere near "exploitable". In fact a compiler might've actually removed the unreachable code completely.
Well yeah, it’s a toy example to illustrate a point in an HN discussion :).
Imagine “silly_mistake” is a parameter, rename it “error_code” (pass by reference), put a label named “cleanup” right before the if statement, and throw in a ton of “goto cleanup” statements to the point that the control flow of the function is hard to follow, if you want it to model real code ever so slightly more closely.
It will be interesting to see the bugs it’s actually finding.
It sounds like they will fall into the lower CVE scores - real problems but not critical.
That's what I'm saying; a static analyser will be able to determine whether the code and/or state is reachable without any AI, and it will be completely deterministic in its output.
You cannot tell if code is actually reachable if it depends on runtime input.
Those really evil bugs are the ones that exist in code paths that only trigger 0.001% of the time.
Often, the code path is not triggerable at all with regular input. But with malicious input, it is, so you can only find it through fuzzing or human analysis.
> You cannot tell if code is actually reachable if it depends on runtime input.
That is precisely what a static analyser can determine. E.g. if you are reading a 4-byte length from a file, and using that to allocate memory which involves adding that length to some other constant, it will assume (unless told otherwise) that the length can be all 4G values and complain about the range of values which will overflow.
Except it didn't fail. You just looked at the left engine and said what if I fed it mashed potatoes instead of fuel. And then dropped the mic and left the room.
It's more like finding a way to shut down the engine, but only if there was a movie in the entertainment system that was longer than 5 hours. You can't exploit it now, and probably never will, but it's a risk that's sitting there that I'm sure you agree should be fixed.
Presumably they mean they could make user code trigger a write out of bounds to kernel memory, but they couldn’t figure out how to escalate privileges in a “useful” way.
They should show this then to demonstrate that it's not something that has already been fully considered. Running LLMs over projects that I'm very familiar with will almost always have the LLM report hundreds of "vulnerabilities" that are only valid if you look at a tiny snippet of code in isolation because the program can simply never be in the state that would make those vulnerabilities exploitable. This even happens in formally verified code where there's literally proven preconditions on subprograms that show a given state can never be achieved.
As an example, I have taken a formally verified bit of code from [1] and stripped out all the assertions, which are only used to prove the code is valid. I then gave this code to Claude with some prompting towards there being a buffer overflow and it told me there's a buffer overflow. I don't have access to Opus right now, but I'm sure it would do the same thing if you push it in that direction.
For anyone wondering about this alleged vulnerability: Natural is defined by the standard as a subtype of Integer, so what Claude is saying is simply nonsense. Even if a compiler is allowed to use a different representation here (which I think is disallowed), Ada guarantees that the base type for a non-modular integer includes negative numbers IIRC.
They've promised that they will show this once the responsible disclosure period expires, and pre-published SHA3 hashes for (among others) four of the Linux kernel disclosures they'll make.
> Running LLMs over projects that I'm very familiar with will almost always have the LLM report hundreds of "vulnerabilities" that are only valid if you look at a tiny snippet of code in isolation because the program can simply never be in the state that would make those vulnerabilities exploitable.
Their OpenBSD bug shows why this is not so simple. (We should note of course that this is an example they've specifically chosen to present as their first deep dive, and so it may be non-representative.)
> Mythos Preview then found a second bug. If a single SACK block simultaneously deletes the only hole in the list and also triggers the append-a-new-hole path, the append writes through a pointer that is now NULL—the walk just freed the only node and left nothing behind to link onto. This codepath is normally unreachable, because hitting it requires a SACK block whose start is simultaneously at or below the hole's start (so the hole gets deleted) and strictly above the highest byte previously acknowledged (so the append check fires).
Do you think you would be able to identify, in a routine code review or vulnerability analysis with nothing to prompt your focus on this particular paragraph, how this normally unreachable codepath enables a DoS exploit?
I agree they found at least some real vulnerabilities. What I think is nonsense is the claim of finding thousands of real critical vulnerabilities and claims that they've found other Linux vulnerabilities that they simply can't exploit.
There are notably no SHA-3 sums for all their out-of-bounds write Linux vulnerabilities, which would be the most interesting ones.
Sure. I guess it's a question of whether this is the worst they found or a representative case among thousands. It sounds like you'd know better than me, so I'm going to provisionally hope you're right...
Why is that nonsense? Do you think they exhausted all their compute finding just the few big vulnerabilities they've already discussed, and don't have a budget to just keep cranking the machine to generate more?
They're not publishing SHAs for things that aren't confirmed vulnerabilities. They're doing exactly the thing you'd want them to do: they claim to have vulnerabilities when they have actual vulnerabilities.
If I understand Anthropic's statements correctly, they've been cranking for a while, and what they have now is the results of Mythos-enabled vulnerability scans on every important piece of software they could find. (I do want to acknowledge how crazy it is that "vulnerability scan all important software repos in the world" is even an operation that can be performed.)
We talked to Nicholas Carlini on SCW and did not at all get the impression that they've hit everything they can possibly hit. They're still proving the concept one target at a time, last I heard.
> Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software’s developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.
They don’t explicitly rule out, I suppose, that these were only limited partial scans they did to find the vulnerabilities. But I don’t know why they’d do it that way, it’s not like they don’t have the resources to scan the entire Linux kernel.
I was trying to map "vulnerability scan all important software repos in the world" to an actual quote in their writing, but "every major operating system and every major web browser, along with a range of other important pieces of software" is not the same.
Can't you? My understanding is that that's exactly how security scans usually work - you run an analysis, find all the vulnerabilities, and then the continuous process is only there to check against the introduction of new vulnerabilities. Is that not the right mental model?
(A "security scanner" is a one-and-done proposition because it's deterministic and is going to find what it finds the first time you run and nothing more. But a software security assessment project you run every year on the same target with different teams will turn up different stuff every year. I'm at pains to remind people how totally lame source code security scanners are. People keep saying "static analyzers already do this" and like, nobody in security takes those tools seriously.)
The kernel address space layout randomization they are talking about is a bit different from (x != null). Another bug may allow an attacker to locate the required address.
It could very well be an actual reachable buffer overflow, but with KASLR, canaries, CET and other security measures, it's hard to exploit it in a way that doesn't immediately crash the system.
We've very quickly reached the point where AI models are now too dangerous to publicly release, and HN users are still trying to trivialize the situation.
Are they actually too dangerous to publicly release? It seems like a little bit of marketing from the model-producing companies to raise more funding. It's important to look at who specifically is making that statement and what their incentives are. There are hundreds of billions of dollars poured into this thing at this point.
You really think some marketers got leaders from companies across the industry to come together to make a video - and they're all in on the conspiracy because money?
That’s literally exactly the kind of thing marketing does, and has been doing for a very long time. Did you just arrive on earth from outer space or something?
Yes? Saying "conspiracy" is overstating things. A company can make a marketing push overselling their product and then have exclusive corporate partners that benefit from being associated with that marketing. That just seems like normal business that happens every day, and being skeptical of marketing messages should be your default position.
Says the marketing department of the company who is apparently still working on these AI models and will 100% release them to the public when their competitive advantage slips.
Marketing pushing to release a dangerous model is a lot more likely than marketing labeling a model as dangerous when it really isn't. If anything, marketing would want to downplay the danger of a model, which is the opposite of what Anthropic is doing.
Everyone here doing mental gymnastics to imagine Anthropic playing 5-D chess because they're in denial of what is happening in front of their faces. AI is getting more capable/dangerous - it's not surprising to anyone. The trendlines have pointed in this direction for years now and we're right on schedule.
> The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.
I'm confused on this point. The text you quote implies that they were able to build an exploit, but the text quoted in the parent comment implies that they were not.
What were they actually able to do and not do? I got confused by this when reading the article as well.
They successfully built local privilege escalation exploits (from several bugs each), and found other remotely-accessible bugs, but were not able to chain their remote bugs into remotely-accessible exploits.
Because a vulnerability exists independently from the exploit. It’s a basic tenet of the current cybersecurity paradigm, that any IT related engineer should know about…
It's incredible how when you have experienced and knowledgable software engineers analyse these marketing claims, they turn out to be full of holes. Yet at the same time, apparently "AI" will be writing all the code in the next 3-6 months.
That example you gave is extremely memorable: I recognised it as exactly the kind of insanely stupid false positive that a highly praised (and expensive) static analyser I ran on a codebase several years ago would emit copiously.
I agree. There are more blogs talking about LLMs finding vulnerabilities than there are actual exploitable vulns found by LLMs. 99.9% of these "vulnerabilities" will never have a PoC because they are worthless unexploitable slop and a waste of everyone's time.
I think the point they were trying to make here was “Claude did better than a fuzzer because it found a bunch of OOB writes and was able to tell us they weren’t RCE,” not “Claude is awesome because it found a bunch of unreachable OOB writes.”
> Screen captures are ephemeral and will only be saved temporarily on your computer.