I think there's room for a distinction between "not using metrics" and "not using data".
Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.
>> I was in Ukraine drone HQ last year and they were using Palantir tech to blow up Russian tanks dude
> And I am a Ukrainian drone pilot on the frontline. We use the Delta Battlefield Management System, fully developed in Ukraine. Not American Peter “Antichrist” Thiel bullshit.
Not really comparable perhaps - but I had an Ericsson T18s or similar that went through a full 60°C cotton wash cycle (it was on at the start of the wash) and was fine after drying off.
The thing is - if the battery had been destroyed, that could have been replaced...
I think it might have to do with how models work, and with their fundamental limits (yes, they're stochastic parrots; yes, they confabulate).
Newer (past two years?) models have improved "in detail" - or as pragmatic tools - but they still don't deserve the anthropomorphism we subject them to because they appear to communicate like us (and therefore appear to think and reason like us).
But the "holes" are painted over in contemporary models - via training, system prompts and various clever (useful!) techniques.
I think this leaves us with great difficulty spotting the weak spots in a new or slightly different model - but as we get to know each particular tool - each model - we get better at spotting that model's holes.
Maybe it's poorly chosen variable names. A tendency to write plausible-looking, plausibly named e2e tests that turn out not to quite test what they appear to test at first glance. Maybe there's missing locking of resources, or missing transactions, in sequential code that appears sound - but ends up storing invalid data when one or several steps fail...
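To make that last failure mode concrete, here's a minimal sketch of the pattern I mean (hypothetical schema and function names, sqlite3 just for illustration):

    import sqlite3

    # Looks sound read top to bottom - but if the INSERT fails (constraint
    # violation, crash, ...) the stock decrement is already committed and
    # the stored data is now invalid.
    def place_order_unsafe(conn: sqlite3.Connection, item: str, qty: int) -> None:
        conn.execute("UPDATE stock SET count = count - ? WHERE item = ?", (qty, item))
        conn.commit()
        conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
        conn.commit()

    # One transaction around both statements: either both steps persist
    # or neither does.
    def place_order_safe(conn: sqlite3.Connection, item: str, qty: int) -> None:
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE stock SET count = count - ? WHERE item = ?", (qty, item))
            conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))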
In the happy cases, current LLMs function like well-intentioned junior coders, enthusiastically delivering features and fixing bugs.
But in the other cases, they are like pathologically lying sociopaths, telling you anything you want to hear just so you keep paying them money.
When you catch them lying, it feels a bit like a betrayal. But the parrot is just tapping the bell, so you'll keep feeding it peanuts.
> Meta CEO Mark Zuckerberg could soon have an AI clone of himself to interact with and provide feedback to employees, according to a report from the Financial Times.
Tunnel vision? If your model can handle a big context, why divide into smaller problems to conquer - even if such splitting might be quite trivial and obvious?
It's the difference between "achieve the goal" and "achieve the goal in this one particular way" (leveraging a large context).
I meant: if the claim here is that small models can accomplish the same things with good scaffolding, why didn't they demonstrate finding those problems with good scaffolding rather than directly pointing the models at the problem?
A lot of people in this thread don't seem to be getting that.
If another model can find the vulnerability when you point it at the right place, it would also find the vulnerability if you scanned each place individually.
People are talking about false positives, but that also doesn't matter. Again, they're not thinking it through.
False positives don't matter, as you can just automatically try to exploit the "exploit", and if it doesn't work, it's a false positive.
Worse, we have no idea how Mythos actually worked: it could have done the process I've outlined above, "found" thousands of false positives, and simply got rid of them by checking them.
The fundamental point is it doesn't matter how the cheap models identified the exploit, it's that they can identify the exploit.
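To spell out the shape of that process - a sketch of what I've outlined, not Anthropic's actual harness; ask_model() and the Candidate type are hypothetical stand-ins:

    import subprocess
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        location: str    # file/function the model flagged
        poc: list[str]   # command line that should trigger the claimed bug

    def ask_model(source: str) -> Candidate | None:
        # Hypothetical call to any cheap model; returns None if it finds nothing.
        ...

    def scan(functions: dict[str, str]) -> list[Candidate]:
        # For-each over every function: flag candidates, then weed out false
        # positives by actually running the claimed exploit.
        confirmed = []
        for name, source in functions.items():
            candidate = ask_model(source)
            if candidate is None:
                continue
            result = subprocess.run(candidate.poc, capture_output=True, timeout=60)
            if result.returncode != 0:  # crash/abort => the exploit is real
                confirmed.append(candidate)
        return confirmed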
When it turns out the harness is just acting as a glorified for-each brute force, it's not the model being intelligent; it's simply the harness covering more ground. It's millions of monkeys bashing typewriters, not Shakespeare at one.
It’s strange to see this constant “I could do that too, I just don’t want to” response.
Finding an important decades-old vulnerability in OpenBSD is extremely impressive. That’s the sort of thing anyone would be proud to put on their resume. Small models are available for anyone to use. Scaffolding isn’t that hard to build. So why didn’t someone use this technique to find this vulnerability and make some headlines before Anthropic did? Either this technique with small models doesn’t actually work, or it does work but nobody’s out there trying it for some reason. I find the second possibility a lot less plausible than the first.
From the article:
> At AISLE, we've been running a discovery and remediation system against live targets since mid-2025: 15 CVEs in OpenSSL (including 12 out of 12 in a single security release, with bugs dating back 25+ years and a CVSS 9.8 Critical), 5 CVEs in curl, over 180 externally validated CVEs across 30+ projects spanning deep infrastructure, cryptography, middleware, and the application layer.
They have been doing it (and likely others as well), but they are not Anthropic, with a million-dollar marketing budget and trillion-dollar hype behind it, so you just didn't hear about it.
> If another model can find the vulnerability when you point it at the right place, it would also find the vulnerability if you scanned each place individually.
They didn't just point it at the right place, they pointed it at the right place and gave it hints. That's a huge difference, even for humans.
I mean, a shared-nothing system is definitely a good starting point, but then it becomes impossible to use tools (no shared filesystem, no networking), so everything needs to happen over connections the agent provides.
MCP looks like it would fit that purpose then, even if it meant an MCP server providing access to a shell. Actually, I think a shell MCP would be nice, because currently every agent environment has its own way of managing shell permissions. With MCP, one could bring the same shell permissions to every agent environment.
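A minimal sketch of what I mean, assuming the Python mcp SDK's FastMCP interface (the allowlist policy is made up):

    import shlex
    import subprocess
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("shell")

    # One permission policy, reusable across every MCP-capable agent environment.
    ALLOWED = {"ls", "cat", "git", "rg"}

    @mcp.tool()
    def run_shell(command: str) -> str:
        """Run a shell command if its program is on the allowlist."""
        argv = shlex.split(command)
        if not argv or argv[0] not in ALLOWED:
            return f"denied: {argv[0] if argv else '(empty)'} is not allowlisted"
        result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
        return result.stdout + result.stderr

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; any MCP client can connect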
Though in practice I just use the shell, and almost no MCP at all; shell commands are much easier to combine, i.e. the agent can write and run a Python program that invokes any shell command. In the "MCP shell" scenario that whole thing would be handled by the one MCP; it wouldn't allow combining MCPs with each other.
That is fine, but you give up any pretence of security - your agent can inspect your tool's process, environment variables, etc. - so it can presumably leak API keys and other secrets.
Other comments have claimed that tools are/can be made "just as secure" - they can, but as the saying goes: "Security is not a convenience".
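You can narrow the blast radius - e.g. launch the agent with a scrubbed environment so inherited secrets aren't there to leak (sketch below; "my-agent" is a hypothetical stand-in for whatever agent CLI you run) - but with same-user shell access it can still poke at other processes, so it's a mitigation, not security:

    import os
    import subprocess

    # Keep only what the agent needs; API keys and other secrets in the
    # parent environment are simply not inherited.
    CLEAN_ENV = {k: os.environ[k] for k in ("PATH", "HOME", "LANG") if k in os.environ}

    subprocess.run(["my-agent", "--task", "fix the build"], env=CLEAN_ENV)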
Good question! I've never tried. The NT driver makes use of some of the more advanced features of the networking stack, so possibly not. But you never know. I'd love a Wg4React.
ReactOS was, at one time, targeting a Windows Server 2003 level of compatibility. With that in mind, I can't imagine current WireGuard would have even a shred of hope of working on ReactOS.