
I'm disappointed that both the article and comments don't go into the actual differences between how these adapters work and the overhead incurred by USB.

At a high level, I'm pretty sure Thunderbolt will be significantly better in all situations:

Thunderbolt is PCIe; depending on the way the network card driver works, the PCIe controller will usually end up doing DMA straight into the buffers the SKB points to, and with io_uring or AF_XDP, these buffers can even be sent down into user space without ever being copied. Also, usually these drivers can take advantage of multiple txqueues and rxqueues (for example, per core or per stream) since they can allocate whatever memory they want for the NIC to write into.

USB is USB; the controller can DMA USB packet data into URBs but they need to be set up for each transaction, and once the data arrives, it's encapsulated in NCM or some other USB format and the kernel usually has to copy or move the frames to get SKBs. The whole thing is sort of fundamentally pull based rather than push based.
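To make the encapsulation point concrete, here's a rough Python sketch of the demux step an NCM driver has to perform: one bulk USB transfer arrives as an NTB containing an NTH16 header, an NDP16 datagram table, and the packed Ethernet frames, and each frame has to be sliced (copied) back out before it can become an SKB. This loosely follows the CDC-NCM framing; it's a toy illustration, not the real cdc_ncm driver:

```python
import struct

def parse_ntb16(ntb: bytes) -> list[bytes]:
    """Split a CDC-NCM NTB-16 transfer block into Ethernet frames.

    This slicing is the copy step a kernel driver does to turn one
    bulk USB transfer into individual per-frame buffers (SKBs).
    """
    sig, hdr_len, seq, blk_len, ndp_off = struct.unpack_from("<4sHHHH", ntb, 0)
    assert sig == b"NCMH", "not an NTH16 header"
    frames = []
    while ndp_off:  # NDPs can be chained via wNextNdpIndex
        sig, ndp_len, next_ndp = struct.unpack_from("<4sHH", ntb, ndp_off)
        assert sig == b"NCM0", "expected a no-CRC NDP16"
        off = ndp_off + 8
        while True:
            idx, length = struct.unpack_from("<HH", ntb, off)
            if idx == 0 or length == 0:  # (0, 0) terminates the datagram list
                break
            frames.append(ntb[idx:idx + length])  # the unavoidable copy
            off += 4
        ndp_off = next_ndp
    return frames

def build_ntb16(frames: list[bytes]) -> bytes:
    """Build a minimal NTB-16 for testing: payloads first, one NDP last."""
    payload = bytearray(12)  # NTH16 is 12 bytes, filled in at the end
    entries = []
    for f in frames:
        entries.append((len(payload), len(f)))
        payload += f
    ndp_off = len(payload)
    ndp_len = 8 + 4 * (len(entries) + 1)
    payload += struct.pack("<4sHH", b"NCM0", ndp_len, 0)
    for idx, length in entries:
        payload += struct.pack("<HH", idx, length)
    payload += struct.pack("<HH", 0, 0)  # terminator entry
    struct.pack_into("<4sHHHH", payload, 0, b"NCMH", 12, 0, len(payload), ndp_off)
    return bytes(payload)
```

Round-tripping a couple of frames through `build_ntb16` and `parse_ntb16` shows the batching: many frames, one transfer, one mandatory pass over the data to split them apart.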

But, this is just scratching the surface; I'm sure there are neat tricks that some USB 3.2 NIC drivers can do to reduce overhead and I'd love to read an article where I learned more about that, or even saw some benchmarks that analyzed especially memory controller utilization, kernel CPU time, and performance counters (like cache utilization). Especially at 10G and beyond, a lot of processing becomes memory bandwidth limited and the difference can be extremely significant.
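As a crude illustration of the memory-bandwidth point: if you model each extra pass over the payload as one DRAM read plus one DRAM write, the traffic multiplies quickly at line rate. A back-of-envelope sketch (a toy model, not a measurement):

```python
def copy_traffic_gbps(line_rate_gbps: float, copies: int) -> float:
    """Approximate DRAM traffic implied by a given line rate.

    The initial DMA write into the first buffer counts once; every
    extra copy adds a read plus a write of the full payload.
    """
    return line_rate_gbps * (1 + 2 * copies)

# Zero-copy (DMA straight into the SKB / user buffer):
print(copy_traffic_gbps(10, copies=0))  # -> 10.0
# One kernel copy on the way up (e.g. URB -> SKB):
print(copy_traffic_gbps(10, copies=1))  # -> 30.0
```

So a single extra copy at 10G triples the memory-controller load for that traffic, before the application has even touched the data.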


> At a high level, I'm pretty sure Thunderbolt will be significantly better in all situations:

None of my devices support Thunderbolt, so not all situations.


ACK. From some cursory experimentation, my laptop can roughly saturate 1G via USB, but on 2.5G things get wonky above roughly 1.9G unidirectional or 2.9G bidirectional.

> Thunderbolt is PCIe

Nit: Thunderbolt isn't PCIe, it tunnels PCIe. Depending on the chips used, there are bandwidth limits; I vaguely remember 22.5G on older 40G TB Intel chips.


> Thunderbolt is PCIe

Thunderbolt allows PCIe tunneling, but it has some overhead over raw PCIe. That's why Thunderbolt eGPU setups don't perform as well as plugging the GPU directly into a PCIe slot.

> USB is USB

Until you get to USB4, when USB 4 supports Thunderbolt 4.


> That's why Thunderbolt eGPU setups don't perform as well as plugging the GPU directly into a PCIe slot.

The bigger factor is probably that PCIe tunnelling provides at most a ×4 link, while when you plug a GPU in you are generally doing so into a ×16 or at least ×8 slot, and very few GPUs target ×4.
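The lane math is easy to check. A back-of-envelope sketch of usable PCIe bandwidth per direction (line-encoding overhead only; this ignores protocol overhead and the Thunderbolt controller's own ~22G cap mentioned upthread):

```python
def pcie_gbytes_per_s(lanes: int, gen: int) -> float:
    """Approximate usable PCIe bandwidth per direction in GB/s."""
    # (GT/s per lane, encoding efficiency) per generation
    rate = {3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    gts, eff = rate[gen]
    return lanes * gts * eff / 8  # 8 bits per byte

print(round(pcie_gbytes_per_s(4, 3), 2))   # -> 3.94  (the x4 tunnel)
print(round(pcie_gbytes_per_s(16, 3), 2))  # -> 15.75 (a full x16 slot)
```

Even before any Thunderbolt-specific overhead, the tunneled ×4 link offers a quarter of what a desktop ×16 slot does.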


Fair; I should have said "from the standpoint of the driver."

> USB 4 supports Thunderbolt 4

It's the opposite! I hate to get into it as I saw the USB naming argument pretty thoroughly enumerated in the comments here already, but the pedantic interpretation is "Thunderbolt 4 is a superset of USB4 which requires implementation of the USB4 PCIe tunneling protocol which is an evolution of the Thunderbolt 3 PCIe tunneling protocol."

From the standpoint of USB-IF a "USB4" host doesn't need to support PCIe tunneling, but Microsoft also (wisely, IMO) put a wrench into this classic USB confusion nightmare by requiring "USB4" ports to support PCIe tunneling for Windows Logo.


DPF (Diesel Particulate Filter) and SCR (Selective Catalytic Reduction, which uses DEF, Diesel Exhaust Fluid) are two mostly separate systems. DPF traps soot in a filter, which is periodically heated to burn the soot off (regen). SCR reduces NOx using urea.

This is important to know in the context of tractors because in the US, 25-74hp tractors generally need only DPF without SCR (there are basically three bins depending on horsepower level). This makes these midsized tractors a bit of a sweet spot for a lot of tasks; of course, you still have to deal with regen (where the DPF is heated to burn off trapped soot), which is annoying, but you at least don't have to fill up with DEF or risk the DEF injection system failing.


I wonder by what mechanism they plan to import these into the US. This seems like an emissions-regulation end-run, like glider trucks, but my understanding of the EPA import rules doesn't really leave any room for this type of game.

Yes, a lot of modern tractors are locked down due to predatory dealer service lock-in, but they're also complex and locked down due to emissions regulations, which are ostensibly a net societal gain. The classic HN "everything should be totally open and free" conversation really needs to happen through this lens IMO.


Zero chance these make it stateside. The company appears to be extremely misleading with their "Built in Alberta" moniker. I'm in the province and this article is making its rounds.

Seems that they are just importing Chinese-built tractors and rebranding them: https://www.hanwoagri.com/tractor/all-purpose-medium-tractor...

Their facility in Bowden, AB is basically a tiny garage.


I strongly disagree with this take, and frankly, this reads like the state of "research" pre-LLMs where people would run fuzzers and scripted analysis tools (which by their nature DO generate enormous amounts of insidiously wrong false positives) and stuff them into bug bounty boxes, then collect a paycheck when one was correct by luck.

Modern LLMs with a reasonable prompt and some form of test harness are, in my experience, excellent at taking a big list of potential vulnerabilities and figuring out which ones might be real. They're also pretty good, depending on the class of vuln and the guardrails in the model, at developing a known-reachable vulnerability into real exploit tooling, which is also a big win. This does require the _slightest_ bit of work (i.e., don't prompt the LLM with "find possible use after free issues in this code," or it will give you a lot of slop; prompt the LLM with "determine whether the memory safety issues in this file could present a security risk" and you get somewhere), but not some kind of elaborate setup or prompt hacking, just a little common sense.


"More efficient" of course has many axes (cost, energy consumption, manual labor requirement vs cost of human, time, quality, etc.). However, as a long-time reverse engineer and exploit developer who has worked in the field professionally, I would say LLMs are now useful; their utility exceeds that which was previously available. That is, LLM assisted exploit discovery and especially development is faster, more efficient, and ultimately cheaper than non-LLM assisted processes.

What commenters don't seem to understand is that especially CVE spam / bug bounty type vulnerability research has always been an exercise in sifting through useless findings and hallucinations, and LLMs, used well, are great at reducing this burden.

Previously, a lot of "baseline" / bottom tier research consisted of "run fuzzers or pentest tools against a product; if you're a bottom feeder just stuff these vulns all into the submission box, if you're more legit, tediously try to figure out which ones are reachable." LLMs with a test harness do an _amazing_ job at reducing this tedium; in the memory safety space, "read across 50 files to figure out if this UAF might be reachable," or in the web space, "follow this unsanitized string variable to see if it can be accessed by the user," are tasks that LLMs with a harness are awesome at. The current models are also about 50% there at "make a chain for this CVE," depending on the shape of the CVE (they usually get close given a good test harness).
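The "test harness" part doesn't have to be elaborate, either; at its core it's just a reachability filter that only accepts candidates whose PoC demonstrably fires. A hypothetical sketch (the `parser_asan` target, the PoC filename, and the ASan-report check are all placeholder assumptions, not any particular tool's API):

```python
import subprocess

def poc_reproduces(target_cmd: list[str], poc_path: str, timeout: int = 30) -> bool:
    """Accept a candidate finding only if its PoC actually crashes a
    sanitizer-built target -- the filter that turns a pile of
    fuzzer/LLM candidates into a short list of real bugs.
    """
    try:
        result = subprocess.run(
            target_cmd + [poc_path],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # hangs get triaged separately, not reported as bugs
    # ASan aborts with a nonzero exit status and a recognizable report
    return result.returncode != 0 and "AddressSanitizer" in result.stderr

# Hypothetical usage: ./parser_asan built with -fsanitize=address
# print(poc_reproduces(["./parser_asan"], "crash-001.bin"))
```

Point an LLM at the candidate list, make it produce a PoC per candidate, and gate every submission through a check like this; the slop mostly filters itself out.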

It seems that the concern with the unreleased models is pretty much that this has advanced once again from where it is today (where you need smart prompting and a good harness) to the LLM giving you exploit chains in exchange for "giv 0day pl0x," and based on my experience, while this has got an element of puffery and classic capitalist goofiness to it ("the model is SO DANGEROUS only our RICHEST CUSTOMERS can have it!"), I believe this is just a small incremental step and entirely believable.

To summarize: "more efficient than all but the best" comes with too many qualifiers, but "are LLMs meaningfully useful in exercising vulnerabilities in OS kernel code," or "is it possible to accelerate vulnerability research and development with LLMs" - 100% absolutely.

And you don't have to believe one random professional (me); this opinion is fairly widespread across the community:

https://sockpuppet.org/blog/2026/03/30/vulnerability-researc...

https://lwn.net/Articles/1065620/

etc.


> Electron apps all look the same without actually making much efforts to make them "personal" -- they just want to release an app ASAP so they chose Electron.

Which is the most goofy thing about the whole situation! I would argue that the push for “visual identity” was largely responsible for the drive towards web apps vs. native apps in the early 00s. In exchange we got all of these tortured UI frameworks built to paper over hypertext abstractions that weren’t well suited to application development to start with. And now we use these frameworks to make bland applications again!


It's the worst of both worlds. Not only are they bland, but they don't even follow the platform's conventions, they have horrible accessibility (because they no longer get it for free from the platform), and they don't respect my desktop environment's theming, fonts, colors, and so on.

For what it’s worth, Jetson at least has documentation, forward-ported / maintained patches, and some effort to upstream. It’s possible, with only moderate effort and no extensive non-OEM source modification, to have an Orin NX running an OpenEmbedded-based system using the OE4T recipes and a modern kernel, for example; something that isn’t really possible on most random-label SBCs.

I wouldn’t really call that a “complete crack” (although it IS cool). There’s an _awful_ lot more firmware in a car or tractor than the display unit, and arguably it’s one of the less important modules in most architectures. Cracked versions of Deere Service Advisor are much more meaningful to the kinds of repairs farmers perform than firmware exploits are.


I think the point they were trying to make here was “Claude did better than a fuzzer because it found a bunch of OOB writes and was able to tell us they weren’t RCE,” not “Claude is awesome because it found a bunch of unreachable OOB writes.”


This is not how first-party vulnerability research with LLMs goes; they are incredibly valuable versus all prior tooling at triage and at producing only high-quality bugs, because they can be instructed to produce a PoC and prove that the bug is reachable. It’s traditional research methods (fuzzing, static analysis, etc.) that are more prone to false-positive overload.

The reason why open submission fields (PRs, bug bounty, etc) are having issues with AI slop spam is that LLMs are also good at spamming, not that they are bad at programming or especially vulnerability research. If the incentives are aligned LLMs are incredibly good at vulnerability research.

