Vague. What's "pretty close"? Even for I/O-bound tasks you can quickly validate that the performance gap between languages is not close at all - a 10 to 100x difference.
I'm saying that the Rust might execute in 50ms and the Python in 150ms. You are the one not making sense, we are talking about application performance, why are you not measuring that in milliseconds.
That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
> I'm saying that the Rust might execute in 50ms and the Python in 150ms.
Okay, so the Rust code would be 3x as fast. Feels arbitrary, but sure.
> You are the one not making sense, we are talking about application performance, why are you not measuring that in milliseconds.
I explained why your post made no sense already...
> That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
That's not how anything works. Different languages will perform differently on IO work, different runtimes will degrade under IO differently, etc. That's why even basic echo HTTP servers perform radically differently in Python vs Rust.
This isn't how computers work and it's not even how math works.
This conversation has become nonsensical. The thing we can agree with is this - no, uv would not be as fast if it were written in Python.
> That's not how anything works. Different languages will perform differently on IO work, different runtimes will degrade under IO differently, etc. That's why even basic echo HTTP servers perform radically differently in Python vs Rust.
> This isn't how computers work and it's not even how math works.
What are you disagreeing with? There's some baseline amount of I/O that the kernel does for you, that's what I'm assuming is 50ms, and everything else like runtime degrading is overhead due to the language/platform choice. I'm saying Rust is upwards of 100x faster in that regard thanks to its zero cost abstraction philosophy. You can't just include the I/O baseline in a claim about Rust's performance advantage. You'll be really disappointed when Rust doesn't download your files 100x as fast as the Python file downloader.
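To make that model concrete - total latency as a shared kernel I/O baseline plus language-specific overhead - here is a quick sketch. The 49/1/100 split is the commenter's hypothetical, not a measurement:

```python
# Hypothetical breakdown from the comment above: total latency is a shared
# kernel I/O baseline plus language-specific compute overhead.
# All numbers are assumptions from the discussion, not measurements.
def total_ms(io_baseline_ms: float, compute_ms: float) -> float:
    return io_baseline_ms + compute_ms

rust_total = total_ms(49, 1)      # 49 ms of I/O + 1 ms of Rust = 50 ms
python_total = total_ms(49, 100)  # same I/O + 100 ms of Python = 149 ms
speedup = python_total / rust_total
# ~3x end-to-end, even under the assumption of a 100x compute gap,
# because the I/O baseline dominates both totals.
```

This is exactly why the end-to-end speedup (3x) is so much smaller than the assumed per-language compute gap (100x).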
Anyway, I'm sorry I provoked your antagonism with my terse messages, I wasn't trying to be blasé. I believe uv is the sort of tool that wouldn't suffer much from the downsides of Python and that in most situations the reduced runtime overhead of Rust would have a negligible impact on the user experience. I'm not arguing that they shouldn't build uv in Rust. Most situations is not all situations, and when a tool is used so widely you'll hit all edge cases, from the point where tens of milliseconds of startup time matter to the point where Python's I/O overhead matters at scale.
I think a missing piece here is that you think Rust won't download a file faster than Python, but it absolutely can. This seems to be a misconception people have about I/O - as if "download a file" is a thing that exists wholly outside of your process.
I know it can, but it can't download it faster than the network card can write it into its buffers. That's the part I would count as the 50ms that both can't improve upon.
Of course. But why would that matter if Python can't get there to begin with? You're not going to hit NIC bottlenecks with Python, not without a ton of work and tradeoffs at least.
> Different languages will perform differently on IO work,
IO is executed by kernel, file system or network drivers. IO performance is not dependent at all on which language makes the syscalls.
> The thing we can agree with is this - no, uv would not be as fast if it were written in Python.
In this thread, we are talking about the speed of uv in terms of user experience - how long a person waits for command line operations to complete. Things that pip takes multiple seconds to do, uv will do in dozens of milliseconds. If uv were written in Python, it would take those dozens of milliseconds plus a few dozen more, which means absolutely fuck-all in the context of the thousands of milliseconds saved over pip.
It's possible a user might perceive a slight difference in larger projects, but if pip had been uv-but-in-Python, the uv-in-Rust project would never have been started in the first place, because no one would have bothered switching.
> This conversation has become nonsensical.
Agreed. No one in this thread is disputing that Rust code is faster than Python, only that in this case it is completely insignificant in the face of all the useless file and network I/O that pip is doing, and uv is not.
> IO is executed by kernel, file system or network drivers. IO performance is not dependent at all on which language makes the syscalls.
I think your posts on this topic can not possibly be worth responding to if you're coming to the conversation with this level of not understanding things.
Your post is a combination of not understanding computers and then hand waving about fake numbers and user expectations. IO is not magic, it is not some distinct process that you have no control over from userland, it is exactly the sort of thing that Python does very poorly at, in fact.
I'll just reference TechEmpower again, or you can look up the system calls you referenced - how epoll works, for instance - and then look into what is involved for Python to use epoll effectively.
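For readers unfamiliar with the point being made: here is a minimal sketch of using epoll from Python via the stdlib `selectors` module (which wraps epoll on Linux). The readiness notification is kernel work either way; the loop around it - dispatch, buffer handling, object allocation - runs at interpreter speed, which is where the per-event overhead accumulates under load:

```python
import selectors
import socket

# DefaultSelector picks the best mechanism available: epoll on Linux,
# kqueue on BSD/macOS. The kernel reports readiness; everything around
# that syscall (the loop, dispatch, buffers) is Python-speed userland work.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.sendall(b"ping")              # make the registered end readable
events = sel.select(timeout=1)  # blocks in the kernel until ready
msg = b""
for key, _mask in events:
    msg = key.fileobj.recv(1024)  # per-event userland work
sel.close()
a.close()
b.close()
```

The syscall costs the same from any language; the argument above is about everything wrapped around it.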
What is more expensive, copying the message, or memory fencing it, or do you always need both in concurrent actors? Are you saying the message passing overhead is less than the cost of fragmented memory? I wouldn't have expected that.
Usually both, but they show up in different places.
You need synchronization semantics one way or another. Even in actor systems, "send" is not magic. At minimum you need publication of the message into a mailbox with the right visibility guarantees, which means some combination of atomic ops, cache coherence traffic, and scheduler interaction. If the mailbox is cross-thread, fencing or equivalent ordering costs are part of the deal. Copying is a separate question: some systems copy eagerly, some pass pointers to immutable/refcounted data, some do small-object optimization, some rely on per-process heaps so "copy" is also a GC boundary decision.
The reason people tolerate message passing is that the costs are more legible. You pay per message, but you often avoid shared mutable state, lock convoying, and the weird tail latencies that come from many heaps or stacks aging badly under load. Fragmentation is less about one message being cheaper than one fence. It is more that at very high concurrency, memory layout failures become systemic. A benchmark showing cheap fibers on day one is not very informative if the real service runs for weeks and the allocator starts looking like modern art.
So no, I would not claim actor messaging is generally cheaper than fragmented memory in a local micro sense. I am saying it can be cheaper than the whole failure mode of "millions of stateful concurrent entities plus ad hoc sharing plus optimistic benchmarks." Different comparison.
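The "send is not magic" point can be illustrated with Python's `queue.Queue` standing in for a mailbox (a toy sketch, not any particular actor framework). The queue's internal lock and condition variable are exactly the synchronization cost described above, and the message is passed by reference rather than copied - one of the copy-vs-share design choices mentioned:

```python
import queue
import threading

# Toy actor: one thread draining a mailbox. queue.Queue synchronizes
# internally with a lock + condition variable - the atomic-op/fencing
# cost discussed above. Messages are passed by reference, not copied.
class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()  # blocks; synchronization happens here
            if msg is None:           # sentinel: stop the actor
                break
            self.results.append(msg * 2)

    def send(self, msg):
        self.mailbox.put(msg)  # "publication" of the message into the mailbox

actor = Actor()
for i in range(3):
    actor.send(i)
actor.send(None)
actor.thread.join()
# actor.results is now [0, 2, 4]
```

Even in this minimal version, every `send` pays for lock acquisition and possibly a scheduler wakeup - the legible per-message cost the comment describes.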
Why couldn't a machine that identifies relations between tokens be AGI? You're imposing an arbitrary constraint. It is either generally intelligent or it's not; whether it uses tokens or anything else is irrelevant.
Also, languages made up of tokens are still languages, in fact most academics would argue all languages are made up of tokens.
Anyway, it's not LLMs that achieve AGI, it's systems built around LLMs that achieved AGI quite some time ago.
Less than 5% of the population knew what it meant to install an app when the iPhone launched. I believe Steve Ballmer ridiculed the idea when asked about it.
A great many people use Android to this day because of its more open nature, and that's despite Google's involvement. If Motorola could go back to its native roots, shake the idea of Chinese influence, and do open source properly, I bet there's a lot more than 5% of the market ready for it.
My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.
Porting a project from PHP7 to PHP8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc.
The Falcon Heavy is $97 million per launch for 64,000 kg to LEO, about $1,500 per kg. Starship is going to be a factor of 10 cheaper, or a factor of 100 if you believe Elon. A single NVIDIA system is ~140 kg, so a single flight could carry 350 of them plus 15,000 kg for the systems to power them. Right now, $97 million to get them into space seems like a steep premium.
Maybe with Starship the premium is less extreme? $10 million per 350 NVIDIA systems already seems within margins, and $1M would definitely put it in the range of being a rounding error.
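Spelling out the back-of-envelope arithmetic (every input here is an assumption from the comment, not a verified figure):

```python
# All inputs are the comment's assumptions, not verified figures.
launch_cost = 97_000_000   # USD per Falcon Heavy launch
payload_kg = 64_000        # kg to LEO
cost_per_kg = launch_cost / payload_kg  # ~ $1,516/kg

system_kg = 140            # assumed mass of one NVIDIA system
n_systems = 350
# Mass left over for power/support. (The comment said 14,000 kg;
# 64,000 - 350 * 140 actually leaves 15,000 kg.)
mass_budget_left = payload_kg - n_systems * system_kg

# Launch cost amortized per system: ~ $277k each.
launch_cost_per_system = launch_cost / n_systems
```

So at Falcon Heavy prices, launch adds roughly $277k per system; a 10x or 100x cheaper Starship would shrink that to ~$28k or ~$2.8k.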
But that's only the Elon-style "first principles" calculation. When reality hits, it's going to be an engineering nightmare on the scale of nuclear power plants. I wouldn't be surprised if they spent a billion just figuring out how to get a datacenter operational in space. And you can build a lot of datacenters on Earth for a billion.
If you ask me, this is Elon scamming investors for his own personal goal, which is simply to have AI in space. When AI is in space, there's a chance human-derived intelligence will survive an extinction event on Earth. That's one of Elon's core motivations.
Depends on what you want to hear. The Iranian family in my neighborhood, whose father was a doctor, fled after Islamist police cut their daughter to pieces in their own home for dressing inappropriately. That's the sort of non-headscarf-wearing Iranian elite you'll find with an opinion critical of the current regime. I don't know about ostentatious clothing.
Here you have women putting away men for 20 years with fake rape allegations. Last week a man committed suicide in a very public case where a woman falsely accused him of molestation on camera. Potayto potato. I’m going to be downvoted for this because it’s crass but there is truth in what I am saying.
There was no need for the 15 year old boy who told me this traumatic story of how his sister was killed in his own house to make that story up, because just the fact that they're a liberal family coming from Iran would have been enough information for them to get a visa to stay in The Netherlands based on political persecution.
This happened during Clinton, if you're counting history in US presidencies. And it doesn't even matter if their sister really was killed. Islamic regimes like the one in Iran are despicable, and would have been even if they didn't support goons killing girls for dumb religious reasons.
The fact that the person you're responding to even still has a functioning HN account after their post history leaves me shocked and honestly appalled at the moderation of the site.
I've been told off repeatedly and threatened with all kinds of consequences by dang and I haven't come close to postings like this.
If something is literally incredible, then it's prudent to stop and consider whether it should be believed or that you have made an incorrect assumption. In this case, you wrongly assume that Musk is somehow being rewarded for something that happened in the past, or for something that might not even happen. The reality is that the pay package will only have value if Elon manages to dig Tesla out of the hole.
Despite how much conning you believe Musk has done (I won't refute it), Tesla is a company that actually builds cars, and while the Cybertruck flopped and anyone could see that coming from a mile away, that doesn't really affect the Tesla bottom line. That Musk grifted the government into buying them doesn't really do anything besides saving Tesla some money.
I wouldn't buy Tesla shares, as I still don't really see their crazy valuation, but I would buy a Tesla car; they are ostensibly awesome. If you disregard all the lying Musk has done, it's still an epic car with unrivaled self-driving capabilities.
Historically, his starting to talk about something has been a sign that some part of it is going to become reality. You can stand apart from the crazy people who worship the ground he walks on and still appreciate that he accomplishes great things. Whether it's through conning and grifting, or hard work and keen insight, there are still an electric car company and a rocket company where there weren't before.
Just stop reacting to people believing or shouting things or grotesque behaviors, and just look at the actual reality. It'll do you a lot better than just believing everything Musk says is BS.
The GP was talking about Unreal Engine 5 as if that engine doesn't optimize for the low end. That's a wild take. I've been playing Arc Raiders with a group of friends over the past month, and one of them hadn't upgraded their PC in 10 years, and it still ran fine (20+ fps) on their machine. When we were growing up, it would have been absolutely unbelievable for a game to run on a 10-year-old machine, let alone at bearable FPS. And the game is even on an off-the-shelf game engine, they possibly don't even employ game engine experts at Embark Studios.
>And the game is even on an off-the-shelf game engine, they possibly don't even employ game engine experts at Embark Studios.
Perhaps, but they also turned off Nanite, Lumen and virtual shadow maps. I'm not a UE5 hater but using its main features does currently come at a cost. I think these issues will eventually be fixed in newer versions and with better hardware, and at that point Nanite and VSM will become a no-brainer as they do solve real problems in game development.