
Can you elaborate on this? Slightly concerned because I have written (and am planning to write more) Rust HPC code.

Maybe not what they meant, but Rust sometimes makes it tempting to just copy things rather than fighting the borrow checker. Whereas in C++ you're free to just pass pointers around and not worry about it until / unless your code crashes or gets exploited.

Speaking authoritatively from my position as an incompetent C++ / Rust dev.


I see. Fortunately, I'm aware of that and I don't use clone (unless I intend to) that much. The borrow checker is usually not a problem when writing scientific/HPC code.

Because passing pointers isn't as ergonomic in Rust, I do things in an arena-based way (for example, setting up quadtrees or octrees). Is that part of the issue when it comes to memory bandwidth?
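For context, a minimal sketch of the index-based arena pattern being described — all names here (`Arena`, `Node`, etc.) are illustrative, not from any real crate:

```rust
// Minimal sketch of an index-based arena for a quadtree.
// Children are stored as indices into the arena, not as pointers,
// so there are no lifetimes to fight with the borrow checker.

struct Node {
    value: f64,
    children: Option<[usize; 4]>, // indices into `Arena::nodes`
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    // Allocation is just a push; the returned index acts as a handle.
    fn alloc(&mut self, value: f64) -> usize {
        self.nodes.push(Node { value, children: None });
        self.nodes.len() - 1
    }

    fn split(&mut self, parent: usize) {
        let kids = [0; 4].map(|_| self.alloc(0.0));
        // `alloc` may have reallocated `nodes`, but indices stay valid.
        self.nodes[parent].children = Some(kids);
    }
}

fn main() {
    let mut arena = Arena::new();
    let root = arena.alloc(1.0);
    arena.split(root);
    assert_eq!(arena.nodes.len(), 5); // root + 4 children
    println!("root value = {}", arena.nodes[root].value);
}
```

One nice side effect of this layout is that the whole tree is a single contiguous allocation, which tends to be cache-friendlier than a pointer-chasing tree.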


Stable Rust doesn't have a local allocator construct yet; you can only change the global allocator or use a separate crate to provide a local equivalent.
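For illustration, swapping the global allocator on stable Rust is a one-attribute change — shown here with the stdlib's `System` allocator as a stand-in for, say, a jemalloc or mimalloc wrapper type:

```rust
use std::alloc::System;

// Stable Rust only lets you swap the allocator globally: this static
// replaces the default allocator for the whole program. A jemalloc or
// mimalloc wrapper type would go in the same slot.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // All heap allocations below now go through `GLOBAL`.
    let v: Vec<u64> = (0..1000).collect();
    assert_eq!(v.iter().sum::<u64>(), 499_500);
    println!("allocated {} elements via the swapped allocator", v.len());
}
```

Per-container or per-scope allocators (the unstable `Allocator` trait) are the part that hasn't landed on stable.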

Right. I have seen Zig, where one needs to specify allocators as well. I'm sorry, I'm not well versed enough to know how that makes things better for HPC, though.

For now my plan is to write code in a fairly similar style to what one would write in C++/Fortran, through MPI bindings in Rust.


if you're using thread-level parallelism, there is always a benefit to having a per-thread allocator so that you don't have to take global locks to get memory; those locks become highly contended.

if you take that one step further and only use those objects on a single core, now your default model is lock-free non-shared objects. at large scale that becomes kind of mandatory. some large shared-memory machines even forgo cache coherence because you really can't do it effectively at large scale anyway.
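A minimal sketch of that "per-thread, non-shared" default in Rust, using a `thread_local!` scratch buffer as a stand-in for a real per-thread arena (names are illustrative):

```rust
use std::cell::RefCell;

// Hypothetical per-thread scratch arena: each thread grows its own
// buffer, so no global lock is taken on the allocation fast path.
thread_local! {
    static ARENA: RefCell<Vec<f64>> = RefCell::new(Vec::with_capacity(1 << 16));
}

fn thread_work(n: usize) -> f64 {
    ARENA.with(|a| {
        let mut buf = a.borrow_mut();
        buf.clear();
        // Scratch data lives and dies on this thread; nothing is shared,
        // so no synchronization is needed at all.
        for i in 0..n {
            buf.push(i as f64);
        }
        buf.iter().sum()
    })
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| std::thread::spawn(|| thread_work(1000)))
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 499_500.0);
    }
    println!("4 threads, zero shared state, zero locks");
}
```

Real per-thread allocators (jemalloc arenas, mimalloc heaps) do essentially this below the malloc interface, with a slow path to a shared pool only when the local arena runs dry.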

but all of this is highly platform dependent, and I wouldn't get too wrapped up in it to begin with. I would encourage you though to worry first about expressing your domain semantics, with the understanding that some refactoring for performance will likely be necessary.

if you have the patience, both personally and within the project, it can be a lot of fun to really get in there and think about the necessary dependencies and how they can be expressed on the hardware. there's a lot of cool tricks, for example trading off redundant computation to reduce the frequency of communication.
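One classic instance of that trade-off is a deep halo (ghost-cell) exchange in stencil codes: each rank redundantly recomputes `h` extra boundary cells per step so it only needs to exchange boundaries every `h` steps instead of every step. A toy sketch of the message-count arithmetic, assuming a 1-D stencil (names are illustrative):

```rust
// With a halo of width `h`, boundary exchanges happen every `h` steps
// instead of every step, at the cost of redundantly updating `h` extra
// cells per neighbor per step.
fn exchanges_needed(steps: usize, halo: usize) -> usize {
    // ceil(steps / halo)
    (steps + halo - 1) / halo
}

fn main() {
    let steps = 1000;
    assert_eq!(exchanges_needed(steps, 1), 1000); // naive: message every step
    assert_eq!(exchanges_needed(steps, 4), 250);  // deep halo: 4x fewer messages
    println!(
        "halo=4 cuts messages from {} to {}",
        exchanges_needed(steps, 1),
        exchanges_needed(steps, 4)
    );
}
```

Whether the extra flops pay for themselves depends on the latency/bandwidth of the interconnect versus the cost of the redundant cells, which is exactly the kind of platform-dependent tuning described above.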


Thank you for such a great reply!

There's a lot of useful advice here that'll surely come in handy later. For now, yeah, I'm just going to try to make things work. So far I have mostly written intra-node code, for which rayon has been adequate. I haven't gotten around to testing the ergonomics of rs-mpi, but it feels like quite an exciting prospect for sure.
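For readers unfamiliar with the intra-node style being described: rayon's `par_iter` is roughly a drop-in for chunked data parallelism like the following dependency-free sketch with std scoped threads (the function name is hypothetical):

```rust
// Dependency-free sketch of the kind of intra-node data parallelism
// that rayon's `par_iter().sum()` provides in one line.
fn parallel_sum(data: &[u64], nthreads: usize) -> u64 {
    let chunk = (data.len() + nthreads - 1) / nthreads; // ceil division
    std::thread::scope(|s| {
        // Scoped threads may borrow `data` directly, no Arc needed.
        let handles: Vec<_> = data
            .chunks(chunk.max(1))
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (0..10_000).collect();
    assert_eq!(parallel_sum(&data, 4), 49_995_000);
    println!("sum = {}", parallel_sum(&data, 4));
}
```

With rayon the body collapses to `data.par_iter().sum()`, with work-stealing handling the chunking; the sketch just makes the underlying pattern visible.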


I think there's a lot of difference between sounding like someone and being someone. The models are indeed excellent at pretending.

I don't think that sama was arguing that ChatGPT actually passed a PhD thesis defense. But arguably, it could make for an interesting benchmark.

Please do not get swayed by nor defend the words vomited by a snake oil salesman.

Also, what benchmark? How will you design it?


exactly. this is what the whole RL thing is optimizing for, even if that is not the intent.

Why and how do you think it applies to broader domains?

Children learning in schools should not become product managers. If they are, what exactly is the "product" that they are "managing"? Reducing everything to, and looking at everything from, a corporate viewpoint is bizarre.


I'm not saying this should apply to every single domain. This isn't about products or management; instead I would frame it like this: I notice that multiple cases where we are worried about the impact of AI are basically just about the replacement of certain activities that some humans already aren't doing in today's society.

If we are worried we will be less good at doing job X once we don't do job X anymore, why are we not worried about people who never did job X in the first place? If we are worried about people not doing jobs anymore, why are we not worried about the human development of people wealthy enough not to work for the rest of their days? I would not assume someone who won the lottery is going to have their life become uninteresting or see some cognitive decline. It could happen, but you can also see a path where the person just chooses to do the activities they always wanted to do, where they keep learning and exploring without the burden of usual life constraints. People still play chess even though machines have beaten us for decades, just because they enjoy it.

Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.


>If we are worried we will be less good at doing job X once we don't do job X anymore, why are we not worried about people who never did job X in the first place? If we are worried about people not doing jobs anymore, why are we not worried about the human development of people wealthy enough not to work for the rest of their days?

None of this is equivalent to the topic of discussion. The point is that even in a world of division of labour and shared expertise, there is no atrophy in the general populace, because everyone is trying to become an expert in something. The whole point is that the brain is being put to use doing something. If not on X, then on Y. If no such option is available at all, what do you put your brain to use on?

>I would not assume someone who won the lottery is going to have their life become uninteresting or see some cognitive decline. It could probably happen, but you can also see a path where the person just chooses to do the activities they always wanted to do, where they keep learning and exploring without the burden of usual life constraints. People already play chess when machines have beaten us for decades, just because they enjoy it.

Again, please pay attention to the main idea of the article linked. Most cognitive development happens in the early formative years. Yes, learning itself never stops, but the primary period for it is perhaps the first 25 years of someone's life. You NEED to make mistakes and learn from them during this period. If you are offloading work that your brain was supposed to do here, it's extremely worrying.

>Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.

I think there is some truth to it, but you need to regulate how much AI can assist a student. It can be a patient teacher but it shouldn't replace their cognitive abilities. That is the whole point.


Padé approximants are not discussed as much, but they are often much more stable than Taylor series approximations.
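A toy illustration of that stability claim: the [1/1] Padé approximant of exp(x), (1 + x/2)/(1 − x/2), matches the second-order Taylor polynomial 1 + x + x²/2 through O(x³) near zero, but the two degrade very differently away from the expansion point:

```rust
// Second-order Taylor polynomial of exp(x) around 0.
fn taylor_exp(x: f64) -> f64 {
    1.0 + x + x * x / 2.0
}

// [1/1] Padé approximant of exp(x): same order of accuracy near 0.
fn pade_exp(x: f64) -> f64 {
    (1.0 + x / 2.0) / (1.0 - x / 2.0)
}

fn main() {
    let x = -3.0_f64;
    let exact = x.exp(); // ~0.0498
    let taylor_err = (taylor_exp(x) - exact).abs(); // ~2.45: polynomial blows up
    let pade_err = (pade_exp(x) - exact).abs();     // ~0.25: rational form stays bounded
    assert!(pade_err < taylor_err);
    println!("taylor err = {taylor_err:.3}, pade err = {pade_err:.3}");
}
```

The polynomial is forced to grow like x²/2 for large |x|, while the rational approximant can stay bounded, which is one concrete sense in which Padé is "more stable" (the standard caveat being the pole where the denominator vanishes, here at x = 2).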

High level languages that replaced assembly are not black boxes.

And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction.

I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT but... they simply are not except in a stretchy metaphorical sense.

That's only true if you reduce the definition of "compiler" to a narrow `f: In -> Out`. But that is _not_ a compiler. We have a word for that: function. And in the LLM's case, an impure one.


> You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it.

Then what is the point? Why be so averse to using your own brain? Why are tech bros like this?


Reminded me of the anecdote mentioned in the classic "Real Programmers Don't Use Pascal":

> Some of the most awesome Real Programmers of all work at the Jet Propulsion Laboratory in California. Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart. With a combination of large ground-based FORTRAN programs and small spacecraft-based assembly language programs, they are able to do incredible feats of navigation and improvisation -- hitting ten-kilometer wide windows at Saturn after six years in space, repairing or bypassing damaged sensor platforms, radios, and batteries. Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter.

> The current plan for the Galileo spacecraft is to use a gravity assist trajectory past Mars on the way to Jupiter. This trajectory passes within 80 +/-3 kilometers of the surface of Mars. Nobody is going to trust a PASCAL program (or a PASCAL programmer) for navigation to these tolerances.

The article is satirical, so I am not sure how true this is, but over its history the maintainers of these probes have done truly remarkable stuff like this.

https://homepages.inf.ed.ac.uk/rni/papers/realprg.html


Duh, it's space, you have to use Turbo Pascal


Duh, turbo doesn't work in space. Surely you meant High Speed Pascal!

https://www.fihl.net/HSPascal/


At least Voyager has enough space to carry Turbo Pascal if we wanted to send a copy to our galactic neighbors:

https://news.ycombinator.com/item?id=30644308


> "Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart"

Is that actually true? During the Voyager memory problems of 2023, I seem to recall that there were significant issues uploading entirely new programs to it because there was so little documentation of the internal workings of the hardware and software, and creating a virtual machine to actually test on was a significant achievement.


well duh, if they knew it by heart why would they write it down?


Great take. I have seen that the discussion on this often gets turned into a hard-vs-soft-science debate, when in actuality it's simply about money.


I track these across all fields. It’s money and prestige and arrogance and ignorance and “keep my job” and more


Multi-worlds is not really relevant here. You are just asking how the building blocks of life form in the Universe and how they can reach a planet like ours.


I mean, Multi worlds is the only way this impossibly random event can occur on its own.


That's not true. If life has the odds of one in a quadrillion of happening, and we're here to discuss it, then we're that one in a quadrillion. If we weren't, we wouldn't be alive. By definition, we were the lucky ones with the perfect conditions that resulted in us.


That is what I meant.

But I don't think we are "lucky", because we are part of the world, not something that was placed inside it by choice. It is like asking why the Nile is in Egypt and not in some other place. If the Nile were in some other place, it would not be the Nile... So does it make sense to say that the Nile is lucky to be in Egypt? No, I think it does not make sense...


Sorry, but nothing you have said here is true or makes sense. Multi worlds are universes, not worlds within our universe. The multiworld interpretation is one of several interpretations of quantum mechanics of the exact same evidence--one or the other interpretation being "true" has no empirical implications. And it is an interpretation of quantum mechanics, which has nothing to do with the distribution of nucleotides. And it's incoherent to call an observed event "impossible". You seem to mean that you think that it is highly unlikely, but offer no reason to think so ... nor for the bizarre claim that "Multi worlds is the only way". I suspect that you are mixing up a very confused understanding of "Multi worlds" with some version of the anthropic principle. But the anthropic principle is an a posteriori explanation of an a priori unlikely occurrence, it's not a "way" for something to happen.

I won't comment further unless you offer a convincing proof of your assertion.


Ok, what is the mystery in the origin of life? As I understand it, it is how all the required molecules came together in the right configuration spontaneously. Is that the question we are trying to answer?

If this is the question, I think the Many-Worlds Interpretation provides the answer, because it says that there are some worlds where any given random event will manifest.

So it follows that there are some worlds where this random event that we call the "origin of life" manifested, and we are simply part of one such world.

>multiworld interpretation is one of several interpretations of quantum mechanics of the exact same evidence

I think we might look at it the other way around: the origin of life, as well as the fact that we seem to be alone in the universe, is evidence for the MWI.

About the latter, I think we have an overwhelming chance of being alone: while it is true that there can be universes where random events have led to the origin of life in multiple places, the universes with only a single "origin of life" event will so vastly outnumber them that the chance of finding ourselves in a universe where life has originated independently more than once is vanishingly small.


It had to start somewhere favourable to preserving the necessary molecules. Early Earth was not such a place.

