Hacker News: glouwbug's comments

The Practice of Programming by Kernighan and Pike had a really elegant Markov:

https://github.com/Heatwave/the-practice-of-programming/blob...
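For anyone who hasn't read it: their program maps each two-word prefix to the list of words that follow it, then walks the table at random. A minimal sketch of the same idea (in Python rather than their C, and the names are mine):

```python
import random

def build_chain(words, order=2):
    # map each prefix tuple to the list of words that follow it
    chain = {}
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain.setdefault(prefix, []).append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    # start from a random prefix and keep appending random suffixes
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        suffixes = chain.get(tuple(out[-len(prefix):]))
        if not suffixes:
            break
        out.append(rng.choice(suffixes))
    return " ".join(out)

text = "the quick brown fox jumps over the lazy dog and the quick red fox"
chain = build_chain(text.split())
print(generate(chain, length=10, seed=1))
```

The K&P version is the same shape, just with a hand-rolled hash table keyed on the prefix words.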


And Mark V. Shaney, designed by Rob Pike, was posting Markov-generated text to Usenet a long time ago:

https://en.wikipedia.org/wiki/Mark_V._Shaney


About a decade ago I somehow came across Genudi[0], a Markov-chain-based "AI", and had quite a bit of fun with it. The creator has a blog that makes for interesting reading.

0: https://www.genudi.com/about



I think this is the most apt summary of LLMs I’ve seen to date


In the age of high interest rates everyone is pushing quantity over quality


I fail to see the causality.


High interest rates bring layoffs. Layoffs require performance, or at least perceived performance


The massive productivity gains I’ve seen come from multidisciplinary approaches, where you apply science and engineering from fields like chemistry, physics, thermodynamics, and fluids to speedy compiled languages. The output is immediately verifiable with a bit of trial, error, and visualization, and you’re saved literally months of up-front textbook and white-paper research before you can start prototyping anything


Data structures like maps and vectors from the standard library are still incredibly useful and make a fantastic addition to C-style code if you mostly work with POD types; though if real-time performance with heap cohesion is a problem, then you’re right to go pure C


Curious if you can get away with Burgers on a CPU: https://youtu.be/oxzfY-hPt2k


Are they very different in terms of compute? Looks like Burgers saves maybe a couple of FMA per cell. I’m pretty sure you can get away with Navier Stokes on a CPU. (Depends on the resolution, of course, but the examples here are relatively low res.)


Yeah, Navier-Stokes accounts for continuity over Burgers', which elevates you from conventional game-dev-like water to ANSYS-grade CFD. Although realistic CFD has its tradeoffs too. Solvers like LF, HLLE, and HLLC all offer computation vs. realism tradeoffs. LF is branchless, but struggles with certain sonic/supersonic shock-wave characteristics (which you'd only see in compressible flow anyway). For incompressible flow I'd expect the final visual realism to go, in order: Burgers -> LF -> HLLE -> HLLC [1]. The vast majority of the industry enjoys HLLC for mechanical/civil engineering, but I'm often fascinated by just how much one can cheat to get realistic incompressible/compressible flow. You can hamstring Burgers' even further and be left with something resembling the wave equation [2], which is the absolute cheapest "CFD" available.

[1] https://en.wikipedia.org/wiki/Riemann_solver

[2] https://en.wikipedia.org/wiki/Wave_equation
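For a sense of just how cheap that end of the spectrum is: the 1D wave equation is a single three-point stencil per cell. A rough leapfrog sketch in Python (the names are mine; `c2` is the squared Courant number, (c*dt/dx)^2):

```python
def wave_step(u, u_prev, c2):
    # one leapfrog update of the 1D wave equation; boundaries held fixed
    n = len(u)
    u_next = u[:]
    for i in range(1, n - 1):
        u_next[i] = 2*u[i] - u_prev[i] + c2*(u[i+1] - 2*u[i] + u[i-1])
    return u_next

# a centered bump spreads outward symmetrically
u = [0.0]*64
u[32] = 1.0
u_prev = u[:]
for _ in range(10):
    u, u_prev = wave_step(u, u_prev, 0.25), u
```

Everything in the update is an add or a fused multiply-add, which is part of why it ports so cleanly to a shader.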


Oh, that reminded me: in terms of cheating, you can take the curl of a noise field to get completely fake incompressible flow. I used this in a SIGGRAPH course once, and in some shots for a CG movie, but Bridson made it useful and way better by showing how to make it flow around objects. https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-curlnoise.pdf

The main issue with it is that computing the curl of a noise field takes a ton more compute than Navier-Stokes. :P
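The trick, for anyone curious: take a scalar field ψ and use v = (∂ψ/∂y, -∂ψ/∂x). The divergence cancels analytically, so the flow is incompressible by construction, no pressure solve needed. A toy 2D sketch with an analytic stand-in for the noise (function names and the ψ field are mine):

```python
import math

def psi(x, y):
    # stand-in "noise"; real curl noise would sample Perlin/simplex noise here
    return math.sin(3*x) * math.cos(2*y)

def velocity(x, y, h=1e-4):
    # 2D curl of the scalar field: v = (dpsi/dy, -dpsi/dx)
    vx = (psi(x, y + h) - psi(x, y - h)) / (2*h)
    vy = -(psi(x + h, y) - psi(x - h, y)) / (2*h)
    return vx, vy

def divergence(x, y, h=1e-3):
    # numerical check that the resulting flow is (near) incompressible
    dvx = (velocity(x + h, y)[0] - velocity(x - h, y)[0]) / (2*h)
    dvy = (velocity(x, y + h)[1] - velocity(x, y - h)[1]) / (2*h)
    return dvx + dvy
```

The compute cost the comment mentions is visible even here: one velocity sample costs four noise evaluations, and good-looking animated noise is several octaves deep.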


Very cool. Thanks for the link. I like to print physical copies of neat finds like this, and will be doing just that


Use what you need to solve your problem. Learning frameworks never made sense to me


My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy-pastes? As an example, I was recently able to prototype several different one-dimensional computational fluid dynamics GLSL shaders. Claude output everything with vec3s, so the flux math matched what you’d see in the theory. For me it’s rapid iteration and a decluttered search engine with an interactive inline comment section, though I understand some would disagree with that statement, especially since it’s lacking any sort of formal verification. I counter with the old adage that anyone can be a dog on the internet
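For a flavor of that flux math, here is a rough 1D inviscid Burgers step with a Lax-Friedrichs flux, sketched in Python rather than GLSL (the names and the held-fixed boundaries are my choices, not the commenter's shader):

```python
def burgers_lf_step(u, dt_dx):
    # one Lax-Friedrichs update of 1D inviscid Burgers: u_t + (u^2/2)_x = 0
    f = [0.5 * v * v for v in u]
    n = len(u)

    def flux(i, j):
        # branchless LF numerical flux between adjacent cells i and j
        return 0.5 * (f[i] + f[j]) - 0.5 / dt_dx * (u[j] - u[i])

    u_new = u[:]  # boundary cells held fixed
    for i in range(1, n - 1):
        u_new[i] = u[i] - dt_dx * (flux(i, i + 1) - flux(i - 1, i))
    return u_new

# a step profile smears and advects to the right
u = [1.0]*8 + [0.0]*8
for _ in range(4):
    u = burgers_lf_step(u, 0.5)
```

In a shader the same update is a couple of texture fetches and FMAs per cell, which is why the "can you get away with it on a CPU" question upthread mostly comes down to resolution.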


> My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes

For me, if I spent the time testing 3 different models I would definitely be slower than writing the code myself


But I'm not writing code. It's research with iteration. Punching out manual CFD is time consuming


I think the main appeal is subset lock-down and compile times. ~5000 lines of C gets me sub-second iteration times, while ~5000 lines of C++ hits the 10-second mark. Including both iostream and format in C++ pushes any project up to the ~1.5-second mark, which kills my iteration interests.

Second to that I'd say the appeal is just watching something you've known for a long time grow slowly and steadily.


This, and the two pages of incomprehensible compiler spam you get when you make a typo in C++.


Depends pretty much on where you make the typo.

If you mean templates, that's a mostly solved problem since C++17 with static_assert and enable_if, more so in C++20 with concepts.


Use binary libraries and modules, alongside incremental compilation and linking.


I can't really afford the link time optimization losses


It is called link time optimization for a reason.


Which kills my iteration interests ;)


I've been thinking of maybe doing CTL2 with this. Maybe if #def makes it in.


I think the #include extension could make vec_vec / vec_list / lst_str type nesting more natural/maybe more general, but maybe just my opinion. :-)

I guess ctags-type tools would need updating for the new possible definition location. Mostly, someone needs to decide on a separation syntax for stuff like `name1(..)=expansion1 name2(..)=expansion2` in the "in-line" case. Compilers have had `cc -Dname(..)=expansion` or equivalents since the dawn of the language, but they get the idea of separation for free from the OS/argv level, i.e. shell argument splitting or the Windows command-line APIs.

Anyway, it might make sense to first get experience with a slimcc/tinycc/gcc/clang cpp++ extension. ;-) Personally, these days I mostly just use Nim as a better C.

