Hacker News | avestura's comments

Some names are problematic, like this one. Some others are lucky [1] (I know it's a joke)

[1] https://ifunny.co/picture/go-gle-most-popular-actor-in-video...


Even with people who are not famous it can be difficult. My wife knows three people with the same name, so when I ask whom she is going out with, she has to add the city so I know which one she means.


Part of why, in many times and places, ‘of x’ was a common ‘surname’. Also profession: basically ‘Bob the Smith’, ‘John the Baker’, etc.

A lot of places (India, Brazil) still use one of your parents' names on official documents to disambiguate (Bob Smith, son of John (or Jill) Smith from Smithville).


If you wanted to become a painter, would you use generative AI to make you a painting and then call yourself an artist? I wouldn't. Downloading mp3 files doesn't make anyone a musician.

That said, I don't see any problem with using AI to grasp ideas faster[1]. AI can help you create a personalized roadmap and resolve confusion along the way. But if one doesn't go hands-on and understand what the AI is doing, and only consumes the output, there is no learning in that.

[1] By 'faster,' I mean getting a response quickly (requires fact-checking, ofc) instead of spending hours on Stack Overflow.



awesome lists are pretty common on GitHub

https://github.com/topics/awesome


Maybe my definition of systems programming languages is too strict, but I would never call a GC'd language a systems programming language. Who would use an operating system that suddenly stops the entire world because some random language's runtime wants to collect its garbage?

That said, well done for making this.


>but I would never call a GC language a systems programming language....

Lilith: an x86-64 OS written in Crystal (github.com/ffwff) [1]. And Crystal has a GC.

[1] https://news.ycombinator.com/item?id=21860713


They probably meant that GC introduces a certain level of uncertainty, whereas OS or "systems" programming leans heavily towards manual memory handling (except where the compiler, not the runtime, lets you avoid it).


I think you have some stereotypes about GC. Current GC technology only requires very short stop-the-world pauses. Of course, I agree with your opinion; this is not the final form of nature, and I will look for better techniques to assist the GC.


I think it's rather the other way around. What the parent said, even though they prefixed it with a 'maybe' to make their statement less confrontational, is that you used the term "systems programming language" too freely. As is also the case with Go.

If people keep doing that, the term will eventually lose its meaning. Maybe in 20 years it will have eroded enough that something like Python will be called a "systems programming language". I mean, after all, a printf statement in C also parses the format string at runtime. Who is to say that the fact that all of the code is interpreted should therefore exclude something like Python? I'm being sarcastic, if that wasn't obvious.

Nature claims conciseness in its README's opening paragraph. That is laudable. It would be even more laudable if that conciseness were also reflected in the natural language (no pun intended) used to describe it.

Calling it a "systems programming language" while using GC is IMHO eroding the meaning of the term.

Something meaning X, someone including Y in it, and then someone else pointing out that X does not include Y therefore has nothing to do with stereotypes here.


I don't want to disagree too strongly with use of the term "systems language" as my career is not tied to it, but I do think we should reevaluate it. "Systems" programming is in many ways a cultural term and not a technical one. It implies something about scale and reliability that is actually quite hard to tie to specific qualities of languages.

In some sense, go is very much a systems language because it allows operating at high scale and high reliability. But in another sense, it is very unsuitable for "systems" work in the sense that you can't (trivially) swap out a kernel written in C for one written in go, although I suppose you could bolt manual memory primitives to the side, remove a bunch of features and runtime, and get it to work. But at that point you're just writing C in go with basically no benefit (and probably great cost if you consider how much weaker the go compiler is compared to LLVM or GCC), so it seems rather silly.

I worked in a "systems" lab as an undergraduate (basically, doing memory allocator research as an independent study) and my main workhorse was python because I was working with largely static data and then generating C. Is python a systems language? The idea is ridiculous. But I was definitely doing systems work. I think we need more flexibility in terms of how we view these cultural ties as inherent to the language as opposed to how it's wielded in context. Most of the time we can use a more specific term (eg "manually memory-managed", "C ABI linkable", "native", "reentrant", "able to use inline assembly", etc) with zero loss of meaning and great benefit in reducing arguing over terms.

Hell, if scala native could take off, it could be a real competitor to C++ and Rust. Is oberon a systems language? How about lisp? People have definitely written entire operating systems in both. Things get really weird once you wander outside the mainstream. Is erlang in a switchboard a systems language? I would say it should be considered such despite looking wildly different than C in just about every manner.


A systems language does not imply anything about scale and reliability. By your definition, Java would be a systems language, while C and assembly would not be.

A systems language is one that not only allows direct low-level access to hardware, but is well suited for it. It not only works without a runtime (or with, at most, a very minimal one), but that is its main mode of operation.

Many languages can do many things. Some are more suited for writing systems, and we call those systems languages. That rules out Go, Lisp, etc, despite people having written systems in them.


Well, that's fine, we're all entitled to our opinions.

Such a definition of systems language strikes me as borderline useless and vague, though, hence my request to chuck it out the window. (See also: high-level vs low-level is an even more poorly defined and useless distinction.) I also don't see C (or rust, or C++) as having any unique affinity for accessing hardware—certainly not better access than lisp, which can often resemble an assembler with twenty megabytes of macros baked on top. Hell, I'd argue lisp is more systems-y and "low-level" than C is, which can't even be easily utilized to pump out a maintainable boot sector. It's just not built for that; it's built to generate object files, which then need to be lowered into binaries with a linker. Lisp is just flexible enough to rewrite itself into the target domain, which C cannot do.

Look, I'm not trying to argue about definitions; it's a waste of both our time. But genuinely, why are you so strident about using the term if you clearly have a more precise understanding of what you want to communicate? Why not just say "programming languages with access to an inline assembler and address lookups" (which, again, would include most common lisps)? Why not say "can link to the C ABI without relying on FFI at runtime" (which would exclude many lisps)? And granted, this only came up because it was used seriously in the original post, so the onus is really on everyone to care about how precisely they characterize language.

We should honestly just stop using "systems/high-level/low-level" entirely; they're too vague and get people too worked up. Which is just baffling to me; I have no emotional attachment to any programming language or similar tool.

Edit: you may find Game Oriented Assembly Lisp interesting: https://en.m.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


> But genuinely, why are you so strident about using the term if you clearly have a more precise understanding of what you want to communicate?

Because, up until I read your comment, I had seen the term used with my definition pretty much everywhere, so I believe it's a fairly well understood and agreed-upon term, with you being more of an exception.


We all have our opinions and perspectives. Typically, people who agree with each other don't beat this conversation to death every time the term comes up. Frankly, nobody has ever made the case to me that the term is worth using in the first place, outside of protecting some misplaced sense of pride. Until you, I wasn't aware anyone outside of enterprise C/C++ coders took the term seriously.

But, if you didn't read and acknowledge the submission that "systems" is more of a culture than some association with a language, I think this conversation is over. I'm super happy you know what a pointer is, bro. Good luck socializing.


That's a very good point, and I agree with you. I will adjust the relevant wording and use words such as “system” more carefully.


Tbh, I've completely abandoned the concept of a systems programming language because the primary benefit of the concept seems to be to argue over it. There's practicality (eg can i link this into the kernel?) and then there's hand-waving about how it feels to a developer, and the conversation seems almost entirely subsumed by the latter.

See also: "is C a high-level or low-level language?" Just shoot me instead, please.


Somewhat agree, but it depends. I agree that the core of the operating system (the kernel) should be responsive, but there are many tasks in an operating system that don't require an immediate response and can be run in batches. Usually these are higher-level decisions, such as housekeeping tasks like file backup and indexing, workload reprioritization and optimization, system update and reconfiguration, and the like.


Yeah, then you write a GC or use a library ;)


Isn't this a problem of allocating and freeing resources being generally non-deterministic, regardless of what algorithm is used?


From the website:

"Nature is... A general-purpose open-source programming language and compiler designed to provide developers with an elegant and concise development experience, enabling them to build secure and reliable cross-platform software simply and efficiently."


That's true. However, the website also promotes it as a systems programming language here and there. For example, the same homepage says "Perfect for Systems Programming: Operating systems and IoT", and so do some other places, like here: https://nature-lang.org/docs/get-started


I use Go for IoT devices in my work, such as routers and TV boxes, which run on RISC-V/MIPS/ARM32/ARM64, etc. I appreciate its portability; even on devices with only 512 MB of memory, I don't have to worry about memory overflow issues.

I believe nature is equally suitable.


Define "operating system": the higher up the stack you go, the less things like GC matter.

I wouldn't care if my network daemon was written in Python or not, but I would care if the networking stack itself was.


> the higher up the stack you go the less things like GC matter.

But suppose the very top of the stack is a high-frequency trading system or a traffic light controller. Car brakes...

Depending on your stack, determinism may or may not be a key part. And that is only possible if determinism is guaranteed all the way down.


I definitely don't want my traffic light controller to be written using manual memory management if this is at all possible to avoid. Waiting another millisecond for the light to turn green feels like an acceptable cost. But this seems silly: how on earth would you write such a simple program with so many allocations that the GC significantly impacts performance? Why would a traffic controller need variable amounts of memory in the first place? Surely it's just a mix of a state machine (statically allocatable) and I/O.
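To make that concrete, here's a toy sketch in Go (purely illustrative; the phase names and timings are made up) of a controller whose entire state is allocated up front, so the control loop never asks the runtime for new memory:

    package main

    import (
        "fmt"
        "time"
    )

    // A traffic-light phase: everything about it is known at compile time.
    type phase struct {
        name     string
        duration time.Duration
    }

    // The whole state machine fits in one fixed-size array;
    // nothing is allocated while the controller runs.
    var phases = [3]phase{
        {"green", 20 * time.Second},
        {"yellow", 3 * time.Second},
        {"red", 20 * time.Second},
    }

    func main() {
        current := 0
        for {
            p := phases[current]
            fmt.Println("lights:", p.name) // stand-in for toggling the real output pins
            time.Sleep(p.duration)
            current = (current + 1) % len(phases)
        }
    }

Real controller firmware would obviously be driving actual outputs and reading sensors, but the point stands: there's nothing here for a GC to chew on in the first place.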

"Determinism" feels like a very odd pitch for manual memory management when the latter in no way implies the former, and lack of manual memory management in no way implies non-determinism. Generally, any dynamic allocation is non-deterministic. Furthermore, in the HFT context the non-determinism of the network is going to absolutely dwarf any impacts GC has, especially if you have ever heard of arena allocation. Even your OS's scheduler will have larger impacts if you make any efforts to avoid memory churn.

Now, an interrupt handler should never allocate memory and should generally run with a constant number of cycles. But that's an extremely niche interest, and you'd probably want to hand-code those instructions regardless.

(FYI, I work in a support role to a HFT product, among many others, but it runs on the JVM)


Wowow. GO to not be used with systems programming? Absurd.


go is ill suited for many systems programming environments. It works well on servers and larger IoT devices with 512+ MiB of RAM, but not so well on things with tighter constraints. We tried to use it for our netstack implementation for years and ultimately came to the conclusion we needed to switch to a language with greater control. Storage, RAM, and CPU usage were all improved by the switch. I don't consider it a systems programming language but rather something approaching systems programming.


My jaw dropped when I read your comment.


If your program isn't a system, it is unlikely that you need to worry about freeing memory at all — it will all be cleaned up when the program finishes.


in a sense, RCU is garbage collection.


Another hug of death. The website says "It must be upgraded via the Firebase console before it can begin serving traffic again."

Wayback machine for when it used to work: https://web.archive.org/web/20250317122419/https://benjamina...


Makes you wonder how many webpages are dependent on such services. The Web has always been brittle, but it's a little sad seeing a website unable to survive ~50k users on its first day online.

Even one of the lesser offenders, GitHub Pages, has broken links before [0].

[0]: https://github.blog/news-insights/product-news/new-github-pa...


Fun-shaming a passionate developer who, beyond their job description, delivered an editor that checks all the required boxes (small binary, fast, ssh-support, etc.) in just 4 months, while working on weekends and even Christmas, and calling it "wasting time" is incredibly upsetting. I'm grateful to work with people who value that kind of initiative.


You're making it all up; he's the terminal product manager, so this is not beyond his job description.

Of course the app doesn't check all the boxes: plenty of features other editors have had years to add simply couldn't be added even if you work on Christmas (which, by the way, is also entirely driven by the NIH decision to write it from scratch rather than doing a 3-language mock).

And I'm not shaming the fun; I'm saying that it's not a good justification for shipping a worse app to millions in a professional setting.


It was very interesting to me that you liked Zig the most. Thank you for making this!


Microsoft can't name a project with a leading trademark (Linux <something>), hence the name WSL.

Source: https://x.com/richturn_ms/status/1245481405947076610?s=19



Very interesting comment there:

“I still hope to see a true "Windows Subsystem for Linux" by Microsoft or a windows becoming a linux distribution itself and dropping the NT kernel to legacy. Windows is currently overloaded with features and does lack a package manager to only get what you need...”


People that comment things like this probably have their heart in the right place, but they do not understand just how aggressive Microsoft is about backwards compatibility.

The only way to get this compatibility in Linux would be to port all those features over to Linux, and if that happened, the entire planet would implode because everyone would say "I knew it! Embrace, Extend, Extinguish!" at the same time.


I agree. For years I supported some bespoke manufacturing software that was written in the 80s and abandoned in the late 90s. In the installer, there were checks to see what version of DOS was running. Shit ran just fine on XP through W10 and server 2016. We had to rig up some dummy COM ports, but beyond that, it just fuckin worked.


NT is a better consumer kernel than Linux. It can survive many driver crashes that Linux cannot. Why should Microsoft drop a better kernel for a worse one?


Meanwhile, on Linux we cannot even restart the compositor without killing all GUI apps.

I swear, sometimes progress goes backwards...


Is this a Wayland issue? This works fine for me on X. But yes, progress goes backwards in Linux. I had hope for the Linux desktop around 2005-2010; since then it has only gotten worse.


If the $DISPLAY managed by your Xorg server goes away, your X apps will also crash. Wayland combines the server and the parts that draw your window decorations into the same process.

Under Windows, everything including the GPU driver can crash, and as long as it doesn't take the kernel with it (causing a BSOD), your applications can keep running.


I can restart the window manager and compositor just fine in X. Also, it is not generally true that X apps crash when the server goes away. This is a limitation of some client libraries, but I have written X apps myself that could survive this (or even move their display to a new server). It is of course sad that popular client libraries never got this functionality under Linux, but that is a problem of having the wrong development priorities.


Can you expand on this? I used Windows 10 for 2-3 years when it came out, and I remember BSODs being hell.

Now I only experience something close to that when I set up multiseat on a single PC with AMD and Nvidia GPUs and one of them decides to fall asleep, or when I undervolt the GPU too much.


Of course, that depends on the component and the access level. RAM chip broken? Tough luck. A deep kernel driver accessing random memory, like CrowdStrike? You'll still crash. One needs an almost microkernel-like separation to prevent such issues.

However, there are certain APIs like WDDM timeout detection and recovery: https://learn.microsoft.com/en-us/windows-hardware/drivers/d... . It is like a watchdog that'll restart the driver without BSOD'ing. You'll get a crash dump out of it too.


IBM marketed "OS/2 for Windows" which made it sound like a compatibility layer to make Windows behave like OS/2. In truth it was the OS/2 operating system with drivers and conversion tools that made it easier for people who were used to Windows.


Untrue. OS/2 for Windows leveraged the user's existing copy of Windows for OS/2's Windows-compatibility function instead of relying on a bundled copy of Windows, like the "full" OS/2 version did.

OS/2 basically ran a copy of Windows (either the existing one or the bundled one) to execute Windows programs side by side with OS/2 (and DOS) software.


It was previously called the Windows Subsystem for Android before it pivoted. It had a spiritual predecessor called Windows Services for UNIX. I doubt the name had been chosen for the reasons you say, considering the history.

That said, to address the grandparent comment’s point, it probably should be read as “Windows Subsystem for Linux (Applications)”.


>for the reasons you say

That's not what I said; that's what the former PM Lead of WSL said. To be fair, Windows Services for UNIX was just Unix services for Windows. Probably the same logic applied back then: they couldn't name it with a leading trademark (Unix), so they went with what was available.


WSA was a separate thing.

WSA and WSL both coexisted for a time.


Wikipedia states that WSL was made based on WSA.


Got a link? WSA didn’t come out until Windows 11 was released, and WSL predates Windows 11.


https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#Hi...

It was called Project Astoria previously. Microsoft releasing the Windows Subsystem for Android for Windows 11 is news to me. I thought that they had killed that in 2016.


Astoria and WSA are different things. Sort of. WSL and WSA both use the approach that was proven by Astoria. That approach was possible since the NT kernel was created, but no one within Microsoft had ever used that feature outside of tiny pieces of experimentation prior to Astoria. Dave Cutler built in subsystem support from the beginning, and the Windows NT kernel itself is a subsystem of the root kernel, if I am remembering a video from Dave Plummer correctly.

Anyway, Astoria was an internal product which management ultimately killed, and some of the technology behind it later became WSL and, much later, WSA. WSA's initial supported OS was Windows 11.

Microsoft being Microsoft, they artificially handicapped WSA at the outset by limiting the Android apps it could run to the Amazon App Store, because that's obviously the most popular Android app store where most apps are published. [rolls eyes] I don't think sideloading was possible. [rolls eyes again]

I don't work for Microsoft and I never have; I learned all of this from watching Windows Weekly back when it was happening, and from a few videos by Dave Plummer on YouTube.


I believe that both Windows Services for UNIX (Interix) and OS/2 application support were NT subsystems too. I am under the impression that Windows Services for UNIX was the foundation for Astoria.


They should not presume to trademark something called a "Linux subsystem for Windows".


GNU/Linux Subsystem for Windows


Subsystem Linux for Windows.


Creating a Git commit using low-level commands was always something I wanted to do, but I never found the time to really deepen my knowledge of Git. I actually googled to see if I could find a blog post or something on this topic, but failed to find one. Finally I got the chance, and for the past couple of weekends I've been reading the Pro Git book (which seems to be the same content as git-scm.com/book). I believe it's good practice to write a blog post about a topic after finishing a book (teaching is a good way of committing knowledge to memory). To my surprise, creating a Git commit using plumbing commands was already covered in the final chapters of the book. I thought it would be a good idea to simplify that process and write a blog post that can be read in under 10 minutes, allowing those who haven't read the book yet (like my past self) to get a basic understanding of what Git is doing under the hood.
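For anyone who wants the short version before reading the book, the core of it looks roughly like this (a minimal sketch, not the exact steps from the post; it shells out to git's plumbing commands and assumes it runs inside an existing repository, with the file name and branch name as placeholders):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run invokes a git plumbing command and returns its trimmed stdout.
    func run(stdin string, args ...string) string {
        cmd := exec.Command("git", args...)
        if stdin != "" {
            cmd.Stdin = strings.NewReader(stdin)
        }
        out, err := cmd.Output()
        if err != nil {
            panic(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // 1. Store the file contents as a blob object.
        blob := run("hello, plumbing\n", "hash-object", "-w", "--stdin")

        // 2. Stage that blob under a path in the index.
        run("", "update-index", "--add", "--cacheinfo", "100644", blob, "hello.txt")

        // 3. Snapshot the index as a tree object.
        tree := run("", "write-tree")

        // 4. Wrap the tree in a commit object (no -p parent, so it's a root commit).
        commit := run("", "commit-tree", "-m", "commit built from plumbing", tree)

        // 5. Point a branch at the new commit.
        run("", "update-ref", "refs/heads/plumbing-demo", commit)

        fmt.Println("created commit", commit)
    }

Each step maps onto one object type (blob, tree, commit), with the ref update as the final pointer; the porcelain commands essentially wrap this sequence.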

> But why is nobody finding or reading the later chapters in the docs?

I think that to read the later chapters of a book, one usually needs to read the earlier ones too. I personally don't jump directly to the internals when I want to read about something, because I'd assume I'm missing a lot of context and background.


I haven't read a "book" like this chapter-by-chapter since I first learned Python by reading the docs.

