
Zawinski's Law, taken literally, argues that all programs eventually need to be communicated with by people and other programs over a generic protocol, without program-specific or domain-specific libraries to do so.

Unix shells (local), HTTP (remote), and email (remote and asynchronous) are the only ubiquitous forms, precisely because they enforce no structure on the payload. The structured alternatives are only popular in specific domains because they create their own ecosystems. As long as input can be parsed unambiguously -- which goes hand in hand with being human-readable, as long as you're not trying to be too efficient -- you get the schema transmitted implicitly out of band (by having output you can visually inspect) and interoperability with anything that can read and write text.

I'd be interested in other directions this might go, but I remain skeptical of anything that will introduce and enforce a new paradigm that requires adding complexity to programs to accommodate it.


every object should have its own url


It's easy to agree that AI-assisted email writing (at least in its current form) is counterproductive, but we're talking about email -- a subject that's already been discussed to death, and into which everyone has sunk countless hours and dollars without managing to "solve" it.

The fundamental problem, which AI both exacerbates and papers over, is that people are bad at communication -- both accidentally and on purpose. Formal letter writing in email form is at best skeuomorphic and at worst a flowery waste of time that refuses to acknowledge that someone else has to read it, amid an unfortunate stream of other emails. And that only scratches the surface, with something well-intentioned.

It sounds nice to use email as an implementation detail, above which an AI presents an accurate, evolving, and actionable distillation of reality. Unfortunately (at least for this fever dream), not all communication happens over email, so this AI will be consistently missing context and understandably generating nonsense. Conversely, this view supports AI-assisted coding having utility since the AI has the luxury of operating on a closed world.


The professional matchmaker angle as a contrast is fascinating. The subscription model not only removes the incentive to provide quality quickly -- it reverses it. Doing a worse job is encouraged if you can leverage that to convince people that the future (which is only available by continuing your subscription) is worth waiting for. It's also more attractive because it has smaller up-front costs for the consumer.

It would be an interesting world if we outlawed auto-renewal for services that you need to actively use in order to get any value from them. When you're paying for Netflix, you aren't paying to watch movies, you're paying for /access/ to movies you can watch. The flip side is that the maximum potential service quality would decrease if revenue decreases -- which is also why ad-supported services prevail. If all players are subject to the same rules, that would either end up as a decrease in licensing costs or a focus on quality content over quantity. If they aren't producing exclusive content, they are beholden to the quality of the market. Either way, that should encourage quality content to be made over saturating the market with content.

Unfortunately, pipe dreams will remain pipe dreams.


Similar to if you engaged a realtor on a monthly subscription instead of a (roughly) fixed commission based on % of sales price - incentivizes them to spin things out.

Having legislators outlaw bad business practices is in general very slow; if competition works, then there should be a niche for a lump-sum or fixed-commission dating service that matches you with the people in its database most likely to actually be compatible with you. But that creates a new problem: judging "successful" outcomes in matchmaking, which will be near-impossible to measure and easy for all parties to game if it's mostly transacted by app. It does, though, sound in principle like the business model of traditional introduction-based matchmaking (the matchmaker only earns a good reputation through some successes, and most prospective customers will only be willing to pay $ for, say, 3-12 months).

EDIT: makes me wonder why eHarmony never opened matchmaking offices.


There is no distinction between system and program libraries in Linux. We used to pretend there was one before usrmigration, but it was never wise to take that seriously.

The distro as packager model ensures that everything is mixed together in the filesystem and is actively hostile to external packaging. Vendoring dependencies or static linking improves compatibility by choosing known working versions, but decreases incentive and ability for downstream (or users) to upgrade those dependencies.

The libc issues in this article are mostly glibc-specific; you'd have fewer problems targeting musl. Mixing static linking and dlopen doesn't make much sense, as said in this interesting thread[1]. Even dns resolution on glibc implies dynamic linking due to nsswitch.

Solutions like Snap, Flatpak, and AppImage work to contain the problem by reusing the same abstractions internally rather than introducing anything that directly addresses the issue. We won't have a clean solution until we collectively abandon the FHS for a decentralized filesystem layout where adding an application (not just a program binary) is as easy as extracting a package into a folder and integrates with the rest of the system. I've worked on this off and on for a while, but being so opinionated makes everything an uphill battle while accepting the current reality is easy.

[1] https://musl.openwall.narkive.com/lW4KCyXd/static-linking-an...


> adding an application (not just a program binary) is as easy as extracting a package into a folder and integrates with the rest of the system

I have fond memories of copying installed Warlords Battle Cry 3, Warcraft 3, AOE2, etc. directories onto flash drives and distributing them to 20+ kids in high school (all using the same key). Good days.


Way off topic but you just reminded me of all the time I spent playing Warlords 3 (not Warlords Battlecry 3, the original Warlords games were turn-based). One cool feature it had that I'm surprised I haven't really seen other turn-based games do is a "play by email" option similar to correspondence chess, except you're just emailing save files back and forth and the game makes importing/exporting the save files via email a bit more streamlined.


Civilization V has a "cloud game" option that keeps the save file online. Players can then take their turn when they have a chance (though non-responsive players can break the game), and if you're running the game through Steam you'll even get a Steam notification when it's your turn. You can also define a webhook that gets called when it's somebody's turn. One of these days I'll put together a little tool that takes that webhook message and translates it into a Discord PM or a Discord channel post @-ing the person whose turn it is.

They specifically say that it's their way of paying tribute to Civ playing by email.


PBEM was not uncommon in games from that era. The other title I can think of that had the same feature built-in was Age of Wonders.


> Even dns resolution on glibc implies dynamic linking due to nsswitch.

Because, as far as I’ve heard, it borrowed that wholesale from Sun, who desperately needed an application to show off their new dynamic linking toy. There’s no reason they couldn’t’ve done a godsdamned daemon (that potentially dynamically loaded plugins) instead, and in fact making some sort of NSS compatibility shim that does work that way (either by linking the daemon with Glibc, or more ambitiously by reimplementing the NSS module APIs on top of a different libc) has been on my potential project list for years. (Long enough that Musl apparently did a different, less-powerful NSS shim in the meantime?)

The same applies to PAM word for word.

> Mixing static linking and dlopen doesn't make much sense, as said [in an oft-cited thread on the musl mailing list].

It’s a meh argument, I think.

It’s true that there’s something of a problem where two copies of a libc can’t coexist in a process, and that entails the problem of pulling in the whole libc that’s mentioned in the thread, but that to me seems more due to a poorly drawn abstraction boundary than anything else. Witness Windows, which has little to no problem with multiple libcs in a process; you may say that’s because most of the difficult-to-share stuff is in KERNEL32 instead, and I’d say that was exactly my point.

The host app would need to pull in a full copy of the dynamic loader? Well duh, but also (again) meh. The dynamic loader is not a trivial program, but it isn’t a huge program, either, especially if we cut down SysV/GNU’s (terrible) dynamic-linking ABI a bit and also only support dlopen()ing ELFs (elves?) that have no DT_NEEDED deps (having presumably been “statically” linked themselves).

So that thread, to me, feels like it has the same fundamental problem as Drepper’s standard rant[1] against static linking in general: it mixes up the problems arising from one libc’s particular implementation with problems inherent to the task of being a libc. (Drepper’s has much more of an attitude problem, of course.)

As for why you’d actually want to dlopen from a static executable, there’s one killer app: exokernels, loading (parts of) system-provided drivers into your process for speed. You might think this an academic fever dream, except that is how talking to the GPU works. Because of that, there’s basically no way to make a statically linked Linux GUI app that makes adequate use of a modern computer’s resources. (Even on a laptop with integrated graphics, using the CPU to shuttle pixels around is patently stupid and wasteful—by which I don’t mean you should never do it, just that there should be an alternative to doing it.)

Stretching the definitions a little, the in-proc part of a GPU driver is a very very smart RPC shim, and that’s not the only useful kind: medium-smart RPC shims like KERNEL32 and dumb ones like COM proxy DLLs and the Linux kernel’s VDSO are useful to dynamically load too.

And then there are plugins for stuff that doesn’t really want to pass through a bytestream interface (at all or efficiently), like media format support plugins (avoided by ffmpeg through linking in every media format ever), audio processing plugins, and so on.

Note that all of these intentionally have a very narrow waist[2] of an interface, and when done right they don’t even require both sides to share a malloc implementation. (Not a problem on Windows where there’s malloc at home^W^W^W a shared malloc in KERNEL32; the flip side is the malloc in KERNEL32 sucks ass and they’re stuck with it.) Hell, some of them hardly require wiring together arbitrary symbols and would be OK receiving and returning well-known structs of function pointers in an init function called after dlopen.

[1] https://www.akkadia.org/drepper/no_static_linking.html

[2] https://www.oilshell.org/blog/2022/02/diagrams.html


> Witness Windows, which has little to no problem with multiple libcs in a process

Only so long as you don't pass data structures from one to the other. The same caveats wrt malloc/free or fopen/fclose across libc boundaries still apply.

Well, not anymore, but only because libc is a system DLL on Windows now with a stable ABI, so for new apps they all share the same copy.


Yes, but in a culture where this kind of thing is normal (and statically linking the libc was popular for a while), that is mostly understood, CPython's particular brand of awfulness notwithstanding. It is in any case a much milder problem than two libcs fighting over who should set the thread pointer (the FS segment base), allocate TLS, etc., which is what you get in a standard Linux userspace.


> The same applies to PAM word for word.

That's one of the reasons that OpenBSD is rather compelling. BSDAuth doesn't open arbitrary libraries to execute code, it forks and execs binaries so it doesn't pollute your program's namespace in unpredictable ways.

> It's true that there's something of a problem where two copies of a libc can't coexist in a process...

That's the meat of this article. It goes beyond complaining about a relatable issue and talks about the work and research they've done to see how it can be mitigated. I think it's a neat exercise to wonder how you could restructure a libc to allow multi-libc compatibility, but I question why anyone would even want to statically link libc in a program that dlopen's other libraries. If you're worried about a stable ABI with your libc, but acknowledge that other libraries you use link to a potentially different and incompatible libc, making the problem even more complicated, you should probably go the BSDAuth route instead of introducing both additional complexity and incompatibility with existing systems.

I think almost everything should be suitable for static linking, and that Drepper's clarification is much more interesting than the rant. Polluting the global lib directory with a bunch of your private dependencies should be frowned upon; it hides the real scale of applications. Installing an application shouldn't make the rest of your system harder to understand, especially when it doesn't do any special integration. When you have to dynamically link anyway:

> As for why you’d actually want to dlopen from a static executable, there’s one killer app: exokernels, loading (parts of) system-provided drivers into your process for speed.

If you're dealing with system resources like GPU drivers, those should be opaque implementations loaded by intermediaries like libglvnd.[1] This comes to mind as even more reason why dynamic dependencies of even static binaries are terrible. The resolution works, but it would be better if no zlib symbols leaked from mesa at all (by using --exclude-libs and linking statically), so that a compiled dependency cannot break the program that depends on it. So yes, I agree that dynamic dependencies of static binaries should be static themselves (though enforcing that is questionable), but I don't agree that libc should be considered part of that problem and statically linked as well. That leads us to:

> ... when done right they don't even require both sides to share a malloc implementation

Better API design for libraries can eliminate a lot of these issues, but enforcing that is a much harder problem in the current landscape, where both sides are casually expected to share a malloc implementation -- hence the complication described in the article. "How can we force everything that exists into a better paradigm" is a lot less practical a question than "what are the fewest changes we'd need to ensure this would work with just a recompile". I agree with the idea of a "narrow waist of an interface", but it's not useful in practice until people agree on where the boundary should be and you can force everyone to abide by it.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28...


Complexity has always been less respected the more it is separated from the experiences of the average person; or more personally, your own experiences. Unfortunately, that's the very nature of complexity itself. Dependencies increase instead of decrease in the hopes of scaling, saving time and money, or even providing a slightly better product. As positive as those efforts sound, they also make the failure cases worse.

The only people to respect with regards to complexity are those dealing with inherent complexity -- especially when they're building more digestible abstractions -- and those working to reduce complexity after having gained a substantial understanding of it. It can be rather difficult to distinguish between inherent complexity and incidental complexity, though.


This is only looking at the problem from one side. If there's a crisis and you're not treating it as one, you are committing two sins. Not only are you wrong, but you're wasting time more valuable than your own.

On the flip side, if you're treating something as a crisis when the person trying to help knows there isn't one, you're putting undue pressure and stress on someone and hurting both parties in the process.

The fundamental problem is inconsistent and inaccurate assessments of the situation. People are attached to their mindset and are often too stubborn to re-evaluate. The solution presented only shows concessions on one side. I'd add that if someone else thinks a situation is dire and you don't, you should try to understand their perspective better rather than immediately considering them rude and dismissing their concerns.


None of that requires being rude. Terse maybe. Which may seem rude. But not actually rude. To me it is simple. Is it ad hominem?


It is simple in theory but messy in practice.

People are bad at identifying bad behavior in themselves, but they can sometimes see that same behavior when others exhibit it. Whether they can then put this together into genuine introspection and growth is another story.

https://youarenotsosmart.com/2014/01/07/yanss-podcast-015-i-...


I wish more people knew what ad hominem is to begin with.


This seems emblematic of the general direction of technology. It's an approach to managing system complexity that works by adding additional complexity and handling that combination well as long as you use their tools. Nix isn't special in this -- all (non-derivative) Linux distros do exactly that, though that's probably little more than an extension of having written a package manager. I dislike higher level tools like this because they discourage understanding of anything below them. The filesystem is no longer organized in a human-readable fashion, it is an implementation detail.

I'd love to see more concerted efforts going the other direction -- managing complexity by working to make the system simpler and compartmentalizing additional functionality. System snaps, now packaging lxd and cups, are probably the closest mainstream example today.


IMO, complexity is an important aspect of the Sisyphean drive that life seems to have to reverse what we call the second law of thermodynamics. I highly recommend reading The Animate and the Inanimate[0] by William Sidis.

However, I do agree that there is a lot of unnecessary complexity in our tech stacks today; though I blame much of this on how young the concept of computing and information theory still is. We're still exploring, we're still building, and it's really exciting to me to have the opportunity to work in a space that is only in its infancy. There is so much room for almost anyone in the field to bring innovations and improvements, if only so many of us weren't slaves to grinding our bones, and finite time, to dust creating things of dubious moral value under an almost singular focus on monetary wealth and the threat of death and ostracism.

[0] https://www.sidis.net/animate.pdf


I must say that as far as I'm concerned, Nix is very much emblematic of simpler technology. Nix tries to solve problems (such as build impurities) very much by trying to fix things at the source (and contributing back to upstream!), rather than slapping things that work on top of each other, mindlessly using magic like containers. That's one of the reasons why I adore nixpkgs, that they have heroically attempted to fix problems in packages at their sources, and have very much succeeded.


> rather than slapping things that work on top of each other, mindlessly using magic like containers.

I have exactly this problem. I need to run a centos7-era binary application on rocky9, and of course it does not compile with the new gcc in rocky9, and some libraries are also missing or have changed too much.

I was thinking I would run the app in a centos7 container on a rocky9 machine, but this creates lots of unwelcome complications and additional work.

I'm not very familiar with Nix, but it seems one could install Nix on rocky9 and then somehow use it to build my application against the centos7 devel libraries. Do you think this is a plausible pathway for compiling and running such an old application? It would be great if I could just compile the old app on the new system and forget about containers.


Sounds like https://gokrazy.org/ might be of interest to you.


MacOS 9 and the spatial desktop metaphor is neat. I went that route for a while. What this misses, however, is that the biggest problem with the desktop interface is that we've substantially increased application complexity, and laptops (and even smaller devices) won. As a result, we're trying to answer the question "how do we fit our skeuomorphic paradigm into a diminutive form factor". The inspiration involved much larger actual desks and tables, where you can freely arrange several documents that are each visible and reachable at a glance. If you're maximizing the window for a document for reasons beyond helping you focus, then your workspace -- ahem, your screen -- is too small.

The screenshot is 1920x1080. Screens are sold using buzzwords like 'HD', 'UHD', and 'retina' that evoke a sense of image clarity. I spent years telling my dad that I liked higher resolutions because it meant more /space/ and he couldn't grasp what I meant. He was stuck on associating higher resolution with clarity until I bought him a 43" 4k monitor, and he used it for a while. Even at 1.5x scaling, suddenly, he was able to view multiple pages of a document clearly at the same time without even scrolling. This isn't at all a normal desktop setup or the kind of setup that desktop environments are optimizing for or advocating. But it works better and better matches the inspiration.


For me, I look back to the Amiga for this. Most actual work happened on individual screens, which match neatly to mostly tiled virtual desktops set up for individual tasks.

It was mostly on the Workbench that we used floating windows, and while we had "sort-of" spatial behavior, in that the positions of windows were remembered if you chose (by choosing "snapshot"), the opt-in part meant you were free to move folders around knowing they'd be back where they should be when you opened them again. To me it's always been annoying that the attempts at spatial on Linux all took it to the extreme of remembering every change, which to me was always the biggest wart of these systems.

I absolutely like expanding screen size and can't deal with people's tendency to opt for tiny little laptops, but at the same time I don't need all that much physical screen space for most things, because everything happens on separate "screens"/virtual desktops the way it used to back on my Amiga.


For many people, the limiting factor of this is visual acuity though. I personally can't see it useful to have more than 2560x1440 equivalent pixels of space on a 27 inch monitor. For a larger monitor, you have to sit further back, so it is effectively the same. If you want to see more clearly, you'd need to get closer, but that causes issues since you are still limited by your available field of view.


Requiring that you sit further back is built on the notion that you need to be able to see your entire workspace at once, which was never true with an actual desk and largely implies that you want a single document to take up the whole screen. If you remove that limitation, then you find yourself with a larger workspace with elements at a comfortable size to work with.

I do prefer to turn my head side to side rather than up and down, so right now I'm happiest with a 5120x1440 49" monitor and may consider a 7680x2160 57" monitor sometime in the future.


It's interesting you mention this. The way I use my desktop, I always have my applications maximized and just alt-tab to switch contexts. I'm also in the terminal a lot and use Yakuake, but not in maximized mode, because I don't want my focus in the bottom left corner of my screen. I also put the taskbar vertically on the left because I don't care about the horizontal space.

Doing all of this still felt cumbersome, and then it dawned on me about a year ago: because I don't game or watch full-screen video, I think I'd much prefer the old 4:3 screens for my workflow.


Well, that's kind of impossible, as alt-tab will show all windows or applications.

The best productivity for me is a separate machine per context (with synergy or similar), because it won't clutter the alt-tab.

Fast user switching doesn't work, as I'd have to switch back and forth between users (roles, actually). I simply want isolated users, with their own filesystems/directories, but still be able to control them at the same time (a virtual KVM).

Ideally, I'd create "contexts" or users on my mac, and split / arrange parts of my monitor as desktops. I thought about using parallels or X11 to mimic this behavior, but it simply is not the same.

MacOS's Stage Manager kind of works, but it's very buggy, and it won't get you an isolated filesystem. I've "solved" having the browser for different purposes by creating separate instances (not just separate profiles, but actual executables) of Chrome (dev, social media, general browsing), which helps a lot, but I can't do that with everything.


When I said I use alt-tab to switch contexts, I meant applications; as I said, I run my applications full screen.


4:3 is pretty rare but finally... finally you can get screens from 16:10 to 3:2 now without too much trouble.


I'd probably agree with the "useful" but I find higher resolution more aesthetically pleasing, especially text.


Conversely, I find anything above 1920x1080 very displeasing precisely because it removes my ability to practically use bitmapped fonts. Subpixel antialiasing is very distracting and Retina (IMHO) is a solution in search of a problem when it comes to making user interfaces that are actually aesthetic and easy on the eyes. I'm autistic and have diagnosed vision problems tho, so that probably feeds into it for better or worse.


Fair enough and to each their own. I've been using computers since bitmapped fonts on 320x200 screens were the norm, and I've always been excited to upgrade resolution.


I too think text looks nicer at higher dpi but I had bad experiences with fractional scaling on Linux in the past, and 4K monitors are more expensive, so I didn't bother getting one.


I always see with surprise these claims about the so-called "fractional scaling", which is something I have never encountered on Linux.

This "fractional scaling" might be a problem of Wayland and/or Gnome, but it certainly it is not a problem of Linux or of X Window System.

In any non-stupid graphics environment you need just to set an appropriate value for the dots-per-inch parameter, which will inform all applications about the physical size of the pixels on your monitor (allowing the rendering algorithms to scale arbitrarily any graphic elements).

Any non-stupid application must specify the size of the fonts in typographic points, not in pixels. When this is done, the fonts will be rendered at the same size on any monitor, but beautifully on a 4k monitor and uglier on a Full HD monitor.

The resolution of a Full HD monitor is extremely low in comparison with printed paper, so the fonts rendered on it are greatly distorted in comparison with their true outlines. A 4k monitor is much better, but at normal desktop sizes it is still inferior to printed paper, so for big monitors even better resolutions are needed to recreate the same experience that has been available for hundreds of years when reading printed books. A 4k monitor can match the retina resolution only for very small screens or for desktop monitors seen from a great distance, much greater than a normal work distance.

Similarly, any non-stupid drawing application must not specify dimensions in pixels, but in proper length units or in units relative to the dimensions of the screen or window; then the sizes will be the same everywhere, but all graphical elements will be more beautiful on a 4k monitor.

This was already elementary knowledge more than 30 years ago, and recommended since the most ancient versions of X Window System and MS Windows. I do not even know when this modern "fractional scaling" junk problem has appeared and who is guilty of it.

I have switched to using only 4k monitors with my desktops and laptops, on all of which I use Linux (with XFCE), about a decade ago, and during all this time I never had any kind of scaling problems, except with several professional (!!) applications written in Java by incompetent programmers, which not only ignore the system settings, so they show pixel-sized windows and fonts, but they also do not have any option for choosing another font or at least another font size (so much for the "run anywhere" claim of Java).


Now try 2 screens with different pixel densities. Also, it is pretty dumb to call out apps like that — popular frameworks either support that workflow or not. I should not be programming font rendering in my todo list app, that is outside the scope of such a project.


Here you are right that there is a defect in the ancient X Window System, because it has only one global DPI value instead of one per attached monitor.

Correcting this is a very small change that would have been much simpler than inventing the various "integer scaling" and "fractional scaling" gimmicks, which have been included in some desktop environments.

Using the correct units in graphics APIs is not "programming font rendering". It would have been better if pixels had never been exposed in any graphics API after the introduction of scalable font rendering and scalable drawing, removing any future scaling problems, but it was tempting to provide them to enable optimizations, especially during times when many were still using very low-resolution VGA displays.

However such optimizations are typically useless, because they optimize an application only for the display that happens to be used by the developer, not for the display of the final user. Optimizations for the latter can be achieved only by allowing the users to modify any sizes, to be able to choose those that look best on their hardware.


Even if there were no way to control the output at the pixel level, you could easily be left with Minecraft-like blocks -- there is not much else your high-DPI monitor can do with a client that simply doesn't output higher resolutions. E.g., if it uses a bitmap icon, that will still be ugly. (Sure, they should use vector icons, but what about an image viewer showing bitmaps?)


It's too bad that displays designed for 2x UI scaling are so rare outside of Apple stuff. Even on Windows which is probably the OS with the best fractional UI scaling, 2x looks visibly better.


The 9” B&W screen on my SE/30 with a 512x384 resolution is perfectly usable for Word, Excel, IRC, and code editing.

Refreshingly so at times. Comparatively it’s very distraction free.

Whenever I fire it up to journal or fiddle with some classic MacOS development I always think, “Where did we manage to go so wrong in the last 30 years?”


This is where the classic Mac OS really shines: one fullscreen application which is totally dedicated to the task at hand. It's why I still favor it for many "creative" endeavors and why Apple was able to survive so relatively long with it despite the OS being a flaming garbage pile of technical debt and hacks underneath the glossy exterior.


With regards to abstractions, the perceived quality of code depends heavily on who is expected to read it. The "principle of least surprise" isn't universal; it's subjective, because you're hiding complexity, not actually making things less complex. If an abstraction feels natural to the person reading it, they will find the code easy to read.


Or alternatively, should we be predicting that there will be a John Henry of AI?

