Bash is ubiquitous and stable, making bash scripts incredibly portable.
All of the other languages you bring up are great for authoring code, but have a non-zero amount of friction when running code in the wild. Python may be omnipresent, but you can rarely count on a specific version or the presence of specific libraries. Go requires compiling platform-specific binaries. Even JVM- or JS-based software requires installing a separate toolchain first.
If you want to write some code (e.g., a launcher, installer, utility code, etc...) that is almost certainly going to run on any computing device created in the last several decades, bash is your language.
> Why doesn’t Amber have equivalent issues? Such as depending on a specific version of bash, or specific executables to be installed.
Because it's easy (for most cases) to write backwards-compatible or portable shell scripts and, since Amber is compiled, it can simply generate backwards-compatible code.
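A small sketch of the kind of portable style I mean, assuming nothing beyond POSIX sh (the python3 check is just an arbitrary example):

    # Portable: POSIX `command -v` and [ ] instead of bashisms like `which` or [[ ]]
    if command -v python3 >/dev/null 2>&1; then
        echo "python3 found"
    else
        echo "python3 missing" >&2
        exit 1
    fi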
> That would cut against the idea that bash can target more devices than Python, which runs natively on all platforms.
The point is that Bash is more ubiquitously available, which is important if you write something like an install script.
"pretty ubiquitous" is what I'm referring to here. OP seemed to imply that because the other options have "non-zero friction" that targeting bash has zero friction. But you have to make sure bash is there, and if it's not, you have to install a toolchain that may include an entire operating system.
I guess I just don't understand how having the user install git bash or WSL any different from having them install Python or JVM?
Amber doesn't have equivalent issues because bash and the utilities it uses like bc and sed are incredibly stable. I've found nontrivial shell scripts I wrote decades ago that still run entirely unchanged.
That only applies to platforms on which those utilities already run. We are talking about portability here, so that means Windows, and those utils don't run on Windows. So you're left with git bash, which isn't bash and isn't running the same utilities; and WSL, which requires installing an entire operating system.
So I ask again, why does targeting bash offer a better portability story than say the JVM?
I suspect we may have different ideas of the use case here. To me Amber is not a language I would develop an application in. I would use it in the same places I currently write bash.
Given that, my production systems are likely a big target. None of my production systems have a JVM/JRE installed, and installing one just to run shell scripts would be (IMHO) a huge increase in attack surface for little to no gain. It would also bloat the hell out of my container images.
If I'm writing a GUI application or a web server or something, then I would agree JVM is more "portable." But if I just want a script that will run equally well on Ubuntu 18.04 and Fedora 40, and across all production machines regardless of what application stack is there (node.js, ruby, python, etc), and regardless of what version of node or python or ruby is installed, Amber feels highly portable to me.
GNU and BSD tooling differs in small, but sometimes breaking ways. One example off the top of my head is that GNU sed accepts `sed -i`, but BSD requires `sed -i ''`, i.e. an empty string to tell it not to back up the existing file. Or GNU awk having sorting capabilities. Etc.
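A minimal sketch of that sed divergence (notes.txt is just a hypothetical file):

    # GNU sed: in-place edit with no backup file
    sed -i 's/foo/bar/' notes.txt

    # BSD/macOS sed: -i requires a backup suffix; empty string means "no backup"
    sed -i '' 's/foo/bar/' notes.txt

    # A common workaround that tends to run on both: give a suffix, then remove it
    sed -i.bak 's/foo/bar/' notes.txt && rm notes.txt.bak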
I didn't realize we were going so far back. In that case, Perl may be more convenient than Python/Go, and almost certainly a better choice than bash.
Still,
> If you want to write some code (e.g., a launcher, installer, utility code, etc...) that is almost certainly going to run on any computing device created in the last several decades, bash is your language.
Can you give an example of a "several decades" old device for which you'd want/need to write a launcher or installer?
A few days ago, I tried running some code that hasn't been updated in about 5 years. The python launcher has bit-rotted, so now I need to rewrite it. The other 99% of the project compiles fine.
Things like perl (without CPAN) and bash generally take backward compatibility more seriously than python does.
My experience with python (even ignoring the 2 to 3 debacle) is that you can only run code on machines that were set up within +/- six months of when the software was written. That's unacceptable for non-throwaway use cases unless your company is willing to burn engineering time needlessly churning software that's feature complete and otherwise stable.
But whenever people talk about writing Bash, you always have someone advocating that you should write POSIX sh instead if you want maximum portability. To paraphrase Qui-Gon Jinn, "there's always a more portable language."
Underlying all of these discussions is an attempt to reduce issues of portability to 0. It's a good goal, but IMO interpreted languages will by definition never be the solution.
I've started reaching for Go in situations in which I'd usually reach for Bash or Python, and it's a godsend, because I can control for portability at compile time. If you make compiling for various GOOS and GOARCH targets the norm, portability is never an issue again.
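For example, a sketch of what making those targets the norm looks like (mytool is a hypothetical module in the current directory):

    # Cross-compile the same tool for several targets from one machine
    GOOS=linux   GOARCH=amd64 go build -o mytool-linux-amd64 .
    GOOS=linux   GOARCH=arm64 go build -o mytool-linux-arm64 .
    GOOS=darwin  GOARCH=arm64 go build -o mytool-darwin-arm64 .
    GOOS=windows GOARCH=amd64 go build -o mytool-windows-amd64.exe .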
The big difference I see at a skim is that, in classical datalog, facts are only allowed to contain domain attributes, and not value attributes. E.g., you can express boolean facts like Raining(12:00) (it's raining at 12:00), but not Rain(12:00) = 5in (at 12:00, 5 inches of rain had accumulated).
Value attributes make it much easier to express most forms of aggregation (sum, min, max), so you'll find very similar patterns in practical datalog variants e.g., RelationalAI's Rel [1], DBToaster's AGCA [2], etc...
Apart from that, and a syntax that seems to resemble map-style collection programming a bit more than datalog, yeah, this basically looks like datalog.
From a practical standpoint for most database systems, sort of? One might say that there's a functional dependency from the 'time' to the 'precipitation' attribute, and providing that information to the optimizer might affect its decisions... but at the level of data storage and query evaluation runtimes, there's not a huge difference.
From a data modeling and query optimization perspective, however, there's some value in distinguishing attributes uniquely related to identity (e.g., keys, or group-by attributes) and attributes that we're only interested in computing statistics over. This makes it easier to automatically create e.g., data cubes or similar indexes, and many useful statistics can be modeled using a nice mathematical structure like a ring or semiring [1], whose properties (commutativity, associativity, distributivity) are very helpful when optimizing queries.
Classical Datalog, in particular, is entirely based on the former type of attribute; value (dependent) attributes always need to be hacked in, in some way.
potrace worked just fine for me when I needed to digitize some drawings that became a prominent part of a business's graphic design. Maybe a decade and a half ago.
Edit: I see now that Inkscape can handle color by first decomposing the image into separate colors and then tracing each of those separately.
TFA's point is that the model is horrible for creators: a revenue sharing model is only viable as long as the number of creators the revenue is shared among is comparatively small. $5, $10, $20 from a few hundred or thousand viewers is a decent haul compared to a few fractional pennies per view.
It makes sense that creators focus their efforts on cultivating personal relationships with a small, but loyal base. You're not their target demographic.
People say they will pay for news. The translation is that they will be dragged kicking and screaming into paying $100 per year for all-you-can-eat news rather than the thousands it would actually cost.
Exactly. People are willing to pay for news if it’s sensibly priced. It’s not a charity. If it really costs thousands, then the model is broken and they should all be out of a job.
Differing skillsets? Managing people doesn't have to mean more responsibility. Some people are good at doing, others are good at the bureaucracy game and buffering for the doers. Both are critical, and in a good org, both carry different responsibilities, though not necessarily at different scales.
Things are bad enough already, even with pre-LLM technology. "I'm sorry sir, your 8-year old son can't board this plane because the system says he's a terrorist" [1].
You're using a straw example to try to claim a massive point.
There are 340 million people in the US. The US has one of the largest government systems that humanity is likely to ever see. How many similar cases of children being put on the terrorist watch list have there been in the past 20 years? Unless that happens a lot (hint: it's extraordinarily rare, and your example story is from 13 years ago), your premise is pure straw.