It has nothing to do with interpreters or JIT, and it has nothing to do with npm specifically. All package managers have the insane security model of "arbitrary code execution with no constraints".
It just so happens that all of those languages share the worst design points, such as the need for a package manager at all and the classic "eval and equivalents run arbitrary code".
>All package managers have the insane security model of "arbitrary code execution with no constraints".
Not all of them, just the most popular ones for this highly sophisticated, well-thought-out bunch of languages.
I tend to agree, but I think npm's post-install hook is a degree worse. It triggers during install, and silently, because npm didn't like people using the feature to ask for donations, which is worse than requiring you to load and run the package's code.
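For anyone unfamiliar, a minimal sketch of what that hook looks like in a package.json (the package and script names here are invented): installing a dependency that declares this runs the script automatically, with its output largely hidden by default.

    {
      "name": "innocuous-lib",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node ./setup.js"
      }
    }

Nothing in that manifest tells you what setup.js actually does.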
> The kraken known as Kubernetes might never have been needed if Plan9's features were adopted.
Which Plan9 features, exactly, give me a unified API layer to handle workload scheduling (including fault tolerance), flat networking across a cluster, or service discovery?
Containers are an implementation detail and not what Kubernetes is fundamentally about.
Let's be clear about one thing: Kubernetes is an operating system on top of Linux that exists solely because operating systems don't already provide what it needs. I'm saying that operating systems should natively provide scalable ways to launch applications securely across many physical machines. Plan9 offers that, and has for 30 flippin' years.
Plan9 has those things out of the box if you configure them: fault tolerance, flat networking across a cluster, and service discovery. And if I'm wrong about that (my knowledge of both Plan9 and Kubernetes is incomplete), it would be almost trivial to implement them given what Plan9 provides out of the box. In fact, I think the built-in network database can do all of these things if you put the relevant data in and use it. It was designed for exactly these things.
Plan9 is designed to be deployed as lots of physical systems all working cooperatively: user systems at desks and servers in a server room alike. A program that lives on computer A can run using the CPU of computer B and the networking of computer C, natively. It can look up the address of any service via the network database, provided that info is put into the database when the service is started. Note that I am not talking about DNS; that is separate from the network database.
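For context, Plan9's network database is just files of attribute=value tuples (see ndb(6)). A sketch of what entries for a small cluster might look like, with machine names and addresses invented for illustration:

    # /lib/ndb/local (illustrative only)
    ipnet=lab ip=10.0.0.0 ipmask=255.255.255.0
        fs=fs1.lab.example
        auth=auth1.lab.example
        cpu=cpu1.lab.example
        dns=10.0.0.1
    sys=cpu1 dom=cpu1.lab.example ip=10.0.0.10
    sys=fs1 dom=fs1.lab.example ip=10.0.0.11

The connection server resolves network names against entries like these, which is the kind of built-in lookup I mean.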
Plan9 is different and it is superior in many ways.
Unix was built with the assumption that end users had terminals and that computing was centralized at the server. That assumption is no longer even remotely true. Yet we still cling to it as if it is ideal. It is not.
Plan9 was built with the assumption that everyone had capable computers at their desks and that people seated together often worked on things together. Closer to where we are today, but not quite. Today we have near-supercomputers at our desks in the form of development machines and servers of all descriptions in the server room, both more powerful and less powerful than our local machines.
If Plan9 were designed today it would be different, but the core features would remain.
And if you look at the source for Plan9, you'll see that they got a hell of a lot done with very few lines of code. They were very, very "pro-simplicity". Go read it and see how they did it. Then count the lines of code in Kubernetes, see which is bigger and more complex, and ponder that for a bit. It would have been easier to write an operating system to handle those workloads natively than it was to write Kubernetes.
I read your comment as comparing to Node; my bad.
With regard to Rust, crates are packages that may include opaque binaries (e.g. serde_derive), and the stdlib is weak, so imports of thousands of lines of code are basically necessary for otherwise fundamental features like async.
It's probably easier to add dependencies in Go, but in the end people and projects mostly don't.
I had a fairly fun time using Auth0 a few years back. The ability to run arbitrary code hooks at various points let us do pretty interesting stuff in a managed way without resorting to writing or self-hosting something entirely flexible ourselves.
The fact that they have a "stay signed in" checkbox that doesn't keep me signed in tells me all I need to know about these jokers. I love going through a bloated login process multiple times a day, apparently.
Microsoft/EntraID does this too. The famous "Keep me signed in" and "Don't show this message again" buttons that don't do what they say they do, ever.
Maybe if enterprise sales decisions weren't made based on checklists and which account exec took them out on the best golf trip, we'd have better products.
Security and safety are all over their marketing, but I have yet to hear anything about them that doesn't indicate either bumbling incompetence or gross negligence.
It's a fair question. I found them way better to implement SSO in my small startup than OneLogin.
Using Auth0 in apps, I find their documentation bafflingly difficult to read. It's not like being thrown into the deep end unexpectedly and told to swim; it's like being injected at the bottom of the deep end. God help the poor non-native English speakers on my team who have to slog through it.
That's why I'm putting emphasis on it, because to Go it is.
And to languages that actually have centralized package repositories, it isn't. There is a difference between code and packages, and Go simply does not have the latter in the traditional sense: what Go calls a package is a collection of source files in the same directory that are compiled together, and a module is a collection of packages (again, code) that are released, versioned, and distributed together. Modules may be downloaded directly from version control repositories or via proxy servers.
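To make the distinction concrete, here is a minimal go.mod for a hypothetical module (paths and versions are only examples); note that the module path doubles as the location the code is fetched from, rather than a name registered in some central index:

    // go.mod for a hypothetical module; paths and versions are examples
    module github.com/example/myapp

    go 1.22

    require github.com/google/uuid v1.6.0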
To the other languages mentioned above, packages may include binaries, metadata, and special script hooks. There is a package manager like pip, cargo, or npm, and if you want to install a package you won't have to specify a URL, because there is a canonical domain to go to.
Go just knows code, and it'll fetch it with git, hg, or even svn. And if you want to claim that lots of open-source code being on GitHub makes it special, then
> GitHub is every single programming language's centralized package repository
and
> Someone at Microsoft with root access could compromise every user of every single programming language
I think you're being silly to be so insistent about this. 95% of Go packages are hosted on Github, a centralized hosting platform. The fact that they install via the git protocol (or do they? do they just use https to check out?) is immaterial.
95% of Python packages are installed from PyPI, but just as Go can also install from non-GitHub sources, Python supports installing from non-PyPI indexes[0] or even directly from a Git repository[1].
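Concretely, something like the following (the index URL and repository are placeholders):

    # install from an alternative index instead of the default PyPI
    pip install --index-url https://pypi.example.org/simple/ somepackage

    # install directly from a Git repository, no index involved
    pip install git+https://github.com/example/somepackage.git@v1.2.3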
> what Go calls a package is a collection of source files in the same directory
What is it that you imagine Python or NPM packages consist of? Hint: a Python .whl file is just a folder structure in a zip archive (Python also supports source distributions, directly analogous to Go).
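Easy enough to check for yourself (the wheel filename below is just an example):

    # a wheel is a plain zip archive; list its contents with the stdlib zipfile CLI
    python -m zipfile -l somepackage-1.0.0-py3-none-any.whl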
> 95% of Go packages[=code, the author] are hosted on Github
So "GitHub is every single programming language's centralized package repository, because lots of code is hosted there" ?
> 95% of Python packages are installed from PyPI, but just as Go can also install from non-GitHub sources, Python supports installing from non-PyPI indexes[0] or even directly from a Git repository[1].
And yet there is a clear difference between source distributions and pip/npm/rubygem/cargo packages - and between tooling/ecosystems that ONLY support the former and those that MAY use either and unfortunately mostly use the latter.
> What is it that you imagine Python or NPM packages consist of?
Something like a script that runs as part of the package and downloads a tarball, modifies package.json, injects a local bundle.js, and runs npm publish (see this post). Usually also hosted at the default, centralized, authoritative source run by the maintainers of the package-management tool.
But I'm repeating myself.
> (or do they? do they just use https to check out?)
Maybe try it out or read the docs first.
I'm closing with this:
> NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories.
is either wrong or disingenuously misleading, depending on how you slice your definitions: take them loosely enough and it applies to every single thing, which means it says nothing. It does not hold any water; that is my entire argument.
k, let me know how your CI pipeline fares the next time there's a Github outage and we can revisit this discussion of Go's fantastic uniquely decentralized dependency management.
You really ought to research a topic before arguing.
For the average user, both GitHub and the default $GOPROXY would have to be down. For me, my CI runs where my code (and the code I've cloned) lives: self-hosted GitLab.
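For reference, the default setting already falls back to a direct VCS fetch if the proxy is unavailable, and it can be pointed at a self-hosted proxy (the internal URL below is a placeholder):

    # default: try the public module proxy first, then fall back to fetching from the VCS directly
    go env GOPROXY
    # https://proxy.golang.org,direct

    # point it at a self-hosted proxy instead
    go env -w GOPROXY=https://goproxy.internal.example.com,direct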
No. For the vast majority of games, they're either free to play, or less than 20 bucks.
In addition: people pay over $100 per month for these cheats, plus the initial hardware investment. The 60 bucks license doesn't matter. You just hop to another server.
> The 60 bucks license doesn't matter. You just hop to another server.
That's the entire point, and why community-moderated servers work. Why would a hacker keep coming back to a well-moderated private server, where they only get 5-10 minutes of play time before their account/license gets banned, when they can instead go to another, unmoderated server and not worry about it?
Because the unmoderated servers are full of other cheaters.
Some companies have tried a strategy of quietly shunting cheaters off to cheater ghettoes but the cheaters figure it out pretty quickly. With some limited exceptions, the cheating we're talking about is motivated by a desire to gain an advantage over legitimate, non-cheating players.
The problem with "you don't need to outrun the bear, only to outrun your friend" is that either you or your friend are going to get eaten. All other things being equal, it would be preferable to have a strategy where no one gets eaten.
I mean, no... there is always a large portion of players that I would never want to play with in any meaningful way: racists, screechers, try-hards, toxic players, etc.
I'm perfectly content to feed those people to the bear so my friends and I can continue to have a fun and mostly hacker-free experience.
No, the question is "why do we even need invasive anti-cheating mechanisms" with the proposed alternative being to simply ban people (of course, bans are still used even with anti-cheat).
Yes, banning a license key (assuming you have an unforgeable proof of license key ownership) is more potent than banning an IP address or email address. There are cross-game mechanisms like Steam VAC bans and Xbox Live account bans which are pretty potent too.
But they can still be evaded. Besides many cheaters simply having the money to buy new things, they can also get them from sites that trade in stolen license keys and accounts.
So, not warning shots but targeted fire?
And is there any evidence of those combatants, amongst all the body cams and drone footage?
To be clear, I mean actual combatants, not the IDF definition of "any man of fighting age" (https://en.m.wikipedia.org/wiki/Killing_of_Alon_Shamriz,_Yot...)
Libya