> Japan spends ~$5,790 and has the highest life expectancy in the OECD
Is Japan's life expectancy due to its healthcare or its culture? I'm pretty sure Americans would not live to the same age as the Japanese even with Japanese healthcare, because of our low-nutrition, high-sugar diets...
Life expectancy is not a useful health care system comparison because the primary factors that cause divergences between developed countries aren't based in the health system --- they're things like traffic accidents, homicides, drug overdoses, and suicide. Yes, CVD will appear in that list of factors, but it's noisy; despite having structurally the same health care system, states in New England will have Scandinavian CVD outcomes while southern states (some of which actually do a better job than New England at making care available) have developing-nation CVD outcomes.
What if you throw a transformation step into the mix? I.e. "Take this Python library and rewrite it in Rust." Now 0% of the code is directly copied, since Python and Rust share almost no syntactic similarities.
Ok, but what if in the future I could guarantee that my generative model was not trained on the work I want to replicate. Like say X library is the only library in town for some task, but it has a restrictive license. Can I use a model that was guaranteed not trained on X to generate a new library Z that competes with X with a more permissive license? What if someone looks and finds a lot of similarities?
I think there could be a market for "permissive/open models" in the future, where a company specifically makes LLM models that are trained only on a large corpus of public domain or permissively licensed text/code, and you can prove it by downloading the corpus yourself and reproducing the exact same model if desired. Proving that all MIT-licensed code is non-infringing is probably impossible, though; at that point copyright law is meaningless, because everyone would be in violation if you dig deep enough.
As a complete layman, quantum computing seems like it could be like AI. AI for the longest time was a scam. Like, it was clearly improving, but only in marginal increments. The bar was so low though... even the state of the art was garbage - https://en.wikipedia.org/wiki/Tay_(chatbot), for example. Eventually it was starting to seem like narrow applications of CNNs/machine learning would be the future of AI, but that general purpose AI would be garbage forever. It took the attention/transformer breakthrough (and someone to realize how to use it) before we hit the explosive improvement in general purpose AI that we see today. Quantum computing could still be in the "Tay" phase right now.
AI was mostly based on machine learning, right? Please correct me if I'm wrong, but there were very primitive AI/ML examples even decades ago. Thus, I agree with the parent.
Is using virtualization the only good way of taking a 288-core box and splitting it up into multiple parallel workloads? One time I rented a 384-core AMD EPYC bare-metal instance in GCP and I could not for the life of me get parallelized workloads to scale just using bare-metal Linux. I wanted to run a bunch of CPU inference jobs in parallel (with each one getting 16 cores), but the scaling was atrocious: the more parallel jobs you tried to add, the slower all of them ran. When I checked htop the CPU was very underutilized, so my theory was that there was a memory bottleneck happening somewhere with ONNX/torch (something to do with NUMA nodes?). Anyway, I wasn't able to test using Proxmox or VMware on there to split up CPU/memory resources; we decided instead to just buy a bunch of smaller-core-count AMD Ryzen 1Us, which scaled way better with my naive approach.
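For what it's worth, the NUMA theory is testable without virtualization: on Linux you can pin each worker process to cores from a single NUMA node so its memory traffic stays local. A minimal sketch (it assumes cores are numbered contiguously per node, which is one common layout; check `lscpu` or `numactl --hardware` for the real topology on your box):

```python
import os

def numa_core_sets(total_cores: int, numa_nodes: int, cores_per_job: int):
    """Partition cores so each job's cores all come from one NUMA node.

    Assumes cores are numbered contiguously per node (node 0 gets
    0..N-1, node 1 gets N..2N-1, ...); verify against lscpu before
    relying on this on real hardware.
    """
    per_node = total_cores // numa_nodes
    sets = []
    for node in range(numa_nodes):
        base = node * per_node
        # carve each node into as many whole jobs as fit
        for off in range(0, per_node - cores_per_job + 1, cores_per_job):
            sets.append(set(range(base + off, base + off + cores_per_job)))
    return sets

# In each inference worker, pin the process to its slice (Linux only):
# os.sched_setaffinity(0, numa_core_sets(384, 12, 16)[job_index])
```

With 384 cores, 12 nodes, and 16 cores per job, this yields 24 job slots of 16 cores each, none straddling a node boundary. (`numactl --cpunodebind=N --membind=N` does the same thing plus memory binding, without code changes.)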
They are used for VMs because the load is pretty spiky and usually not that memory-heavy. For just running a single app, smaller-core-count but higher-clocked parts are usually more optimal.
>Anyway, I wasn't able to test using proxmox or vmware on there to split up cpu/memory resources; we decided instead to just buy a bunch of smaller-core-count AMD Ryzen 1Us instead, which scaled way better with my naive approach
If that was a single 384-thread (192 cores times 2 for hyperthreading) CPU, you are getting "only" 12 DDR5 channels, so one RAM channel is shared by 16c/32t.
So a plain 16-core desktop Ryzen will have double the memory bandwidth per core.
How did the speed of one or two jobs on the EPYC compare to the Ryzen?
And 384 actual cores or 384 hyperthreading cores?
Inference is so memory bandwidth heavy that my expectations are low. An EPYC getting 12 memory channels instead of 2 only goes so far when it has 24x as many cores.
Could this also be related to Facebook killing messenger.com (i.e. they are no longer running a charity so they need all users to be on the main site now to consume the slop)?
I say do it, if it simplifies the architecture. For example if you are using firestore with a redis cache layer, that's 2 dbs. If you can replace 2 dbs with 1 db (postgres), I think it's worth it. But if you are suggesting using a postgres cache layer in front of firestore instead of redis... to me that's not as clear cut.
As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall).
I'm a big proponent of the go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more.
Now, obviously foregoing libsodium or something for generated code is a bad idea, but 90%+ of npm packages could probably go.
I feel npm gets held to an unreasonable standard. The fact is, tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then you have the bulk of packages that most orgs use.
It is unfair to me that it's always held up as the "problematic registry". When you have a single registry for the most popular and arguably most used language in the world, you're going to see a massive volume of all kinds of packages; it doesn't mean 90% of npm is useless.
FWIW I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time.
I think you are completely oblivious to the problems plaguing the npm ecosystem. When you start a typical frontend project using modern technology, you will introduce hundreds, if not thousands, of small packages. These packages get new security holes daily, are often maintained by single people, are subject to removal and to supply-chain attacks, download random crap from GitHub, etc. Each of them should ideally be approved and monitored for changes, uploaded to the company repo to avoid build problems when it gets taken down, etc.
Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust.
If these packages get security holes daily, they probably cannot "just go" as the parent comment suggested (except in the case of a hostile takeover). If they have significant holes, then they must be significant code. Trivial code can just go, but doesn't have any significant quality issues either.
I'm not, in the least. I'm aware of the supply chain issues and CVEs etc.
One thing I want to separate here: the number of packages is not a quality metric. For instance, a core Vue project may on the surface have many different sub-dependencies; however, those dependencies are sub-packages of the main packages.
I realize projects can go overboard with dependencies, but it's not in and of itself an issue. Like anything, it's all about trade-offs and setting good practices.
It's not like the Java ecosystem has been immune either. The `Log4Shell` vulnerability was a huge mess.
My point isn't to bash the Java ecosystem, but nothing is immune to these issues, and citing incident frequency without context as a reason to spread FUD about an ecosystem is a fallacy.
It's a matter of community culture. In the Node.js ecosystem, all those tiny packages are actually getting widely used, to the extent that it's hard to draw a line between them and well-established packages (esp. when the latter start taking them as dependencies!). Python has been npm'ified for a while now but people are still generally more suspicious of packages like that.
I am utterly confused at how you think rewriting entire libraries results in fewer security holes than battle-hardened libraries that thousands of other people use.
- Generating your own left pad means you don't have to pull in an external left pad
- Which in turn means left pad doesn't show up on your SBOM
- Which in turn means CVEs won't show up for left pad when you run your SBOM through SCA
- Which means you don't have to do any CVE triage, risk analysis, and mitigation (patching) for left pad
- It also means you don't have to do SOUP testing for left pad
Now imagine you've done that for a dozen libraries that you are only using a small piece of. That's a ton of regulatory and cybersecurity work you've saved yourself. I never claimed generating code makes your software more secure, I claimed it can reduce the regulatory and cybersecurity burden on your SDLC, which it does as demonstrated above. Taken to the extreme (0 external dependencies), your regulatory burden for SOUP and SCA goes to zero.
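To make the left-pad case concrete, the entire "dependency" you'd be replacing might be something like this (a hypothetical sketch; the signature is illustrative, not any real package's API):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad s on the left with fill until it is at least width long."""
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    # Prepend enough fill characters to reach the target width;
    # strings already at or beyond width are returned unchanged.
    return fill * max(0, width - len(s)) + s
```

A dozen lines you own and test yourself versus one more SBOM entry to triage forever: for code this small, the trade is obvious.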
Seems like a good reason you should need to "pair" the RF remote to the device, similar to Bluetooth. Otherwise a bad actor in an apartment complex could get a "universal" RF remote and randomly try stuff until they can control your devices.
Honestly I could see arguments going both ways. Pairing prevents unauthorized access, but at the same time, pairing means you need to be able to pair without having a paired device on-hand.
For a passive read-only device (like most satellite/cable receivers 20 years ago), it was probably more important to allow customers to easily replace their lost remotes than it was to prevent pranksters (who could often be dissuaded by more physical means).