If they were using the 1977 logo, including the color scheme, it would be impossible to argue that the similarity could lead to confusion between the two, and the point of the article would be moot.
Android for Work already lets you isolate some apps in their own little container with separate data, but this would go further by never having that data physically present on the phone at all.
To generalize: depending on your exact threat model, this is a great way to hold data that can't be lost or stolen just because your device is.
For a large enough app, it's probably less data-intensive, not to mention much faster, to run the app in a cloud data center where it has effectively unlimited bandwidth, and just stream a video of it to the phone over a smaller network connection.
Virtual phones in the cloud are probably easy to spin up in large numbers, which lets you do things like run untrusted apps in complete isolation, without access to any of your data or other applications.
I have nothing but praise for Photopea. Having used it in the past, it's great for things that I might otherwise use macOS Preview (or similar utilities) for.
However, Photopea is at least a couple of orders of magnitude simpler than the Adobe apps I mentioned. It'd be interesting to compare it to Photoshop 1.0 (1990), though!
I'm not saying it's a bad thing; I like boring and reliable elections with more than two choices. But I suspect this also means there's simply not much interest in a 538-style site in Germany.
Deno's permissions are per-process, though. That's a big jump for sure, but it still leaves the door wide open for abuse by the dependencies of any serious project.
It does allow for some coarse-grained improvements for certain services, though.
You might write an SMTP daemon that only delivers mail onward to another service via LMTP, and thus runs without the ability to write to (or, depending on configuration, read from) the file system, for example.
Yes, you can accomplish this with containers too, but it's nice to get as much out of process isolation as possible. Not to mention, the closer to the surface of the runtime the limitations are implemented, the easier they are likely to be to test and debug.
How is that different from Node.js, which also runs in a single process? Or does Deno create per-request processes (or V8 "isolates", etc.) like CGI does?
I think the point was that it's not different from Node.js in that respect, and thus not much of a benefit.
If it were more along the lines of "I want to use this array-helper library, but it shouldn't have any permissions," it would be a lot more useful. Right now, if your Deno app needs any file or network access, all of your dependencies get that access too.
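A rough sketch of what that per-process granting looks like in practice (the `--allow-*` flags are real Deno CLI flags; the script name and LMTP host are hypothetical examples for the SMTP daemon mentioned above):

```shell
# Grant the process network access to a single host:port only;
# no file-system, environment, or subprocess permissions are given.
# "smtpd.ts" and "lmtp.internal" are made-up names for illustration.
deno run --allow-net=lmtp.internal:24 smtpd.ts

# The catch: the grant is process-wide. Every transitive dependency
# imported by smtpd.ts can also open connections to lmtp.internal:24;
# there is no built-in way to say "this one library gets nothing".
```

So the flags are useful for scoping a whole service, but they don't help against a malicious dependency inside that service.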
en.wikipedia.org has around 260 million page views per day, and each page view consists of multiple HTTP requests (picking a few random pages shows twenty or more per page), so I'm pretty sure Wikipedia has the higher traffic figures.
Both Wikipedia and PyPI traffic are >99% read-only and cacheable, and neither has strict global-consistency requirements. That makes them very cheap and technically easy to serve through a free global CDN like Cloudflare.
It seems English Wikipedia gets 255M per day. That doesn't include all of their image hosting and other side services, though; I think those are in the many hundreds of millions, if not billions. I'd say they're at least comparable.
It's kinda moot, though. I wouldn't begrudge PyPI asking for money either.