Hacker News | denkquer's comments

this is just awful. fucked up country


This is incorrect. Old company logos don't lose their trademark. This logo's shape dates back to 1977.


It is correct; you are moving the goalpost.

If they were using the 1977 logo, including the color scheme, it would be impossible to argue that the similarity could lead to confusion between the two, and the point of the article would be moot.


for example? I don't see how it would be useful at all


Android for Work already lets you isolate some apps in their own little container with separate data, but this would let you keep it safer by never having that data even physically present on the phone.

Actually generalize that: Depending on your exact threat models, this is a great way to have data that can't be lost or stolen just because your device is.

For a large enough app it's probably less data intensive, not to mention much faster, to run the app in a cloud data center where it has effectively unlimited bandwidth, and just stream a video of it over a smaller network connection to the phone.

Creating virtual phones in the cloud probably makes it easy to spin up a lot of them, which allows you to do things like run untrusted apps in complete isolation, without access to any of your data or other applications.


impossible to prevent


Not sure. Try screenshotting a Netflix movie.


That’s DRM’d video, the one exception.


make sure you set your languages of choice in the settings of the app.


photopea.com


I have nothing but praise for Photopea. Having used it in the past, it's great for things that I might otherwise use macOS Preview (or similar utilities) for.

However, Photopea is at least a couple of orders of magnitude simpler than the Adobe apps I mentioned. It'd be interesting to compare it to Photoshop 1.0 (1990), though!


this isn't a bad thing


I'm not saying it's a bad thing; I like boring and reliable elections with more than two choices. But I suspect this also means there is simply not much interest in a 538-like site in Germany.


I recall Ryan regretting Node's full-permission approach. Not every script should be able to access the fs, for security's sake.


Deno's permissions are per-process, though. That's a big step forward for sure, but it still leaves the door wide open for abuse by the dependencies of any serious project.


It does allow for some coarse grained improvements for certain services, though.

You might write an SMTP daemon that only passes email on to another service via LMTP - and thus runs without the ability to write to (or read from, depending on configuration) the file system, for example.

Yes, you can accomplish this with containers, too - but it's nice to get as much out of process isolation as possible, not to mention it is likely easier to test and debug the closer to the surface of the runtime the limitations are implemented.


How is that different from Node.js which also runs in a single process? Or does Deno create per-request processes (or v8 "isolates" etc) like CGI does?


I think the point was that it's not different from Node.js - and thus not much of a benefit.

If it was more along the lines of "I want to use this array helper library, but it shouldn't have any permissions" then it would be a lot more useful, but right now if your Deno app needs any file or network access, then all of your dependencies get access too.


Yes, this was my point. It's only trivially better in practice since you'll have to open all the doors nearly all the time.

We have so many dependencies that really only need to work in-memory and have zero IO needs. That part is not solved in Deno at all.


there is no way on earth pypi traffic is higher than wikipedia's.


https://dtdg.co/pypi

pypi servers receive 1 million requests per minute. That's likely more than wikipedia.


https://pageviews.toolforge.org/siteviews/?platform=all-acce...

en.wikipedia.org has around 260 million page views per day. Each page view consists of multiple HTTP requests (picking a few random pages shows twenty or more per page) - so I'm pretty sure that wikipedia has higher traffic figures.
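A quick back-of-the-envelope check on the figures in this thread (the ~20 requests per page view is the commenter's own spot check, not an official number):

```python
# Wikipedia: ~260 million page views/day, ~20 HTTP requests per page view.
wikipedia_requests_per_day = 260_000_000 * 20   # ~5.2 billion/day

# PyPI: ~1 million requests/minute, converted to a daily figure.
pypi_requests_per_day = 1_000_000 * 60 * 24     # ~1.44 billion/day

print(wikipedia_requests_per_day / pypi_requests_per_day)  # ~3.6x
```

Under those assumptions, Wikipedia serves a few times more raw HTTP requests than PyPI.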


Both Wikipedia and PyPI requests are >99% read-only and cacheable, and neither has strict global consistency guarantees. That makes them very cheap and technically easy to serve through a free global CDN like Cloudflare.


You make it sound like Cloudflare would be OK serving all Wikipedia or pypi traffic for free. Reality would be a little different.



Fastly currently serves all of PyPI's CDN traffic for free. Cloudflare probably would as well.


It seems English Wikipedia gets 255m per day. This doesn't include all of their image hosting and other side services, though. I think those are in the many hundreds of millions, if not billions. I'd say they are at least comparable.

It's kinda moot though. I wouldn't begrudge pypi asking for money either.


Back-of-the-envelope math says Wikipedia serves at least twice as much traffic.

According to Wikipedia's public statistics[0], they receive ~3 billion media requests per day, which is about 2 million requests per minute.

https://stats.wikimedia.org/#/all-projects/content/total-med...
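Sanity-checking that conversion (3 billion per day, as stated above, against PyPI's claimed ~1 million per minute):

```python
media_requests_per_day = 3_000_000_000
minutes_per_day = 24 * 60
per_minute = media_requests_per_day / minutes_per_day

print(round(per_minute))  # ~2.08 million requests/minute, roughly double PyPI's rate
```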


Imaginary numbers are 1-d; complex numbers are 2-d.


And real numbers are ∞-d over the rationals.


There are 2-d values that aren't imaginary numbers.
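For what it's worth, Python's built-in complex type makes the dimension counts above concrete: a complex number carries two real coordinates, while a purely imaginary one varies along just one of them.

```python
z = 3 + 4j
assert (z.real, z.imag) == (3.0, 4.0)   # complex: two real coordinates

w = 4j                                   # purely imaginary: one free coordinate
assert (w.real, w.imag) == (0.0, 4.0)

# A 2-d value that is neither real nor purely imaginary:
v = 1 + 1j
assert v.real != 0 and v.imag != 0
```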

