When set to true, the browser will not abort the associated request if the page that initiated it is unloaded before the request is complete. This enables a fetch() request to send analytics at the end of a session even if the user navigates away from or closes the page.
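A minimal sketch of what that looks like in practice. The `/analytics` endpoint and payload shape are made up for illustration; note that browsers cap the total in-flight body size for keepalive requests (commonly 64 KB).

```javascript
// Hypothetical helper: build the fetch() options for a final analytics ping.
// keepalive: true lets the request outlive the page that started it.
function sendFinalPing(payload) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
    keepalive: true,
  };
}

// In a browser you would fire it on unload, e.g.:
// addEventListener("pagehide", () => fetch("/analytics", sendFinalPing({ event: "exit" })));
```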
Am I reading this wrong or does this almost open up any server bound to localhost to the outside?
I think proxy_pass will forward traffic even when the root and try_files directives fail because the junction/symlink doesn't exist? And "listen 80" binds on all interfaces, doesn't it, not just on localhost?
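For reference, a sketch of the bind-address difference being asked about (illustrative, not the config under discussion):

```nginx
# binds 0.0.0.0:80 — reachable from every interface, including the outside
listen 80;

# binds only the loopback interface — unreachable from other hosts
listen 127.0.0.1:80;
```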
Is this clever? Sure. But this is also the thing you forget about in 6 months, and then when you install any app that has a localhost web management interface (like Syncthing), you've accidentally exposed your entire computer, including your SSH keys, to the internet.
Nothing prevents you from adding an IP whitelist and/or basic auth to the same configuration. That's what I do in all my nginx configurations, to be extra careful so nothing slips through by accident.
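A sketch of what that combination looks like in an nginx location block (addresses, paths, and the upstream are illustrative, not the poster's actual config):

```nginx
location / {
    # allowlist: only this range gets in, everyone else is denied
    allow 203.0.113.0/24;
    deny  all;

    # basic auth on top of that (user file created with htpasswd)
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_pass http://127.0.0.1:8080;
}
```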
I have something similar running with nginx myself, for the purpose of getting access to my internal services from outside. The main idea is that the internal services are not on the same machine this nginx is running on, so it passes traffic along to the needed server on the internal network. It goes like this:
Basically, any regex-matched subdomain is extracted, resolved as $service.internal, and proxy-passed to it. For this to work, of course, any new service has to be registered in the internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).
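A sketch of the setup described above (domain, DNS server address, and the `.internal` suffix are assumptions, not the poster's actual config). Note that proxy_pass with a variable requires a resolver directive, since nginx can't resolve the name at config-load time:

```nginx
server {
    listen 443 ssl;
    # capture the subdomain into $service
    server_name ~^(?<service>[a-z0-9-]+)\.example\.com$;

    location / {
        resolver 10.0.0.2;                    # internal DNS server
        proxy_pass http://$service.internal;  # e.g. git.example.com -> git.internal
    }
}
```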
That's why I switched to Caddy for most of my needs. I create one Caddy server template, and then instantiate it as a new host with one line per server.
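A sketch of that pattern, assuming a recent Caddy 2 (the snippet/import syntax with `{args[0]}`; older releases used `{args.0}`). Hostnames and backend addresses are illustrative:

```Caddyfile
# reusable template: one snippet, imported per host
(internal) {
	reverse_proxy {args[0]}
}

git.example.com  { import internal 10.0.0.5:3000 }
wiki.example.com { import internal 10.0.0.6:8080 }
```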
Serious question: Do you really think that Cloudflare is trying to keep these kinds of thing private? If so, I'd suggest that's not a reasonable expectation.
Related question (not rhetorical). If you do DNS for subdomains yourself (and just use Cloudflare to point dns.example.com at your box), will the subdomain queries leak and show up in aggregate datasets? What I'm asking is whether query recursion is always handled locally, or whether any of the reasonably common software stacks resolve it remotely.
If you just use Cloudflare as a registrar, then they can't see what resolution happens on your servers.
If you delegate a subdomain through Cloudflare to your own DNS servers then, from what I remember from the animal book, the recursive server should ask Cloudflare for the address of the machine to which the delegation has been made (yours). While any further resolutions would be answered by your machine, Cloudflare would at the very least know of every query to that subdomain.
If you delegate a subdomain and have subdomains under that subdomain, then Cloudflare would only see resolutions to that subdomain and not to the sub-subdomains.
In other words, for most things, they'd have full insight.
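Concretely, the delegation in the parent zone would look something like this (names and addresses are illustrative):

```
; in the example.com zone hosted at Cloudflare
home.example.com.      IN NS  ns1.home.example.com.
ns1.home.example.com.  IN A   203.0.113.10   ; glue record: your own DNS server
```

Cloudflare answers the NS lookup for home.example.com, so it sees those queries; queries for names under home.example.com then go to your server directly.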
As well as assuming Cloudflare sells DNS lists, it's probably safe to assume the operators of public resolvers like 8.8.8.8, 9.9.9.9 and 1.1.1.1 (that is Google, Quad9 and Cloudflare again) are looking at their logs and either selling them or using them internally.
Storing the log files (or IP addresses in general) is not a problem IF you're using them only with a legitimate interest basis.
For instance, you can use this stored IP address to help identify whether your user has had their account breached, and prompt for extra verification before letting them log in. You can also do a full browser fingerprint for this purpose; this is all covered under the legitimate interest basis.
However, once you use any of this data to market to the user then you are in breach of the GDPR as you did not have a consent basis for it. The storage was never a problem, it's the use of it that becomes a problem.
Depends on the product; payments products generally use fingerprinting and present extra prompts if you're using an unknown device. That is kind of one of the main problems of the GDPR, though: there are nuances, and it's usually not black and white what can be done without specialised legal counsel (and sometimes, even then...)
Sounds like there could be an opportunity here for a GDPR-noncompliant analytics product. Personally, my customers are in the United States, and I don't want ambiguity in my analytics because of lawyers who reside outside my jurisdiction.
Technically correct, but arguable... There are lots of UK and EU-based companies that blatantly breach the GDPR and get away with it as the regulatory bodies don't have the resources to chase after every breach at home, let alone abroad.
Unless you are a huge company or have a significant amount of customers in the UK/EU it's probably okay to ignore the GDPR.
Most streaming sites break a video into many small fragments, which are listed in an m3u8 playlist file. I have a script to download the fragments one by one using curl. To merge the video fragments back into one file, I do the following.
Merge video files with ffmpeg
- Make a file listing all the videos in sequence. E.g.
file 'video01.ts'
file 'video02.ts'
file 'video03.ts'
...
- Or generate the file list for the files in the current directory (Windows cmd):
(for %i in (*.ts) do @echo file '%i') > filelist.txt
- ffmpeg command to combine videos
ffmpeg -f concat -safe 0 -i filelist.txt -c copy output.mp4
For anyone needing this: youtube-dl (plus ffmpeg, if you need post-download conversion) can do this for you, if that's any easier. Point it at the index file and let it do its thing.
I totally agree with you; most arguments are obsolete at best, and ignorant at worst.
> no threads
Worker threads have been a thing since Node v10 (stable since v12).
Also, I like the single-threaded concurrency; it makes state management easier (since there are no data races).
> node does not have a concurrency and parallelism story
You have sockets, http(s) (even http2), multiprocessing, readline, streams, timers, promises, JSON, etc...
They want String.capitalize()? They would complain about the implementation being bloated because it handles edge cases like other alphabets, etc...
> async makes your program more difficult to reason about
That may have been true in the days of callback hell, before async/await. But now it's just nonsense.
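For illustration, the kind of flow that used to require nested callbacks, flattened with async/await. `fetchUser` and `fetchPosts` are hypothetical stand-ins for any promise-returning calls:

```javascript
// Two dependent async steps, reading top to bottom like synchronous code.
async function loadProfile(fetchUser, fetchPosts) {
  const user = await fetchUser();          // step 1 — no nesting
  const posts = await fetchPosts(user.id); // step 2 depends on step 1
  return { name: user.name, posts };
}
```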
> the ecosystem is an absolute dumpster fire
Ironically, for an ecosystem based on the philosophy of "Don't reinvent the wheel", there are a lot of packages that do the same thing.
There is the leftpad fiasco, the colorette/nanocolors "scandal", the is-odd/is-even packages, ...
But as I said in this thread, this is not Node's fault, nor npm's fault. This is the developer's fault.
Cargo (rust), Mix (elixir), pip (python), they could all serve such an ecosystem.
It's up to developers to be careful about their dependency tree. For example, if I need VueJS, which depends on a specific library, I'll avoid installing another library that does the same thing.