vince14's comments | Hacker News

Their fetch call is missing `keepalive: true`.

    When set to true, the browser will not abort the associated request if the page that initiated it is unloaded before the request is complete. This enables a fetch() request to send analytics at the end of a session even if the user navigates away from or closes the page.
https://developer.mozilla.org/en-US/docs/Web/API/RequestInit...
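
A minimal sketch of what such a call could look like (the /analytics endpoint and the payload are made up for illustration):

  // Hypothetical analytics beacon; keepalive lets it outlive the page unload.
  // Note: keepalive request bodies are capped at roughly 64 KB.
  fetch('/analytics', {
    method: 'POST',
    keepalive: true,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ event: 'session_end', ts: Date.now() }),
  });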


Because projects like these didn't exist back then, I got creative with nginx so that I don't need any config changes to serve new projects:

  server {
    listen 80;
    server_name ~^(?<sub>\w+)(\.|-)(?<port>\d+).*; # projectx-20201-127-0-0-1.nip.io
    root sites/$sub/public_html;
    try_files $uri @backend;
    location @backend {
      proxy_pass http://127.0.0.1:$port;
      access_log logs/$sub.access;
    }
  }
Configuration is done via the domain name, e.g. projectx-20205-127-0-0-1.nip.io, which specifies the directory and port.

All you need to do is create a junction (mklink /J domain folder_path). This maps the domain to a folder.


Am I reading this wrong, or does this open up almost any server bound to localhost to the outside?

I think proxy_pass will forward traffic even when the root and try_files directives fail because the junction/symlink doesn't exist? And "listen 80" binds on all interfaces, doesn't it, not just on localhost?

Is this clever? Sure. But this is also the thing you forget about in 6 months, and then when you install any app that has a localhost web management interface (like Syncthing) you've accidentally exposed your entire computer, including your SSH keys, to the internet.


Nothing is preventing you from adding an IP whitelist and/or basic auth to the same configuration. That is what I do in all my nginx configurations to be extra careful, so nothing slips by accident.


Will just any request even pass the host matching?


I got something similar running with nginx myself, with the purpose of getting access to my internal services from outside. The main idea here is that the internal services are not on the same machine this nginx is running on, so it passes requests on to the needed server on the internal network. It goes like this:

  server_name ~^(?<service>(?:lubelogger|wiki|kibana|zabbix|mail|grafana|git|books|zm))\.domain\.example$;
  location / {
        resolver 127.0.0.1;
        include proxy.conf;
        proxy_set_header Authorization "";
        proxy_set_header Host $service.internal;
        proxy_set_header Origin http://$service.internal;
        proxy_redirect http://$proxy_host/ /;
        proxy_pass http://$service.internal;
  }
Basically, any regex-matched subdomain is extracted, resolved as $service.internal, and proxied to. For this to work, of course, any new service has to be registered in the internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).


That's why I switched to Caddy for most of my needs. I create one Caddy server template, and then instantiate it as a new host with one line per server.


I'm having the same issue.

https://securitytrails.com/ also had my "secret" staging subdomain.

I made a catch-all certificate, so the subdomain didn't show up in CT logs.

It's still a secret to me how my subdomain ended up in their database.


They could be purchasing DNS query logs from ISPs.


Serious question: Do you really think that Cloudflare is trying to keep these kinds of thing private? If so, I'd suggest that's not a reasonable expectation.


Related question (not rhetorical). If you do DNS for subdomains yourself (and just use Cloudflare to point dns.example.com at your box), will the subdomain queries leak and show up in aggregate datasets? What I'm asking is whether query recursion is always handled locally, or if any of the reasonably common software stacks resolve it remotely.


If you just use Cloudflare as a registrar, then they can't see what resolution happens on your servers.

If you delegate a subdomain through Cloudflare to your own DNS servers, from what I remember from the animal book, the recursive server should ask Cloudflare for the address of the machine to which the delegation has been made (yours), and while any further resolutions would be answered by your machine, Cloudflare would at the very least know of every query to that subdomain.

If you delegate a subdomain and have subdomains under that subdomain, then Cloudflare would only see resolutions to that subdomain and not to the sub-subdomains.

In other words, for most things, they'd have full insight.


As well as assuming Cloudflare sells DNS lists, it's probably safe to assume that the operators of public resolvers like 8.8.8.8, 9.9.9.9 and 1.1.1.1 (that is, Google, Quad9 and Cloudflare again) are looking at their logs and either selling them or using them internally.


Maybe your server responded to a plain IP-addressed request with the real hostname...


The Host header is a request header, not a response one, isn't it?


He said he used a wildcard cert though. So what part of the response would contain the subdomain in that case?


Fire-and-forget?


https://gdpr-info.eu/recitals/no-49/

> Network and Information Security as Overriding Legitimate Interest

> stopping ‘denial of service’ attacks

Storing logs with IPs is no problem at all.


Storing the log files (or IP addresses in general) is not a problem IF you're using them only under a legitimate-interest basis.

For instance, you can use this stored IP address to help identify whether your user's account has been breached, and prompt for extra verification before letting them log in. You can also do a full browser fingerprint for this purpose; this is all covered under the legitimate-interest basis.

However, once you use any of this data to market to the user, you are in breach of the GDPR, as you did not have a consent basis for it. The storage was never a problem; it's the use of it that becomes a problem.


You're mostly right, but legitimate interest also requires balancing. Fingerprinting may be considered too intrusive if logs are enough.


Depends on the product; payments products generally use fingerprinting and present extra prompts if you're using an unknown device. That is kind of one of the main problems of the GDPR, though: there are nuances, and it's usually not black and white what can be done without specialised legal counsel (and sometimes, even then...).


Sounds like there could be an opportunity here for a GDPR-noncompliant analytics product. Personally, my customers are in the United States, and I don't want ambiguity in my analytics because of lawyers who reside outside of my jurisdiction.


If your customers are of a European nationality, you will need to comply as well.


Technically correct, but arguable... There are lots of UK and EU-based companies that blatantly breach the GDPR and get away with it as the regulatory bodies don't have the resources to chase after every breach at home, let alone abroad.

Unless you are a huge company or have a significant amount of customers in the UK/EU it's probably okay to ignore the GDPR.


Your interpretation is incorrect.

You have the right to log IP addresses only if they are used for the two purposes you listed; otherwise you will need explicit consent.


Creating clips from TV recordings:

    ffmpeg -ss 01:59:00.000 -i "interlaced.ts" -ss 00:00:12.000 -t 26 -max_muxing_queue_size 1024 -c:a libopus -b:a 96k -c:v libx264 -crf 20 -vf "yadif=1" -profile:v baseline -level 3.0 -pix_fmt yuv420p -movflags +faststart -y clip.mp4
Make clip compatible with WhatsApp:

    ffmpeg -i in.mp4 -map 0:v:0 -map 0:a:0 -map_metadata -1 -map_chapters -1 -c:v libx264 -preset slow -tune film -crf 32 -c:a aac -b:a 128k -profile:v baseline -level 3.0 -pix_fmt yuv420p -movflags +faststart out.mp4
Copying in various terminals:

    PuTTY: select
    tmux: Shift + select
    cmd: select + Enter in quick edit mode
    Windows Terminal: select + Right Click


Most streaming sites break a video into many small fragments, which are listed in an m3u8 file. I have a script to download the fragments one by one using curl. To merge the video fragments back into one file, I do the following.

  Merge video files with ffmpeg
  - Make a file listing all the videos in sequence. E.g.
     file 'video01.ts'
     file 'video02.ts'
     file 'video03.ts'
     ...
  - Generate the file list for files in the current directory.
      (for %i in (*.ts) do @echo file '%i') > filelist.txt
  - ffmpeg command to combine videos
     ffmpeg -f concat -safe 0 -i filelist.txt -c copy output.mp4


For anyone needing this: youtube-dl (and ffmpeg, if you need post-download conversion) can do this for you if it's any easier. Point it at the index file and let it do its thing.


Youtube-dl is great. I just wanted to build it from scratch, and it was very simple once the underlying streaming tech was understood.


I find -vf bwdif much smoother than yadif for action sport replays.


whoami


> Introduced arbitrary code execution via ${jndi:ldap://... inside any logged string.

hehe


Me too, that's why I always have `html { overflow-y: scroll; }`.


I think many people feel threatened by the success and the pace of the Node.js ecosystem.

Most arguments, even in this thread already, are just wrong. Obviously they have never had any experience with both the Node.js ecosystem and another one.


> Most arguments, even in this thread already, are just wrong. Obviously they have never had any experience with both the Node.js ecosystem and another one.

What makes them obviously wrong? To me they sound like sound arguments by people who actually use(d) it.


> no threads

> node does not have a concurrency and parallelism story

> standard library is tiny

Also claims like "async makes your program more difficult to reason about" without any explanation.

I hereby claim that goroutines and mutexes make your program more difficult to reason about.

> the ecosystem is an absolute dumpster fire

I am super curious which ecosystem that person would consider better.


I totally agree with you; most arguments are obsolete at best and ignorant at worst.

> no threads

Worker threads have been a thing since Node v10. Also, I like the single-threaded concurrency; it makes state management easier (since there are no race conditions).
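
For reference, a minimal single-file worker_threads sketch (fib() is just a made-up stand-in for CPU-bound work):

  const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

  // Placeholder CPU-bound work that would otherwise block the event loop.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));

  if (isMainThread) {
    // Spawn this same file as a worker; the main thread stays responsive.
    const worker = new Worker(__filename, { workerData: 40 });
    worker.on('message', (result) => console.log('fib(40) =', result));
    worker.on('error', (err) => console.error(err));
  } else {
    parentPort.postMessage(fib(workerData));
  }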

> node does not have a concurrency and parallelism story

  - https://nodejs.dev/learn/understanding-javascript-promises
  - https://nodejs.org/api/cluster.html
> standard library is tiny

You have sockets, http(s) (even http2), multiprocessing, readline, streams, timers, promises, JSON, etc...
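
To illustrate, something like this needs nothing from npm, only the built-in http module (the port and response shape are arbitrary):

  const http = require('http');

  // A tiny JSON endpoint using only the standard library.
  http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ path: req.url, now: new Date().toISOString() }));
  }).listen(3000, () => console.log('listening on :3000'));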

They want String.capitalize()? They would complain about the implementation being bloated because it handles edge cases like other alphabets, etc...

> async makes your program more difficult to reason about

That may have been true with the callback hell before async/await. But now, it's just nonsense.
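
For comparison, a contrived example in both styles, where a.txt is assumed to contain the name of a second file to read:

  const fs = require('fs');
  const fsp = require('fs/promises');

  // Callback style: nesting grows with every dependent step.
  fs.readFile('a.txt', 'utf8', (err, name) => {
    if (err) return console.error(err);
    fs.readFile(name.trim(), 'utf8', (err, data) => {
      if (err) return console.error(err);
      console.log(data);
    });
  });

  // async/await: reads top to bottom, with a single error path.
  async function run() {
    try {
      const name = await fsp.readFile('a.txt', 'utf8');
      const data = await fsp.readFile(name.trim(), 'utf8');
      console.log(data);
    } catch (err) {
      console.error(err);
    }
  }
  run();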

> the ecosystem is an absolute dumpster fire

Ironically, for an ecosystem based on the philosophy of "Don't reinvent the wheel", there are a lot of packages that do the same thing.

There is the left-pad fiasco, the colorette/nanocolors "scandal", the is-odd/is-even packages, ...

But as I said in this thread, this is not Node's fault, nor npm's fault. This is the developer's fault.

Cargo (Rust), Mix (Elixir), pip (Python): they could all serve such an ecosystem.

It's up to the developer to be careful about their dependency tree. For example, if I need VueJS, which depends on a specific library, I'll avoid installing another library that does the same thing.

