
In the past year I designed two UDP protocols (for connection measurements and a game server), and last week I wrote a DNS server. In my own protocols, I always made sure that the sender has to put in more bytes than it gets back until it has echoed a random secret with more than 64 bits of entropy. Only with DNS does this not seem possible. At best, you can refuse a query in an equal number of bytes, but useful responses necessitate amplifying.

Every nameserver out there, from duckduckgo to hacker news, will send back larger responses because it must echo the query.

Does anyone know why this is not considered an issue? Are we just waiting for open resolvers to be eliminated and attackers to switch over to this lesser amplification factor before we start fixing it?

The only solution given the current protocol, considering reasonable compatibility, is rate limiting per source IP, which means that someone can use source-IP spoofing to block benign sources. This problem can be mitigated with DNS cookies, but I don't know if those are universally supported enough yet to simply reject any clients that don't send them. It also means keeping state per client (hello, IPv6). If clients just sent a slightly larger packet than the response they expect, and servers didn't have to echo the query, amplification protection would be much easier to implement.
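For what it's worth, the server-side half of a DNS cookie can be computed statelessly, roughly in the spirit of RFC 7873 (this is just a sketch with made-up helper names, not the exact RFC construction):

```python
import hashlib
import hmac
import os

# In practice this secret would be rotated periodically.
SERVER_SECRET = os.urandom(32)

def server_cookie(client_ip: str, client_cookie: bytes) -> bytes:
    # Stateless: the cookie is an HMAC over the client's address and its
    # own cookie, so the server keeps no per-client table.
    mac = hmac.new(SERVER_SECRET, client_ip.encode() + client_cookie,
                   hashlib.sha256)
    return mac.digest()[:16]

def verify(client_ip: str, client_cookie: bytes, presented: bytes) -> bool:
    # A spoofed source can't present a valid cookie for the victim's IP.
    return hmac.compare_digest(server_cookie(client_ip, client_cookie),
                               presented)
```

A client that echoes a valid cookie has demonstrated it can receive packets at its claimed source address, which is exactly the property needed before sending large answers.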



There's no ultimate solution, but there are a few things being done in common DNS servers that can mitigate the issue:

* Large answers like ANY queries or large DNSSEC records should either not be supported or be supported only via TCP (see e.g. RFC 8482 for ANY).

* DNS software can implement response rate limiting: https://kb.isc.org/docs/aa-00994

This doesn't prevent all amplification, but it prevents strong amplification, i.e. your server becomes less valuable to attackers.
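Real RRL implementations are more sophisticated (they account by response tuple, send truncated replies, etc.), but the core per-source throttle can be sketched as a token bucket (a simplified sketch, not ISC's algorithm):

```python
import time
from collections import defaultdict

class RateLimiter:
    """Per-source token bucket: allow at most `rate` responses per second,
    with bursts up to `burst` tokens."""

    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate = rate
        self.burst = burst
        # Each source starts with a full bucket.
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, src_ip: str) -> bool:
        tokens, last = self.buckets[src_ip]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[src_ip] = (tokens - 1.0, now)
            return True
        self.buckets[src_ip] = (tokens, now)
        return False
```

As noted in the thread, this is exactly where the spoofing problem bites: a forged flood from a victim's address drains the victim's bucket, which is why pairing it with cookies matters.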


Thanks for the reply, it seems I've already done everything currently possible in my implementation then :/


Your idea for the secret to prevent spoofing is interesting and reminiscent of the verification tag in the SCTP packet header: https://en.m.wikipedia.org/wiki/SCTP_packet_structure


It's indeed a common technique, not my original idea. SCTP dates from 2000, but good old TCP already does this with its sequence numbers (albeit only 32-bit, so while they can't be fully relied on for security, they do prevent amplification specifically).


> In my own protocols, I always made sure that the sender has to put in more bytes until a >2^64-bit secret was echoed.

I have a hard time parsing this statement. What do you mean?


Yeah, I packed a lot into there, sorry. What I meant is that the client initially has to send more data into the connection than the server returns, which means sending some padding data in the client hello packet. This rule is only relaxed after the client has echoed an unguessable random token, which has a similar effect to doing a TCP handshake (because a spoofed-source client wouldn't be able to echo this value). At that point I know they're legit and I can safely send big answers to small requests without the risk of DoS'ing an innocent third party. It also prevents an attacker from triggering the rate limiting and thereby blocking an innocent client's access.
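The rule described above can be sketched in a few lines (a toy illustration, not the actual protocol; packet names and sizes are made up):

```python
import os

tokens = {}  # addr -> pending token; entry removed once echoed

def handle_packet(addr, payload: bytes) -> bytes:
    """Anti-amplification rule: replies never exceed the request size
    until the client has echoed a 16-byte random token."""
    if addr in tokens and payload.startswith(tokens[addr]):
        # Token echoed: the source address is verified, large answers
        # are now safe because they can't be aimed at a third party.
        del tokens[addr]
        return b"BIG-ANSWER" * 100
    token = os.urandom(16)
    tokens[addr] = token
    reply = b"RETRY " + token
    # The client hello must carry enough padding that this reply is
    # never larger than the request, so spoofing gains an attacker nothing.
    assert len(reply) <= len(payload), "client must pad its first packet"
    return reply
```

The padding requirement on the first packet is what makes the exchange byte-for-byte non-amplifying even before verification.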


Presumably this could be mitigated by the DNS request needing padding?


Pretty sucky to have to bloat a fundamental protocol of the internet, and thus, all traffic, forever, to avoid amplification attacks.


Queries are already larger than necessary, and often two are sent where one would do (adding UDP, IP, Ethernet, and probably also WiFi header overhead) to get both v4 and v6 addresses, even though the DNS protocol technically supports multiple queries in one packet.

While I don't disagree with your point in principle, it must be said that if we cared about forty extra bytes just so the most common records could be answered without amplification (quite a significant problem currently), then we should start by optimizing the existing unnecessary inefficiencies.
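To make the "multiple queries in one packet" point concrete: the DNS header's QDCOUNT field can in principle carry both the A and AAAA questions at once (a wire-format sketch; in practice most servers reject QDCOUNT > 1, which is part of why resolvers send two packets):

```python
import struct

def encode_name(name: str) -> bytes:
    # DNS name encoding: length-prefixed labels, terminated by a zero byte.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def query(name: str, qtypes=(1, 28)) -> bytes:  # 1 = A, 28 = AAAA
    # Header: ID, flags (RD set), QDCOUNT = number of questions,
    # then zero ANCOUNT/NSCOUNT/ARCOUNT.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, len(qtypes), 0, 0, 0)
    body = b"".join(encode_name(name) + struct.pack("!HH", qt, 1)
                    for qt in qtypes)
    return header + body
```

Note the question name is repeated per question here; name compression could shave that too, which underlines the "existing inefficiencies" point.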


The inefficiency you're finding fault with was caused by exactly the kind of thinking being proposed here, so it doesn't make sense to use it as justification that this kind of thinking is alright.

Many people regret the dual-lookup solution, as it causes nearly twice the pps load on public servers these days, all to work around issues with servers of the time. In ~2005 it was seen as the easier option to implement, since clients could opt in with nothing needed from the server (plus, how much longer could IPv6 deployment take, right? /s). Now, in 2022, it seems impossible to un-implement, as you'd need to make sure every authoritative server and client in the world stops relying on the behavior. This in no way implies that every solution going forward should get a free pass on being inefficient; it just means we allowed the easy, bloating answer before, and it bit us in a way we can't easily fix now.



