nanog mailing list archives

Re: Recommended DNS server for a medium 20-30k users isp


From: Mel Beckman via NANOG <nanog () lists nanog org>
Date: Sat, 9 Aug 2025 17:44:45 +0000

Saku,

Thanks for the well-delineated examples. I agree with them. You clearly illustrate wrong configurations that can cause 
unanticipated failure modes. Thus it’s best to follow established design patterns rather than cooking your own recipe.

But how is this different than using a firewall to protect any other service? Firewalls can fail, and thus require 
resiliency considerations. But they also can do a lot to insulate underlying services from attacks — source IP 
flooding, for example, or the myriad of sequence attacks — the kinds of attacks that are difficult to protect against 
in the pure IP stack. 

I submit that one major firewall advantage is consistency of implementation. People who are protecting their DNS 
servers by cleverly hardening them with packet filters and load balancing are doing so with error-prone manual 
methods. Human error, as HAL says, is always a problem. Firewall code, on the other hand, goes through certification 
processes and deep regression testing before being deployed. Firewall developers are dedicated to the protection 
mission, while the people standing up DNS at many enterprises, including ISPs, are not DNS experts. DNS is just one of 
many services they must manage. 

I appreciate your anecdotes, but as every good scientist knows, the plural of anecdote is not data. I need to see some 
data backing up these claims about the relative unreliability of firewalls.

 -mel

On Aug 9, 2025, at 8:41 AM, Saku Ytti via NANOG <nanog () lists nanog org> wrote:

On Sat, 9 Aug 2025 at 15:42, Måns Nilsson via NANOG
<nanog () lists nanog org> wrote:

I suppose you bring the beer then, because it's going to take both to
endure the cringefest that is "cascading resource exhaustion in DNS /
firewall setup" -- it can pretty fast end up snowballing completely out
of hand. Don't ask me how I know without picking up the bar tab.

I can share lessons from personal mistakes.

a) A FW is always an additional fuse in front of a service; the
failure modes are the union of FW and service, so MTBF is lower and
MTTR is higher
   - state establishment rate is reduced
   - state count is reduced
   - either the FW has protocol intelligence and, as protocols evolve
or more exotic use cases appear, occasionally drops valid protocol
packets, or it is protocol-unintelligent and adds nothing over a
stateless HW-based filter on the edge router
   - any service protected by a FW is easier to DoS than the same service without a FW
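The "union of failure modes" point above is just serial-availability arithmetic: with a FW in front, both boxes must be up. A back-of-the-envelope sketch, with purely illustrative availability figures (not measurements from the thread):

```python
# Serial availability: a FW in front of a service means both must be
# up for the service to work, so combined availability is the product.
# Both availability figures below are assumed for illustration.
fw_availability = 0.999    # assumed: FW up 99.9% of the time
svc_availability = 0.9995  # assumed: DNS service up 99.95% of the time

combined = fw_availability * svc_availability

print(f"service alone: {svc_availability:.4%} up")
print(f"FW + service:  {combined:.4%} up")

# The chain is always less available than its weakest link.
assert combined < min(fw_availability, svc_availability)
```

Whatever numbers you plug in, the product is below both factors, which is the "MTBF is lower" claim in a) restated.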

b) Even where a FW is run (like in front of a corporate LAN, which
doesn't have to deal with denial-of-service issues and where a
regulator, PCI, or equivalent may require a FW), the valid
configurations in my mind are
   - if 2 == cluster, 1 == single, and + == routing separation
   - 1, 1+1, and 2+1 are valid configurations
   - 2 and 2+2 are invalid configurations
   - every time I've run '2', eventually there has been a case where
the cluster is dead and MTTR is high, as the vendor needs to be
engaged, and depending on the hour the people at the vendor who can
actually troubleshoot the issue are not at work (it used to be US
hours; now the experts are increasingly on India time)
   - so if you can only afford 2 devices, have two devices separated
by routing; you'll lose state during a failure, but you'll have fewer
failures. Even if you can afford 4 devices, don't buy two clusters,
since the problem that breaks one cluster may affect both clusters
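The reason '1+1' beats '2' in the list above is correlated failure: a cluster adds a shared fault domain (software bug, bad state sync) that can take out both members at once, while routing-separated singles fail independently. A sketch with assumed, illustrative probabilities:

```python
# Illustrative comparison of '1+1' (two routing-separated single FWs)
# versus '2' (one cluster). All probabilities are assumptions made up
# for this sketch, not measurements.
p_unit = 0.01          # assumed: chance a single FW is down at any moment
p_cluster_fault = 0.002  # assumed: chance of a cluster-wide fault
                         # (bug, state-sync breakage) killing both members

# 1+1: total outage only if both independent units are down together.
p_outage_separated = p_unit * p_unit

# 2: total outage if the correlated fault hits, or (absent that fault)
# both members happen to be down anyway.
p_outage_cluster = p_cluster_fault + (1 - p_cluster_fault) * p_unit * p_unit

print(f"1+1 outage probability: {p_outage_separated}")
print(f"2   outage probability: {p_outage_cluster}")

# Any nonzero correlated-fault term makes the cluster worse.
assert p_outage_cluster > p_outage_separated
```

With these numbers the cluster is roughly 20x more likely to be fully down, and the same shared-fault term is why '2+2' inherits the problem: the bug that kills one cluster can kill the other.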


Generally a FW is needed when what is behind it has dubious and
unknown state (like a user LAN). But if what is behind the FW is a
well-thought-out DNS or HTTP service, the FW adds no utility and a lot
of liability.

--
 ++ytti
_______________________________________________
NANOG mailing list
https://lists.nanog.org/archives/list/nanog () lists nanog org/message/DZZOX5JWSLEGYIO45G5HUUYXT5RXHRD2/
_______________________________________________
NANOG mailing list 
https://lists.nanog.org/archives/list/nanog () lists nanog org/message/CSKJGBIL2R7BVVW2JZTGVSQIZY2UCWNK/
