DNS Best Practices: The Definitive Guide

This is the most comprehensive list of DNS best practices and tips on the planet.

In this guide, I’ll share my best practices for DNS security, design, performance and much more.

DNS Best Practices

Warning: I do not recommend making changes to critical services like DNS without testing and getting approval from your organization. You should be following a change management process for these types of changes.

Have at Least Two Internal DNS Servers

In environments of any size, you should have at least two DNS servers for redundancy. DNS and Active Directory are critical services; if they fail, you will have major problems. Having two servers ensures DNS will still function if one of them fails.

In an Active Directory domain, everything relies on DNS to function correctly. Even browsing the internet and accessing cloud applications relies on DNS.

I’ve experienced a complete domain controller/DNS failure and I’m not joking when I say almost everything stopped working.

In the above diagram, my site has two domain controllers and DNS servers. The clients are configured to use DHCP, and the DHCP server automatically configures each client with a primary and secondary DNS server. If DC1/DNS goes down, the client will automatically use its secondary DNS server to resolve hostnames. If DC1 went down and there were no internal secondary DNS server, the client would be unable to access resources such as email, apps, the internet, and so on.

Bottom line: Ensure you have redundancy in place by having multiple DNS/Active Directory servers.

Use Active Directory Integrated Zones

To make the deployment of multiple DNS servers easier, you should use Active Directory integrated zones. You can only use AD integrated zones if DNS is configured on your domain controllers.

AD integrated zones have the following advantages:

  • Replication: AD integrated zones store data in the AD database as container objects. This allows the zone information to be automatically replicated to other domain controllers. The zone information is compressed, allowing data to be replicated quickly and securely to other servers.
  • Redundancy: Because the zone information is automatically replicated, there is no single point of failure for DNS. If one DNS server fails, the other server has a full copy of the DNS information and can resolve names for clients.
  • Simplicity: AD integrated zones automatically update without the need to configure zone transfers. This simplifies the configuration while ensuring redundancy is in place.
  • Security: If you enable secure dynamic updates, then only authorized clients can update their records in DNS zones. In a nutshell, this means only members of the DNS domain can register themselves with the DNS server. The DNS server denies requests from the computers that are not part of the domain.
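If you are creating a new zone from PowerShell, a minimal sketch looks like the following (assuming the DnsServer module on a domain controller; the zone name is a placeholder):

```powershell
# Create an AD integrated primary zone that replicates to all DCs in the
# domain and only accepts secure dynamic updates from domain members.
Add-DnsServerPrimaryZone -Name "corp.example.com" `
    -ReplicationScope "Domain" `
    -DynamicUpdate "Secure"
```

-ReplicationScope can also be set to "Forest" if you want the zone replicated to every domain controller in the forest.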

Best DNS Order on Domain Controllers

I’ve seen lots of discussion on this topic. What is the best practice for DNS order on domain controllers?

If you do a search on your own you will come across various answers, but the majority recommend the configuration below.

This is also Microsoft’s recommendation.

  • Primary DNS: set to another DC in the site
  • Secondary DNS: set to itself using the loopback address

Let’s look at a real-world example.

In the above diagram, I have two domain controllers/DNS servers at the New York site. DC1's primary DNS is set to its replication partner, DC2, and its secondary DNS is set to itself using the loopback address. DC2's primary DNS is set to DC1 and its secondary to itself using the loopback address.

Microsoft claims this configuration improves performance and increases the availability of DNS servers. If you point the primary DNS to the server itself first, it can cause delays.
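This can be set from PowerShell as well as the network adapter GUI. Here is a sketch for DC1 (the interface alias and the partner's IP address are assumptions for this example):

```powershell
# DC1: primary DNS = replication partner (DC2), secondary = loopback.
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
    -ServerAddresses ("10.1.1.12", "127.0.0.1")
```

On DC2 you would swap the first address for DC1's IP.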

Source: https://technet.microsoft.com/en-us/library/ff807362(v=ws.10).aspx

Domain Joined Computers Should Only Use Internal DNS Servers

Your domain-joined computers should have both the primary and secondary DNS set to internal DNS servers. External DNS servers cannot resolve internal hostnames, which could result in connectivity issues and prevent the computer from accessing internal resources.

Let’s look at an example of why this is a bad setup.

  1. The client makes a request to an internal server called VEGAS.
  2. The primary DNS server is slow to respond, so the client contacts its secondary DNS server, which is 8.8.8.8, and asks for the IP address of the host VEGAS.
  3. The external DNS server knows nothing about this host, therefore it cannot provide the IP address.
  4. This results in the client being unable to access the VEGAS file server.

Typically the primary DNS server is used first when it is available, but if it becomes unresponsive the client will switch to the secondary. It may take a reboot of the computer for it to switch back to the primary DNS, which can result in frustrated users and calls to the helpdesk.

The recommended solution is to have two internal DNS servers and always point clients to them rather than an external server.
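To check what a computer is actually using, you can query its client configuration; a quick sketch, assuming the DnsClient module (Windows 8/Server 2012 and later):

```powershell
# List the DNS servers configured on each IPv4 interface;
# every address returned should be an internal DNS server.
Get-DnsClientServerAddress -AddressFamily IPv4 |
    Select-Object InterfaceAlias, ServerAddresses
```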

Point Clients to the Closest DNS Server

This will minimize traffic across WAN links and provide faster DNS queries to clients.

In the diagram above, the client computers are configured to use the DNS servers at their own site. If a client in New York were incorrectly configured to use the DNS servers in London, the result would be slow DNS performance. This would affect the users' apps, internet access, and so on. I promise you users will be complaining about how slow everything is.

The best way to automatically configure the right DNS servers is by using DHCP. You should have a separate DHCP scope set up for each site that includes the primary and secondary DNS servers for that site.
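If you run Windows DHCP, the per-site option might be set like this (a sketch; the scope IDs and DNS server addresses are assumptions for this example):

```powershell
# New York scope hands out the New York DNS servers...
Set-DhcpServerv4OptionValue -ScopeId 10.1.1.0 -DnsServer 10.1.1.10, 10.1.1.11

# ...and the London scope hands out the London DNS servers.
Set-DhcpServerv4OptionValue -ScopeId 10.2.1.0 -DnsServer 10.2.1.10, 10.2.1.11
```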

Configure Aging and Scavenging of DNS Records

DNS aging and scavenging allows for the automatic removal of old, unused DNS records. This is a two-part process:

Aging: Newly created DNS records get a timestamp applied.

Scavenging: Removes DNS records whose timestamp is older than the configured interval.

Why is this needed?

There will be times when computers register multiple DNS entries with different IP addresses. This can be caused by computers moving to different locations, being re-imaged, or being removed from and re-joined to the domain.

Having multiple DNS entries for a host will cause name resolution problems, which result in connectivity issues. DNS aging and scavenging resolves this by automatically deleting the DNS records that are no longer in use.

Aging and Scavenging only applies to DNS resource records that are added dynamically.
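As a sketch, aging and scavenging can be enabled from PowerShell. The intervals below are the common 7-day defaults and the zone name is a placeholder; adjust both for your environment:

```powershell
# Turn on scavenging on this DNS server with 7-day intervals.
Set-DnsServerScavenging -ScavengingState $true `
    -NoRefreshInterval 7.00:00:00 `
    -RefreshInterval 7.00:00:00 `
    -ScavengingInterval 7.00:00:00 -ApplyOnAllZones

# Aging must also be enabled on the zone itself.
Set-DnsServerZoneAging -Name "corp.example.com" -Aging $true
```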

Resources:

How to Configure DNS Aging and Scavenging (Cleanup Stale DNS Records)

Set Up PTR Records for DNS Zones

PTR records resolve an IP address to a hostname. Unless you are running your own mail server, PTR records may not be required.

But… they are extremely helpful for troubleshooting and increasing security.

Some systems like firewalls, routers, and switches only log an IP address. Take, for example, the Windows Firewall logs.

In this example, helpdesk was troubleshooting a printer issue and thought 10.1.2.88 was a printer being blocked by the firewall. Because I have PTR records set up, I was able to quickly look it up using the nslookup command.

10.1.2.88 resolves to nodaway.ad.activedirectorypro.com, so I know this is a server and not a printer. If I didn't have a PTR record set up, I would have been digging through inventory trying to find more information about this IP.

There is really no reason not to set up PTR records; they are easy to set up and consume no additional resources on the server. See my complete guide on setting up reverse lookup zones and PTR records.
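For reference, here is a sketch of creating a reverse lookup zone and a PTR record from PowerShell, using the 10.1.2.88 example above (assuming the DnsServer module):

```powershell
# Create an AD integrated reverse lookup zone for 10.1.2.0/24
# (this creates the zone 2.1.10.in-addr.arpa).
Add-DnsServerPrimaryZone -NetworkId "10.1.2.0/24" -ReplicationScope "Domain"

# Add a PTR record so 10.1.2.88 resolves back to the server name.
Add-DnsServerResourceRecordPtr -ZoneName "2.1.10.in-addr.arpa" `
    -Name "88" -PtrDomainName "nodaway.ad.activedirectorypro.com"
```

You can then verify with nslookup 10.1.2.88 from any client.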

Additional Resources:

NSLookup to Check DNS Records

Root Hints vs DNS Forwarders (Which One Is Best?)

By default, Windows DNS servers are configured to use root hint servers for external lookups. Another option for external lookups is to use forwarders.

Basically, both options are ways to resolve hostnames that your internal servers cannot resolve.

So which one is the best?

Through my own experience and research, it really comes down to personal preference.

Here are some general guidelines that will help you decide:

  • Use root hints if your main concern is reliability (the Windows default).
  • Forwarders might provide faster DNS lookups. You can use benchmarking tools to test lookup response times; a link is included in the resources section.
  • Forwarders can also provide security enhancements (more on this below).
  • Forwarders must be configured manually on each DC.

For years I used the default setting (root hints), then I was introduced to Quad9 at a security conference. Quad9 is a free, recursive, anycast DNS platform that provides end users with robust security protections, high performance, and privacy. In a nutshell, Quad9 checks each DNS lookup against a list of bad domains; if the client makes a request to a domain on the list, that request is dropped.

I’ve used this service for over a year now and I’ve had zero issues. Since security has been a big concern for me it was my personal preference to switch to Quad9 forwarders from root hints. It is providing fast and reliable lookups with the added bonus of security.
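Adding Quad9 as a forwarder is a one-liner per server (assuming the DnsServer module; 9.9.9.9 and 149.112.112.112 are Quad9's public resolvers):

```powershell
# Forward external lookups to Quad9; by default, root hints
# remain available as a fallback if the forwarders don't respond.
Add-DnsServerForwarder -IPAddress 9.9.9.9, 149.112.112.112
```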

Quad9 does not provide any reporting or analytics. The blocked requests are logged in the Windows Server DNS debug logs, so make sure you read the next section on how to enable them. The drops are recorded with NXDomain, so you could build a report by searching for that in the logs.

Additional Resources:

OpenDNS – another company that offers this service; it comes at a higher cost but includes additional features and reporting.

How Quad9 Works – This page shows how to set up Quad9 on an individual computer. If you have your own DNS servers, DO NOT DO THIS; use your DNS server and add Quad9 as a forwarder instead. The page provides some additional details, which is the main reason I included it. You could use these steps for your home computer or devices that just need internet access.

DNS Benchmark tool – a free tool that allows you to test the response times of any nameservers. This may help you determine whether to stick with root hints or use forwarders.

List of Root Servers 

Enable DNS Debug Logging

DNS debug logs can be used to track down problems with DNS queries, updates, and other DNS errors. It can also be used to track client activity.

With logging tools like Splunk you can create reports on top domains and top clients, and find potentially malicious network traffic.

Microsoft has a log parser tool that generates the output below:

You should be able to pull the debug log into any logging tool or script to create your own reports.

How to Enable DNS Debug Logs

Step 1: In the DNS console, right-click your DNS server and select Properties.

Step 2: Click on the Debug Logging tab.

Change the default path and max size, if needed.
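Debug logging can also be switched on from PowerShell. This is a sketch assuming the DnsServer module; the log path is an example:

```powershell
# Log queries and responses for both UDP and TCP traffic to a file.
Set-DnsServerDiagnostics -Queries $true -Answers $true `
    -Send $true -Receive $true -UdpPackets $true -TcpPackets $true `
    -EnableLoggingToFile $true -LogFilePath "C:\DNSLogs\dns.log"
```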

Additional Resources:

Parsing DNS server log to track active clients 

Use CNAME Record for Alias (Instead of A Record)

  • A record maps a name to an IP address.
  • CNAME record maps a name to another name.

If you use A records to create aliases you will end up with multiple records, and over time this will become a big mess. If you have PTR records configured, this will also create additional records in that zone, adding to the mess and creating bigger problems.

If you need to create an alias it's better to use a CNAME record; this will be easier to manage and prevent multiple DNS records from being created.

How to Create an Alias CNAME record

I have an A record set up for my file server called file1 that resolves to IP 192.168.0.201.

Our dev team wants to rename the server to Paris to make it more user friendly. Instead of renaming the server, I'll just create a CNAME record.

Right-click in the zone and click New Alias (CNAME).

For Alias name, I’ll enter Paris

The alias name resolves to file1, so I add that to the target host box:

Click OK and you’re done!

Now I can access Paris by hostname, which resolves to file1.

Easy Right?

This keeps DNS clean and helps prevent DNS lookup issues.
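The same record can be created from PowerShell. A sketch, using the zone name that appears elsewhere in this guide (substitute your own zone):

```powershell
# paris.ad.activedirectorypro.com becomes an alias
# for the existing file1 A record.
Add-DnsServerResourceRecordCName -ZoneName "ad.activedirectorypro.com" `
    -Name "paris" -HostNameAlias "file1.ad.activedirectorypro.com"
```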

Use DNS Best Practice Analyzer

The Microsoft Best Practices Analyzer is a tool that scans server roles to check your configuration against Microsoft guidelines. It is a quick way to troubleshoot and spot potential configuration issues.

The BPA can be run using the GUI or PowerShell; instructions for both are below.

How To Run BPA DNS Using The GUI

Open Server Manager, then click DNS

Now scroll down to the Best Practices Analyzer section, click Tasks, then select "Start BPA Scan".

Once the scan completes the results will be displayed.

How To Run BPA DNS Using PowerShell

You will first need the ID of the role. Run this command to get it:

Get-BpaModel

I can see the ID for DNS is Microsoft/Windows/DNSServer. I take that ID and use this command to run the BPA for DNS:

Invoke-BpaModel "Microsoft/Windows/DNSServer"

You may get some errors; this is normal.

The above command only runs the analyzer; it does not automatically display the results.

To display the results run this command:

Get-BpaResult Microsoft/Windows/DNSServer
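On a healthy server most results are informational, so it can help to filter the output down to the findings that need attention, e.g.:

```powershell
# Show only warnings and errors from the DNS BPA scan.
Get-BpaResult "Microsoft/Windows/DNSServer" |
    Where-Object { $_.Severity -ne "Information" } |
    Select-Object Severity, Title
```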

Bonus: DNS Security Tips

I think we can all agree that DNS is an important service. How would anything function without it? Now let's look at a few ways to secure this service; some of these features are enabled by default on Windows servers.

  • Filter DNS Requests (Block Bad Domains)
  • Secure DNS forwarders
  • DNS Cache Locking
  • DNS Socket Pool
  • DNSSEC

Filter DNS Requests (Block Bad Domains)

One of the best ways to prevent viruses, spyware, and other malicious traffic is to block the traffic before it even hits your network.

This can be done by filtering DNS traffic through a security appliance that checks the domain name against a list of bad domains. If the domain is on the list the traffic will be dropped preventing any further communication between the bad domain and client. This is a common feature on next generation firewalls, IPS systems (Intrusion Prevention System) and other security appliances.

I've been using a Cisco Firepower firewall that provides this service. Cisco provides a feed (a list of bad domains) that is automatically updated on a regular basis. In addition, I can add extra feeds or manually add bad domains to the list. I've seen a huge decrease in viruses and ransomware-type threats since I've been filtering DNS requests. I've been amazed at how much bad traffic this detects and blocks, with surprisingly few false positives!

Additional Resources:

Cisco Next Generation Firewall official site
https://www.cisco.com/c/en/us/products/security/firewalls/index.html

Palo Alto Networks – another popular firewall/IPS system
https://www.paloaltonetworks.com/products/secure-the-network/next-generation-firewall

Secure DNS Forwarders

Secure DNS forwarders are another way to filter and block DNS queries.

In addition to blocking malicious domains, some forwarding services offer web content filtering. This allows you to block requests based on a category such as adult content, games, drugs, and so on. One big advantage this has over an on-premises appliance like a firewall is that it can protect devices when they are off the network. It may require a client to be installed on the device, but it will direct all DNS traffic through the secure DNS forwarder whether the device is on the internal network or an external one.

List of DNS Forwarding Filters:

Quad9

OpenDNS

DNSFilter

DNS Cache Locking

DNS cache locking allows you to control when the DNS cache can be overwritten.

When a DNS server performs a lookup for a client, it stores the result in its cache for a period of time. This allows the DNS server to respond faster to the same lookups at a later time. If I went to espn.com, the DNS server would cache that lookup, so if anyone went to it later it would already be cached, allowing for a faster lookup.

One type of attack is poisoning the cache with false records. For example, say we have espn.com in the cache; an attacker could alter this record to redirect to a malicious site. The next time someone went to espn.com, they would be sent to the malicious site.

DNS cache locking blocks records in the cache from being changed. Windows Server 2016 has this feature turned on by default.
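You can check and set the cache locking percentage from PowerShell (a sketch, assuming the DnsServer module):

```powershell
# Check the current setting; 100 means cached records cannot be
# overwritten until their TTL expires.
Get-DnsServerCache | Select-Object LockingPercent

# Set it back to 100 if it has been lowered.
Set-DnsServerCache -LockingPercent 100
```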

Additional Resources

https://nedimmehic.org/2017/04/25/how-to-deploy-and-configure-dns-2016-part6/

DNS Socket Pool

The DNS socket pool allows the DNS server to use source port randomization for DNS lookups. Instead of using the same source port over and over, the server randomly picks one from a pool of available sockets, which makes it difficult for an attacker to guess the source port of a DNS query.

This is also enabled by default on Windows Server 2016.
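You can inspect or change the pool size with dnscmd. A sketch (5000 is just an example value; a DNS service restart is needed for the change to take effect):

```powershell
# View the current socket pool size (the default is 2500).
dnscmd /Info /SocketPoolSize

# Example: raise the pool size, then restart the DNS service.
dnscmd /Config /SocketPoolSize 5000
Restart-Service DNS
```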

Additional Resources

Microsoft Configure the Socket Pool

DNSSEC

DNSSEC adds a layer of security that allows the client to validate the DNS response. This validation process helps prevent DNS spoofing and cache poisoning.

DNSSEC works by using digital signatures to validate that responses are authentic. When a client performs a DNS query, the DNS server attaches a digital signature to the response; this allows the client to validate the response and prove it was not tampered with.
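From a Windows client you can ask for the DNSSEC data along with a query to see the signatures (the domain is an example and must be a DNSSEC-signed zone; assumes the DnsClient module):

```powershell
# Request DNSSEC records (RRSIG) along with the answer.
Resolve-DnsName -Name example.com -Type A -DnssecOk
```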

Additional Resources:

Overview of DNSSEC

Step by Step implementation

Recommended Tool: SolarWinds Server & Application Monitor (SAM)

This utility was designed to monitor Active Directory and other critical applications. It will quickly spot domain controller issues, prevent replication failures, track failed logon attempts, and much more.

What I like best about SAM is its easy-to-use dashboard and alerting features. It also has the ability to monitor virtual machines and storage.

Download Your Free Trial of SolarWinds Server & Application Monitor. 
