DNS: An Enemy of Uptime?

Recently, there has been a lot of news about DNS services being a weak point of one sort or another. Comcast customers in the American Midwest experienced downtime just within the last week because their DNS servers were unavailable – the same thing happened previously on the American East Coast. Wikileaks.com became unreachable when its DNS provider, EveryDNS, cut it off. Many sites found their domains seized by the US government without warning and without legal warrants.

What can one do to prevent this sort of downtime and unavailability of DNS services? One person is already working on it: the founder of The Pirate Bay is attempting to create a distributed DNS in response to the US government’s seizure of numerous domains.

Others have noted related projects – projects that hoped to become alternate roots. One such project, Telecomix DNS, has been reinvigorated by the recent domain seizures – and even has a page for those who own seized domains. Alternate root servers have sprung up throughout the history of the Internet – AlterNIC, the Open Root Server Network, and OpenNIC among them – but most have shut down (OpenNIC continues to operate).

However, most of those projects suffer from the same availability problem: if the service shuts down, the domains it serves become unavailable, and if the operator is forced or convinced to seize a domain, that domain is gone. With a truly distributed service this becomes impossible, and availability increases.

What Wikileaks has done to solve this problem (aside from moving to a Canadian DNS provider named EasyDNS) is to add multiple DNS providers beyond EasyDNS alone. PCWorld has a nice article detailing everything Wikileaks does to stay online – which holds a good lesson for the rest of us. EasyDNS also has an excellent, highly detailed article on keeping DNS up and running in the face of a denial of service attack.

Have you considered what would happen if your primary DNS resolver went offline? Even if you have your own DNS server in-house, there is an upstream server that could potentially go away. Maybe there are even two or three different servers that your servers send requests to – but are they all from the same provider? There are several services that provide free DNS hosting, so spreading the risk costs little.
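
One cheap piece of insurance (the addresses below are just examples of mixing two unrelated providers) is to make sure the resolvers listed on your hosts do not all belong to the same service:

# /etc/resolv.conf -- resolvers from two different providers (example addresses)
nameserver 208.67.222.222   # OpenDNS
nameserver 8.8.8.8          # Google Public DNS
options timeout:2 attempts:2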

Make DNS a part of your disaster recovery plan and prevent it from taking your services down – do it today.

Update: ITWorld has a nice article that explains several projects that have sprung up to make DNS resistant to censorship by a central entity.

Why DNSSEC May Not be a Good Thing

Recently, DNSSEC has been rolling out across major DNS zones, including .org and now the root zone. At first glance this sounds like a good thing: responses from signed zones can be cryptographically validated, and man-in-the-middle tampering with DNS answers becomes impossible.
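
You can see some of this machinery with ordinary tools. A minimal check (assuming the dig utility and a resolver that passes DNSSEC records through) is to request the DNSSEC records for a signed zone; the answer comes back with RRSIG signatures attached, and a validating resolver additionally sets the ‘ad’ (authenticated data) flag:

dig +dnssec org. SOA
# look for RRSIG records in the ANSWER section, and 'ad' in the flags line
# if the resolver you are querying actually validates the response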

However, there are commercial uses for “man-in-the-middle” operations; OpenDNS is one that comes immediately to mind. Indeed, OpenDNS is opposed to DNSSEC and has implemented DNSCurve instead.

The main problem (for this discussion) is that DNSSEC completely removes the possibility of a man-in-the-middle – that is, a resolver like OpenDNS can no longer return an IP address different from the one published in the signed, authoritative zone.

The OpenDNS article suggests that Akamai and the NTP Pool Project will both be affected as well. In these cases, when a name is presented to the DNS server, the server chooses a particular IP address based on parameters of its own (load, location, and so on) – so there is no fixed one-to-one mapping of DNS name to IP address to sign.
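
A quick way to watch this behavior (assuming the dig utility is available) is to query a name such as pool.ntp.org a few times; the set of addresses returned changes from one query to the next, because the authoritative servers choose the answers dynamically:

dig +short pool.ntp.org
dig +short pool.ntp.org   # run it again: the addresses usually differ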

The same reasoning suggests that dynamically generated round-robin answers for clusters would be harder to implement with DNSSEC active as well.

DNSSEC also interferes with split horizon DNS configurations, although there are ways to make it work.
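
For reference, split horizon is usually built with something like BIND’s views; a minimal sketch (the zone name, networks, and file paths are invented for illustration) looks like this, and under DNSSEC each view’s copy of the zone would need to be signed:

// named.conf fragment: the same zone answered differently inside and outside
view "internal" {
    match-clients { 10.0.0.0/8; localhost; };
    zone "example.com" { type master; file "db.example.com.internal"; };
};
view "external" {
    match-clients { any; };
    zone "example.com" { type master; file "db.example.com.external"; };
};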

It will be interesting to see what becomes of DNSSEC if commercial interests like OpenDNS and Akamai speak out against it.

Why Site Filtering By DNS Fails

Filtering by DNS seems a good idea when you first consider it. OpenDNS has a very nice setup for doing just this, and is often recommended as a business tool for content filtering.

The concept is simple: use a benign form of DNS “hijacking” against malicious sites – and other undesirable web sites (adult, gaming, sports, and so on). To use the DNS server this way, the client identifies itself to the DNS server (pairing its IP address to a server-side account), and the server then answers the client’s DNS requests with addresses chosen according to that account’s filtering policy.

For example, once the client has authenticated to the DNS server, it makes DNS requests as usual. When the server receives a request, it consults the filtering in place for the account and either returns the actual IP address or the address of a page stating that the requested site is blocked.
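
A rough way to see this in action (the domain below is only a placeholder, and the exact block-page address OpenDNS returns will vary) is to compare the answer from an ordinary resolver with the answer from OpenDNS for a site in a category your account blocks:

dig +short blockedsite.example @8.8.8.8          # ordinary resolver: the real address
dig +short blockedsite.example @208.67.222.222   # OpenDNS: the block-page address if the category is filtered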

Unfortunately, the weakness is not in the implementation at the DNS server; it is in actually getting every request to that server. One very big problem is that any intervening DNS cache will subvert the filtering: when the cache makes its own requests, the association with the account is broken, and the actual IP address ends up cached.

This means you cannot run a DNS cache on your local host to speed up your Internet access. The problem goes deeper than that, though: if your Internet provider uses a DNS cache – which it may well do without your ever knowing – then the DNS filtering breaks there too.
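
One quick way to see who is really answering your queries (standard dig output; the address shown depends entirely on your setup) is the SERVER line in a reply. If it shows a local or ISP cache rather than the filtering service, the filter never saw the request:

dig example.com | grep SERVER
# ;; SERVER: 127.0.0.1#53(127.0.0.1)   <- a cache answered, not the filtering resolver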

Another problem has to do with raw IP addresses. If the user can reach a site using its actual IP address, the DNS server is never consulted and the filtering again breaks down.

There is also the problem of proxies. A proxy receives the URL itself and makes the DNS request on its own, bypassing any DNS-based content filtering that may be in place on the client.

And then there is the Google cache. If a person selects the cached version of a page in Google’s results (rather than the direct link), the page can be viewed instead of blocked.

The only reasonable way to perform content filtering is with your own local proxy – such as Privoxy, or Squid with squidGuard. Even this will not stop the Google cache and perhaps other workarounds, but at least it is immune to most of the problems listed here. Privoxy works well as a personal proxy, and Squid is better suited to an enterprise implementation.
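
For the Squid route, the filtering itself amounts to a couple of ACL lines; a minimal sketch (the file path and list name are invented for illustration – squidGuard layers its category databases on top of this sort of thing) might be:

# /etc/squid/squid.conf fragment: refuse requests to listed domains
acl blocked_sites dstdomain "/etc/squid/blocked-domains.txt"
http_access deny blocked_sites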

Using a local proxy is more resource intensive (both in terms of processing power and administration required) but this may be necessary to keep reasonable order in the workplace.

OpenDNS and Proxies: Putting it All Together on Ubuntu Karmic

I’ve been running Ubuntu as my laptop operating system for quite some time (a year or more) and find it quite wonderful. Recently, however, I had a nasty time getting everything to work with OpenDNS.

The easy thing to do is change /etc/resolv.conf to contain the OpenDNS entries. However, this was complicated by my use of polipo (a web cache), pdnsd (a DNS cache), and resolvconf (a resolv.conf file manager) – not to mention ddclient, which updates the laptop’s dynamic IP with OpenDNS, and the GNOME NetworkManager.

To start at the beginning – the best thing to do is to install resolvconf by itself so that it loads and sets up first:

apt-get install resolvconf

Then you can install the rest:

apt-get install polipo pdnsd ddclient

Installing pdnsd will ask whether you want resolvconf to be used; say yes. Installing ddclient will ask for a protocol and server – specify the dyndns2 protocol and updates.opendns.com – but the ddclient configuration will be rewritten later anyway.

Edit /etc/pdnsd.conf and set the paranoid option to off:

paranoid = off;

This is required because OpenDNS does some things that pdnsd would reject with this setting enabled – redirecting blocked sites, in particular. Restart pdnsd after making the change.

Configure resolvconf next. By default, if resolvconf sees a 127.* nameserver entry it truncates the list there and drops every nameserver listed after it. Turn this behavior off by creating /etc/default/resolvconf containing:

TRUNCATE_NAMESERVER_LIST_AFTER_127=no

Then create /etc/resolvconf/run/interface/opendns:

nameserver 208.67.222.222
nameserver 208.67.220.220

Update the resolv.conf settings with:

sudo resolvconf -u

Setting up polipo is not too hard, just a little contrary: it does its own DNS resolution, so that it won’t block waiting for DNS replies. To configure it, you can either use the OpenDNS name servers directly or use pdnsd on the local machine: I recommend the latter, as it puts all of the benefits of the DNS cache to work for the web cache.

Change the /etc/polipo/config file to contain the following entry, and restart polipo:

dnsNameServer = 127.0.0.1

This directs polipo to send its DNS queries to the local pdnsd caching nameserver.

Then there is the GNOME NetworkManager: it should place its configuration in resolvconf’s storage at /etc/resolvconf/run/interface/NetworkManager, in the same format as the opendns file created earlier. Make sure the relevant interfaces don’t rewrite the DNS entries from DHCP information – though I’ve not tested this extensively (resolvconf may override the DHCP entries).
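
If DHCP does keep clobbering the nameserver entries, one common workaround (assuming the ISC dhclient shipped with Karmic; the path differs on other releases) is to pin the local cache in the DHCP client configuration:

# /etc/dhcp3/dhclient.conf -- always list the local pdnsd cache first
prepend domain-name-servers 127.0.0.1;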

If you are using a dynamically assigned IP – as one is on a laptop – you’ll need ddclient. A suitable configuration for OpenDNS is the following:

# /etc/ddclient.conf
ssl=yes

protocol=dyndns2
use=web, web=http://whatismyip.org
server=updates.opendns.com
login=your_login
password=your_password
NetworkName

The network name at the bottom should match the name you gave the network in OpenDNS; replace spaces in the network name with underscores in the configuration file.

Lastly, for a test: go to http://welcome.opendns.com – it will tell you whether you are using OpenDNS. Alternatively, reload this page: the OpenDNS banner at the right will let you know if you are using OpenDNS. It might be worthwhile to reboot the system once to get everything synchronized.
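
From the command line, a rough check of each layer (assuming the configuration above, and polipo listening on its default port of 8123) looks something like this:

dig example.com @127.0.0.1          # pdnsd should answer from the local cache
dig example.com @208.67.222.222     # OpenDNS should answer directly
http_proxy=http://127.0.0.1:8123 wget -qO- http://welcome.opendns.com/ >/dev/null && echo "proxy path works"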

UPDATE: fixed a bad filename (as pointed out in the comments) – thanks for pointing it out!

Google Enters Free DNS Fray

Now it seems that OpenDNS has some serious competition: Google announced its Google Public DNS service just days ago. OpenDNS founder Dave Ulevitch responded to Google’s announcement on his blog.

Several things stand out between OpenDNS and Google DNS:

  • Google DNS does not misuse NXDOMAIN responses. That is, when you try to resolve a name that does not exist, you get a proper “no such domain” answer; OpenDNS instead sends you to its search page (see the dig example after this list).
  • Google DNS supports IPv6.
  • Google DNS implements a wide array of security tools to mitigate attacks against DNS servers.
  • Google will (probably) not redirect valid DNS entries to its own servers.
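
The NXDOMAIN difference is easy to see with dig (the hostname below is made up and should not exist; the OpenDNS behavior shown is its default at the time of writing):

dig no-such-host.example.org @8.8.8.8          # status: NXDOMAIN
dig no-such-host.example.org @208.67.222.222   # status: NOERROR, answer points at the OpenDNS search page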

There has already been some speed testing showing that, at least in India, responses from Google DNS are much faster than those from OpenDNS.
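
Measuring this yourself is straightforward, since dig reports the elapsed time for each query (results will of course vary with your location and network):

dig example.com @8.8.8.8        | grep 'Query time'
dig example.com @208.67.222.222 | grep 'Query time'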

CNET had a nice write-up (in the Deep Tech blog by Stephen Shankland) on Google’s DNS offering and what it means.

It also appears that the privacy concerns that have cropped up around OpenDNS may not apply to Google’s Public DNS (ironically enough). Over at his Slight Paranoia blog, Christopher Soghoian wrote a piece about OpenDNS’s privacy policy – and received a response directly from Dave Ulevitch, the founder of OpenDNS.

Over at The Scream!, there is a forum posting that describes some of this in detail – including the redirection of google.com to google.navigation.opendns.com. The Wikipedia entry on OpenDNS also addresses some of these issues, none of which appear to exist in Google’s Public DNS.

The Domain Name System (DNS), Internationalization, and More

DNS has been in the news recently, most notably when ICANN held its 36th meeting in Seoul, South Korea and decided to allow internationalized country code top-level domains (ccTLDs). The Russians and the Chinese have been pressing ICANN to do this for some time – and without much real resistance from ICANN. The CircleID blog has a nice recap of the meeting.

The biggest hurdle was technological: over the last several years, ICANN and the DNS powers-that-be have worked diligently on a way to support Unicode domain names, and the approved method is Internationalizing Domain Names in Applications (IDNA).

The biggest remaining problem – which unfortunately hits the Russians and other users of the Cyrillic alphabet hardest – is that some of the new domains will look like existing Roman-alphabet domains. The most prominent example is the Cyrillic counterpart to the current .ru domain: Cyrillic .ру looks just like the Latin .py, the ccTLD of the Republic of Paraguay. The computer has no trouble telling them apart – the underlying characters are different – but a human user could easily confuse the two, opening a new angle for phishing attacks.
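
Under IDNA the Unicode name never travels over the wire as Unicode; it is converted to an ASCII-compatible “xn--” (Punycode) form first. For example, the Cyrillic ccTLD .рф for the Russian Federation is carried as xn--p1ai, so once such a zone is delegated it can be queried with ordinary tools:

dig NS xn--p1ai. +short     # the .рф top-level domain in its ASCII-compatible form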

The new internationalized domains may make a difference to you if your company is international – especially if it is located in another country. Countries such as France, Canada, and Mexico won’t be affected, but many others will be – Japan, China, and much of the Middle East come to mind, with Japanese, Chinese, Arabic, and Hebrew domains on the way.

Getting a new internationalized domain will mean making sure that all of your programs can handle it – mail clients, mail servers, local DNS servers, and more. Unless a complete conversion is mandated, this can be done alongside the current working DNS service. Make sure you brainstorm and work with as many affected people as possible to make the new domain work; this becomes especially critical during a total conversion.

On the heels of the wrap-up of the meeting in Seoul is Paul Vixie’s article in ACM Queue entitled What DNS Is Not. He argues that DNS is not a policy-making protocol, but rather an expression of facts (the mapping of names to addresses).