Handling an Internet Bandwidth Hog

I noticed that our company Internet connection was very slow – and it wasn’t long before one of the higher-ups also noticed and asked me about it.

I went to SpeedTest.net and ran a test – the speeds measured were a fraction of what we should have been getting.
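
If you want to keep an eye on the link from a script rather than a browser, a few lines of Python can run the same kind of test. This is only a rough sketch, assuming the third-party speedtest-cli package (its module is named speedtest):

    # Rough sketch: measure ping/download/upload from a script.
    # Assumes the third-party "speedtest-cli" package (module name: speedtest).
    import speedtest

    st = speedtest.Speedtest()
    st.get_best_server()          # pick a nearby test server
    st.download()                 # run the download test
    st.upload()                   # run the upload test

    results = st.results.dict()   # speeds are reported in bits per second
    print(f"ping: {results['ping']:.0f} ms")
    print(f"down: {results['download'] / 1e6:.1f} Mbit/s")
    print(f"up:   {results['upload'] / 1e6:.1f} Mbit/s")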

So I went to our pfSense firewall and looked at the traffic graphs (in the Status menu). Sure enough, outbound traffic was maxed out. I noticed that one particular host was responsible for virtually all traffic across the firewall.

This meant that not only was everyone’s Internet traffic being slowed down, but so was any traffic bound for the remote data center.

I added a rule to block the host temporarily and then reset all of their connections using the States tab (under the Diagnostics menu).
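
If you don’t have pfSense’s graphs handy, a quick packet capture from a machine that can see the traffic (a mirror port, for example) will confirm the top talker. Here is a rough sketch using the third-party scapy library; the interface name and the capture window are assumptions:

    # Rough sketch: count bytes per source IP for 30 seconds to find the top talker.
    # Assumes the third-party "scapy" package and a capture point that sees the traffic.
    from collections import Counter
    from scapy.all import sniff, IP

    bytes_by_source = Counter()

    def tally(pkt):
        if IP in pkt:
            bytes_by_source[pkt[IP].src] += len(pkt)

    sniff(iface="em0", prn=tally, store=False, timeout=30)  # "em0" is an assumption

    for src, total in bytes_by_source.most_common(5):
        print(f"{src}: {total / 1e6:.1f} MB in 30 seconds")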

Eventually the user came by and we straightened everything out. I asked what they had been doing, and it turned out to be a massive download they had started. Handling and educating the user is just as important as bringing the Internet connection back to normal.

Mysterious IPs on the Network?

If you see mysterious IPs on the network – and they don’t actually seem to be doing anything like SMTP or DNS or SSH – there’s an angle you may not have considered.

Do a Google search for the address and the word “default” and see if anything uses that IP address as a default setting. Many routers and other devices ship with a default IP address, and you may have one or more of these devices on your network that has not yet been configured.

In my case, the culprit was the IP 192.168.0.120 – which turned out to be a Dell iDRAC that had not yet been configured.
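
One quick way to narrow down what an unknown device is: find its MAC address and look up the vendor from the OUI (the first three octets); a Dell OUI, for instance, points toward something like an iDRAC. Here is a small sketch using the third-party scapy library; the target IP is the one from my case and is only an example:

    # Small sketch: ARP for an unknown IP and print the MAC address that answers.
    # Look up the OUI (first three octets of the MAC) to identify the vendor.
    # Assumes the third-party "scapy" package and that you are on the same
    # broadcast domain as the target.
    from scapy.all import ARP, Ether, srp

    target = "192.168.0.120"   # the mysterious IP

    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target),
        timeout=2,
        verbose=False,
    )

    for _, reply in answered:
        print(f"{target} is at {reply[Ether].src}")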

Naming Those Servers

When it comes time to name a server, there are a few places to go for ideas – and a few things to remember.

One of my favorites is to use Google Sets. If you give Google several entries, it will come back with more that match your list. This can be handy when you are trying to come up with a new name that fits an existing list.

An unsung champion in this area is the site NamingSchemes.com. Once you’ve found this site, you’ll want to bookmark it – there’s nothing like it out there. If some scheme is missing, you can add it, since it is an open wiki.

Lastly, there is a question over at ServerFault which has some fabulous naming schemes that people have used (my all-time favorite has to be the one that used names from the RFB list!). Another fantastic list from that question uses the players’ names from the famous Abbott and Costello skit Who’s on First?

There is also an IETF RFC on naming your computer (RFC 1178) which has good tips.

In my background, I’ve used (or seen used – or heard of) schemes based on the Horsemen of the Apocalypse (“war”, “famine”, etc.), colors, headache relief (“aspirin”, “acetaminophen”, etc.), “test” and its synonyms (“test”, “quiz”, “exam” – for testing servers!), cities, and fruits. The cities environment was fun – the hardware had two partitions, so the partitions were named after cities that were near each other. The headache relief scheme was fun until the pharmaceutical company found that some of the names were competing drugs…

Think about your names – and have some fun, too!

Why DNSSEC May Not be a Good Thing

Recently, DNSSEC has been rolling out across major DNS servers, including those that serve the .org zone and now the root zone. At first glance this sounds like a good thing: all responses from DNS servers are validated, and man-in-the-middle attacks become impossible.

However, there are commercial uses for “man-in-the-middle” operations; OpenDNS is one that comes immediately to mind. Indeed, OpenDNS is opposed to DNSSEC and has implemented DNSCurve instead.

The main problem (for this discussion) is that DNSSEC completely removes the possibility of a man-in-the-middle – that is, it becomes impossible for a DNS server like OpenDNS to return a different IP address than the one actually published for a machine.
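
You can see what DNSSEC adds to a response by asking for the DNSSEC records explicitly. Here is a small sketch assuming the third-party dnspython library; the resolver address (8.8.8.8) and the zone queried are only examples:

    # Small sketch: query a signed zone with the DNSSEC-OK bit set, then look for
    # RRSIG records and the AD (authenticated data) flag in the response.
    # Assumes the third-party "dnspython" package; resolver and name are examples.
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    query = dns.message.make_query("org", dns.rdatatype.SOA, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    signed = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
    validated = bool(response.flags & dns.flags.AD)

    print(f"RRSIG records present: {signed}")
    print(f"Resolver set the AD (validated) flag: {validated}")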

The OpenDNS article also suggests that Akamai and the NTP Pool Project will both be affected by this. In these cases, the problem is that when a name is presented to the DNS server, the server chooses a particular IP address based on parameters of its own choosing – so there is no fixed one-to-one mapping of DNS name to IP address.

This also suggests that DNS round robin for clusters would be impossible to implement with DNSSEC active.
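
You can watch this per-query behavior by resolving one of these names a few times and comparing the answers. A small sketch assuming the third-party dnspython library (2.x); pool.ntp.org is used only as an example of a name whose answers rotate:

    # Small sketch: resolve the same name several times and print the A records.
    # Names served by the NTP Pool Project (or a round-robin cluster) typically
    # return a different, rotating set of addresses on successive queries.
    # Note: a caching resolver may hand back the same set until the TTL expires.
    # Assumes the third-party "dnspython" package (2.x).
    import dns.resolver

    for attempt in range(3):
        answer = dns.resolver.resolve("pool.ntp.org", "A")
        addresses = sorted(rr.address for rr in answer)
        print(f"query {attempt + 1}: {addresses}")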

DNSSEC also interferes with split horizon DNS configurations, although there are ways to make it work.

It will be interesting to see what becomes of DNSSEC if commercial interests like OpenDNS and Akamai speak out against it.

5 Reasons for Admins to Know TCP/IP

As a system administrator, one can be forgiven for thinking that knowing the details of TCP/IP is unnecessary. However, knowledge of TCP/IP will be indispensable at times.

Knowing the TCP/IP protocols will assist you in debugging network problems in your systems.

  1. Server connection failures. When a server connection fails, knowing the details of the TCP/IP protocols will assist you in figuring out why. Is the connection attempted at all? Does the TCP connection fail, or is the connection made only to be denied or dropped? (See the sketch after this list.)
  2. Routing. Is network connectivity down? Knowing the details of IP routing can assist you in figuring out why.
  3. Physical connectivity. Is there activity on the wire? Is the link up? Are you using an old 10Base-2 network? If so, can you debug connectivity problems with it? Is your duplex set correctly on your 10Base-T networks?
  4. Internet connectivity. Is your firewall working correctly? Can you make connections to disallowed sites? Are there holes in the configuration? Are your Internet accessible sites really accessible from the Internet?
  5. Testing network services. Is that DHCP server serving correctly? Is the NFS server actually using TCP throughout? Is the load balancing working properly?
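
For the first item in particular, even the Python standard library is enough to tell the common failure modes apart. This is only a minimal sketch; the target host and port are assumptions:

    # Minimal sketch: distinguish the common TCP connection failure modes.
    # The target host and port below are assumptions -- substitute your own.
    import socket

    def probe(host, port, timeout=5.0):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                print("TCP handshake completed")
                s.settimeout(timeout)
                try:
                    data = s.recv(1024)
                    if data == b"":
                        print("...but the server closed the connection immediately")
                    else:
                        print(f"...and the server sent {len(data)} bytes")
                except socket.timeout:
                    print("...connection open; the server is waiting for us to speak first")
        except ConnectionRefusedError:
            print("connection refused: the host answered, but nothing accepted the port")
        except socket.timeout:
            print("connection timed out: a firewall, a bad route, or a host that is down")
        except OSError as exc:
            print(f"lower-level failure: {exc}")

    probe("www.example.com", 80)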

Even if you have a dedicated networking team, knowing the TCP protocols will help you to tell them what is wrong and exactly what is happening – and might just let you resolve it yourself.

Learning the network protocols is not difficult. Start by downloading the network utilities tcpdump and Wireshark. These utilities will let you see what is actually happening on the network – real live traffic you can analyze.

Before you start analyzing real traffic, make sure that you are allowed to. Sniffing network traffic can violate corporate security rules; make absolutely sure you have authorization.
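
Once you have that authorization, even a short script can make a capture easier to digest. Here is a rough sketch using the third-party scapy library to summarize a capture file written by tcpdump; the file name is an assumption:

    # Rough sketch: summarize a capture made with "tcpdump -w sample.pcap" by
    # counting packets per TCP conversation. Assumes the third-party "scapy"
    # package; the capture file name is an assumption.
    from collections import Counter
    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap("sample.pcap")
    conversations = Counter()

    for pkt in packets:
        if IP in pkt and TCP in pkt:
            key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
            conversations[key] += 1

    for (src, sport, dst, dport), count in conversations.most_common(10):
        print(f"{src}:{sport} -> {dst}:{dport}  {count} packets")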

Secondly, get a general book on the TCP/IP protocols; you can learn individual protocols in depth later. The TCP/IP Guide from No Starch Press is one such book. Another good choice would be a book about Ethernet – Ethernet: The Definitive Guide from O’Reilly, for example.

Of course, if you aren’t using TCP/IP (as in an OpenVMS cluster, for instance) – then you need a different book…

BGP Still Contains 20-Year Old Insecurities

According to an article from the AP, BGP (Border Gateway Protocol) still contains weaknesses that could result in widespread loss of Internet connectivity.

The article spends an inordinate amount of time explaining that this has already happened in certain areas for other reasons, and only discusses BGP briefly (relative to the rest of the article).

It is, however, a real problem. Current protocol changes (to improve security) span a wide range of protocols: DNSSEC, SNMPv3, SMTP (with message submission and encryption), POP3S, IMAPS, and others. Even IPv6 involves changes to increase security.

It is unfortunate that the routing backbone of the Internet is still suffering from reliability problems after all these years – even after the president said that fixing it was a top priority.

The problems go beyond security to stability and scalability as well. Is it time for a replacement or a redesign of BGP?

Court: FCC has no jurisdiction over the Internet

This court decision by the United States Court of Appeals for the D.C. Circuit was not entirely unexpected, but it does not bode well for net neutrality. The case is Comcast v. FCC. Comcast put out a press release praising the decision and stating its commitment to an “open Internet”. A party to the case, freepress, put out their own release. One notable quote from freepress is the following:

[Because of the decision, t]he FCC has virtually no power to make policies to bring broadband to rural America, to promote competition, to protect consumer privacy or truth in billing.

Net neutrality is the idea that all network traffic should be treated equally, without regard to content or source. What got Comcast in trouble with the FCC was interfering with peer-to-peer traffic such as BitTorrent.

Internet and legal blogs and press were all abuzz with talk of the decision. Those who reacted included the Electronic Frontier Foundation, the Wall Street Journal (including WSJ blogs like Digits), Larry Downes (with the Stanford Center for Internet and Society), the New York Times, the Center for Democracy and Technology, Above the Law, the ACLU, and endless others.

If the Federal Communications Commission (FCC) cannot sanction a company (Comcast, in this case) for the way it throttled Internet access for its customers, then access to selected sites can be denied or slowed down upon an arbitrary decision by the company. Sites like Google could be charged different prices by their ISPs than other sites, web sites could be blocked, users charged different prices depending on their usage – how much and what kind – and more.

Imagine if your phone company could charge you more for making calls to businesses – or certain businesses. Imagine if the phone company decided that you couldn’t call certain companies. Imagine that your phone company decided you couldn’t order a pizza.

As this decision stands, it sounds like the FCC no longer has any authority to regulate the Internet at all – which leaves us at the mercy of the big ISPs. I hope this gets corrected by the US Supreme Court or the US Congress, and soon.

Current Ethernet Not Enough?

At the recent Ethernet Technology Summit, there was grousing about the need for more power-efficient switches and for more manageable switches, but most of all for faster Ethernet.

Facebook, for one, spoke of having 40 Gbit/s coming out of each rack in the data center, and related how its 10Gb Ethernet fabric is not enough and won’t scale. There are new standards in the works (100Gb Ethernet and Terabit Ethernet), but they are not yet finalized. Analysts suggest that there is pent-up demand for 100Gb Ethernet, and the conference bore that out.

Supposedly, there is supp

Direct NFS in Solaris with Oracle 11g: Benchmarks

Over at Glenn Fawcett’s Oracle blog, there is a write-up (now a few years old) about the speed of Oracle’s Direct NFS as compared to the traditional NFS client. Glenn wrote about how to set this up initially, then followed up with a report on how to monitor the environment, as well as the results of testing the environment.

Glenn worked with Kevin Closson, one of the minds behind Oracle’s Direct NFS. Kevin wrote about the collaboration and about some of the misunderstandings surrounding dNFS with Solaris and Sun storage.

Oracle has a nice whitepaper on this topic, going into detail as well.

There is also an older posting by the Oracle Storage Guy describing Direct NFS in detail, particularly in regard to using dNFS with EMC storage.

Speeding up the Web: a new protocol

Google has revealed a new protocol – SPDY – that has been part of a research project to speed up the HTTP protocol that underlies the web. The speed increase is amazing – and sorely needed.

There is already a development version of Google’s Chrome browser available that supports SPDY; the branch is code-named Flip.

This new protocol requires a modified web server; one will be forthcoming from Google in the future. This is an exciting development that bears watching.