Is the Battle for Desktop Security Lost?

Jeremiah Grossman, CTO of Whitehat Security, wrote in his blog that he thought it just might be. He reports that at a conference he recently attended (FS-ISAC), there were discussions of how a financial institution must assume that its clients (or customers) are infected or otherwise compromised. This was seen as an admission that the war over desktop security had been lost to the bad guys.

I beg to differ. The war can only be won if everyone accepts that the other fellow may be compromised; this is part of normal security measures. The mere fact that we must assume the client is compromised is not an indication that the war is lost.

Gunter Ollmann, a security analyst formerly with IBM, wrote an enlightening paper on this very topic.

It is entirely possible that financial institutions must raise this to the next level, considering that users' passwords can be scanned, SSL transmissions intercepted or decoded, fake sites created, and more. For a financial institution, the stakes are much higher.

One of the weaknesses in many security installations is, in fact, the lack of suspicion towards partners and clients. Partner networks are assumed to be as secure as the hosting network – maybe they are, and maybe they are not.

However, I think that the battle is not going well – and I think it won’t get better until users are educated and Windows is more secure (even if users don’t like it). The battle for security must be engaged at all levels if we are to beat back the bad guys.

So what must we do to secure the desktop? For users it can be easy: use virus checkers, use firewalls, don't open suspicious emails, and perhaps use non-mainstream operating systems such as Ubuntu, PCBSD, OpenSUSE, Mac OS X, or others.

For administrators, securing the desktop is harder, especially for users who might connect from outside. Some things one could do:

  • Give the user a client SSL certificate to connect with. This will prevent users from giving out passwords, deliberately or accidentally, and should also prevent users from falling for fake sites (a minimal sketch of this follows the list).
  • Make the user (if at all possible) run a virus-checker routinely, and virus-check everything you receive from them (such as email and documents).
  • If a virus is found, trace it and quarantine the user until they have undergone some sort of security audit (even if it is a quick audit).
  • Educate the users about phishing attempts and other things, and encourage them to “get the word out” among their friends and so forth.
  • In extreme (or unusual) cases, encourage the use of non-Windows and non-Intel environments: these environments have fewer viruses, and those viruses that do arrive will not run if the environment is not the one they expect.
  • Similarly, use a different browser: in the recent browser security contest at CanSecWest, only Google Chrome remained standing at the end. In contrast, in the recent attack on Google, Internet Explorer 6 was the entry point used to compromise the company.
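To make the first item concrete, here is a minimal sketch of a server that refuses connections from clients lacking a valid certificate, using Python's standard ssl module. The file names (server.pem, server.key, clients-ca.pem) and the port are placeholders for whatever your own certificate authority and service actually use:

    import socket
    import ssl

    # Server-side TLS context that REQUIRES a client certificate.
    # All file names below are hypothetical placeholders.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.load_verify_locations(cafile="clients-ca.pem")  # CA that signed the client certs
    context.verify_mode = ssl.CERT_REQUIRED                 # no certificate, no connection

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()  # handshake fails without a valid client cert
            print("Client certificate:", conn.getpeercert())
            conn.close()

A password can be phished or keylogged; a private key sitting in the browser's certificate store cannot be typed into a fake site, which is the point of that first item.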

Many things must be done if we are to win the war against botnets and other such nefarious ne’er-do-wells. To arms!

Setting User Expectations (or the Scotty Factor)

When a user is affected by something we do, it is easy to miss the mark and let the user have unrealistic expectations without them or us realizing it. When this happens, the expectations are inevitably dashed and the user (for better or worse) blames the administrator (and not entirely without cause).

A good example is response time. When a new server is brought in, and the user application is moved from the old server to the new, it is easy for a user to think that all their response time troubles will be solved, things will be extremely fast, and there will be no waiting.

It is up to us as administrators (and customer service personnel!) to educate the user that some problems may be solved, but the exact response time won't be known until the application runs on the new hardware. We know better than anybody how a slow disk can hobble a fast server, how an application bug can surface that was hidden or nonexistent on the old hardware, or how the new hardware may not be completely supported by the application – any number of things can affect response time.

Another example is server downtime. When downtime is needed, it is not enough to post it to the central web page that details all of the outages. You know that not every user will read it – maybe not even most. Let the users know that downtime is to be expected, and tell them how many hours it will take (take your best estimate, then triple it).

This brings us to the Scotty Factor. The Wikipedia article on Montgomery Scott explains it fully:

The term ‘Scotty factor’ describes the practice of over-estimating how much time a project will require to complete by multiplying the actual estimate by a particular number. In strict terms it is a factor of four: the number cited by Scotty in the film Star Trek III: The Search for Spock.
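Reduced to code, the rule is a single multiplication; this trivial sketch (the function name and defaults are mine, not any standard) contrasts the quote's factor of four with the "triple it" rule above:

    def scotty_estimate(honest_hours: float, factor: float = 4) -> float:
        """Pad an honest time estimate before quoting it to users."""
        return honest_hours * factor

    # An honest two-hour outage is announced as eight hours (Scotty's factor of four)...
    print(scotty_estimate(2))      # 8
    # ...or as six hours under the "triple it" rule above.
    print(scotty_estimate(2, 3))   # 6

Either way, finishing early makes you a miracle worker; finishing late makes you a liar.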

So the next time you see your customers (users) building expectations, make sure they build the right ones.