How to Lose Your Life to the Law

Recently, the blog Gizmodo received a pre-release version of a new iPhone, and examined and wrote about it. Apple requested the phone's return, and a flurry of legal actions by the government followed, including a search and seizure.

What no one has written about is how this must have brought the reporter's life to a complete standstill, with the loss of practically everything he uses and everything he knows. Consider what was taken from Gizmodo reporter Jason Chen:

  • A Samsung digital camera. How many photos were on it? Family photos? Friends, events, etc.?
  • Three Apple laptops and an IBM Thinkpad. How many articles were on them? How many emails? How many documents that Jason was working on?
  • An HP Mediasmart server. How many songs?
  • An external hard drive and several USB thumb drives. How much data was on these drives? Finances? Sources? Insurance records? Health details?
  • An Apple iPad. How much was this being used? Did it contain important parts of Jason’s personal life?
  • An Apple iPhone. This would have had an address book, phone numbers called and received, and more.

In short, the officers of the law seized Jason’s entire digital life – for a sort of extended search in absentia.

No word on whether online services were served with warrants. Not only is the search warrant executed on Jason Chen sealed by the court, but the request to seal is itself sealed. Thus we don't know what they were looking for, nor why it is supposed to be secret.

So what's the answer? Short of a change to the laws of the country, the only answer is personally hiding and squirreling away your data. About the only thing to do in this day and age is to put your data onto a server in a country with excellent privacy laws, like Switzerland – the way Neomailbox has done with email. If this concerns you, you should check out the Surveillance Self-Defense site sponsored by the Electronic Frontier Foundation.

Drunk or Tired? No Difference!

As system administrators, we know the occasional all-nighter is necessary to fix things that go wrong. We also know that the majority of outages can be traced to human error – including those that affect server stability.

A recent article equates a lack of sleep to being drunk – specifically, a 24-hour wake cycle is equivalent to a blood alcohol level of 0.1. This is not good – not at all.

If we consider this fact, and expect that mistakes are very likely after 24 hours of being awake, then we must act. What can we do as system administrators when we work an all-nighter?

  • Take a nap. A small nap – from 15 minutes to an hour – can revitalize you and help you to minimize errors.
  • Have a coworker check your work – or work with you. A second pair of eyes makes a finer net, with each of you catching the other's mistakes.
  • Take an extended nap or a full sleep. For example, take off the day of an extended maintenance window – or go to sleep at the end of the day and wake up hours later for maintenance (if it is scheduled for late at night).

In short, it is primarily about getting enough sleep. Finding ways to minimize errors or catch errors is good, but never as good as getting enough sleep in the first place.

Oracle Continues to Withdraw Sun Support Access

A couple of days ago, Techbert noted that Sun firmware downloads were no longer available from Oracle. This is just one more way that Oracle has been withdrawing from Sun’s traditional open stance.

Oracle already has stated it would not be putting all new technologies into OpenSolaris, and that it would provide support for all Sun servers in the (customer’s) data center or none at all.

The entire character of Sun’s offerings has changed, and for the customer, not for the better.

HP ITRC to Enter Read-Only for Three Days

HP announced that the HP ITRC is to undergo maintenance late in May, during which time the ITRC will be read-only.

Maintenance will start on May 19 at 6:30 am GMT, and end on May 22 at 3:00 pm GMT. During the time that ITRC is read-only, no new forum messages can be posted, and no changes to user profiles, favorites, or notifications will be possible.

Everyone who uses HP support should be using the HP ITRC as much as possible; I've found the HP-UX and OpenVMS support to be fantastic. The readers and responders in the forums bring a great deal of expertise.

Is the Battle for Desktop Security Lost?

Jeremiah Grossman, CTO of WhiteHat Security, wrote in his blog that he thought it just might be. He reports that at a conference he recently attended (FS-ISAC), there were discussions of how a financial institution must assume that its clients (or customers) are infected or otherwise compromised. This was seen as an admission that the war over desktop security has been lost to the bad guys.

I beg to differ. The war can only be won if everyone accepts that the other fellow may be compromised; this is part of normal security measures. The mere fact that we must assume the client is compromised is not an indication that the war is lost.

Gunter Ollmann, a security analyst formerly with IBM, wrote a paper on this very topic that is enlightening.

It is entirely possible that financial institutions must take this to the next level, considering that users' passwords can be scanned, SSL transmissions intercepted or decoded, fake sites created, and more. For a financial institution, the stakes are much higher.

One of the weaknesses in many security installations is, in fact, the lack of suspicion towards partners and clients. Partner networks are assumed to be as secure as the hosting network – maybe they are, and maybe they are not.

However, I think that the battle is not going well – and I think it won’t get better until users are educated and Windows is more secure (even if users don’t like it). The battle for security must be engaged at all levels if we are to beat back the bad guys.

So what must we do to secure the desktop? For users it can be easy: use virus checkers, use firewalls, don't open suspicious emails, and perhaps use non-mainstream operating systems such as Ubuntu, PC-BSD, OpenSUSE, Mac OS X, or others.

For administrators, securing the desktop is harder, especially for users who might connect from outside. Some things one could do are:

  • Give the user a client SSL certificate to connect with. This prevents users from giving out passwords, deliberately or accidentally, and should also keep users from falling for fake sites.
  • Make the user (if at all possible) run a virus-checker routinely, and virus-check everything you receive from them (such as email and documents).
  • If a virus is found, trace it and quarantine the user until they have undergone some sort of security audit (even if it is a quick audit).
  • Educate the users about phishing attempts and other things, and encourage them to “get the word out” among their friends and so forth.
  • In extreme (or unusual) cases, encourage the use of non-Windows and non-Intel environments: these environments have fewer viruses, and those viruses that do arrive will not run if the environment is not the one they expect.
  • Similarly, use a different browser: in the recent browser security contest at CanSecWest, only Google Chrome remained alive at the end of the contest. In contrast, in the recent attack on Google, Internet Explorer 6 was used to compromise the entire company.
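
The first measure above – requiring client SSL certificates – can be sketched in web server configuration. As an illustration only: the server name, file paths, and the choice of nginx as the server are all placeholder assumptions, not anything prescribed here.

```nginx
server {
    listen 443 ssl;
    server_name intranet.example.com;   # placeholder hostname

    # The server's own certificate, as usual (placeholder paths)
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # CA that issued the client certificates handed out to users
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;

    # Reject any connection that does not present a valid client certificate
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;   # internal application
    }
}
```

With `ssl_verify_client on`, a stolen or phished password alone is useless: the connection is refused before any login page is even reached unless the browser presents a certificate signed by your CA.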

Many things must be done if we are to win the war against botnets and other such nefarious ne’er-do-wells. To arms!

Life and Work with an Ubuntu Linux Laptop

I've been using an Ubuntu Linux desktop for over a year, and haven't regretted it. The experience is beautiful and cost-effective.

I've learned to use Linux for everything. I've used it to maintain HP-UX systems via screen and ssh, and to write articles here using browsers like Firefox and Google Chrome or editors like BloGTK.

One of the nice things about Linux (in contrast to OpenSolaris, for instance) is that a Linux installation has everything it needs on disk to work in any system. Moving the hard drive from one system to another causes no difficulty, as the appropriate drivers are loaded as needed. This allows things like moving a virtual environment to a physical one without problems.

Nothing is perfect, however. With Ubuntu, it seems that the six-month turnaround leads the developers to push fixes off until the next release. I am also finding many bugs in new releases of Ubuntu; the latest, Lucid Lynx, showed about a half-dozen bugs within the first day's operation. Recently an article (and a follow-up) expounded on the poor bug-fixing process in Ubuntu, with the author, Caitlyn Martin, blaming the six-month cycle – and pointing out that several Ubuntu-based distributions were switching to a Debian base instead. She wasn't the first; Christopher Smart bemoaned Ubuntu's stability in an article back in November 2009.

I've not had a lot of problems, but some. One was that my USB 2.0 device refused to work with Karmic. The suggested solution was to stop using EHCI (yow!), but that could not be done, because the Karmic kernel has EHCI built in, and it could not be disabled. Another was bzflag, which died with a segmentation fault in Karmic and still refuses to run in Lucid.

For my part, I would not only make the same recommendations as Caitlyn Martin, but would add one more: test on as many different kinds of machines as possible, specifically including older and unusual machines. With better testing, most of the hardware-related problems could be eliminated. This includes testing with add-on hardware as well.

I'm probably going to reacquaint myself with OpenSUSE – though Debian does have the best Lisp support on the planet… but then, trying different distributions is part of the fun.

Living with Linux is definitely possible; working with Linux is only slightly harder – but it can be done, and is worth it.