HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line).  The event was designed to introduce several of us from the blog world to the HP Superdome.  The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin.  It was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which run on the Superdome). Andy McCaskey over at SDR News also had a nice writeup of his experiences.

It is possible I was the most familiar of the group with the architecture and its capabilities, even though I had never seen or worked with a Superdome before: the capabilities of the Superdome come largely from the fact that it is cell-based.  The rp7420 cluster which I have maintained over the last several years uses the same technology, though cells from the rp7420 are not compatible with the Superdome.  The software is the same: parstatus and the other nPartition tools, for example.  The System Management Homepage (SMH) was also demonstrated, although it was almost presented as a benefit of the Superdome (it actually ships with every HP-UX release since 11i v2, and is an option for OpenVMS 8.x).
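(As a rough illustration of what "the software is the same" means in practice, here is the kind of check I run on the rp7420; the flags below are from memory of the 11i v2-era nPartition tools, so treat this as a sketch and consult parstatus(1) for the exact options.)

    # Show which nPartition this HP-UX instance is running in
    parstatus -w

    # List the cells in the complex, their assigned partitions, and their status
    parstatus -C

The same commands apply unchanged whether the complex is a two-cell rp7420 or a fully populated Superdome; only the number of cells reported differs.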

There was a lot of talk about “scaling up” (using a single larger, more powerful system) rather than “scaling out” (using a massive cluster of many machines).  The Superdome is a perfect example of scaling up, and possibly one of the best.  I was impressed by the capabilities on display.  There was also a lot of comparison with the IBM zSeries, the epitome of the current crop of mainframes, and the presenters made a very strong case for using the Superdome over the zSeries.

They did seem to focus on running Linux in an LPAR, however, which limits the zSeries to 60 Linux installations.  Using z/VM as a hypervisor, one can run many more Linux systems.  I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another: when the testers reached into the tens of thousands (30,000?), the network failed or was overloaded, but the zSeries system itself was still going strong.  Trouble is, I’m not able to back this up with a source at the moment; I’m sure it appeared in a print Linux journal, and it may have been called “Project Charlie.”  Can anyone help?

The usability features of the Superdome were on full display.  For example, the power supplies are designed so that they cannot be inserted upside-down.  Another example: each Superdome cell comes in two parts, the “system” portion (CPU, memory, and the glue logic) and the power supply.  This split makes a cell easier to remove in a typical datacenter row and keeps each piece lighter, which makes handling easier.  There are innumerable touches like this that the designers took into account during the design phase.  The engineering on these systems is amazing; usability was considered from the start.  In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the hardware tour: the system they showed us was a prototype of the original HP Superdome, which first shipped in 2000.  That chassis was still running and had been upgraded with modern hardware; these systems are not designed for a 3-4 year lifecycle, but for a much longer, extended one.

There are many more features of the system that I’ll cover over the next few days; the event was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,000 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…


IBM zSeries and OpenSolaris

In the past, the zSeries has been known for its virtualization of Linux servers (over 10,000 instances possible!). Now someone is showing OpenSolaris running on the zSeries at the Gartner Data Center Conference in Las Vegas. There is an excellent write-up from the Mainframe blog, and another post from Marc Hamilton.

There is also an excellent open source IBM zSeries emulator called Hercules, which can run Linux, MVS, DOS/VSE, and others – and, soon, OpenSolaris.

There is also a very interesting talk given at LinuxWorld by Guru Vasudeva of Nationwide about how his company implemented virtual Linux machines on the IBM zSeries with z/VM.

When I first saw this (on IBM’s web site) I thought it was laugh-out-loud funny – and it still is. It also hints at the power of the IBM zSeries, and shows one of the reasons I’m excited about these machines.

I just love the deadpan Joe Friday responses from the cops, and some of the quotes:

  • “Could it be an inside job? We’re inside, right?”
  • “I need my pills!”
  • “What’s a server?”