HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line).  The event was designed to introduce several of us in the blog world to the HP Superdome.  The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin.  It was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on the Superdome). Andy McCaskey also had a nice writeup on his experiences over at SDR News.

It is possible I was the attendee most familiar with the architecture and its capabilities, even though I had not seen or worked with a Superdome before: the capabilities of the Superdome stem largely from the fact that it is cell-based.  The rp7420 cluster which I have maintained over the last several years uses the same technology, though cells from the rp7420 are not compatible with the Superdome.  The software is the same: parstatus and the rest of the nPartition commands, and so on.  The System Management Homepage (SMH) was also shown, although it was presented almost as if it were a benefit unique to the Superdome (it’s actually in every HP-UX release since 11i v2, and is an option for OpenVMS 8.x).
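
For anyone who hasn’t touched a cell-based system, the point is that the nPartition commands work the same way on an rp7420 and on a Superdome. A rough sketch of what a look at the complex might involve is below (output omitted, and the exact options can vary a bit by HP-UX release, so treat this as illustrative rather than gospel):

    # Show the cells in the complex and which partition each is assigned to
    parstatus -C

    # Show the nPartitions themselves
    parstatus -P

    # Verbose detail on a single partition (partition 0 here is just an example)
    parstatus -V -p 0

Once you know the cell-based tools on the smaller boxes, the Superdome is simply a (much) bigger complex.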

There was a lot of talk about “scaling up” (that is, using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines).  The Superdome is a perfect example of scaling up, and possibly one of the best examples.  I was impressed by the capabilities I saw.  There was also a lot of comparison with the IBM zSeries, the epitome of the current crop of mainframes, and the presenters made a very strong case for using the Superdome over the zSeries.

They did seem to focus on running Linux in an LPAR, however, which creates a limit of 60 Linux installations.  Using z/VM as a hypervisor, one can run many more Linux systems.  I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another – when the testers got into the tens of thousands (30,000?), the network failed or was overloaded, while the zSeries system was still going strong.  Trouble is, I’m not able to back this up with a source at the moment: I’m sure it was covered in a print (Linux) journal – it may have been called “Project Charlie.”  Can anyone help?

The usability features of the Superdome were on prime display: for example, the power supplies are designed so that they cannot be inserted upside-down.  Another example: each Superdome cell comes in two parts, the “system” board (CPUs, memory, and chip glue) and the power supply.  This makes each part lighter and much easier to remove in a typical datacenter row, which makes life easier for the people handling them. There are innumerable items like this that the designers took into account during the design phase.  The engineering on these systems is amazing; usability was thought of from the start.  In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour, the system they showed us was a prototype of the original HP Superdome, which first shipped in 2000.  That system was still going, and was running modern hardware: these systems are not designed for a 3-4 year lifecycle, but for a much longer, extended lifecycle.

There were a lot of features of the system that I’ll cover over the next few days; the event was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,000 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…


UNIX Fragmentation

Did you ever notice that while UNIX and Linux versions are uniform in most areas, there are certain areas where every system seems to do things its own way? These are the things that nobody seems satisfied with, so everyone creates their own version – and thus confuses new users. From what I’ve seen, these areas are:

  • Menu-based system administration tools
  • Package management
  • Locations of user-added software
  • Filesystems

The last two are not so bad – user-added software locations are generally limited to /usr/local and /opt, and filesystems don’t tend to “wander” much from one UNIX to another – and managing filesystems is not done often enough to make the slight differences annoying.

The real annoyances to system administrators tend to be the first two – system administration tools and packaging. Consider these system administration tools for various systems:

  • smit (AIX)
  • sam (HP-UX 11i v2 and earlier)
  • smh (HP-UX 11i v3 and later)
  • yast (SUSE Linux)
  • redhat-config-* (Red Hat)
  • sysadm (UnixWare)
  • webmin (portable)

This doesn’t take into account all the others – IRIX, other Linux distributions, ad nauseam. What is it about system administration that makes every last distributor do it their own way? Perhaps webmin will help some, but do we really want a web server on every server? Then again, the distributors are already imposing that requirement – so we might as well standardize on webmin, right? Perhaps not….

And packaging is even worse. We have these options:

  • RPM (Red Hat)
  • pkg (Solaris)
  • depot (HP-UX)
  • dpkg (Debian)
  • pkg (UnixWare)
  • Ports Tree (FreeBSD)
  • emerge (Gentoo)
  • pbi (PCBSD)
  • lrp (Linux Router Project and spinoffs)
  • packages (Mac OS X)
  • ports (NetBSD)
  • ESP (portable)

Why is it that people just can’t leave well enough alone? Perhaps if BSD had come out of Berkeley with these tools, things might have been different.

There is one ray of hope in this ghastly array of choices: the most flexible and powerful option (in my opinion) is also portable: RPM. All of the systems previously mentioned will also run RPM. Too bad they won’t necessarily run APT-RPM, but that’s another problem.
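
By way of illustration, here is the handful of RPM operations I find myself using most often – just a sketch, assuming rpm itself has been installed or built on the system in question, and with made-up package and file names:

    # Install a package from a local file
    rpm -ivh somepackage-1.0-1.rpm

    # Upgrade a package (installing it if it isn't already present)
    rpm -Uvh somepackage-1.1-1.rpm

    # List everything installed, or find which package owns a file
    rpm -qa
    rpm -qf /usr/local/bin/sometool

    # Verify an installed package against the RPM database
    rpm -V somepackage

Having that same small set of verbs on every UNIX I touch would be a welcome change from remembering swlist on one box and pkginfo on the next.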

The Demise of the HP-UX System Administration Manager (SAM)

The venerable HP-UX utility SAM is now deprecated as of HP-UX 11i v3, and in its stead is the System Management Homepage (SMH). In HP-UX 11i v3, SAM prints a message about being deprecated, then runs SMH. In HP-UX 11i v4 (whenever that comes along) there will be no SAM at all.

SMH requires a browser (such as Mozilla or Firefox) and a web server (Apache) to be installed, and certain plugins also require Java and Apache Tomcat. The text-based user interface is still there, but it is a basic (and usable) interface – no fancy windows (shucks). The requirements are indeed quite heavy for a server install – one of the very first things to do when installing (and securing) a server is to strip out everything possible (most especially complex network servers). Sigh.
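
For the curious, getting at SMH looks roughly like this on the 11i v3 systems I’ve used – the hostname below is a placeholder, and 2381 is the usual SMH port, though your configuration may differ:

    # Text-based interface from a terminal (the closest thing to the old SAM feel)
    smh

    # Web interface, from a browser on a management workstation
    https://yourserver.example.com:2381/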

Heavy or not, HP-UX does include all of these products on the distribution DVDs, so installing them isn’t a big deal – it’s just that when you want to strip things down to the basics, it becomes difficult when the vendor keeps increasing what you need in order to run its software. Oh, well – it’ll be pretty.