Canonical Kills Ubuntu Maverick Meerkat (10.10) for Itanium (and SPARC)

It wasn’t long ago that Red Hat and Microsoft released statements that they would no longer support Itanium (with Red Hat Enterprise Linux and Windows respectively). Now Canonical has announced that Ubuntu 10.04 LTS (Long Term Support) will be the last supported Ubuntu release not only on Itanium, but on SPARC as well.

Itanium has thus lost three major operating systems (Red Hat Enterprise Linux, Windows, and Ubuntu Linux) over the past year. For HP Itanium owners, this means that Integrity Virtual Machines (IVMs) running Red Hat Enterprise Linux or Microsoft Windows Server will no longer have support from HP (since the operating system vendor has ceased support).

The only bright spot for HP’s IVM is OpenVMS 8.4, which is supported under an IVM for the first time. However, response to OpenVMS 8.4 has been mixed.

Martin Hingley has an interesting article about how the dropping of RHEL and Windows Server from Itanium will not affect HP; I disagree. For HP’s virtual infrastructure – based on the IVM product – the two biggest environments besides HP-UX are no longer available. An interesting survey would be to find out how many IVMs are being used and what operating systems they are running now and in the future.

With the loss of Red Hat and Microsoft – and now Canonical’s Ubuntu – there are that many fewer options for IVMs, and thus fewer reasons to use an HP IVM. OpenVMS could pick up the slack, as many shops may be looking for a way to take OpenVMS off the bare metal, letting the hardware be used for other things.

If HP IVMs are used less and less, this could affect the Superdome line as well, as running Linux has always been a selling point for this product. As mentioned before, this may be offset by OpenVMS installations.

This also means that Novell’s SUSE Linux Enterprise Server becomes the only supported mainstream Linux environment on Itanium – on the Itanium 9100 processor at least.

From the other side, HP’s support for Linux seems to be waning: this statement can be found in the fine print on their Linux on Integrity page:

HP is not planning to certify or support any Linux distribution on the new Integrity servers based on the Intel Itanium processor 9300 series.

Even if HP doesn’t feel the effect of these defections, HP’s IVM product family (and the Superdome) probably will.

Data Centers: Weta Digital, New Zealand

Weta Digital, the special effects company behind Lord of the Rings, King Kong (2005), X-Men, and Avatar, is in the news again.

Data Center Knowledge has an article about their data center, as well as another one from last year.

Information Management also had an article about it, as well as a blog post by Jim Ericson.

HP even has a video about their use of HP blades in their cluster.

Some of the more interesting things about their data center include:

  • The use of water cooling throughout.
  • The use of external heat exchangers to release heat.
  • The use of blades in a clustered configuration.

This is just the beginning. While this data center is not as radical as others discussed here recently, it is more in the realm of current possibilities. There are photographs in the current Data Center Knowledge article as well.

A Data Center in a Silo

The CLUMEQ project is building a supercomputer and has several sites already in place. One of these, a site in Quebec, was built in an old silo that used to contain a Van de Graaff generator.

An article in the McGill Reporter from several years ago described the supercomputer installation in Montreal.

The new CLUMEQ Colossus (as the Quebec installation is called) was described in an article in Data Center Knowledge. The design places all of the computers (Sun blades) in a circle, with the center acting as a “hot core” and cool air being drawn in from the rim.

HP Superdome and Green Computing

The HP Superdome is designed on a much different basis than most of its competition – and, indeed, than many computers. The design principles behind the HP Superdome lead to a lesser impact on the environment, making it a “greener” choice for heavy computing.

Why? The HP Superdome is designed in such a way that its pieces can be replaced as needed, so the need to replace the entire system (common with other systems, including mainframes) is dramatically reduced. The HP Superdome is designed with at least a 10-year lifespan, meaning that when other systems have to be replaced, the Superdome will (at most) only need “refreshing” with new cells or perhaps other parts.

For example, as of 2009 the original HP Superdome prototype was still running – and even had HP Integrity cells operating.

Most other systems will have to be replaced once or twice before a Superdome has to be replaced. Each replacement generates a certain amount of electronic waste – and a mainframe will create a large amount of it.

This is on top of the fact that the HP Superdome uses less electricity than a mainframe. It is also possible to use only the cells that you need, leaving the others inactive (unpowered) via iCAP – if they are installed at all.

All of these facts suggest that an HP Superdome would be a good choice for green computing in contrast to its mainframe competition.

An update on the recent HP Superdome Tech Day: it turns out that Jacob Van Ewyk blogged about it in a two-part article (part 1 and part 2) on the blog Mission Critical Computing. John Pickett wrote about the energy savings inherent in using an HP Superdome on the blog Legacy Transformation.

HP Instant Capacity (iCAP)

One of the things that may affect any clusters you have – or other systems – is that management does not want to spend enough to handle any possible load.  With a cluster, this means that you may not be able to handle a fail-over because there is not enough spare processing power to handle the extra load when it happens.

HP’s Instant Capacity (“capacity on demand”) is an answer to this dilemma.  The base idea is that you have extra hardware already in the data center that is not available for use until it is necessary.  The switch that will enable this expanded capacity can be automatic or manual; when some portion of the extra capacity is enabled, you pay for it and it can be used from then on.

Yet, Instant Capacity (iCAP) is more flexible than this.  The capacity may be enabled only temporarily instead of permanently – this is known as TiCAP (temporary iCAP).  Thus, you can save even more by buying extra hardware but enabling only a small portion of it.  During the recent HP Tech Days that I attended in San Jose, California, a situation was described where an HP Superdome could be purchased with a large amount of the hardware already in place – but only a small amount of the hardware enabled.  When the extra power is needed, for example, a cell in the Superdome could be enabled until such time as the power is no longer necessary.
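
To make the pay-as-you-go idea concrete, here is a minimal Python sketch of the concept as I understand it. This is not HP’s iCAP software or its real interface – the class and method names (IcapSystem, activate_permanent, activate_temporary) are invented purely for illustration, and the accounting is simplified to core-minutes.

    from dataclasses import dataclass

    @dataclass
    class IcapSystem:
        """Toy model of an Instant Capacity system: the hardware is installed
        up front, but only cores with purchased usage rights may be active."""
        installed_cores: int            # physical cores present in the box
        usage_rights: int               # cores purchased for permanent use
        temp_balance_minutes: int = 0   # prepaid TiCAP balance (core-minutes)
        active_cores: int = 0

        def activate_permanent(self, n: int) -> None:
            # Permanently enable cores, limited by purchased usage rights
            # and by the hardware actually installed.
            if self.active_cores + n > min(self.usage_rights, self.installed_cores):
                raise ValueError("not enough usage rights or installed hardware")
            self.active_cores += n

        def activate_temporary(self, n: int, minutes: int) -> None:
            # Enable extra cores for a while, drawing down the TiCAP balance.
            if self.active_cores + n > self.installed_cores:
                raise ValueError("not enough installed hardware")
            if n * minutes > self.temp_balance_minutes:
                raise ValueError("insufficient temporary capacity balance")
            self.temp_balance_minutes -= n * minutes
            self.active_cores += n      # caller deactivates when the burst ends

        def deactivate(self, n: int) -> None:
            self.active_cores = max(self.active_cores - n, 0)

    # A Superdome-like box: plenty of hardware installed, only a little enabled.
    dome = IcapSystem(installed_cores=64, usage_rights=16,
                      temp_balance_minutes=30 * 24 * 60)   # 30 core-days prepaid
    dome.activate_permanent(16)              # normal load
    dome.activate_temporary(8, minutes=120)  # two-hour peak on 8 extra cores
    dome.deactivate(8)                       # back to the paid baseline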

There is also Global Instant Capacity (GiCAP) which even allows the movement of power from one system to another.  For example, if a CPU on one system is underutilized and another system needs the resource more – then the CPU resource can be “logically” moved from one system to the other through GiCAP.  Alternately, if one system dies and another system needs its power, the dead system’s resources can be used by the active system by moving them through GiCAP.
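
GiCAP can be pictured the same way: a shared pool of usage rights that can be reassigned between members of a group. Again, this is only an illustrative sketch with invented names (GicapGroup, move_rights), not HP’s actual software.

    class GicapGroup:
        """Toy model of Global Instant Capacity: usage rights shared by a
        group of systems, so the rights can follow the workload."""

        def __init__(self):
            self.rights = {}   # system name -> cores with usage rights

        def add_system(self, name: str, rights: int) -> None:
            self.rights[name] = rights

        def move_rights(self, src: str, dst: str, n: int) -> None:
            # Logically move n cores' worth of usage rights from src to dst,
            # e.g. because dst is busier or src has failed.
            if self.rights.get(src, 0) < n:
                raise ValueError(f"{src} does not hold {n} usage rights")
            self.rights[src] -= n
            self.rights[dst] = self.rights.get(dst, 0) + n

    group = GicapGroup()
    group.add_system("prod-db", 8)
    group.add_system("prod-app", 8)

    # prod-db has failed: shift its rights so prod-app can activate more
    # of its installed-but-idle cores and absorb the failed-over load.
    group.move_rights("prod-db", "prod-app", 8)
    print(group.rights)    # {'prod-db': 0, 'prod-app': 16}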

iCAP and TiCAP are available for HP-UX (on PA-RISC and Itanium) and for OpenVMS (only on Itanium). GiCAP is only available for HP-UX.

I find iCAP and TiCAP to be very interesting.  From a cost perspective, you pay only a minimal amount to keep the resource; when it is enabled, you then pay for it for the duration – or buy the hardware outright for permanent use as needed.


HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line). This was an event designed to introduce several of us in the blog world to the HP Superdome. The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin. This was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on the Superdome). Andy McCaskey over at SDR News also had a nice writeup on his experiences.

It is possible I was the attendee most familiar with the architecture and its capabilities, though I had not seen or worked with a Superdome before: the capabilities of the Superdome are largely based on the fact that it is cell-based. The rp7420 cluster which I have maintained over the last several years uses the same technology, though cells from the rp7420 are incompatible with the Superdome. The software is the same: parstatus, and so on. The System Management Homepage (SMH) was also shown, although it was almost presented as a benefit of the Superdome (it’s actually in every HP-UX release since 11i v2, and is an option for OpenVMS 8.x).

There was a lot of talk about “scaling up” (that is, using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines). The Superdome is a perfect example of “scaling up” – possibly one of the best examples. I was impressed by what I saw of the capabilities of the Superdome. There was a lot of comparison with the IBM zSeries, which is the epitome of the current crop of mainframes. The presenters made a very strong case for using the Superdome over the zSeries.

They did seem to focus on running Linux in an LPAR, however; this creates a limit of 60 Linux installations. Using z/VM as a hypervisor, one can run many more Linux systems. I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another – when the testers reached into the tens of thousands (30,000?), the network failed or was overloaded; the zSeries system was still going strong. Trouble is, I’m not able to back this up with a source at the moment: I’m sure it appeared in a print (Linux) journal – it may have been called “Project Charlie.” Can anyone help?

The usability features of the Superdome were on prime display: for example, the power supplies were designed so that they cannot be inserted upside-down. Another example: the cells for the Superdome come in two parts, the “system” (including CPU, memory, and the supporting “glue” chips) and the power supply. This split makes a cell easier to remove in a typical datacenter row and makes each part lighter and easier to handle. There are innumerable touches like this that the designers took into account during the design phase. The engineering on these systems is amazing; usability has been thought of from the start. In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour, the system they showed us was a prototype of the original HP Superdome, which first shipped in 2000. This system was still going and was using modern hardware: these systems are not designed for a 3-4 year lifecycle, but for a much longer, extended one.

There were a lot of features of the system that I’ll cover in the next few days; it was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,000 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…


NVIDIA Introduces a Supercomputer: Tesla

This is just incredible. According to the specifications and what NVIDIA is claiming for the Tesla Supercomputer, this will be like putting a supercomputer on every desk: 240 cores at your side. NVIDIA is harnessing the compute power of the GPU in amazing ways with this product.

The Tesla product is available standalone from TigerDirect at $1699.99 or as part of computers from Microway or Penguin Computing (among others).