Canonical Kills Ubuntu Maverick Meerkat (10.10) for Itanium (and SPARC)

It wasn’t long ago that Red Hat and Microsoft released statements that they would no longer support Itanium (with Red Hat Enterprise Linux and Windows, respectively). Now Canonical has announced that Ubuntu 10.04 LTS (Long Term Support) will be the last supported Ubuntu release not only on Itanium, but on SPARC as well.

Itanium has thus lost three major operating systems (Red Hat Enterprise Linux, Windows, and Ubuntu Linux) over the past year. For HP Itanium owners, this means that Integrity Virtual Machines (IVMs) running Red Hat Linux or Microsoft Windows Server will no longer have support from HP (since the operating system vendor has ceased support).

The only bright spot for HP’s IVM is OpenVMS 8.4, which is supported under an IVM for the first time. However, response to OpenVMS 8.4 has been mixed.

Martin Hingley has an interesting article arguing that the dropping of RHEL and Windows Server from Itanium will not affect HP; I disagree. For HP’s virtual infrastructure – based on the IVM product – the two biggest environments besides HP-UX are no longer available. An interesting survey would be to find out how many IVMs are in use and which operating systems they run now and will run in the future.

The loss of Red Hat and Microsoft – and now Canonical’s Ubuntu – leaves that many fewer options for IVMs, and thus fewer reasons to use an HP IVM. OpenVMS could pick up the slack, as many shops may be looking for a way to take OpenVMS off the bare metal, letting the hardware be used for other things.

If HP IVMs are used less and less, this could affect the Superdome line as well, as running Linux has always been a selling point for this product. As mentioned before, this may be offset by OpenVMS installations.

This also means that Novell’s SUSE Linux Enterprise Server becomes the only supported mainstream Linux environment on Itanium – on the Itanium 9100 processor at least.

From the other side, HP’s own support for Linux seems to be waning: this statement can be found in the fine print on its Linux on Integrity page:

HP is not planning to certify or support any Linux distribution on the new Integrity servers based on the Intel Itanium processor 9300 series.

Even if HP doesn’t feel the effect of these defections, HP’s IVM product family (and Superdome) probably will.

HP Introduces New Blades – Including Superdome!

At Tech@Work in Germany, HP introduced a number of new Itanium blades with the new Tukwila chip (or Itanium 9300). The new blades will work in blade chassis that also support x86 blades.

The real news is that Superdome servers are also available in the same chassis. Thus, x86 and Itanium servers can be side by side with Superdome servers.

On top of this, all blade types can use the same power supplies and other parts. This means parts can be swapped, which lowers costs for HP (fewer types of parts to stock) and for customers as well.

Added to all this, there was the March 2010 update to HP-UX.

This truly is exciting. Imagine managing x86, Itanium, and Superdome from the same interface…

Intel Itanium Tukwila CPU Out Soon?

Computerworld reports that Intel has started shipping the Itanium Tukwila processor. The Itanium processor drives the HP Integrity line of servers, as well as the HP NonStop servers.

In the near future (2nd or 3rd quarter?) HP is expected to announce Integrity servers based on the Tukwila processor. These new servers are predicted to be blade servers, and it is also suggested that Superdome will receive a complete overhaul – which is uncomfortably close to suggesting a “forklift upgrade” (i.e., pull out the entire server and replace) for Superdome. The Superdome system infrastructure is 10 years old, so it may be time – but an expensive upgrade like that is never welcome.

At the International Solid-State Circuits Conference (ISSCC) next week, both Sun (UltraSPARC “Rainbow Falls”) and IBM (POWER7) are expected to announce new chips. Both chips were previewed at the Hot Chips conference in August; ExtremeTech covered them well in its conference preview. In September, The Register managed to snap up a copy of the Sun SPARC roadmap; it shows the Rainbow Falls chip being introduced in 2010. As for Tukwila, Intel is rumored to be making its formal announcement at ISSCC.

We shall see…

HP Superdome and Green Computing

The HP Superdome is designed on a much different basis than most of its competition – and indeed, than most computers. The design principles behind the HP Superdome lead to a lesser impact on the environment, making it a "greener" choice for heavy computing.

Why? The HP Superdome is designed so that its pieces can be replaced as needed, dramatically reducing the need to replace the entire system (a need common with other systems, including mainframes). The HP Superdome is designed with at least a 10-year lifespan, meaning that when other systems have to be replaced, the Superdome will (at most) only need "refreshing" with new cells or perhaps other parts.

For example, as of 2009 the original HP Superdome prototype was still running – and even had HP Integrity cells operating.

Most other systems will have to be replaced once or twice before a Superdome has to be replaced. Each replacement generates a certain amount of electronic waste – and a mainframe will create a large amount of it.

This is on top of the fact that the HP Superdome uses less electricity than a mainframe. It is also possible to use only the cells that you need, leaving the others unpowered via iCap – or absent entirely.

All of these facts suggest that an HP Superdome would be a good choice for green computing in contrast to its mainframe competition.

An update on the recent HP Superdome Tech Day: it turns out that Jacob Van Ewyk blogged about it in a two-part article (part 1 and part 2) on the Mission Critical Computing blog. John Pickett wrote about the energy savings inherent in using an HP Superdome on the Legacy Transformation blog.

Mainframe Linux: Pros and Cons

Why would one want to move Linux to the mainframe (such as IBM’s z10)? There are many reasons – and many reasons not to. Computerworld Australia had a good article describing (in part) some of the reasons the insurance company Allianz did just that. IBM has been pushing Linux on the zSeries for some time, and Red Hat and SUSE offer Linux variants for that purpose.

One common reason to move to a mainframe is that Linux servers have proliferated in the data center, taking up valuable space and becoming quite numerous. When all you need for a server is the hardware and a low-cost or no-cost Linux, then servers start popping up all over the place.

A single mainframe such as the z10 can handle thousands of servers (a test done in 2000 put 41,400 Linux servers on one IBM mainframe). The replaced servers can then be eliminated from the data center, freeing up valuable space and reducing the workload of current system administrators.

A common instance is where the company already has a mainframe in-house, running COBOL applications. Thus, the purchase cost of a mainframe (in the millions of dollars) has already been absorbed. Such a scenario also makes the case for a new mainframe much more appealing, as it puts the enhanced power to work immediately.

Replacing thousands of Intel-based Linux servers with a single mainframe will reduce cooling costs, power costs, physical space requirements, and hardware costs.
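To make the scale of those savings concrete, here is a rough back-of-envelope sketch in Python. Every figure in it (server count, per-server wattage, cooling overhead, electricity rate, and the draw of the consolidated machine) is a made-up placeholder for illustration, not a measured or vendor-supplied number:

# Back-of-envelope consolidation estimate. Every figure below is a made-up
# placeholder -- substitute your own server counts, power draw, and rates.

RACK_SERVERS = 1000          # Intel-based Linux servers to consolidate (assumed)
WATTS_PER_SERVER = 350       # average draw per server, in watts (assumed)
COOLING_OVERHEAD = 0.8       # extra cooling power per watt of IT load (assumed)
KWH_COST = 0.10              # electricity cost in dollars per kWh (assumed)
HOURS_PER_YEAR = 24 * 365

MAINFRAME_WATTS = 20_000     # assumed draw of a single consolidated system

def yearly_power_cost(it_watts):
    """Yearly electricity cost including the cooling overhead."""
    total_watts = it_watts * (1 + COOLING_OVERHEAD)
    return total_watts / 1000 * HOURS_PER_YEAR * KWH_COST

scattered = yearly_power_cost(RACK_SERVERS * WATTS_PER_SERVER)
consolidated = yearly_power_cost(MAINFRAME_WATTS)

print("Distributed servers:  ${:,.0f}/year".format(scattered))
print("Consolidated system:  ${:,.0f}/year".format(consolidated))
print("Estimated yearly savings: ${:,.0f}".format(scattered - consolidated))

Swap in your own numbers; the point is simply that power and cooling costs scale with the number of boxes drawing current, before you even count floor space and hardware refreshes.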

So why would anyone not want to use a mainframe?

If there is not already a mainframe in the data center, purchasing a mainframe just for the purpose of consolidation can be too much – mainframes typically cost in the millions of dollars, and require specially trained staff to maintain. Adding a mainframe to the data center would also require training current staff or adding new staff. A new mainframe also requires a new support contract. All of this adds up to not just millions of dollars of additional cost up front, but additional costs every year.

Another consideration is the number of Linux servers in the data center that would be moved. If there are dozens – or a hundred or two – it may not be entirely cost-effective to focus a lot of energy on moving these servers to the mainframe.

A large server such as HP’s Superdome (with its attendant iCap and Integrity Virtual Machine capabilities) would probably be a better choice for consolidating dozens of Linux servers. The costs are lower, and the power requirements are lower – and you can purchase as much or as little as you need and grow with iCap. Most companies also already have UNIX staff on hand, and adapting to HP-UX is not generally a problem if needed.

Another benefit is that a server such as the Superdome offers virtualization of not just Linux systems, but Microsoft Windows and HP-UX as well – and soon, OpenVMS.

A large Intel-based server can also virtualize a large number of servers using software from companies like VMware and Sun.

These options won’t necessarily allow you to virtualize thousands of servers – but then, do you need to?

HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line).  This was an event designed to introduce several of us in the blog world to the HP Superdome.  The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin.  This was a quite eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on Superdome). Andy McCaskey over at SDR News also had a nice writeup on his experiences.

It is possible I was the most familiar with the architecture and its capabilities, though I had not seen or worked with a Superdome before: the capabilities of the Superdome are largely based on the fact that it is cell-based.  The rp7420 cluster which I have maintained over the last several years uses the same technology, though cells from the rp7420 are incompatible with the Superdome.  The software is the same: parstatus and so on.  The System Management Homepage (SMH) was also shown, although it was almost presented as a benefit of the Superdome (it’s actually in every HP-UX since 11i v2, and is an option for OpenVMS 8.x).
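For readers who have never logged into a cell-based HP system, here is a minimal sketch (in Python, so it can sit inside a monitoring script) of wrapping the nPartition status command. It assumes parstatus lives in /usr/sbin and that the -C (cells) and -P (partitions) options behave as I recall from the rp7420; treat it as an illustration, not a reference for HP’s tools:

#!/usr/bin/env python
# Sketch: poll cell and partition status on a cell-based HP-UX system.
# Assumes the nPartition tools are installed; the -C/-P options are how I
# recall parstatus behaving, so verify against your own system's man page.
import subprocess

def run(cmd):
    """Run a command and return its output, or None if it is unavailable."""
    try:
        return subprocess.check_output(cmd, universal_newlines=True)
    except (OSError, subprocess.CalledProcessError):
        return None

cells = run(["/usr/sbin/parstatus", "-C"])       # cell inventory (assumed option)
partitions = run(["/usr/sbin/parstatus", "-P"])  # partition summary (assumed option)

for label, output in (("Cells", cells), ("Partitions", partitions)):
    print("== %s ==" % label)
    print(output if output else "parstatus not available on this host")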

There was a lot of talk about “scaling up” (that is, using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines).  The Superdome is possibly one of the best examples of “scaling up.”  I was impressed by what I saw of the capabilities of the Superdome.  There was a lot of comparison with the IBM zSeries, which is the epitome of the current crop of mainframes.  The presenters made a very strong case for using Superdome over zSeries.

They did seem to focus on running Linux in an LPAR, however; this creates a limit of 60 Linux installations.  Using z/VM as a hypervisor, one can run many more Linux systems.  I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another – when the testers reached into the tens of thousands (30,000?) the network failed or was overloaded; the zSeries system was still going strong.  Trouble is, I’m not able to back this up with a source at the moment: I’m sure it was available as part of a print (Linux) journal – it may have been called “Project Charlie.”  Can anyone help?

The usability features of the Superdome were on prime display: for example, the power supplies were designed so that they cannot be inserted upside-down.  Another example: the cells for the Superdome come in two parts – the “system” (including CPU, memory, and chip glue) and the power supply.  This makes a cell much easier to remove in a typical datacenter row and makes each part lighter and easier to handle.  There are innumerable items like this that the designers took into account during the design phase.  The engineering on these systems is amazing; usability has been thought of from the start.  In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour, the system that they showed us was a prototype of the original HP Superdome, which first shipped in 2000.  This system was still going and was using modern hardware: these systems are not designed for a 3-4 year lifecycle, but for a much longer one.

There were a lot of features of the system that I’ll cover in the next few days; it was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,400 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…
