Mainframe Linux: Pros and Cons

Why would one want to move Linux to the mainframe (such as IBM’s z10)? There are many reasons – and many reasons not to. Computerworld Australia had a good article describing (in part) some of the reasons the insurance company Allianz did just that. IBM has been pushing Linux on the z series for some time, and Red Hat and SUSE offer Linux variants for that purpose.

One common reason to move to a mainframe is that Linux servers have proliferated in the data center, taking up valuable space and becoming quite numerous. When all you need for a server is the hardware and a low-cost or no-cost Linux, then servers start popping up all over the place.

A single mainframe such as the z10 can handle thousands of servers (a test done in 2000 put 41,400 Linux servers on one IBM mainframe). The replaced servers can then be eliminated from the data center, freeing up valuable space and reducing the workload of current system administrators.

A common scenario is one in which the company already has a mainframe in-house, running COBOL applications. In that case, the purchase cost of a mainframe (in the millions of dollars) has already been absorbed. Such a scenario also makes the case for a new mainframe much more appealing, as it puts the enhanced power to work immediately.

Replacing thousands of Intel-based Linux servers with a single mainframe will reduce cooling costs, power costs, physical space requirements, and hardware costs.
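To make the scale of those savings concrete, here is a rough back-of-envelope sketch in Python. Every figure in it (per-server wattage, mainframe draw, electricity rate, server count) is an illustrative assumption, not a vendor specification or a measured number:

```python
# Rough consolidation-savings sketch -- all numbers are illustrative
# assumptions, not measured figures or vendor specs.

servers = 1000            # x86 Linux servers to consolidate (assumed)
watts_per_server = 300    # assumed average draw per server, in watts
mainframe_watts = 18000   # assumed draw for one large mainframe
dollars_per_kwh = 0.10    # assumed electricity rate
hours_per_year = 24 * 365

def annual_power_cost(watts):
    """Yearly electricity cost in dollars for a constant draw."""
    return watts / 1000 * hours_per_year * dollars_per_kwh

farm_cost = annual_power_cost(servers * watts_per_server)
big_iron_cost = annual_power_cost(mainframe_watts)

print(f"Server farm: ${farm_cost:,.0f}/year")
print(f"Mainframe:   ${big_iron_cost:,.0f}/year")
print(f"Difference:  ${farm_cost - big_iron_cost:,.0f}/year")
```

Even before counting cooling (which roughly tracks power draw), floor space, and hardware refresh cycles, the electricity line alone is a six-figure annual difference under these assumptions.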

So why would anyone not want to use a mainframe?

If there is not already a mainframe in the data center, purchasing a mainframe just for the purpose of consolidation can be too much – mainframes typically cost in the millions of dollars, and require specially trained staff to maintain. Adding a mainframe to the data center would also require training current staff or adding new staff. A new mainframe also requires a new support contract. All of this adds up to not just millions of dollars of additional cost up front, but additional costs every year.

Another consideration is the number of Linux servers in the data center that would be moved. If there are dozens – or a hundred or two – it may not be entirely cost-effective to focus a lot of energy on moving these servers to the mainframe.

A high-end server such as HP’s Superdome (with its attendant iCAP and Integrity Virtual Machines capabilities) would probably be a better choice for consolidating dozens of Linux servers. The costs are lower, and the power requirements are lower – and with iCAP you can purchase as much or as little capacity as you need and grow into it. Most companies also already have UNIX staff on hand, and adapting to HP-UX is not generally a problem if needed.

Another benefit is that a server such as the Superdome can virtualize not just Linux systems, but Microsoft Windows and HP-UX as well – and soon, OpenVMS too.

A large Intel-based server can also virtualize a large number of servers with software from companies like VMware and Sun.

These options won’t necessarily allow you to virtualize thousands of servers – but then, do you need to?

Test Plan Charlie: 41,400 Linux Servers on One Box

There was a test done many years ago by David Boyes, an engineer working out of Virginia. The test was simply to run as many Linux servers as possible on one IBM zSeries mainframe – and to keep adding them until something broke.

The test hit its limit at 41,400 Linux servers – and nothing ever “broke.” This project was widely reported at the time, though it seems to be forgotten now. The test caught my fancy, though: that’s a lot of Linux machines.

As mentioned, this test was widely reported: Linux Journal ran an article on 1 June by Adam Thornton titled The Penguin and the Dinosaur. That same day, Daisy Whitney authored an article, Linux on Big Iron – possibly in Datamation. Scott Courtney wrote S/390: The Linux Dream Machine on 23 February and It’s Official: IBM Announces Linux for the S/390 on 17 May. What really stands out? All of these articles reporting on the S/390 and on Test Plan Charlie appeared nine years ago, in 2000.

Scott Courtney followed his articles up with an interview with David Boyes in 2001.

There is one more thing about David Boyes: following Test Plan Charlie, he went on to create Sine Nomine Associates and showcased OpenSolaris running on the IBM zSeries in November of 2007 – with attendant press releases from IBM. Certainly, David is not one to sit idle – and he is a figure to contend with in the IBM zSeries arena. IBM has, since the original announcement nine years ago, pushed Linux on zSeries with vigor. One irony: Test Plan Charlie was part of a study for an IBM customer that was deciding whether to use its existing S/390 or to buy a new Sun setup.

There is even an open source IBM mainframe emulator called Hercules, which allows the rest of us to try it out and see what happens – even though you won’t be able to run under z/VM, as that is an IBM product.
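If you want to try Hercules yourself, its configuration is just a small text file of system parameters and device statements. The sketch below is purely illustrative – the device numbers, disk image name, and addresses are invented, so check the Hercules documentation for the real details:

```
# hercules.cnf -- illustrative sketch only; names and numbers are invented
ARCHMODE  z/Arch        # emulate z/Architecture (ESA/390 is also possible)
MAINSIZE  1024          # main storage, in megabytes
NUMCPU    2
CNSLPORT  3270          # TCP port for tn3270 console sessions

# devices:  devnum  type  arguments
0009   3215-C /                           # operator console
0120   3390  linux01.120                  # emulated 3390 DASD image file
0A00.2 CTCI  192.168.200.1 192.168.200.2  # point-to-point link to the host
```

Start it with something like `hercules -f hercules.cnf`, then IPL a Linux distribution from the emulated DASD – all on commodity hardware.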

Update: there was a nice set of updates about OpenSolaris on zSeries over on DancingDinosaur: Here comes (and goes) the Sun (12 April 2009) and Slow times for OpenSolaris on System z (21 July 2009).

Update: More articles on Test Plan Charlie. In the November 2000 issue of Technical Support, Adam Thornton wrote a nice two-part article (part one and part two) on it. Adam is a major contributor to the Linux on S/390 effort and worked for David Boyes at Rice University.

A good source of information for Linux on S/390 is

Powered by ScribeFire.

HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line). This was an event designed to introduce several of us in the blog world to the HP Superdome. The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin. This was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on Superdome). Andy McCaskey over at SDR News also had a nice writeup on his experiences.

It is possible I was the most familiar with the architecture and its capabilities, though I had never seen or worked with a Superdome before: the capabilities of the Superdome stem largely from the fact that it is cell-based. The rp7420 cluster that I have maintained over the last several years uses the same technology, though cells from the rp7420 are incompatible with the Superdome. The software is the same: parstatus and the other nPartition tools. The System Management Homepage (SMH) was also shown, although it was presented almost as a benefit of the Superdome (it is actually included in every HP-UX release since 11i v2, and is an option for OpenVMS 8.x).

There was a lot of talk about “scaling up” (using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines). The Superdome is a perfect example of “scaling up” – possibly one of the best. I was impressed by the capabilities of the Superdome. There was a lot of comparison with the IBM zSeries, which is the epitome of the current crop of mainframes. The presenters made a very strong case for using Superdome over zSeries.

They did seem to focus on running Linux in an LPAR, however, which creates a limit of 60 Linux installations. Using z/VM as a hypervisor, one can run many more Linux systems. I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another – when the testers reached into the tens of thousands (30,000?), the network failed or was overloaded; the zSeries system itself was still going strong. Trouble is, I’m not able to back this up with a source at the moment: I’m sure it appeared in a print Linux journal – it may have been called “Project Charlie.” Can anyone help?
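For context on how z/VM hosts so many guests: each one is defined by an entry in the z/VM user directory, and mass Linux hosting is largely a matter of cloning near-identical entries. A minimal sketch of such an entry follows – the guest name, password, memory sizes, device numbers, volume label, and vswitch name are all invented for illustration:

```
USER LINUX01 PASSWORD 512M 2G G
    IPL 0200
    MDISK 0200 3390 0001 3338 LNX001 MR
    NICDEF 0600 TYPE QDIO LAN SYSTEM VSW1
```

Roughly: the USER line gives the guest 512 MB of memory (expandable to 2 GB), IPL names the boot device, MDISK carves a minidisk out of a 3390 volume, and NICDEF attaches a virtual NIC to a z/VM virtual switch. Consult the z/VM planning documentation for the authoritative syntax.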

The usability features of the Superdome were on prime display: for example, the power supplies were designed so that they cannot be inserted upside-down. Another example: the cells for the Superdome come in two parts, the “system” (CPU, memory, and chip glue) and the power supply. This split makes each part lighter and much easier to remove in a typical datacenter row. There are innumerable touches like this that the designers took into account during the design phase. The engineering on these systems is amazing; usability was thought of from the start. In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour: the system they showed us was a prototype of the original HP Superdome, which first shipped in 2000. It was still going, upgraded with modern hardware: these systems are designed not for a 3-4 year lifecycle, but for a much longer, extended one.

There were a lot of features of the system that I’ll cover in the next few days; it was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,400 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…


IBM zSeries and OpenSolaris

In the past, the zSeries has been known for its virtualization of Linux servers (over 10,000 possible!). Now, someone is showing OpenSolaris running on zSeries at the Gartner Data Center Conference in Las Vegas. There is an excellent write-up from the Mainframe blog, and another post from Marc Hamilton.

There is also an excellent open source IBM zSeries emulator called Hercules which can run Linux, MVS, DOS/VSE and others – and now, soon, OpenSolaris.

There is a very interesting speech given at Linuxworld by Guru Vasudeva from Nationwide about how his company implemented virtual Linux machines on IBM zSeries with z/VM.

When I first saw this (from IBM’s web site) I thought it was laugh-out-loud funny – and it still is. It also hints at the power of the IBM zSeries, and shows one of the reasons I’m excited about these machines.

I just love the deadpan Joe Friday responses from the cops, and some of the quotes:

  • “Could it be an inside job? We’re inside, right?”
  • “I need my pills!”
  • “What’s a server?”