Mainframe Linux: Pros and Cons

Why would one want to move Linux to the mainframe (such as IBM’s z10)? There are many reasons to do so – and many reasons not to. Computerworld Australia had a good article describing (in part) some of the reasons the insurance company Allianz did just that. IBM has been pushing Linux on System z for some time, and both Red Hat and SUSE offer Linux variants for that purpose.

One common reason to move to a mainframe is that Linux servers have proliferated in the data center, taking up valuable space. When all you need for a server is the hardware and a low-cost or no-cost copy of Linux, servers start popping up all over the place.

A single mainframe such as the z10 can handle thousands of virtual servers (a test done in 2000 put 41,400 Linux servers on one IBM mainframe). The physical servers it replaces can then be removed from the data center, freeing up valuable space and reducing the workload of the current system administrators.

A common scenario is a company that already has a mainframe in-house running COBOL applications. In that case, the purchase cost of a mainframe (in the millions of dollars) has already been absorbed. Such a scenario also makes the case for a new mainframe much more appealing, since consolidation puts the enhanced power to work immediately.

Replacing thousands of Intel-based Linux servers with a single mainframe will reduce cooling costs, power costs, physical space requirements, and hardware costs.

So why would anyone not want to use a mainframe?

If there is not already a mainframe in the data center, purchasing one just for consolidation can be hard to justify: mainframes typically cost millions of dollars and require specially trained staff to maintain, which means either training current staff or hiring new staff. A new mainframe also requires a new support contract. All of this adds up to not just millions of dollars of additional cost up front, but additional costs every year.

Another consideration is the number of Linux servers in the data center that would be moved. If there are dozens – or even a hundred or two – it may not be cost-effective to spend a lot of energy moving them to a mainframe.

A high-end server such as HP’s Superdome (with its attendant iCAP and Integrity Virtual Machines capabilities) would probably be a better choice for consolidating dozens of Linux servers. The costs are lower, and the power requirements are lower – and with iCAP you can purchase as much or as little capacity as you need and grow from there. Most companies also already have UNIX staff on hand, and adapting to HP-UX is generally not a problem if needed.

Another benefit is that a server such as the Superdome can virtualize not just Linux systems, but Microsoft Windows and HP-UX as well – and soon, OpenVMS.

A large Intel-based server can also virtualize a large number of systems using software from companies such as VMware and Sun.

These options won’t necessarily allow you to virtualize thousands of servers – but then, do you need to?

HP Instant Capacity (iCAP)

One thing that may affect any clusters you have – or other systems – is that management does not want to spend enough to handle every possible load. With a cluster, this means that you may not be able to handle a failover, because there is not enough spare processing power to absorb the extra load when it happens.

HP’s Instant Capacity (“capacity on demand”) is an answer to this dilemma. The basic idea is that extra hardware is already installed in the data center but is not available for use until it is needed. The switch that enables this expanded capacity can be automatic or manual; when some portion of the extra capacity is enabled, you pay for it, and it can be used from then on.
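To make the pay-as-you-enable idea concrete, here is a toy Python sketch of the accounting. Everything in it – the class name, the prices – is invented for illustration and bears no relation to HP’s actual tooling or pricing:

    # Toy model of the iCAP idea: the hardware is installed up front, but
    # a core costs money only at the moment it is activated.
    # All names and prices here are invented for illustration.

    class InstantCapacitySystem:
        def __init__(self, installed_cores, active_cores, price_per_core):
            self.installed_cores = installed_cores  # physically present
            self.active_cores = active_cores        # enabled and usable
            self.price_per_core = price_per_core    # one-time activation charge
            self.spend = 0.0

        def activate(self, n):
            """Permanently enable n of the installed-but-idle cores."""
            if self.active_cores + n > self.installed_cores:
                raise ValueError("cannot activate more cores than are installed")
            self.active_cores += n
            self.spend += n * self.price_per_core  # you pay only when you enable

    # Buy a system with 32 cores installed but only 8 enabled; later,
    # activate 4 more when the load grows.
    system = InstantCapacitySystem(installed_cores=32, active_cores=8,
                                   price_per_core=10000.0)
    system.activate(4)
    print(system.active_cores, system.spend)  # 12 40000.0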

Yet Instant Capacity (iCAP) is more flexible than this. The capacity may be enabled only temporarily instead of permanently – this is known as TiCAP (Temporary Instant Capacity). Thus, you can save even more by buying extra hardware but enabling only a small portion of it. During the recent HP Tech Days I attended in San Jose, California, a scenario was described where an HP Superdome could be purchased with a large amount of hardware already in place – but only a small amount of it enabled. When extra power is needed, a cell in the Superdome could be enabled, for example, and then disabled once the power is no longer necessary.
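TiCAP changes the billing from a one-time activation charge to usage metered over time. The same kind of toy sketch shows the difference – again with invented names and numbers, assuming a simple prepaid pool of core-hours:

    # Toy model of the TiCAP idea: temporary capacity is prepaid as a
    # pool of core-hours, and temporarily enabled cores drain that pool
    # while they run. Names and numbers are invented for illustration.

    class TemporaryCapacity:
        def __init__(self, prepaid_core_hours):
            self.balance = prepaid_core_hours  # core-hours bought in advance

        def use(self, cores, hours):
            """Run `cores` extra cores for `hours` hours against the balance."""
            needed = cores * hours
            if needed > self.balance:
                raise RuntimeError("temporary capacity balance exhausted")
            self.balance -= needed
            return self.balance

    # Prepay 720 core-hours; cover a 36-hour peak with 4 extra cores.
    ticap = TemporaryCapacity(prepaid_core_hours=720)
    print(ticap.use(cores=4, hours=36))  # 576 core-hours remain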

There is also Global Instant Capacity (GiCAP), which allows capacity to be moved from one system to another. For example, if a CPU on one system is underutilized and another system needs the resource more, the CPU resource can be “logically” moved from one system to the other through GiCAP. Alternatively, if one system dies and another system needs its power, the dead system’s resources can be taken over by the active system by moving them through GiCAP.
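The key point is that the usage rights move, not the physical hardware. A minimal sketch of that bookkeeping (as before, the names are invented and this is not HP’s API):

    # Toy model of the GiCAP idea: a group holds core usage rights, and
    # the rights (not the physical cores) move between member systems,
    # as long as the destination has installed cores to back them.
    # All names are invented for illustration.

    class GicapGroup:
        def __init__(self):
            self.installed = {}  # system name -> physically installed cores
            self.rights = {}     # system name -> active core usage rights

        def add_system(self, name, installed_cores, rights):
            self.installed[name] = installed_cores
            self.rights[name] = rights

        def move_rights(self, source, dest, n):
            """Logically move n core usage rights from source to dest."""
            if self.rights[source] < n:
                raise ValueError("source does not hold that many rights")
            if self.rights[dest] + n > self.installed[dest]:
                raise ValueError("destination has too few installed cores")
            self.rights[source] -= n
            self.rights[dest] += n

    group = GicapGroup()
    group.add_system("prod", installed_cores=16, rights=8)
    group.add_system("standby", installed_cores=16, rights=2)
    # "prod" dies; shift its usage rights to the standby system.
    group.move_rights("prod", "standby", 8)
    print(group.rights)  # {'prod': 0, 'standby': 10}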

iCAP and TiCAP are available for HP-UX (on PA-RISC and Itanium) and for OpenVMS (Itanium only). GiCAP is available only for HP-UX.

I find iCAP and TiCAP very interesting. From a cost perspective, you pay only a minimal amount to keep the inactive resource in place; when it is enabled, you pay for it for the duration – or buy the hardware outright for permanent use as needed.
