Statistical Analysis is Valuable for Understanding

In system administration – and many other areas – statistics can help us understand the real meaning hidden in data. There are many sources from which statistical data can be gathered and analyzed, including sar output and custom scripts written in Perl, Ruby, or Java.
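As a quick illustration of the scripted approach, here is a minimal Perl sketch that runs sar and reports the mean CPU idle percentage. It assumes the common sar layout in which sample lines begin with a timestamp and %idle is the last column; field positions vary by platform, so adjust accordingly.

    #!/usr/bin/perl
    # Minimal sketch: collect CPU idle percentages from sar and report the mean.
    use strict;
    use warnings;

    my @idle;
    open my $sar, '-|', 'sar -u 5 12' or die "cannot run sar: $!";
    while (<$sar>) {
        next unless /^\d{2}:\d{2}:\d{2}/;            # keep only sample lines
        my @f = split;
        push @idle, $f[-1] if $f[-1] =~ /^[\d.]+$/;  # assume %idle is last
    }
    close $sar;

    die "no samples collected\n" unless @idle;
    my $mean = 0;
    $mean += $_ for @idle;
    $mean /= @idle;
    printf "%d samples, mean idle %.1f%%\n", scalar @idle, $mean;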

How about the number of processes: when they start, when they finish, and how much processor time they consume over their lifetime? Programs like HP’s Performance Agent (now included in most HP-UX operating environments) and SGI’s fabulous Performance Co-Pilot can help here. In fact, products like these (and PCP in particular) can gather incredibly valuable data. For example, how much time does each disk spend above a certain volume of writes, and when? How much time does each CPU spend above 80% utilization, and when?

Statistical data from a system could, with the proper programming, be fed into a learning neural network or a Bayesian network, providing a way to raise alarms for statistically unlikely events.
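Even without a full Bayesian network, the core idea can be sketched with a simple z-score test against a historical baseline. This is only an illustration: the baseline values, the latest observation, and the three-sigma threshold below are all assumptions.

    #!/usr/bin/perl
    # Illustrative sketch: flag a sample as "statistically unlikely" when it
    # falls more than three standard deviations from the historical mean.
    use strict;
    use warnings;

    my @history = (42, 45, 44, 47, 43, 46, 44, 45);   # baseline metric values
    my $sample  = 91;                                  # latest observation

    my $n    = @history;
    my $mean = 0;
    $mean += $_ for @history;
    $mean /= $n;

    my $var = 0;
    $var += ($_ - $mean) ** 2 for @history;
    $var /= $n - 1;                                    # sample variance

    my $z = ($sample - $mean) / sqrt($var);
    printf "z-score: %.2f\n", $z;
    print "ALARM: statistically unlikely event\n" if abs($z) > 3;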

Statistical analysis can provide useful data in areas beyond performance. How about measuring the difference between a standard image and a golden image, based on the packages in use? How about analyzing the number of users on a system, when they use it, and for how long? (Side note: I once had a system with 20 or 30 users who each used the system heavily for one or two straight weeks every six months… managing password aging was a nightmare…)

There are many opportunities to analyze a system and produce statistical data; this capability, however, has gone largely untapped. So what are you waiting for?

Canonical Kills Ubuntu Maverick Meerkat (10.10) for Itanium (and SPARC)

It wasn’t long ago that Red Hat and Microsoft announced that they would no longer support Itanium (with Red Hat Enterprise Linux and Windows respectively). Now Canonical has announced that Ubuntu 10.04 LTS (Long Term Support) will be the last supported Ubuntu not only on Itanium, but on SPARC as well.

Itanium has thus lost three major operating systems (Red Hat Enterprise Linux, Windows, and Ubuntu Linux) over the past year. For HP Itanium owners, this means that Integrity Virtual Machines (IVMs) running Red Hat Linux or Microsoft Windows Server will no longer have support from HP (since the operating system vendor has ceased support).

The only bright spot for HP’s IVM is OpenVMS 8.4, which is supported under an IVM for the first time. However, response to OpenVMS 8.4 has been mixed.

Martin Hingley has an interesting article arguing that the loss of RHEL and Windows Server on Itanium will not affect HP; I disagree. For HP’s virtual infrastructure – based on the IVM product – the two biggest environments besides HP-UX are no longer available. An interesting survey would be to find out how many IVMs are in use, and which operating systems they run now and plan to run in the future.

The loss of Red Hat and Microsoft – and now Canonical’s Ubuntu – leaves that many fewer options for IVMs, and thus fewer reasons to use an HP IVM. OpenVMS could pick up the slack, as many shops may be looking for a way to take OpenVMS off the bare metal and free the hardware for other uses.

If HP IVMs are used less and less, this could affect the Superdome line as well, as running Linux has always been a selling point for this product. As mentioned before, this may be offset by OpenVMS installations.

This also means that Novell’s SUSE Linux Enterprise Server becomes the only supported mainstream Linux environment on Itanium – on the Itanium 9100 processor at least.

From the other side, HP’s support for Linux seems to be waning: this statement can be found in the fine print on their Linux on Integrity page:

HP is not planning to certify or support any Linux distribution on the new Integrity servers based on the Intel Itanium processor 9300 series.

Even if HP doesn’t feel the effect of these defections, HP’s IVM product family (and the Superdome) probably will.

Three Technologies We Wish Were in Linux (and More!)

Recently, an AIX administrator named Jon Buys talked about three tools he wishes were available in Linux. In fact, these are technologies rather than tools, and in almost every case they are already part of enterprise-class UNIX environments.

One was a tool to create a bootable system recovery disk. AIX calls the tool that does this mksysb; in my world – HP-UX – it is called make_tape_recovery. In HP-UX, this utility allows you to specify which parts of the root volume group (vg00) to save, as well as other volume groups. Booting the tape created by the make_tape_recovery utility allows you to recreate the system – whether as part of a cloning process or as part of a system recovery.
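For reference, a typical invocation on HP-UX looks something like the following; the tape device path is just an example, and the options should be verified against make_tape_recovery(1M) for your release.

    # Archive the entire root volume group (vg00) to a local tape drive
    /opt/ignite/bin/make_tape_recovery -x inc_entire=vg00 -a /dev/rmt/0mn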

Another technology missing from Linux is the ability to rescan the system buses for new hardware. In Jon’s article, he describes the AIX utility cfgmgr; HP-UX uses the tool ioscan to scan for new I/O devices. Jon mentions LVM (which has its roots in HP-UX), but LVM does not preclude scanning for new devices (as any HP-UX administrator can attest).
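To make the contrast concrete, here is roughly what a rescan looks like on each side; the Linux host adapter number is an example, since each SCSI host must be rescanned individually.

    # HP-UX: probe the hardware paths for new disk devices, then create
    # device files for anything found
    ioscan -fnC disk
    insf -e

    # Linux: no single equivalent; a common workaround is a per-adapter
    # SCSI rescan through sysfs (host0 is an example)
    echo "- - -" > /sys/class/scsi_host/host0/scan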

Jon then discusses Spotlight (from Mac OS X) and laments that it is missing from Linux. Linux has Beagle and Tracker; both are quite annoying and provide nothing that locate does not – and locate is present on AIX, HP-UX, Solaris, and others. I for one would like to completely disable and remove Spotlight from my Mac OS X systems – Quicksilver and LaunchBar are both better than Spotlight. In any case, none of these tools really belongs on an enterprise-class UNIX system anyway.

As for me, there are more technologies still missing from Linux. One is LVM snapshots: they exist in Linux, but they are more cumbersome. In HP-UX (the model for Linux LVM), a snapshot is created from an empty logical volume at mount time and disappears at unmount. In Linux, the snapshot is created with a logical volume create command (whatever for??) and destroyed with a logical volume delete. The snapshot operation should mirror that of HP-UX, which is much simpler.
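To illustrate the difference, here is a sketch of each approach from memory; the volume and mount point names are examples, and the HP-UX form assumes OnlineJFS (VxFS) snapshots.

    # Linux LVM2: the snapshot is a logical volume with its own
    # create/remove lifecycle
    lvcreate --size 1G --snapshot --name lvdata_snap /dev/vg00/lvdata
    mount -o ro /dev/vg00/lvdata_snap /mnt/snap
    # ... back up from /mnt/snap ...
    umount /mnt/snap
    lvremove -f /dev/vg00/lvdata_snap

    # HP-UX: the snapshot exists only from mount to unmount, built from
    # an empty logical volume
    mount -F vxfs -o snapof=/dev/vg00/lvdata /dev/vg00/lvempty /mnt/snap
    # ... back up from /mnt/snap ...
    umount /mnt/snap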

Another thing missing from Linux, but present in every HP-UX (enterprise) system, is a tool like GlancePlus: a monitoring tool with graphs and alarms – including time-based alarms.

Consider an alarm that sends an email when all disks in the system average over 75% busy for five minutes running. This can be done in HP-UX, but not in a standard Linux install. There are many other examples as well.
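As a sketch of what this looks like, a GlancePlus adviser rule is roughly as follows; the metric name and the mail command are from memory, so check them against the adviser syntax reference before use.

    # Adviser sketch: alarm when peak disk utilization stays above 75%
    # for five minutes, then send mail
    alarm GBL_DISK_UTIL_PEAK > 75 for 5 minutes
      start exec "mailx -s 'disks busy' root < /dev/null"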

Personally, I think that Performance Co-Pilot could fill this need; however, I’m not aware of any enterprise-class Linux that includes PCP as part of its standard supported installation. PCP has its roots in SGI’s IRIX – enterprise UNIX – and puts GlancePlus to shame.
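A rough PCP equivalent would be a pmie rule along these lines; the metric name, its units after rate conversion, and the threshold scale are assumptions to verify against pmie(1) before relying on it.

    // pmie sketch: if aggregate disk utilization averages above 75% over
    // the last 30 ten-second samples (five minutes), send mail
    delta = 10 sec;
    avg_sample ( disk.all.avactive @0..29 ) > 0.75
        -> shell "mailx -s 'disks busy' root < /dev/null";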

Perhaps one of the biggest things missing from Linux – though not specifically related to Linux – is enterprise-class hardware: the standard “PC” platform is not suitable for a corporate data center.

While the hardware will certainly work, it remains unsuitable for serious deployments. Enterprise servers – of all kinds – offer a variety of enhanced abilities that are not present in a PC system. Consider:

  • Hot-swappable hard drives – i.e., hard drives that can be removed and replaced during system operation without affecting the system adversely.
  • Hot-swappable I/O cards – i.e., cards that can be replaced during system operation.
  • Cell-based operations – or hardware-based partitioning.

For Linux deployment, the best idea may be to go with virtualized Linux servers on enterprise-class UNIX, or with Linux on Power from IBM. I don’t know of any other enterprise-class Linux platform (not on Itanium, and not on SPARC) – and Linux on Power may not support many of the enterprise needs listed above either.

What are your thoughts?

HP ITRC to Enter Read-Only for Three Days

HP announced that the HP ITRC is to undergo maintenance late in May, during which time the ITRC will be read-only.

Maintenance will start on May 19 at 6:30 am GMT, and end on May 22 at 3:00 pm GMT. During the time that ITRC is read-only, no new forum messages can be posted, and no changes to user profiles, favorites, or notifications will be possible.

Everyone with HP support should be using the ITRC as much as possible; I’ve found the HP-UX and OpenVMS support to be fantastic. There is a great deal of expertise among the readers and responders on the forums.

HP Introduces New Blades – Including Superdome!

At Tech@Work in Germany, HP introduced a number of new Itanium blades with the new Tukwila chip (or Itanium 9300). The new blades will work in blade chassis that also support x86 blades.

The real news is that Superdome servers are also available in the same chassis. Thus, x86 and Itanium servers can be side by side with Superdome servers.

On top of this, all blade types can use the same power supplies and other parts. Parts can be swapped between them, which means lower costs for HP (fewer types of parts) and lower costs for customers as well.

Added to all this, there was the March 2010 update to HP-UX.

This truly is exciting. Imagine managing x86, Itanium, and Superdome from the same interface…

Perl 5 Development Resumes: 5.12 Released

Perl 6 development began in 2000, and ten years later it remains unready for production; thus several developers have come along and kick-started Perl 5 development once again – and now Perl 5.12 has been released.

Jesse Vincent made the announcement on the Perl development mailing list; he also announced the new release schedule for Perl 5: a production release each spring and development releases monthly. The official release page for Perl 5.12 is over at CPAN:
http://search.cpan.org/~jesse/perl-5.12.0/

Over at ActiveState – the best-known provider of commercially supported scripting languages such as Perl, Ruby, and Tcl – the ActiveState blog announced the release of Perl 5.12, followed by the release of ActivePerl 5.12.

For HP-UX, H.Merijn Brand announced that he was building Perl 5.12 for HP-UX, and the HP-UX Porting Centre already has Perl 5.12 packaged for download.

HP uses ActiveState Perl for HP-UX, but uses standard Perl on OpenVMS. I don’t see any word about 5.12 on OpenVMS, but no doubt it will come. Likewise, Perl 5.12 on HP-UX will have to run through the vetting process before it is officially introduced into HP-UX.

I see that Ubuntu has not rolled out Perl 5.12 into Karmic Koala. Their software roll-outs also depend on Debian, so we’ll see how long this takes.

Part of the reason that Perl 5 was revived is that the development of Perl 6 – a complete rewrite and redesign from scratch – is taking so long. Arguably, the complete redesign of Perl contributed to the stagnation of Perl development (until this year). A complete redesign is a difficult thing, and some people believe that the ground-up rewrite of the Netscape browser led to Netscape’s downfall.

It really does appear that a complete redesign of a successful software project rarely succeeds; more successful is the evolutionary process that most software goes through – including, in some cases, refactoring and subsystem replacements (for example, the replacement of the virtual memory subsystem in the Linux kernel, or the replacement of the Ruby execution engine in 1.9).

We’ll just see what the future holds for Perl 6 – but I’m not holding my breath.

Microsoft Joins Red Hat in Dropping Itanium Support

Red Hat announced at the end of 2009 that Red Hat Enterprise Linux 6 will not support Itanium, and now Microsoft has announced that Windows Server 2008 R2 will be the last version to support Itanium.

This is not good. HP is the largest vendor of Itanium systems – as it should be, since Itanium was an HP-Intel joint venture. Intel introduced the new Tukwila chip just this January, and now neither Windows nor Red Hat Enterprise Linux will be found on it.

Most pertinently for HP, this means that Integrity Virtual Machines running Microsoft Windows and Red Hat Enterprise Linux will neither be available nor supported.

SUSE Linux Enterprise Server (SLES) is still available for Itanium, as is HP-UX, and OpenVMS is due soon. Time will tell whether this exit by Red Hat and Microsoft will affect HP’s bottom line; Intel should be relatively unscathed.

UPDATE: Fixed factual error.

Mainframe Linux: Pros and Cons

Why would one want to move Linux to the mainframe (such as IBM’s z10)? There are many reasons – and many reasons not to. Computerworld Australia had a good article describing (in part) some of the reasons the insurance company Allianz did just that. IBM has been pushing Linux on the z series for some time, and Red Hat and SUSE offer Linux variants for that purpose.

One common reason to move to a mainframe is that Linux servers have proliferated in the data center, taking up valuable space and becoming quite numerous. When all you need for a server is the hardware and a low-cost or no-cost Linux, then servers start popping up all over the place.

A single mainframe such as the z10 can handle thousands of servers (a test done in 2000 put 41,400 Linux servers on one IBM mainframe). The replaced servers can then be eliminated from the data center, freeing up valuable space and reducing the workload of current system administrators.

A common scenario is one where the company already has a mainframe in-house running COBOL applications; the purchase cost of a mainframe (in the millions of dollars) has thus already been absorbed. Such a scenario also makes the case for a new mainframe much more appealing, since consolidation puts the added power to work immediately.

Replacing thousands of Intel-based Linux servers with a single mainframe will reduce cooling costs, power costs, physical space requirements, and hardware costs.

So why would anyone not want to use a mainframe?

If there is not already a mainframe in the data center, purchasing one just for consolidation can be too much: mainframes typically cost in the millions of dollars and require specially trained staff to maintain. Adding a mainframe to the data center would also require training current staff or hiring new staff, and a new mainframe requires a new support contract. All of this adds up to millions of dollars in up-front costs, plus additional costs every year.

Another consideration is the number of Linux servers in the data center that would be moved. If there are dozens – or a hundred or two – it may not be entirely cost-effective to focus a lot of energy on moving these servers to the mainframe.

A large server such as HP’s Superdome (with its attendant iCAP and Integrity Virtual Machine capabilities) would probably be a better choice for consolidating dozens of Linux servers. The costs are lower, the power requirements are lower, and you can purchase as much or as little as you need and grow with iCAP. Most companies also already have UNIX staff on hand, and adapting to HP-UX is generally not a problem if needed.

Another benefit is that a server such as the Superdome offers virtualization not just of Linux systems, but of Microsoft Windows and HP-UX – and soon OpenVMS as well.

A large Intel-based server can also virtualize a great number of servers using software from companies like VMware and Sun.

These options won’t necessarily allow you to virtualize thousands of servers – but then, do you need to?

HP Instant Capacity (iCAP)

One of the things that may affect any clusters you have – or other systems – is that management does not want to spend enough to handle every possible load. With a cluster, this means you may not be able to handle a failover, because there is not enough spare processing power to take on the extra load when it happens.

HP’s Instant Capacity (“capacity on demand”) is an answer to this dilemma. The basic idea is that you have extra hardware already in the data center that is not available for use until it is needed. The switch that enables this expanded capacity can be automatic or manual; when some portion of the extra capacity is enabled, you pay for it, and it can be used from then on.

Yet Instant Capacity (iCAP) is more flexible than this. The capacity may be enabled only temporarily instead of permanently – this is known as TiCAP (temporary iCAP). Thus, you can save even more by buying extra hardware but enabling only a small portion of it. During the recent HP Tech Day that I attended in San Jose, California, a scenario was described in which an HP Superdome could be purchased with a large amount of hardware already in place but only a small amount enabled; when extra power is needed, a cell in the Superdome can be enabled until the power is no longer necessary.
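On a system with iCAP installed, day-to-day use comes down to a couple of commands; the core count below is an example, and the exact flags should be verified against icapmodify(1M).

    # Show licensed vs. active processors and any temporary capacity balance
    /usr/sbin/icapstatus

    # Activate two additional cores permanently...
    /usr/sbin/icapmodify -a 2

    # ...or activate them against the temporary (TiCAP) balance
    /usr/sbin/icapmodify -a 2 -t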

There is also Global Instant Capacity (GiCAP), which allows moving capacity from one system to another. For example, if a CPU on one system is underutilized and another system needs the resource more, the CPU resource can be “logically” moved from one system to the other through GiCAP. Alternatively, if one system dies and another needs its power, the dead system’s resources can be moved to the active system through GiCAP.

iCAP and TiCAP are available for HP-UX (on PA-RISC and Itanium) and for OpenVMS (on Itanium only). GiCAP is available only for HP-UX.

I find iCAP and TiCAP very interesting. From a cost perspective, you pay only a minimal amount to keep the resource; when it is enabled, you pay for it for the duration – or buy the hardware outright for permanent use as needed.


HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line). The event was designed to introduce several of us in the blog world to the HP Superdome. The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin. This was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on the Superdome). Andy McCaskey over at SDR News also had a nice writeup on his experiences.

It is possible I was the attendee most familiar with the architecture and its capabilities, though I had not seen or worked with a Superdome before: the capabilities of the Superdome stem largely from the fact that it is cell-based. The rp7420 cluster that I have maintained over the last several years uses the same technology, though cells from the rp7420 are incompatible with the Superdome. The software is the same: parstatus, etc. The System Management Homepage (SMH) was also shown, although it was presented almost as if it were a benefit of the Superdome (it is actually in every HP-UX since 11i v2, and is an option for OpenVMS 8.x).

There was a lot of talk about “scaling up” (that is, using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines). The Superdome is a perfect example of “scaling up” – possibly one of the best examples – and I was impressed by its capabilities. There was a lot of comparison with the IBM zSeries, the epitome of the current crop of mainframes, and the presenters made a very strong case for using the Superdome over the zSeries.

They did seem to focus on running Linux in an LPAR, however, which imposes a limit of 60 Linux installations; using z/VM as a hypervisor, one can run many more Linux systems. I have heard of a test run in Europe (years ago) where a zSeries machine was loaded with one Linux installation after another; when the testers reached into the tens of thousands (30,000?), the network failed or was overloaded – but the zSeries system was still going strong. Trouble is, I’m not able to back this up with a source at the moment: I’m sure it appeared in a print (Linux) journal – it may have been called “Project Charlie.” Can anyone help?

The usability features of the Superdome were on prime display: for example, the power supplies are designed so that they cannot be inserted upside-down. Another example: each Superdome cell comes in two parts – the “system” (CPU, memory, and glue logic) and the power supply – which makes the cell easier to remove in a typical datacenter row and makes each part lighter to handle. There are innumerable touches like this that the designers took into account during the design phase. The engineering on these systems is amazing; usability was thought of from the start. In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour: the system they showed us was a prototype of the original HP Superdome, first shipped in 2000. This system was still going and was running modern hardware: these systems are designed not for a 3-4 year lifecycle, but for a much longer, extended one.

There were a lot of features of the system that I’ll cover in the next few days; it was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,400 Linux machines running on the IBM mainframe back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…
