IBM Introduces Power7 Blades and new AIX

IBM recently introduced Power7 blade servers to go with the Power6 and x86 blades already available. The Power7 blades come in 4-core, 8-core, and “double-wide” 16-core configurations, the last being two 8-core blades tied together. Note that the 4-core configuration is an 8-core blade with four cores permanently disabled; it cannot be upgraded later to eight active cores.

Also introduced was AIX 6 Express, a new (and lower-cost) version of AIX aimed at small businesses.

I’ve always been partial to Power since Apple started using it; it was sad to see Apple stop using the PowerPC.

AIX has never struck me as a particularly well-regarded environment, but now IBM has made it more affordable for more folks; we’ll see how this goes. The AIX admins I knew complained frequently about the clustering environment (although HP ServiceGuard has plenty of interesting problems too). The last time I used AIX, the printing environment was very odd, like the rest of it.

However, no UNIX can be all bad… right?

HP Tech Day: HP Superdome

I was recently invited to take part in an HP Tech Day in San Jose, California, celebrating the 10th anniversary of the HP Superdome (the most advanced server in the Integrity line). The event was designed to introduce several of us in the blog world to the Superdome. The attendees included David Adams from OSNews, Ben Rockwood from Cuddletech, Andy McCaskey from Simple Daily Report (SDR) News, and Saurabh Dubey of ActiveWin. This was quite an eclectic and broad mix of perspectives: David Adams covers operating systems; Ben Rockwood covers Sun goings-on (as he mentions in his article, he wore a Sun shirt: bold as brass!); Saurabh Dubey covers Microsoft goings-on; and I, as loyal readers may know, cover system administration (with a focus on HP-UX, OpenVMS, and Linux – all of which will run on the Superdome). Andy McCaskey over at SDR News also had a nice writeup on his experiences.

It is possible I was the most familiar of the group with the architecture and its capabilities, though I had never seen or worked with a Superdome before: the capabilities of the Superdome stem largely from the fact that it is cell-based. The rp7420 cluster I have maintained over the last several years uses the same technology, though cells from the rp7420 are incompatible with the Superdome. The software is the same: parstatus, and so on. The System Management Homepage (SMH) was also shown, although it was presented almost as if it were a benefit specific to the Superdome (it is actually included in every HP-UX release since 11i v2, and is an option for OpenVMS 8.x).
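For those who have not worked on a cell-based HP-UX system, here is a minimal sketch of what “the software is the same” looks like in practice; these are the parstatus invocations I reach for on the rp7420, and the flags shown are the common ones from parstatus(1) – check the manpage on your own release before relying on them.

  parstatus        # summary of the whole complex: cells, I/O chassis, partitions
  parstatus -C     # per-cell detail: CPUs, memory, and the partition each cell belongs to
  parstatus -P     # per-partition detail
  parstatus -w     # report which partition this command is running in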

There was a lot of talk about “scaling up” (using a larger, more powerful system) instead of “scaling out” (using a massive cluster of many machines). The Superdome is possibly one of the best examples of “scaling up” around. I was impressed by what I saw of the Superdome’s capabilities. There was a lot of comparison with the IBM zSeries, which is the epitome of the current crop of mainframes. The presenters made a very strong case for using the Superdome over the zSeries.

They did seem to focus on running Linux in an LPAR, however; since the zSeries supports at most 60 LPARs, that caps the comparison at 60 Linux installations. Using z/VM as a hypervisor, one can run many more Linux systems. I have heard of a test run in Europe (years ago) where a zSeries was loaded with one Linux installation after another – when the testers reached into the tens of thousands (30,000?), the network failed or was overloaded while the zSeries system was still going strong. Trouble is, I’m not able to back this up with a source at the moment: I’m sure it appeared in a print Linux journal – it may have been called “Project Charlie.” Can anyone help?

The usability features of the Superdome were on prime display: for example, the power supplies were designed so that they cannot be inserted upside-down. Another example: the cells for the Superdome come in two parts, the “system” portion (CPU, memory, and chip glue) and the power supply. This makes each piece lighter and much easier to remove in a typical datacenter row. There are innumerable touches like this that the designers took into account during the design phase. The engineering on these systems is amazing; usability has been thought of from the start. In my opinion, both HP and Compaq have been this way for a long time.

Speaking of the tour, the system they showed us was a prototype of the original HP Superdome that first shipped in 2000. This system was still going, now running modern hardware: these systems are designed not for a 3-4 year lifecycle, but for a much longer, extended one.

There were a lot of features of the system that I’ll cover in the next few days; it was enjoyable and educational.  I very much appreciate the work that went into it and hope to see more.

By the way, if you read Ben Rockwood’s article at Cuddletech, look at the first photograph: your author is center left, with the sweater.

Update: thanks to Michael Burschik for the updated information on Test Plan Charlie, which saw 41,000 Linux machines running on the IBM zSeries back in 2000. I’ll have a report on it soon. Strangely enough, I still haven’t found the article I was thinking of – but of course, a project like that isn’t reported in just one periodical…

How much memory is in the box? (all UNIX, OpenVMS)

How much memory is in this machine?

It would seem that answering this question ought to be easy; it is – but every system has the answer in a different place. Most put an answer of some sort into kernel messages reported by dmesg (AIX apparently does not).

Most systems have a program for system inventory which reports a variety of things, including memory.

Rather than go into great detail about each one, we’ll just put these out there for reference. Each environment has several commands that report installed memory; each is listed below, and a small wrapper script that picks an appropriate command for the running OS follows the lists.

Without further ado, here are a few answers to this burning question:

Solaris

  1. dmesg | grep mem
  2. prtdiag | grep Memory
  3. prtconf -v | grep Memory

AIX

  1. bootinfo -r
  2. lsattr -El sys0 -a realmem
  3. getconf REAL_MEMORY

HP-UX

  1. dmesg | grep Physical
  2. /opt/ignite/bin/print_manifest | grep Memory
  3. machinfo | grep Memory

Linux

  1. dmesg | grep Memory
  2. grep -i memtotal /proc/meminfo
  3. free

OpenVMS

  1. show mem /page

Update:

FreeBSD

  1. dmesg | grep memory
  2. grep memory /var/run/dmesg.boot
  3. sysctl -a | grep mem
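
If you bounce between several of these systems, something like the following rough Bourne shell sketch can save a little typing; it simply keys off uname -s and runs one of the commands above (or a close variant). It is untested as a whole: OpenVMS is left out since it has no uname, and the HP-UX line assumes Ignite-UX is installed.

  #!/bin/sh
  # Rough sketch: report physical memory using whichever command fits the OS.
  case "`uname -s`" in
      SunOS)    prtconf | grep Memory ;;
      AIX)      lsattr -El sys0 -a realmem ;;
      HP-UX)    /opt/ignite/bin/print_manifest | grep Memory ;;
      Linux)    grep -i memtotal /proc/meminfo ;;
      FreeBSD)  sysctl hw.physmem ;;
      *)        echo "don't know how to check memory on `uname -s`" >&2 ;;
  esac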

New operating system releases!

This is just amazing: did everybody coordinate this? Within the last three weeks or so, we’ve seen these releases come out:

Several of these were released on the same day, November 1.

What next? Am I really supposed to choose just one? Sigh. And I just installed OpenBSD 4.1 and Fedora 7, too – not to mention installing FreeBSD 6.2 not too long ago.

From all the talk, I’ll have to try Kubuntu again. So many systems, so little time.

I have been using OpenSUSE 10.3 (with KDE). I just love it – and I love the new menu format, too.

Update: Sigh. I should have known. Microsoft Windows Vista celebrated its 1st Anniversary on Nov. 8.

What’s Your Favorite Operating System?

I was asked this question recently. Everyone likely has an answer: Red Hat Linux, Debian GNU/Linux, Solaris… My answer surprised the questioner: UNIX and UNIX-workalikes. This includes FreeBSD… and Red Hat… and Solaris… and HP-UX… and AIX… and so forth. When I first became interested in UNIX, not one of the aforementioned products existed. The first UNIX system I got my hands on (briefly) was Eunice (look it up 🙂), and the next, a few years later, was Microport System V (for the IBM AT).

Perhaps you might think Solaris is better than Linux – or NetBSD is better than OpenBSD. I suggest it doesn’t matter. Each UNIX (or UNIX-like) environment has its pluses and minuses. Individual choices are personal and enterprise choices are practical – in either case, which is truly better doesn’t matter.

If your enterprise is using Oracle, for example, the choice of which UNIX system you use is dramatically reduced: which system will Oracle support? You won’t be using Oracle on FreeBSD unless you forgo the Oracle maintenance contract. Choices like this continually appear in the enterprise. Perhaps the new version of Red Hat Enterprise Linux has everything you want – but Oracle doesn’t yet support that version.

Alternatively, which system you use for your own desktop is a personal choice. Whichever one is “better” is simply whichever one feels better to you. UNIX is, at its heart, unified – that is, it is a single environment – but it provides a wide choice of user interfaces, user programs, and even technical items such as filesystems and virtual memory management schemes. Use whichever one seems better to you.

What do I use on my personal desktop? Mac OS X. However, in line with the ideas posited above, I’ve just expanded my “desktop” with Synergy, linking my “other” desktop (first Fedora Core 5, now BeleniX with OpenSolaris core) to my Mac OS X desktop. More about Synergy later.
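As a teaser before that later write-up: a Synergy setup is just a small configuration file on the machine that owns the keyboard and mouse, plus one command on each box. The sketch below is from memory, and the hostnames (mymac, mybelenix) are made up, so treat it as a rough outline and check the Synergy documentation for the exact syntax.

  # On the server machine (the Mac here), create a config such as ~/.synergy.conf
  # describing the screens and how they link:
  #
  #   section: screens
  #       mymac:
  #       mybelenix:
  #   end
  #   section: links
  #       mymac:
  #           right = mybelenix
  #       mybelenix:
  #           left = mymac
  #   end
  #
  # Then start the server on the Mac and point the client at it:
  synergys -c ~/.synergy.conf     # run on the Mac (shares its keyboard and mouse)
  synergyc mymac                  # run on the BeleniX box (the client)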

So next time someone tells you what their favorite operating environment is – why not find out what it is they’re so excited about? You might find something exciting yourself.