OpenSolaris on a MacBook

OpenSolaris is very interesting, and since the introduction of DTrace and ZFS it has enthralled many. I tried to install it on my HP Compaq E300 laptop (for which it was unsuitable), and then on an HP Compaq 6910p laptop. There, the networking was unsupported: neither the ethernet nor the wireless driver was included with OpenSolaris Express (Developer Edition).

In any case, I expect I might just be shopping for a laptop in the next year – and it’s nice to see that OpenSolaris does run on the Apple MacBook.  This article goes into detail about how the writer got it to work, and each of the steps taken to make it happen.  Paul Mitchell from Sun discusses dual-partitioning a MacBook in this context as well.  Alan Perry (also from Sun) had done the same thing with a Mac Mini, and Paul extended it to the MacBook.  Both entries are detailed accounts of dual-booting Mac OS X and Solaris.

On a different note, check out the graph of library calls from DTrace in this article.  From what I’ve heard of DTrace, it’s the ultimate when it comes to debugging…

New header, yeah!

We’ll see how this looks.  I changed the header – now it is all picture, which I put together myself using the GIMP and a public domain photo.  I may yet change the text (it just doesn’t seem smooth here…).

Open Source Math

This recent post talks about a paper from the American Mathematical Society arguing in favor of using more open source software in mathematics. Traditionally, academia has been open; indeed, the Internet (and even Usenet) was created, in part, to allow researchers to share information back and forth.

If today’s ideas about patents and profiting from ideas had prevailed then, none of these technologies would have been created. It was scientists (including computer scientists, mathematicians, and others) who shared their research, with researchers at one university exchanging freely with researchers at others.

In mathematics, we have proprietary software like MATLAB, Mathematica, S-PLUS, and others.  However, there are suitable replacements for almost everything: there is R (instead of S-PLUS) and GNU Octave (instead of MATLAB), for example.

The author of the previously mentioned post also mentions Scilab, one I had not heard of before.

As a system administrator, statistical data can be gathered from various sources (uptime, disk usage, trends, etc.) and plotted with some of these tools.  I’ve seen articles about using R to do such things.

Otherwise, encouraging others to use open source is always (in my mind) a Good Thing.

Tips on using the UNIX find command

When I first used find, it took a while before I could use it regularly without looking it up.  For a good introduction, this article from the Debian/Ubuntu Tips and Tricks site is worth reading.  The GNU project has all of its manuals on the web, including the GNU find manual.

There is much more to the find command than just these introductory topics, however.  First, let us consider the tricks and traps of the find command:

  • The original find command required the -print option or nothing was printed at all.  Today, GNU find does not require -print, and most other find implementations have followed suit.
  • Using the -exec option to find is less efficient than using the xargs command; in the Sun Manager’s mailing list there was a nice summary from Steve Nelson of this contrast.
  • Watch out for filenames with spaces and other special characters; GNU find has a -print0 option (and GNU xargs a matching -0 option) just for this reason.  These options use an ASCII NUL to separate filenames.
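The last two points combine naturally. Here is a small sketch (the file names and the *.tmp pattern are made up for the example):

```shell
cd "$(mktemp -d)"                  # scratch directory for the example
touch 'report draft.tmp' old.tmp keep.txt

# -print0 emits NUL-separated names and xargs -0 reads them back,
# so "report draft.tmp" is not split into two arguments:
find . -name '*.tmp' -type f -print0 | xargs -0 rm -f

ls                                 # only keep.txt remains
```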

Some tips for using find:

Multiple options can be placed in sequence with AND and OR boolean operators (and parentheses). For example, to find all files with “house” in the name that were modified within the last two days and are larger than 10K, try this:

find . -name "*house*" -size +10240c -mtime -2

This is where some of the power of find can be seen.

Use all appropriate options.  The more you can narrow down the selection, the less you have to look.  For example, the -type and -xdev options can be quite useful.  The -type option selects a file based on its type, and -xdev prevents the search from crossing onto another disk volume (it refuses to cross mount points).  Thus, you can look for all directories on the current disk from a starting point like this:

find /var/tmp -xdev -type d -print

Get to know all of find’s options.

Use xargs instead of -exec.  find spawns a new process for each execution of -exec (though GNU find might be different).  xargs instead loads a single binary, parcels the arguments it reads from stdin (one per line) into sets of command arguments, and runs the binary as needed – repeating the process as often as necessary.

For example, an “exec” of rm would spawn a process for each file: loading the rm binary, running it once, and releasing the process memory.  With xargs, the rm binary is loaded once, as many arguments as possible are read from standard input, and rm is run with those arguments.  If more arguments remain, xargs repeats the process.
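The batching behavior can be made visible with xargs alone. A small sketch (the words are arbitrary; -n 3 artificially caps each batch at three arguments so the grouping shows):

```shell
# xargs packs as many stdin words as possible into each invocation;
# with -n 3, five words become two echo runs:
printf '%s\n' one two three four five | xargs -n 3 echo
# one two three
# four five
```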

Don’t use find / .  Running find across a huge number of files can slow the system down drastically.  Typically an administrator does this to locate a file somewhere on the disk.  Better is to run this command overnight:

find / -print > /.masterfile

Then the /.masterfile can be searched using grep instead of tying the system up with lots of disk I/O during the day when users are counting on excellent system performance.
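As a sketch of the idea on a small scale (the paths here are stand-ins; the real version would index / into /.masterfile from an overnight cron job):

```shell
# Index a tree once; errors from unreadable directories are discarded.
ROOT=/var/tmp
INDEX=/tmp/masterfile
find "$ROOT" -print > "$INDEX" 2>/dev/null

# During the day, search the cheap text index instead of the disk:
grep 'tmp' "$INDEX"
```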

Remember to quote special characters.  In particular, any glob patterns and the left and right parentheses should be quoted.  Typically, the patterns are put into double quotes, and left and right parens are escaped with a backslash.
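A sketch of the quoting rules in action (the file names and patterns are invented for the example):

```shell
cd "$(mktemp -d)"          # scratch directory for the example
touch main.c util.h notes.txt

# The glob patterns are quoted so the shell passes them to find
# unexpanded, and the grouping parentheses are backslash-escaped:
find . \( -name "*.c" -o -name "*.h" \) -print
```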

Be wary of extensions to POSIX.1 find.  It’s not that they are bad, but rather that you cannot count on them being present.  Unfortunately, some of the most useful options fall into this category – but as long as you are aware of them, they can be used appropriately.  Some options in this category are:

  • -print0
  • -maxdepth
  • -mindepth
  • -iname
  • -ls

In particular, the -print0 is the most useful of the lot.

The BSD man page also brings up an interesting point about find and find options:

Historically, the -d, -L and -x options were implemented using the primaries -depth, -follow, and -xdev. These primaries always evaluated to true. As they were really global variables that took effect before the traversal began, some legal expressions could have unexpected results. An example is the expression -print -o -depth. As -print always evaluates to true, the standard order of evaluation implies that -depth would never be evaluated. This is not the case.

This has been a source of confusion in the past; considering them as global options (and placing them first) will provide some relief. Note that the -d, -L and -x options are likely BSD-specific.

Ubuntu on an Apple MacBook (Intel)

Recently, I discovered this excellent article on putting Ubuntu 7.10 onto an Apple MacBook. I’ve tried Ubuntu in the past and wasn’t too enthused about its user interface.

There seem to be a large number of people who are enthused about Ubuntu. Me, I’ve been sticking with Red Hat or SUSE (or Yellow Dog), but I’m a sucker for trying new distributions.  The screen shot of rEFIt is nice…

Both Fedora and OpenSUSE have put renewed life into their PowerPC versions. Perhaps I should try them again….

I should mention: the first thing everyone mentions about Ubuntu (or Debian) is the ease of using APT. I’ve used APT on my RPM distributions for years. Not a good enough reason to switch…

5 reasons to want a core dump!

There are several reasons to want to make the kernel dump core – the central one being some kernel- or hardware-based problem that continues to occur. During a kernel panic (when properly configured), the kernel itself “dumps core”, and that core can be analyzed after reboot.

So here are some reasons:

  • Intermittent kernel reboots
  • Hard drive “lockups” (constant access, system frozen)
  • Apparent hardware failures
  • Speed problems in the kernel
  • Kernel panic debugging

All except the last depend on an administrator-generated kernel panic with an associated kernel dump. Of course, this is hard on filesystems, though Linux at least offers a “sync” from the same facility as the administrator-generated panic.

Most UNIX operating systems have the capability for the administrator to generate a kernel-based core dump. Linux users must have a kernel that supports the Magic SysReq key. Solaris on SPARC is set to go; Solaris on Intel processors requires booting the Solaris kernel with the kmdb kernel module loaded (through parameters and settings in the boot loader).
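On Linux, a sketch of driving this by hand through the Magic SysRq interface; the trigger lines are shown commented out because they panic the machine immediately, so they belong only on a test box with a dump mechanism configured:

```shell
# Check whether the SysRq interface is enabled (Linux only; a
# nonzero value means at least some functions are allowed):
cat /proc/sys/kernel/sysrq 2>/dev/null || echo "no SysRq interface"

# Enable all SysRq functions for this boot (requires root):
# echo 1 > /proc/sys/kernel/sysrq

# Sync filesystems first, then force a crash (and a dump, if one is
# configured). Both lines require root and WILL take the system down:
# echo s > /proc/sysrq-trigger
# echo c > /proc/sysrq-trigger
```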

Applications will also generate core dumps, and a lot of the core dump analysis tools used for applications and the methods used can be useful in analyzing kernel dumps as well. BEA has an excellent (multi-platform) description of creating and analyzing core dumps – even though it is oriented towards their Tuxedo product, it seems still useful.

Sun has an excellent article by Adam Zhang, Core Dump Management on the Solaris OS, that covers both application core dumps and system kernel core dumps.

For HP-UX, there isn’t as much on crash dump analysis, though the whitepaper Debugging Core Files using HP WDB (PDF) may be useful.

I don’t know AIX (nor z/OS) that well myself, but there are some free RedBooks that include core dump analysis. There is z/OS Diagnostic Data Collection and Analysis (if you just happen to have a mainframe in house) and Problem Solving and Troubleshooting in AIX 5L for AIX.

Likely I’ll be covering some of these tools in depth.  Most versions of UNIX and Linux have man pages for core(5), and some systems offer the commands gcore and savecore as well.  As always, the FreeBSD man pages web page covers not only FreeBSD but also HP-UX (HP-UX 11.22), Solaris (Solaris 9), Red Hat (Red Hat Linux 9), and others. Unfortunately, it appears the other Linux and UNIX versions are not being updated (for whatever reason – space?).

Stress relief for System Administrators

Who, me, stress?  Nah….. sysadmins never have any stress, right?

Well, just in case you do – my feelings are that laughter is the best medicine – and I mean laugh out loud funny.  So what do you do?

There are a lot of ways to tickle the funny bone, and they’re personal as well.  Here are several that I use:

  • Subscribe to RSS feeds like Sharky’s Column and The Daily WTF.
  • Favorite comics (for me, that means: Liberty Meadows, Calvin and Hobbes, and Get Fuzzy)
  • Personal copies of laugh out loud funny comics
  • Funny movies

If you can indulge in some of these away from the office, it is much better.  Fresh air is good, too – as is doing something totally unrelated to work (even if it is other work).

Burn-out is a real danger to system administrators; so get away from the tension and away from the office!

Help! 11 places to get help.

Where do you go when you don’t know the answer to a problem?  Most admins know a few places – but not many seem to go after all that are available to them.  See which of these you know and use:

  • Local instructions.  Corporate administration teams often have their own documentation, and even if there isn’t any on paper or on any digital media, your coworkers may be able to help.
  • Previous experiences.  If you’ve been recording your technical successes and recording documentation et al, perhaps there is something in there.  If not – well, then, you lose, don’t you?  So start recording today!
  • Books. There are many books about system administration topics that may help.  Certification books are often good for technical materials as well.
  • Google. A search on Google (or your search engine of choice) may turn up something somewhere.
  • Personal Network.  Do you have a friend who is a wizard with these sorts of problems?  Ask them.  If you don’t have a friend like this… find one!
  • Vendor Support and Documentation Pages. Often, the documentation and support pages from the manufacturer include pertinent information.  Many of these pages will not be found in search engines, but can be found by searching the manufacturer’s web site.  HP has the ITRC; Sun has SunSolve as well as BigAdmin.
  • Vendor Forums. Many vendors (such as Apple and HP) have forums that allow users to help each other. Do not neglect these! They are often searchable as well.
  • Usenet. Most (perhaps all?) systems, especially UNIX and Windows, are represented on Usenet newsgroups.  These can be a source of information, and can be searched (or used) through Google Groups.
  • User Groups.  User groups, whether national or local, can be a nice place to find resources to help.  There are Linux user groups (LUGs), Macintosh user groups, HP groups such as Encompass.
  • Mailing lists. This is similar to Usenet, but via email.
  • IRC.  Internet Relay Chat provides realtime communication with professionals that may be able to help.

Listing shared libraries in running processes

lsof is a very useful utility, and can be used to list the shared libraries in use by a running process. It can be important to know whether a running process is using a particular library – for forensic reasons, perhaps, or for library upgrades.

To list all the libraries in a particular process, try this command:

lsof -a -c name +D /usr/lib

This will list all open files under /usr/lib belonging to processes whose name begins with name. To list all files used by name, just use:

lsof -c name

Alternately, to find all processes using a file (library) in /usr/lib, use this command:

lsof /usr/lib/libname

The -c option specifies the beginning of a name of a process to list. The -a option is used to create a boolean AND set; otherwise, lsof assumes a boolean OR set of options. With the +D option (which scans for files recursively down the directory tree), the first example looks for the process name that also has open files from the /usr/lib directory tree.

Another good use of lsof has to do with finding files that are open but deleted. Such a situation could potentially happen with a shared library if the library was deleted while a file was using it. This could perhaps happen during a library upgrade. Use this command to do this:

lsof +L1

The +L option selects files by link count; here, any file with fewer than one link (that is, zero links) will be listed. A file with zero links has been deleted from every directory but is still open and in use by a process. Its blocks remain marked as in use by the filesystem, but the file can no longer be found by name anywhere; the space is reclaimed only when the last process closes it.
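A small sketch of why such files exist, using a plain file descriptor rather than a shared library (the file name is invented):

```shell
tmp=$(mktemp)
echo "still here" > "$tmp"

exec 3< "$tmp"   # hold the file open on descriptor 3
rm "$tmp"        # remove the only directory entry: link count is now 0

# The data is still readable through the open descriptor, and the
# file would appear in `lsof +L1` until the descriptor is closed:
cat <&3
# still here
exec 3<&-
```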

There is a nice concise article by Joe Barr at Linux.com about what you can do with lsof. Lsof is available for download.

New operating system releases!

This is just amazing: did everybody coordinate this? Within the last three weeks or so, we’ve seen these releases come out:

Several of these were released on the same day, November 1.

What next? Am I really supposed to choose just one? Sigh. And I just installed OpenBSD 4.1 and Fedora 7, too – not to mention installing FreeBSD 6.2 not too long ago.

From all the talk, I’ll have to try Kubuntu again. So many systems, so little time.

I have been using OpenSUSE 10.3 (with KDE). I just love it – and I love the new menu format, too.

Update: Sigh. I should have known. Microsoft Windows Vista celebrated its 1st Anniversary on Nov. 8.
