An Easier Way to Mount Disks in Linux (by UUID)

When mounting a disk, the traditional way has always been to use the name given to it by the operating system – such as hda1 (the first partition on the first hard drive) or sdc2 (the second partition on the third SCSI drive). However, since disks can move from one location to another (such as from sdc to sda), what can be done to locate the appropriate disk to mount?

Most, if not all, Linux systems now install with /etc/fstab set up to use the Universally Unique Identifier (UUID). While the disk's location could change, the UUID remains the same. How can we use the UUID?

First, you have to find out the UUID of the disk that you want to mount. There are a number of ways to do this; one traditional method has been to use the tool vol_id. This tool, however, was removed from udev back in May of 2009. There are other ways to find the UUID of a new disk.

One way is to use blkid, which is part of util-linux-ng. The vol_id command was removed in favor of blkid, which is now the preferred tool. Find the UUID of /dev/sda1 like this:

$ blkid /dev/sda1
/dev/sda1: UUID="e7b85511-58a1-45a0-9c72-72b554f01f9f" TYPE="ext3"

You could also use the tool udevadm (which takes the place of udevinfo) this way:

$ udevadm info -n /dev/sda1 -q property | grep UUID
ID_FS_UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f
ID_FS_UUID_ENC=e7b85511-58a1-45a0-9c72-72b554f01f9f

Alternatively, the tune2fs command (although specific to the ext family of filesystems) can be used to get the UUID:

# tune2fs -l /dev/sda1 | grep UUID
Filesystem UUID:          e7b85511-58a1-45a0-9c72-72b554f01f9f

Note the # prompt: tune2fs is only available to the superuser, not to a normal user.

There are utilities similar to tune2fs for other filesystems – and most of them probably report the UUID. For instance, the XFS tool xfs_info reports the UUID (in the first line) like this:

$ xfs_info /dev/sda6
meta-data=/dev/disk/by-uuid/c68acf43-2c75-4a9d-a281-b70b5a0095e8 isize=256    agcount=4, agsize=15106816 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=60427264, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29505, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
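
For XFS specifically, xfs_admin can report just the UUID (and, with its -U option, change it). A quick sketch – note that xfs_admin generally expects the filesystem to be unmounted:

# xfs_admin -u /dev/sda6
UUID = c68acf43-2c75-4a9d-a281-b70b5a0095e8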

Of these, the easiest is the blkid command. There is yet one more way to get the UUID – one that also allows you to write programs and scripts that use it. The /dev tree contains a list of links to the disks, named by UUID, in /dev/disk/by-uuid:

$ ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 6db9c678-7e17-4e9e-b10a-e75595c0cacb -> ../../sda5
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 c68acf43-2c75-4a9d-a281-b70b5a0095e8 -> ../../sda6
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 e7b85511-58a1-45a0-9c72-72b554f01f9f -> ../../sda1

Since a disk device is just a file, these links can be used to “find” a disk device by UUID – and to identify the UUID as well. Just use /dev/disk/by-uuid/e7b85511-58a1-45a0-9c72-72b554f01f9f in your scripts instead of /dev/sda1 and you’ll be able to locate the disk no matter where it is (as long as /dev/disk/by-uuid exists and works).
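
For example, readlink can resolve one of these links to reveal the current device name – a quick way to find the disk from a script:

$ readlink -f /dev/disk/by-uuid/e7b85511-58a1-45a0-9c72-72b554f01f9f
/dev/sda1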

For example, let’s say you want to do a disk image copy of what is now /dev/sda1 – but you want the script to be portable and to find the disk no matter where it winds up. Using dd and gzip, you can do something like this:

UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f
dd if=/dev/disk/by-uuid/${UUID} | gzip -9 -c > backup-${UUID}.gz
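
Restoring the image is the reverse pipeline. The target device below (/dev/sdb1) is only a placeholder – double-check it before running, since dd will overwrite whatever is there:

gunzip -c backup-${UUID}.gz | dd of=/dev/sdb1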

You can script the retrieval of the UUID – perhaps from a user – this way:

DEV=${1:-/dev/sda1}            # device name, perhaps supplied by the user
eval `blkid -o udev ${DEV}`    # sets ID_FS_UUID, among other variables
echo "UUID of ${DEV} is ${ID_FS_UUID}"

In the /etc/fstab file, replace the mount device (such as /dev/sda1) with the phrase UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f (or whatever the UUID actually is).
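
A resulting /etc/fstab entry might look like this (the mount point and options are illustrative):

UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f  /home  ext3  defaults  0  2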

Be aware that the UUID is associated with the filesystem, not the device itself – so if you reformat the device with a new filesystem (of whatever type) the UUID will change.
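
The reverse also holds: a dd image copy carries the UUID with it, so the original and the copy will collide if both are attached at once. On ext filesystems, tune2fs can assign a fresh UUID to one of them (the device name below is again a placeholder; XFS has a similar xfs_admin -U option):

# tune2fs -U random /dev/sdb1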

Linux Filesystem Comparison Test

Recently, Google announced it would move from ext2 to ext4, and also announced the hiring of ext4 developer Theodore Ts’o.

Now the technology site Phoronix has benchmarked ext3, ext4, XFS, ReiserFS, and Btrfs (no word on why JFS was not included).

The interesting thing about this set of benchmarks is how mixed the results are: almost every filesystem came in first place at least once.

Another interesting thing is that the tests were done on a solid state SATA drive, the OCZ Agility EX 60GB SSD. One wonders what the tests would show with an IDE hard drive.

I’ve always been partial to XFS, given its capability for on-line expansion, its large capacity, and its reasonable performance. Of course, the targets are continuously moving onward – but XFS has always been respectable. Unfortunately, these benchmarks don’t show it as being the best.

The article recommends using ext3 over ext4 for now, given the loss in filesystem performance with ext4. This is particularly interesting given Google’s choice to migrate from ext2 to ext4; however, their choice was based on the fact that ext2 was showing data loss and they needed better protection.

Now if someone would only benchmark OpenVMS ODS-5…

Google Moves to ext4 Filesystem

Michael Rubin announced that internally Google had decided to move from ext2 to ext4 after careful consideration of ext4, IBM’s JFS, and SGI’s XFS.

Along with this, Google hired Theodore Ts’o, the man behind ext2 and ext4, to help with the migration.

The decision came down to XFS versus ext4, and the easier migration to ext4 was the deciding factor for Google. I am partial to XFS – it’s older and perhaps more stable – but ext4 should be good as well.

I switched to OpenSUSE at one time because they offered XFS and Red Hat did not – and converted a Red Hat 7.1 install to XFS as well. I never had any problems with either installation.

About ZFS

I’ve long known that ZFS was a revolutionary filesystem, but never understood the details. Now there is an article that explains why ZFS is so desirable – and explains it well.

Apple started supporting ZFS read-only in Leopard, and has released beta versions of Leopard with writable ZFS.

FreeBSD committed ZFS to the 7.0 tree in April of 2007. There is an excellent article that describes how to install FreeBSD 7.0 with ZFS. The FreeBSD Project also has a wiki page that describes the current state of ZFS under FreeBSD, and has some nice links about ZFS.

So why isn’t ZFS in the Linux kernel tree? Because the license for ZFS, the Sun CDDL, conflicts with the Linux kernel’s GPL license. There was an interesting discussion on the Linux Kernel Mailing List (lkml) summarized at kerneltrap.

One way to avoid the license issues is to run Linux inside a Solaris zone; while the Linux system is not aware of the filesystem used as the backing store for the zone, the Solaris system could use ZFS as the zone’s filesystem.