Getting Logitech Unifying Wireless to Work in Xubuntu 15.04 Vivid Vervet

I had a Logitech mouse (M215) and keyboard (K400). What you may not know is that Logitech devices share the same wireless protocol, and one transceiver on the computer can handle multiple devices – much like Bluetooth. Since the mouse’s transceiver was nowhere to be found, tying the mouse to the transceiver that came with the keyboard was very appealing.

You have to pair the devices with the transceiver on the computer, and a product called Solaar will help you do that. It has many useful features to help you use your Logitech wireless devices fully.

To install it, you need to add a third-party source. You can do this at the terminal:

sudo add-apt-repository ppa:daniel.pavel/solaar
sudo apt-get update
sudo apt-get install solaar

This adds the repository, updates the package data, and then installs solaar from the new repository. After that, you will need to run solaar:

nohup solaar &

This will add an icon to your top menu panel. This question from AskUbuntu has detailed pictures and descriptions of solaar.
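If you would rather have Solaar come up at every login instead of starting it by hand, a desktop autostart entry does the trick. This is a sketch assuming an XDG-compliant desktop such as Xfce; the file name and entry contents are my own choice, not something shipped by the Solaar package:

```shell
# Create an autostart entry so Solaar launches at login
# (assumes an XDG-compliant desktop; the file name is arbitrary)
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/solaar.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Solaar
Exec=solaar
EOF
```

After the next login, the Solaar icon should appear in the panel without any manual step.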

One pleasant surprise is that the power plugin for the menu bar recognizes the battery levels of the Logitech devices and reports them along with system power. The Solaar plugin does as well, but seeing it in the power plugin is a nice touch.

Putting Fedora 16 onto a Dell Optiplex 745

I purchased a Dell Optiplex 745 (ultra small form factor) from a used equipment sale at the local university. I had hoped that it would be low power as well, but that does not seem to be the case – although a more modern computer is always going to be more efficient (or at least you would think so).

First, I had to reset the BIOS as it had been password protected against changes. Resetting the BIOS was simple: remove the password jumper in the system, boot fully once, then replace the password jumper. The full description is available at eHow. The only sticking point was trying to find the jumper in the case; it turned out to be roughly in the center of the board underneath one of the airflow covers. If you are looking at the system from the top – with the front facing you – the relevant cover is in the back left corner, and the jumper (a tiny blue shiny plastic jumper with extended grasping handle) is towards the center of the board.

I’ve not yet found how to reset the “title” that comes up when the system boots; it does not seem to be in the BIOS settings anywhere. I could reset the CMOS entirely (rather than just the password), but that always scares me – what else would be lost?

Trying to use the CDROM, I ran into some difficulties. It appears the CDROM is easy to put in incorrectly; be sure to orient it the right way and seat it fully.

I decided to put Fedora 16 on this system – specifically, the 64-bit Fedora 16 XFCE spin, which runs fast and light. The install went quite smoothly and the system runs well. Running a lightweight desktop on a fast machine is even nicer than I would have expected. I did load WindowMaker but haven’t yet tried it; who knows – I might even try 9wm at some point.

I loaded up everything that one needs for a home desktop: DVD playback, MP3 playback, and so on. I couldn’t get Parole to work with DVDs, so I went with Totem instead. In the same manner, I installed RhythmBox to play MP3s. I also had no problems getting Flash or Java to work. On the web, an excellent resource for all of these steps is at LinuxForDummies. Video and sound were recognized without problem.

This is definitely a nice setup: both the hardware (the Optiplex 745) and the software (Fedora XFCE 64-bit spin) are recommended.

Why You Should NOT Ditch Windows XP

Nathan Bauman over at PCWorld had an article titled Why You Should Ditch Your Windows XP Laptop Right Now. This sort of pitch has always interested me after a fashion – the thinking just escapes me (as a personal Windows user). The reasoning for a corporate environment would be different, of course.

Here are the reasons Nathan lists for switching to Windows 7:

  1. Windows 7 is easier to use.
  2. Windows 7 is more secure.
  3. Windows 7 supports disks with 4K blocks.
  4. Windows 7 supports more than 2GB of memory.
  5. Windows 8 is a disaster – so get Windows 7 before it goes away.

There are many reasons to stay with Windows XP for now. Be aware that I’ve not yet purchased my own Windows XP – I still have Windows 2000 for when I need Windows (which is almost never).

One reason is that Windows XP runs on virtually anything you can pick up – even one-year old and two-year old (gasp!) hardware. Requirements are 128MB of memory (recommended) and 1.5GB of disk on a Pentium at 233MHz or better. Windows 7 requires four times the memory, approximately 16 times the disk space, and four times the CPU power.

This variance in requirements leads to much lower costs for Windows XP hardware. A search on eBay for laptops with Windows XP shows a huge number of laptops for less than $300 – some as low as $120. These were laptops that presumably once sold for $1200 or $1800 or better. If we assume that a $300 laptop once sold for $1800, that is an 83% reduction in price from original retail – $1500 that stays in your pocket. New laptops with Windows 7 start at $350 or so for minimal systems; for a full-power system with Windows 7 it could be well over $1000.

The software itself is cheaper. Again, on eBay one can find Windows XP SP2 for $30-$40 whereas Windows 7 Ultimate is $75 and up – a savings of over 50%.

Lastly, why buy Windows 7 now at retail prices when you can wait for Windows 8 – and get Windows 7 at fire-sale prices on hardware that by then will have lost 80% of its value? Just by waiting you can save thousands of dollars.

There is also the fact that a lot of software may not yet fully support Windows 7, and the software you count on the most may run only on Windows XP.

So now – that’s why you should stick with Windows XP (just remember to properly secure it!). Let everyone else spend their thousands of dollars and you can get their old equipment for a fraction of its original cost.

However, for an enterprise, the reasoning would be different – and the results might be different.

Making the Case for Partitioning

What is it about partitioning? The old school rule was that there were separate partitions for /, /usr, /home, /var, and /tmp. In fact, default server installations (including Solaris) still use this partitioning setup.

Has partitioning outlived its usefulness?

This question has come up before. There are negative and positive aspects to partitioning, and the case for partitioning might not be as strong as it once was.

Partitioning means that you may have a situation with no space in one partition and lots in another. This, in fact, is the most common argument against partitioning. However, with LVM or ZFS, where volumes can be grown dynamically, this is a moot point: you can expand a filesystem any time you need to.
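As a sketch of how simple that expansion is with LVM (the volume group and logical volume names here are hypothetical, and an ext filesystem is assumed; both commands require root):

```shell
# Grow a logical volume by 10GB, then grow the filesystem to match
# ("vg0/home" is a hypothetical volume group and logical volume)
lvextend -L +10G /dev/vg0/home
resize2fs /dev/vg0/home      # for ext2/3/4; use xfs_growfs for XFS
```

With modern LVM and filesystems this can even be done while the filesystem is mounted.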

However, this still means that the system will require maintenance – but that is what administrators are for, right? If a filesystem fills – or is going to fill – it is up to a system administrator to find the disk space and allocate it to the filesystem.

Another argument against partitioning says “disk is cheap.” Well, if this were true, why do companies still balk at putting terabytes of disk into their SANs? The phrase is trite but doesn’t hold up in practice: companies do not buy 144GB disks in bulk. They still have to watch the budget, and getting more disk space is not necessarily going to be a high priority until disk runs out.

So, what are the benefits to partitioning disks? There are many.

Each partition can be treated separately: the /usr filesystem can be mounted read-only, and /home can be mounted with the noexec and nosuid options – which makes for a more secure and more robust system.
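As an illustration, the relevant /etc/fstab entries might look like this (the device names, filesystem type, and mount options are examples only):

```
# /etc/fstab excerpt: per-partition mount options (example devices)
/dev/sda2  /usr   ext3  ro,defaults             0  2
/dev/sda3  /home  ext3  defaults,nosuid,noexec  0  2
```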

Also, if there are disk errors, then a single partition can be affected rather than the entire system. Thus, on a reboot, the system still comes up instead of being blocked because the root filesystem is trashed. In the same vein, if a filesystem requires a check, going through a check of a 144GB filesystem could take a very long time – whereas a 10GB partition would take far less time and the system would come back up that much faster.

Backups – and restores – are another thing that is simplified by having multiple partitions. For example, when backing up an HP-UX system using make_tape_recovery, you specify which partitions to back up to tape. These partitions are then restored when the tape is booted. If you used a single partition for everything (data, home, etc.) then you would probably not be able to make this sort of backup at all.

One of the nicest reasons to partition is the ability to separate user data from system data. This allows the reinstallation of the system while keeping user data (and application data) untouched, which saves time and effort. I recently installed Ubuntu Server in place of Red Hat Enterprise Linux, and since the system was a single partition, there was no way to install Ubuntu Server without wiping out 200GB of application data and restoring it – which took around nine hours each way on a gigabit network link (if nothing else was sharing the network). Alternatively, when I converted my OpenSUSE laptop to Xubuntu, I was able to keep all of my user settings because /home was on a separate partition. Keeping the server on a single partition cost the company somewhere on the order of a full day’s worth of time and effort – how much money would the company have saved by having a separate partition for /var/lib/mysql?

Performance is another reason for partitioning – but this is only relevant with separate disks. If you have multiple disks, you can place different filesystems on different spindles, so a disk that is heavily used for one purpose can be dedicated to that purpose – you won’t have your database accesses slowing down because of system log writes, for example. This problem can be reduced or eliminated – or made moot – by moving data to a striped volume, and possibly with disk cache as well. Yet, as long as there are disk accesses, do you want them competing with your database?

Having said all of this, how does using a virtual machine change these rules? Do you need partitioning in a virtual machine?

A virtual machine makes all of the arguments relating to the physical disk moot – performance on a virtual disk doesn’t have a high correlation with true physical hardware unless the virtual machine host is set up with segregated physical disk. Even so, the disks may actually be separate LUNs created in a RAID.

However, the ability to secure a filesystem (such as /usr), save filesystem check time, prevent excessive /home usage, and other reasons suggest that the case for partitions is still valid.

Making the Case for IPMI

IPMI is a system that allows you to manage a machine remotely when it would not otherwise be reachable. However, you have to convince others that it will be useful. As obvious as its value is to a professional system administrator, there are others who will not see the usefulness of IPMI. This process – making the case for a technology – comes up more often than most system administrators might realize.

There are numerous situations that require a person to actually be present at a machine – entering the BIOS setup, accessing the UNIX/Linux maintenance shell, changing boot devices, and so forth. So how do you prove how beneficial implementing full IPMI support would be?

Provide a business case – and perhaps an informal user story – to show how IPMI can reduce the need for a person to actually be present.

To really make the case for IPMI, compute the actual costs of making a trip to the data center: the hourly cost of the administrator, the driving costs in gasoline (both to and from!), and the costs associated with handling the expense report. Report the other, less quantifiable costs as well – the cost of an administrator being unavailable for other tasks during the four-hour round trip, including projects being delayed and problems not getting resolved. Combine this with a user story.

For example, create a user story around a possible kernel panic. A user story requires an actual user – an individual – whose story is followed. Here is our example continued as a user story:

Alma received an email that a system (db20) was unresponsive. Checking the system, she found that there was no response from the network at all, and the KVM showed a kernel panic on the display. No keystrokes were accepted by the system, and there was no way to power-cycle it remotely.

So Alma sent an email stating that she would be unavailable for the rest of the day, and called her babysitter to come and take care of her three children for the evening. Then she got into her car, drove the two hours to the data center, and parked in a lot ($10 for one hour). She power-cycled the machine by pressing its front-panel power button, and checked the system response using her laptop. She found that the server was responding and logged in.

Then Alma checked the server over: the logs showed no problems restarting, the system had restarted cleanly, the subsystems were all running, and the monitoring system showed all systems good.

Alma left the data center and drove the two hours back home.

If Alma is paid $60,000 yearly, the cost of her time spent on this event is US$144.23. If she drove 320 miles round trip at $0.76 a mile, she gets US$243.20 as an expense – in addition to the US$10 in parking fees. This makes a total direct cost of US$397.43.
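These figures can be reproduced with a quick calculation; 2,080 paid hours per year and a five-hour trip are my assumptions:

```shell
# Reproduce the US$397.43 direct-cost figure from the example
# (2,080 paid hours/year and a 5-hour trip are assumptions)
awk 'BEGIN {
    hourly  = 60000 / 2080      # salary converted to an hourly rate
    trip    = hourly * 5        # five hours of admin time
    mileage = 320 * 0.76        # 320 miles at $0.76 per mile
    printf "%.2f\n", trip + mileage + 10   # plus $10 parking
}'
# prints 397.43
```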

If something like this happens six times a year, then the total yearly cost is US$2384.58 – and total downtime for the server is 24 hours for an uptime rate of 99.72%.

This account doesn’t include the indirect costs – such as projects being delayed because Alma was unable to work on them, nor does it include the personal costs involved such as babysitting and time away from family. It also doesn’t include the time that HR staff spent on yet another expense report. It also doesn’t include the costs associated with the server being unavailable for four hours.

On the other hand, Polly received word that another server in the data center was unresponsive, and also found that the kernel had panicked and there was no response from the console. She then used a command line tool to access the baseboard management controller (BMC) through IPMI. With an IPMI command, she rebooted the server, and watched the response on the KVM. Checking the system over, Polly found that the server had booted cleanly, subsystems were operational, and all looked good.
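Polly’s fix might look something like this with the common ipmitool utility (the BMC host name and credentials here are hypothetical, and Serial-over-LAN access is assumed to be configured):

```shell
# Power-cycle the hung server through its BMC over the LAN
ipmitool -I lanplus -H db20-bmc -U admin -P secret chassis power cycle

# Watch the console over Serial-over-LAN while it boots
ipmitool -I lanplus -H db20-bmc -U admin -P secret sol activate
```

Two commands from her desk, instead of four hours on the road.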

If Polly is paid the same amount as Alma, and her response took 15 minutes, we get a total cost of US$7.21.  Downtime was reduced by 92% (along with an associated reduction in costs tied to the server being down). If this happens to Polly six times a year, the total yearly cost is US$43.27 – and a downtime of 1.5 hours for an uptime rate of 99.98%.

Thus, IPMI and SoL would have saved Alma’s company US$2341.31 per year.

The strongest case can be made if a recent event could have been solved with the technology you are proposing. If you can point to a situation that could have been resolved in ten minutes with the technology instead of hours or days without it, then its usefulness will be apparent.

With this user story and business case, the case for IPMI should be readily apparent to just about anybody. Similarly, the case can be made for other technologies in the same way.

An Easier Way to Mount Disks in Linux (by UUID)

When mounting a disk, the traditional way has always been to use the name given to it by the operating system – such as hda1 (first partition on first hard drive) or sdc2 (second partition on third SCSI drive). However, with the possibility that disks may move from one location to another (such as from sdc to sda) what can be done to locate the appropriate disk to mount?

Most or all Linux systems now install with the /etc/fstab set up to use the Universally Unique Identifier (UUID). While the disk’s location could change, the UUID remains the same. How can we use the UUID?

First, you have to find out the UUID of the disk that you want to mount. There are a number of ways to do this; one traditional method has been to use the tool vol_id. This tool, however, was removed from udev back in May of 2009. There are other ways to find the UUID of a new disk.

One way is to use blkid, which is part of util-linux-ng. The vol_id command was removed in favor of blkid, which is now the preferred tool. Find the UUID of /dev/sda1 like this:

$ blkid /dev/sda1
/dev/sda1: UUID="e7b85511-58a1-45a0-9c72-72b554f01f9f" TYPE="ext3"

You could also use the tool udevadm (which takes the place of udevinfo) this way:

$ udevadm info -n /dev/sda1 -q property | grep UUID
ID_FS_UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f
ID_FS_UUID_ENC=e7b85511-58a1-45a0-9c72-72b554f01f9f

Alternately, the tune2fs command (although specific to ext) can be used to get the UUID:

# tune2fs -l /dev/sda1 | grep UUID
Filesystem UUID:          e7b85511-58a1-45a0-9c72-72b554f01f9f

Note that tune2fs is only available to the superuser, not to a normal user.

There are utilities similar to tune2fs for other filesystems – and most probably report the UUID. For instance, the XFS tool xfs_info reports the UUID (in the first line) like this:

$ xfs_info /dev/sda6
meta-data=/dev/disk/by-uuid/c68acf43-2c75-4a9d-a281-b70b5a0095e8 isize=256    agcount=4, agsize=15106816 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=60427264, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29505, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

The easiest way is to use the blkid command. There is yet one more way to get the UUID – one that also lets you write programs and scripts that use it. The system maintains symbolic links to disks by their UUID in /dev/disk/by-uuid:

$ ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 6db9c678-7e17-4e9e-b10a-e75595c0cacb -> ../../sda5
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 c68acf43-2c75-4a9d-a281-b70b5a0095e8 -> ../../sda6
lrwxrwxrwx 1 root root 10 2011-04-01 13:08 e7b85511-58a1-45a0-9c72-72b554f01f9f -> ../../sda1

Since a disk drive is just a file, these links can be used to “find” a disk device by UUID – and to identify the UUID as well. Just use /dev/disk/by-uuid/e7b85511-58a1-45a0-9c72-72b554f01f9f in your scripts instead of /dev/sda1 and you’ll be able to locate the disk no matter where it is (as long as /dev/disk/by-uuid exists and works).

For example, let’s say you want to do a disk image copy of what is now /dev/sda1 – but you want the script to be portable and to find the disk no matter where it winds up. Using dd and gzip, you can do something like this:

UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f
dd if=/dev/disk/by-uuid/${UUID} | gzip -9 > backup-${UUID}.gz

You can script the retrieval of the UUID – perhaps from a user – this way:

DEV=/dev/sda1    # or take the device name from user input
eval `blkid -o udev ${DEV}`
echo "UUID of ${DEV} is ${ID_FS_UUID}"

In the /etc/fstab file, replace the mount device (such as /dev/sda1) with the phrase UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f (or whatever the UUID actually is).
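For example, an entry using the UUID found earlier might read (the mount point and options shown are illustrative):

```
# /etc/fstab entry by UUID instead of device name
UUID=e7b85511-58a1-45a0-9c72-72b554f01f9f  /  ext3  defaults,errors=remount-ro  0  1
```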

Be aware that the UUID is associated with the filesystem, not the device itself – so if you reformat the device with a new filesystem (of whatever type) the UUID will change.
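If you need a recreated filesystem to keep (or regain) a particular UUID, tune2fs can set one on an ext filesystem (the device name is an example; this requires root):

```shell
# Give an ext filesystem a new random UUID...
tune2fs -U random /dev/sda1

# ...or set it back to the exact UUID recorded earlier
tune2fs -U e7b85511-58a1-45a0-9c72-72b554f01f9f /dev/sda1
```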

DOS Partitions (fdisk) and the 2TB Limit

If you are trying to create disk volumes over two terabytes (2TB), you’ll find that fdisk won’t let you do it. The problem lies not with fdisk, but with the old PCDOS disk label used on disks for the last three decades or so. Back in 1981, when the IBM PC was introduced, a disk of over two terabytes would have seemed inconceivable.

Thus, we struggle with the limitations of PCDOS disk labels today.

Some (newer?) versions of fdisk report the problem with large drives, giving this warning:

WARNING: The size of this disk is 8.0 TB (8001389854720 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes) for 512-byte sectors. Use parted(1)
and GUID partition table format (GPT).

To get around the size limitation, there is only one solution: dump the PCDOS disk label for another label. The usual recommendation is the GPT (GUID Partition Table), created by Intel. The GPT has a much larger limit, making partitions larger than 2TB feasible.

However, the Linux utility fdisk does not work with drives that use GPT; thus, you have to use a different partitioning tool. The usual recommendation to Linux users is GNU parted. GNU parted handles multiple partition formats, including GPT. Documentation is available in multiple formats (PDF is one).

The steps to getting a large partition done with parted are simple:

  1. Create a disklabel (partitioning) on disk.
  2. Create a partition of the appropriate size.
  3. Create a filesystem (if needed).
  4. Mount.

First, create the GPT disklabel – in other words, create the partitioning tables to support GPT:

# parted /dev/sdc
GNU Parted 2.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Ext Hard  Disk (scsi)
Disk /dev/sdc: 8001GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start   End     Size    Type     File system  Flags
 
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to
continue?
Yes/No? yes

Then after this, create a partition:

(parted) mkpart primary xfs 0 8001.4GB
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? c

This is what happens when the partition is not aligned properly on the appropriate boundary. Traditionally, the boundary was 512 bytes; now it is changing to 4K. GPT also apparently uses a significant portion of the beginning of the disk.

To get around the alignment problem, you can use a start position of 1MB and an end position 1MB from the end:

(parted) mkpart primary xfs 1 -1
(parted) p
Model: Ext Hard  Disk (scsi)
Disk /dev/sdc: 8001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  8001GB  8001GB               primary

Parted supports a wide variety of units (and defaults to megabytes), all of which can be specified during numeric input – such as for start and end points. Using an end point of “100%” is probably just as good as using “-1” (1MB from end).
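For reference, the four steps listed earlier can be scripted non-interactively with parted’s -s option (the device name and the choice of XFS are the same examples as above; this destroys all data on the disk, so double-check the device first):

```shell
# Non-interactive version of the steps above (wipes /dev/sdc!)
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary xfs 1MiB 100%
mkfs.xfs /dev/sdc1        # step 3: create the filesystem
mount /dev/sdc1 /mnt      # step 4: mount it
```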

Jamie McClelland has a nice article about 2TB drives and alignment. Don’t blindly trust the tools to get the partitioning right; specify the appropriate numbers and units in order to force correct alignment.

GNU parted also supports an option that at least suggests it will use the best alignment when it can:

parted --align optimal

Again, don’t blindly trust it: check the numbers.
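Parted can check its own work: the align-check command reports whether a given partition meets the chosen alignment (partition 1 on /dev/sdc, as in the example above):

```shell
# Verify that partition 1 on /dev/sdc is optimally aligned
parted /dev/sdc align-check optimal 1
```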