Running Firestorm on Ubuntu Vivid (15.04)

I recently installed Ubuntu Vivid 15.04 (using UNetBootin to create install media). First thing to install was my Second Life viewer of choice: usually Firestorm or CtrlAltStudio.

First problem is that CtrlAltStudio doesn’t have a Linux version – they say that:

Windows and Mac OSX installers are provided, and if you have the know-how the source code should be readily updateable for Linux.

So off to Firestorm it is. Firestorm has a Linux version, but only 32-bit – not 64-bit. What this means is, if you want to run Firestorm on a 64-bit Linux environment, you have to have the 32-bit libraries for it to use.
In the early days of 64-bit Ubuntu, this was easier – you just loaded the package ia32-libs and you were done. However, that package is no longer present in recent versions.
There are great directions from the Phoenix Firestorm Project on how to do this on Trusty Tahr (14.04) and Utopic Unicorn (14.10) by using the packages from Raring Ringtail (13.04). However… Raring Ringtail isn’t in the Ubuntu archive FTP site, and I was installing onto Vivid Vervet instead.
If you follow the directions, but use precise instead of raring – you can install the libraries from Precise Pangolin (12.04 LTS) which is still supported, and they work well with Vivid Vervet.
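As an illustration of that substitution – and assuming the standard Ubuntu archive layout – an APT source line pointing at Precise looks like this; use precise wherever the directions say raring:

```
deb http://archive.ubuntu.com/ubuntu/ precise main universe
```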
The real solution is to create a 64-bit Linux version of Firestorm – but that hasn’t happened yet.
The final step was to fix the screen resolution – for whatever reason, it took a lower resolution and blew it up to fill the screen – it was painful to look at. Within the Debug menu on login – or the Advanced menu after login – there is an entry labeled “Set Screen Size…” Use that to set the appropriate size and log out of Firestorm and back in.
In my case, that’s all it needed. I’m a happy camper.


Apt-Cacher on Ubuntu Trusty



I have been upgrading a data center server from Lucid Lynx to Ubuntu Trusty, and have been working with the new Icinga, Puppet, and other tools.

One nice change was switching from apt-proxy to apt-cacher. apt-cacher is a cache for your apt package fetches, and speeds up access to apt repositories by keeping the relevant packages inside your network instead of in the great Internet beyond.

Configuring apt-cacher was easy. The set up is done in /etc/apt-cacher/apt-cacher.conf – here is the active configuration in my apt-cacher.conf:

cache_dir = /var/cache/apt-cacher
log_dir = /var/log/apt-cacher
group = www-data
user = www-data
allowed_hosts =
ubuntu_release_names = dapper, edgy, feisty, gutsy, hardy, intrepid, jaunty, karmic, lucid, maverick, natty, oneiric, precise, quantal, raring, saucy, trusty, utopic

Most important is the allowed_hosts parameter; I also updated ubuntu_release_names to include everything after quantal. The others are predefined values.
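As an illustration – and assuming apt-cacher’s support for CIDR notation in this field – an allowed_hosts value for a typical private LAN might look like this (the address range is a placeholder for your own network):

```
allowed_hosts =
```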

The default port is 3142 and you can set up your clients to use the proxy very simply with an additional file in /etc/apt/apt.conf.d – such as 98apt-cacher:

Acquire::http::Proxy "http://apt-cacher.example.com:3142";

This assumes your apt-cacher server is reachable as apt-cacher.example.com – a placeholder; substitute your own host name – and that you are using the default port of 3142, which is configurable.

If you want to test it out – without any actual installs – you might try an upgrade this way:

apt-get update
apt-get -d upgrade

This updates the local package listings – through the proxy! – then downloads all packages needed for an upgrade of your local system, but without installing anything.

Handling an Internet Bandwidth Hog



I noticed that our company internet was very slow – and it wasn’t long before one of the higher-ups also noticed and asked me about it.

I went to a speed test site and ran a test – the speeds measured were a fraction of what we should have been getting.

So I went to our pfSense firewall and looked at the traffic graphs (in the Status menu). Sure enough, outbound traffic was maxed out. I noticed that one particular host was responsible for virtually all traffic across the firewall.

This meant that not only was Internet traffic slowed down for everyone, but so was any traffic bound for the remote data center.

I added a rule to block the host temporarily and then reset all of their connections using the States tab (under the Diagnostics menu).

Eventually the user came by and we straightened everything out. When I asked what they were doing, it turned out to be a massive download they had started. Handling and educating the user is as important as bringing the Internet back to normal.

A Return from Hiatus!

It’s been a long time since I wrote, and a lot has happened. I won’t bore you with the details, though one very big change has happened and will show itself shortly.

My life has had its ups and downs, and I feel like I’ve come out the other side now.

Most recently, I’ve been working with OpenVPN and pfSense, and I am getting rid of Endian.

I love writing, and it’s about time I returned to it.

The Nagios Ecosystem: Nagios, Shinken, and Icinga



Nagios has been a standard-bearer for a long time, being developed originally by Ethan Galstad and included in Debian and Ubuntu for quite some time. In 2007, Ethan created a company built around providing enhancements to Nagios called Nagios Enterprises. However, for several years now there have been competitors to the original Nagios.

The first to come along was Icinga. This was a direct fork of the Nagios code that happened in May of 2009; the story of what led to the fork was admirably reported by Free Software Magazine in April of 2012. In short, many developers were unhappy with the way that Nagios was being developed and with what they perceived as its many shortcomings, which Ethan could not or would not fix. From Ethan’s standpoint, it was more about the enforcement of the Nagios trademark. The article summed it up best at the end: it’s complicated.

The H-Online also had an interview with Ethan Galstad about the future of Nagios and some of the history of the project.

Icinga is now in Ubuntu Universe and has been since Natty. It is also available for Debian Squeeze (current stable release).

Another project is Shinken: rather than a fork, it is a compatible replacement for the core Nagios code. When the Python-based Shinken code was rejected (vigorously) in the summer of 2010 as a possible Nagios 4, it became an independent project. This project is newer than Icinga, but shows serious promise. It, too, is now available in Ubuntu Universe and in Debian Wheezy (current testing release).

It is unfortunate that such animosity seems to swirl about Nagios; however, Icinga and Shinken appear to be quite healthy projects that provide much needed enhancements to Nagios users – and both are available in Ubuntu Precise Pangolin, the most recent Ubuntu LTS release.

I don’t know if Icinga or Shinken still work with Nagios mobile applications. If it’s just the URL, then the web server could rewrite the URL; if there is no compatible page for the mobile applications, then they can’t be used. However, I’d be surprised if there was no way to get the mobile apps working.
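As a purely hypothetical sketch: if the mobile applications differ only in the URL they request, an Apache rewrite on the monitoring server might map the Nagios paths onto Icinga’s. The paths here are illustrative, not taken from either project:

```
# Illustrative only – adjust to the actual CGI paths of your installation
RewriteEngine On
RewriteRule ^/nagios/(.*)$ /icinga/$1 [PT]
```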

I’m going to try running Shinken and/or Nagios on an installation somewhere; we’ll see how it goes. I’ll report my experiences at a later date.

Puppet error: already in progress; skipping


Sometimes, you may try to run your puppet agent, and get an error like this:

# puppet agent --test
notice: Run of Puppet configuration client already in progress; skipping

If there is indeed another puppet agent running, it is simple enough to stop it and try again. However, what if this message appears, and there aren’t any other puppet instances running?

This happens because there is a flag stored in a file that didn’t get erased. Do this – but only if puppet is not in fact running:

# cd /var/lib/puppet/state
# rm -f puppetdlock

This will delete the lock, and puppet should start cleanly the next time. This tip works with puppet version 2.6.3.
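The removal can be guarded so that it only happens when no agent is actually running. A minimal sketch – demonstrated here on a temporary path; on a real system the lock is /var/lib/puppet/state/puppetdlock:

```shell
# Simulate a stale lock on a temporary path for demonstration;
# substitute /var/lib/puppet/state/puppetdlock on a real system.
LOCKFILE="$(mktemp -d)/puppetdlock"
touch "$LOCKFILE"

# Only remove the lock if no puppet agent process is running.
# The [a] keeps pgrep from matching this script's own command line.
if pgrep -f "puppet [a]gent" > /dev/null 2>&1; then
    echo "puppet agent is running; leaving $LOCKFILE alone"
else
    rm -f "$LOCKFILE"
    echo "removed stale lock: $LOCKFILE"
fi
```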

Puppet refuses to run: “run already in progress”


Recently, one of the servers appeared to not be keeping up with configuration changes. Since it runs Puppet, this is a problem – it means that the changes at the puppet server are not getting propagated to the clients. The server is running Ubuntu Lucid Lynx Server 10.04.3 and Puppet 2.6.3.

So I shut down the puppet agent and tried running it manually:

# service puppet stop
 * Stopping puppet agent
# puppet agent --test
notice: Run of Puppet configuration client already in progress; skipping

Since puppet was definitely not running, I had to do some research to find out why it claimed to be.

I found this bug (Puppet bug #2888) that stated sometimes puppet does not remove its lockfile /var/lib/puppet/state/puppetdlock. Sure enough, on my system, the lockfile was still there. I deleted it and puppet ran normally.

There was also a bug report (Puppet bug #5246) suggesting that puppet sometimes does not remove its pidfile in /var/lib/puppet/run/. Some of the testing suggests that this bug is confined to running puppet --onetime (without other options). I don’t think this affected me: after removing the lockfile, puppet ran normally.

Converting a Red Hat Linux 5.8 install to CentOS


Recently, we wanted to convert from Red Hat Enterprise Linux to CentOS. CentOS is a build of Red Hat Enterprise Linux from all of the open source packages that are released by Red Hat. There are a number of instructions in this regard, but the overall process is the same. My conversion was a Red Hat Enterprise Linux 5.8 to CentOS 5.8.

I started by following conversion instructions published online; the overall process is widely documented. However, I had to adjust the instructions for RHEL 5.8 – just look in the appropriate directory on your CentOS mirror for the proper version of the packages you need. You won’t be able to use yum to download the packages, because you want to pull from CentOS rather than RHEL – and yum itself will be getting updated as well.

Firstly, do a cleanup:

yum clean all

Then, create a working space where RPMs can be downloaded:

mkdir ~/centos
cd ~/centos

Now download the relevant CentOS signing key (RPM-GPG-KEY-CentOS-5) from your mirror and import it:

rpm --import RPM-GPG-KEY-CentOS-5

Then, get the relevant packages from CentOS. The `uname -m` in each URL selects i386 or x86_64 packages to match your system; mirror.example.com is a placeholder for your CentOS mirror (adjust the path if your mirror’s layout differs):

wget http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/centos-release-5-8.el5.centos.i386.rpm
wget http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/centos-release-notes-5.8-0.i386.rpm
wget http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/yum-3.2.22-39.el5.centos.noarch.rpm
wget http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/yum-updatesd-0.9-2.el5.noarch.rpm
wget http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/yum-fastestmirror-1.1.16-21.el5.noarch.rpm

You don’t have to use wget, either; you could instead use a text browser like elinks to fetch the same packages. Using elinks lets you pick up the most recent version without stumbling over the version number – if a package has been updated, you don’t have to guess at the version numbers in the filename:

elinks http://mirror.example.com/centos/5.8/os/`uname -m`/CentOS/

(As before, mirror.example.com is a placeholder for your CentOS mirror.)

Delete unnecessary packages from RHEL – in particular, those that use the Red Hat Network (RHN):

rpm -e yum-rhn-plugin rhn-client-tools rhn-setup rhn-check rhnsd

If there are any other packages that require yum-rhn-plugin or the related packages, add them to the list of packages to remove.

Now update all of the packages that were downloaded:

rpm -Uvh --force *.rpm

Lastly, perform an upgrade to fully update the system from the new CentOS repositories:

yum upgrade

As a best practice, you should probably reboot here as well – loading the new libraries, clearing out old files, and activating the new kernel.

Installing Percona Server 5.1 on Lucid (and after MariaDB)



I installed MariaDB 5.1 onto a server. It worked well, but I wanted to move towards Percona Server, looking towards the future and possibly later using Percona XtraDB Cluster.

My first attempts at doing this involved removing the APT repository for MariaDB and adding one for Percona DB:

deb http://repo.percona.com/apt lucid main
deb-src http://repo.percona.com/apt lucid main

Trying to install percona-server-server-5.1 and percona-server-client-5.1 alongside libmysqlclient16 didn’t work. The command complained of unmet dependencies: mysql-common. According to Bug #877018, an install of the Percona Server version of libmysqlclient16 was needed.

It turns out that my installed libmysqlclient16 was the MariaDB version, not the Percona version – and the Percona version would not install:

# apt-cache policy libmysqlclient16
  Installed: 5.1.62-mariadb115~lucid
  Candidate: 5.1.62-mariadb115~lucid
  Version table:
 *** 5.1.62-mariadb115~lucid 0
        100 /var/lib/dpkg/status
     5.1.61-rel13.2-430.lucid 0
        500 lucid/main Packages
     5.1.61-0ubuntu0.10.04.1 0
        500 lucid-updates/main Packages
        500 lucid-security/main Packages
     5.1.41-3ubuntu12 0
        500 lucid/main Packages

Forcing the install of the proper version of libmysqlclient16 took care of that:

# apt-get --reinstall install libmysqlclient16=5.1.61-rel13.2-430.lucid
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
The following packages will be DOWNGRADED:
0 upgraded, 0 newly installed, 1 downgraded, 1 to remove and 3 not upgraded.
Need to get 3,691kB of archives.
After this operation, 6,259kB of additional disk space will be used.
Do you want to continue [Y/n]? y
WARNING: The following packages cannot be authenticated!
Install these packages without verification [y/N]? y
Get:1 lucid/main libmysqlclient16 5.1.61-rel13.2-430.lucid [3,691kB]
Fetched 3,691kB in 16s (231kB/s)
(Reading database ... 100664 files and directories currently installed.)
Removing libmariadbclient16 ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
dpkg: warning: downgrading libmysqlclient16 from 5.1.62-mariadb115~lucid to 5.1.61-rel13.2-430.lucid.
(Reading database ... 100657 files and directories currently installed.)
Preparing to replace libmysqlclient16 5.1.62-mariadb115~lucid (using .../libmysqlclient16_5.1.61-rel13.2-430.lucid_i386.deb) ...
Unpacking replacement libmysqlclient16 ...
Setting up libmysqlclient16 (5.1.61-rel13.2-430.lucid) ...

Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

However, there were parts of the MySQL installation that were accounted for neither by the removal of MariaDB nor by the installation of Percona Server. Removing these would also remove everything that depended on MySQL server – yet Percona Server could not be installed until they were removed. I broke this impasse with a single `apt-get` command that removes and adds packages at the same time (note the minus and plus suffixes on the package names):

apt-get install mysql-client-core-5.1- percona-server-client-5.1+ percona-server-server-5.1+ mysql-server-core-5.1-

After a copious amount of output, this final command took care of everything: Percona Server was live. I restarted things that might have broken with MySQL going down and all was well with Percona Server 5.1.

Moving a VMware ESXi 4.0 Guest From One Host to Another



To move an ESXi 4.0 guest is not all that hard – but you must be aware of several things along the way. Taken one step at a time, it won’t be difficult. In this discussion, we assume that you are moving from one ESXi 4.0 host to another – both with the same architecture. (Anything other than that gets much more complicated.)

First, make sure there are no snapshots. Snapshots are not compatible with this process and must be eliminated altogether.

Then, shut down the guest system. We don’t want the guest changing as it is copied across.

The next step is to copy the guest files from the original host to the destination host. This is the longest step considering you probably have gigabytes of data to transfer. This is also done from the ESXi command line.

I would normally use rsync, but it doesn’t exist on an ESXi 4.0 system; use scp to copy the files instead. The files for the guest should be located in /vmfs/volumes/datastoreX/guest/, where datastoreX is the data store containing the guest and guest is the name of the guest. If you renamed the guest in one of the GUIs such as the vSphere Client, the directory will still reflect the original name.

Make a directory in the remote host (using the ESXi command line interface) in one of the data stores, and then use commands like these from the original host:

cd /vmfs/volumes/datastoreX/guest
scp * remotehost:/vmfs/volumes/datastoreY/guest/

This will copy the files to the remote host.

However, copying is not enough. Log into the remote host and go to the directory you copied the files to. Check the file ending in .vmx for any references to disks that must be changed: convert paths from the old host into paths on the new host. You will probably need the long hexadecimal form of the datastore path, so list it before you start editing – if you cd into the directory containing the guest, the long path appears in the shell prompt.
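Listing the disk references before editing makes this safer. A sketch against a throwaway copy – the datastore names below are invented for illustration; on the ESXi host you would operate on the real .vmx file:

```shell
# Work on a scratch .vmx for demonstration; the datastore names are made up.
VMX="$(mktemp)"
cat > "$VMX" <<'EOF'
scsi0:0.fileName = "/vmfs/volumes/4f1a2b3c-deadbeef/guest/guest.vmdk"
ide1:0.fileName = "guest-extra.vmdk"
EOF

# List every disk reference before touching anything.
grep -i 'vmdk' "$VMX"

# Point absolute references at the new datastore's long hexadecimal path.
sed -i 's|/vmfs/volumes/4f1a2b3c-deadbeef|/vmfs/volumes/5e6f7a8b-cafef00d|' "$VMX"
grep -i 'vmdk' "$VMX"
```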

Next you must register the guest so the system knows about it. Use this command at the command line:

vim-cmd solo/registervm /vmfs/volumes/datastoreY/guest/guest.vmx

Now, to get the guest started: start it from the vSphere Client. The client will pose a question about where the guest came from. Click on the guest’s Summary tab, select I copied it (which should be the default), and click OK.

The guest will start up – and discover that the MAC address of its network interface has changed. For Linux, this means a new ethernet interface, and the configuration of the old interface is ignored: that means that there will be no network connectivity. Enter the console and change the old configuration from eth0 to eth1 (or whatever is appropriate; find out with ifconfig -a). This change varies by which Linux distribution you use; for Ubuntu, the configuration is in /etc/network/interfaces.
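On Ubuntu, the change amounts to renaming eth0 to eth1 in /etc/network/interfaces. A sketch on a scratch copy – the stanza below is a typical DHCP example, not taken from any particular system:

```shell
# Demonstrate on a scratch file; on the real guest, edit
# /etc/network/interfaces instead (after backing it up).
IFFILE="$(mktemp)"
cat > "$IFFILE" <<'EOF'
auto eth0
iface eth0 inet dhcp
EOF

# Rename every whole-word eth0 reference to eth1.
sed -i 's/\beth0\b/eth1/g' "$IFFILE"
cat "$IFFILE"
```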

While you don’t have to reboot, it doesn’t hurt to do so after this change – and it tests the system in a clean reboot. The system should start up cleanly.

Now don’t forget to remove the original. Using the vSphere Client, right-click on the original guest and select Delete from Disk. This will remove the guest entirely from the system and delete all of its files. If you want to retain the files, instead select Remove from Inventory, which essentially unregisters the guest so that the system no longer manages it – but the disk files remain.

