Using Meta-packages to Standardize Servers

Both Ubuntu and Red Hat offer meta-packages: packages that contain no files of their own but depend on others, thus requiring a set of packages to be present. You can use a meta-package of your own to require a set of software on every server, especially software that is not normally installed by the vendor's install process.

A meta-package saves time: you don't have to install each package one at a time. It can be included in a Puppet environment, so that all servers are kept up to date with the current set of packages, and it can be made part of an automated install process, bringing in all necessary software and simplifying the installation steps.

In Ubuntu, creating your own meta-packages is made easy by the equivs package. With RPM, you'll have to build the meta-package yourself with rpmbuild and an appropriate SPEC file.
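
For the RPM case, here is a minimal sketch of what such a SPEC file might look like (the package name, version, and dependency list are placeholders, not a definitive recipe); build it with rpmbuild -bb server-main.spec:

Name:           server-main
Version:        1.0
Release:        1
Summary:        Local meta-package of required server software
License:        GPL
Group:          System Environment/Base
BuildArch:      noarch
Requires:       gawk, logrotate, sysstat, ntp, logwatch, make, m4

%description
Installs no files of its own; it exists only to require the packages above.

%files
# (intentionally empty - this meta-package installs no files)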

A meta-package only truly saves time if you use a tool like APT or YUM to do installations: these tools compute the required dependencies and install them automatically along with the meta-package.

With a meta-package, you can require a set of packages that should be on every server, and force certain packages to be removed as well. For example, you can create a meta-package (server-main) that depends on all the packages that should be on a server but aren't usually installed: gawk, logrotate, sysstat, ntp, logwatch, make, and m4, for instance. When server-main is installed, all of its dependencies are installed with it, and any packages it lists as conflicts are removed: packages like unattended-upgrades and command-not-found, for instance.
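
On Ubuntu, a minimal sketch of the equivs control file for such a server-main package might look like this (the maintainer address and version are placeholders, and the short dependency list here would in practice be the full list given below):

Section: admin
Priority: optional
Package: server-main
Version: 1.0
Maintainer: Your Name <you@example.com>
Architecture: all
Depends: gawk, logrotate, sysstat, ntp, logwatch, make, m4
Conflicts: unattended-upgrades, command-not-found
Description: Local meta-package of required server software
 Exists only to depend on the packages every server should have
 and to conflict with packages that should never be installed.

Generate a template with equivs-control server-main.ctl, edit it along these lines, and build it with equivs-build server-main.ctl; the result is a file such as server-main_1.0_all.deb. Install it with dpkg -i followed by apt-get -f install (or publish it in a repository and simply apt-get install server-main) and every dependency comes along with it.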

Separate meta-packages can be created for packages from the Ubuntu Main repository and for packages from the Ubuntu Universe repository. This makes it simple to install only software from Main and leave out anything that comes from Universe.

These meta-packages can then be placed in a local repository and pulled onto a system during installation; this simplifies the package-selection part of the install process and makes it easy to update systems that are already running.
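
As a sketch (the directory and file names are placeholders), a local repository can be as simple as a directory of .deb files plus a Packages index generated with dpkg-scanpackages, which ships in the dpkg-dev package:

mkdir -p /srv/local-repo
cp server-main_1.0_all.deb /srv/local-repo/
cd /srv/local-repo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

Add a line such as deb file:/srv/local-repo ./ to /etc/apt/sources.list, run apt-get update, and apt-get install server-main will then pull in all of its dependencies.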

As an example, here's my list of requirements (used as dependencies for server-main) from the Ubuntu Main repository:

  • lvm2
  • byobu
  • ruby
  • vim
  • snmpd
  • snmp
  • mlocate
  • postfix
  • ltrace
  • strace
  • wget
  • ntp
  • m4
  • make
  • ifenslave
  • dnsutils
  • procps
  • sysstat
  • logrotate
  • logwatch
  • sharutils
  • pdksh
  • dc
  • bsd-mailx
  • nut
  • finger
  • xfsdump
  • xfsprogs

And from the Universe repository, these are my suggested requirements (used as dependencies for server-universe):

  • iperf
  • jwhois
  • apt-file
  • chkconfig
  • atop
  • dstat
  • maatkit

The Metapackage Problem and apt-get autoremove

This seems to be a common problem among people using APT. During an apt-get run, it may tell you that a number of packages can be removed using apt-get autoremove. Users who are too trusting take this recommendation at face value and wind up removing far too much: all of GNOME and X, for instance.

First, a little description of metapackages and package dependencies. A metapackage exists only to provide dependencies: it requires a set of other packages, usually to pull in a complete collection of software (such as a GNOME desktop or a KDE desktop). When you install a package like gnome-desktop-environment, it installs everything you need for a complete GNOME desktop. This is expected behaviour.

The other thing to understand is the difference between an automatically installed package and a manually installed package. When you install a package manually (that is, explicitly by name), it is considered a package that you, the user, wanted on the system. An automatically installed package is only there because a manually installed package required it; if you had wanted it, you would have installed it explicitly yourself.
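
If you are curious how your own system has packages marked, aptitude's search patterns will show you (newer releases of APT can do the same with apt-mark showauto and apt-mark showmanual):

aptitude search '~i ~M'
aptitude search '~i !~M'

The first lists installed packages marked as automatically installed; the second lists those marked as manually installed.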

The problem comes when these two concepts collide: when you install a metapackage, you really mean that you want all of its dependencies, yet every one of those dependencies is marked as automatically installed. The metapackage is just a shorthand for naming them all at once; the system, however, does not know that.

This problem then manifests itself when you remove one of the dependencies.

Let’s continue with this example: having installed the gnome-desktop-environment package, let’s say you wanted to remove GNOME Evolution – the evolution package. APT will warn you that the gnome-desktop-environment package will also be removed. It is at this point that you should pause and seriously consider the ramifications of what is about to happen – but we’ll continue onwards.

Once you have removed Evolution – and the “GNOME Desktop Environment” metapackage – there are a lot of automatically installed packages that are not required by any packages on the system. What does this mean exactly? Normally, an automatically installed package is not one that you wanted to have installed but it was required by something you did want. However, in this case, these automatically installed packages (such as vino, evince, and totem for example) are in actuality software that you want.

If you try to remove packages with the apt-get autoremove command, you will see a list of packages that are marked as automatically installed and are no longer needed by any installed package. In our example, this is a long list of packages that you actually want to keep!
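
If you want to see exactly what would be removed without being prompted to do anything, run autoremove in simulate mode first:

apt-get -s autoremove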

If you have already removed these packages…

There are a few ways to fix this problem once it has occurred. One is to mark all packages in the system as manually installed:

aptitude keep-all

This marks everything in the system as a manually installed package. This defeats the “autoremove” process entirely and may cause your system to contain unnecessary packages over time.
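
Another way to mark every installed package as manually installed (assuming a version of APT whose apt-mark supports the manual subcommand) is:

apt-mark showauto | xargs sudo apt-mark manual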

A more selective option is to mark individual packages as manually installed. One way is simply to "install" the package again: APT recognizes that it is already present and just flags it as manually installed. Another is to use the apt-mark utility or the aptitude unmarkauto command to change the marking so that the system treats the package as manually installed.
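
For example, to keep the packages from the GNOME example above (the package names are just those mentioned earlier), either of these will do the job:

sudo apt-mark manual vino evince totem
sudo aptitude unmarkauto vino evince totem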

However, the best way is to reinstall the metapackage: this pulls back in all of the pieces that are needed. If the metapackage is still installed, add the --reinstall option:

apt-get install --reinstall gnome-desktop-environment

This can be done from the command line, without X or any other graphical environment, and even from a rescue shell. It may also work without networking if you have never run apt-get clean, since the needed .deb files will still be in APT's package cache (/var/cache/apt/archives).

If you are determined (and knowledgeable) enough, you can get around this problem by building your own metapackage containing just the software you want; the equivs package makes this trivial. However, if you want something like gnome-desktop-environment minus a few specific packages, a better idea might be to get the source for the package and rebuild it with only the desired software listed as dependencies.

You could also “fake it” by using the equivs tools to generate a metapackage that fulfils the role of the package or packages you wish to remove. This is not recommended, however.

An excellent article about using equivs was written in the Ubuntu Forums way back in March of 2008 by “epimeteo” from Portugal.

Even with the capabilities of the equivs package and other metapackages, the best thing to do is to keep the normal metapackage: this allows you to keep the system updated with current packages, prevents future surprises, and saves a lot of work.

Putting Debian packages on hold

When administering a Debian (or Ubuntu) system, putting packages on hold can be very useful. For example, if a critical part of the system is used by developers and is frequently updated, the developers will want to know about updates and to test their code against the new environment before it changes underneath them. Programs like Tomcat, Cocoon, and MySQL fall into this category.

Similarly, if updating a critical portion of the system requires planning, you won't want it swept up in automatic updates; in truth, you shouldn't update automatically at all, since you don't know what will break until you test it.

To hold a package or packages, you should use dpkg --set-selections. If you run the command dpkg --get-selections you can see what is set already:

# dpkg --get-selections | head
acct                                            install
adduser                                         install
apparmor                                        install
apparmor-utils                                  install
apt                                             install
apt-transport-https                             install
apt-utils                                       install
aptitude                                        install
at                                              install
auditd                                          install

As an example, let’s consider the package dnsutils. Let’s see what would happen before we do anything:

# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  bind9-host dnsutils libbind9-60 libdns64 libisc60 libisccc60 libisccfg60 liblwres60
8 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,257kB of archives.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? n
Abort.

Now let’s change this. We’ll put the package dnsutils on hold using dpkg --set-selections:

# echo dnsutils hold | dpkg --set-selections

Let us check the results:

# dpkg --get-selections | grep dnsutils
dnsutils                                        hold

Now, when we try the system update again, things have changed:

# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
  bind9-host dnsutils libbind9-60 libdns64 libisc60 libisccfg60 liblwres60
The following packages will be upgraded:
  libisccc60
1 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
Need to get 29.9kB of archives.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? n
Abort.

Now, dnsutils is being held back, just as we wanted. The other packages are kept back as well: they are built from the same BIND source package, and upgrading them is tied to upgrading dnsutils, so APT leaves them at their current versions.
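
To release the hold later, set the selection back to install:

# echo dnsutils install | dpkg --set-selections

Newer versions of APT also provide apt-mark hold, apt-mark unhold, and apt-mark showhold, which manage the same selection state; use whichever your release supports.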

NO PUBKEY errors from APT (Ubuntu Linux)

When you are using APT (the package manager for Debian and Ubuntu Linux), you might receive NO_PUBKEY errors: warnings that no public key could be found for one software repository or another. You can keep going without fixing these warnings, but in the interest of security you should fix the problem; it is rather simple, after all.

Firstly, if you are running Ubuntu, you can use the Ubuntu key server to help you fix this problem. The server uses the SKS Key Server software to provide this valuable service.

The method is to first import the key into GPG (GNU Privacy Guard) via the Ubuntu key server (or indeed, any other), and then import it into APT via the apt-key command:

gpg --keyserver subkeys.pgp.net --recv 0123456789ABCDEF
gpg --export --armor 89ABCDEF | sudo apt-key add -

In place of 0123456789ABCDEF put the key ID given in the NO_PUBKEY message, and in place of 89ABCDEF for the export use the key ID shown in gpg's response.

However, apt-key has a little-known subcommand, adv, that passes its options directly to gpg. Using it, the key can be fetched and added in one step instead of two:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0123456789ABCDEF

If you already have the appropriate gpg key (or can get it from a suitable web or FTP site), you can add it directly to your APT key ring using apt-key. Here is one example, adapted from Google's Linux Repository Configuration page:

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

You could just as easily fetch the file with any other tool that supports the given URL. Remember that you must add the key as root; it won't work otherwise. If you already have the key file locally, this is sufficient:

sudo apt-key add somekeyfile.gpg
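
Afterwards, apt-key list should show the newly added key, and the next update should run without the warning:

sudo apt-key list
sudo apt-get update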

Hope this helps you.

Core Linux – packages

The new version of Core Linux comes with packages and appears to be built entirely from them (like Red Hat Linux, and unlike FreeBSD, which has a core application set distinct from its packages). These packages are simple: just tar.bz2 archives containing the files relevant to the application, plus a set of metadata files that go under /etc/coretools/pkg.
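
Since a package is just a compressed tar archive, you can look inside one with tar before installing it (the file name here is only a placeholder):

tar tjf somepackage.tar.bz2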

The directory /etc/coretools contains everything related to core packages; the pkg directory has the details on each package, and the directory exec.d has plugins for the program corepkg. Plugins are just scripts that are called by corepkg.

The program corepkg lists its help if called with no parameters. Some of the more common usages might be:

  • corepkg --list (list current plugins)
  • corepkg --exec=info --pkgname=pkg (package information by name: pkg)
  • corepkg --exec=list (list all installed packages)

The plugins as installed are:

  • contents – list the files installed by the named package
  • count – count packages matching specified options
  • info – information on specified package
  • install – install specified package
  • list – list installed packages
  • remove – remove the specified package

The packaging system is simple and driven entirely by shell scripts, and it should be possible to ignore it without adverse effects. There don't seem to be any packages beyond the basic system, though that may not be the case. In any event, the goal of the original Core Linux and its descendants is to build your own system by compiling your own code.