UNIX Administratosphere

~ UNIX and Linux System Administration

Monthly Archives: February 2010

5 Programs That Should be in the Base Install

26 Friday Feb 2010

Posted by ddouthitt in Perl, Ruby, Tips

≈ 6 Comments

Tags

ksh, m4, rsync, ssh

There are a number of programs that never seem to be installed with the base system, but should be. In this day and age of click-to-install, these programs will often require an additional install – I maintain that this should not be.

Most of these comments are relevant to Linux, but the programs are often missing on commercial UNIXes as well.

  • Ruby. This is the first that comes to mind. I have been installing Ruby onto systems since version 1.4.6 – and Ruby is still a fantastic scripting language, and one of the best implementations of object-oriented programming since Smalltalk.
  • m4. I recently wrote about m4, and thought it was already installed on my Ubuntu Karmic system – not so. I used it to create a template for the APT sources.list file (a sketch of that idea follows this list).
  • ssh. This should be installed everywhere automatically, and not as an add-on. For many UNIX systems, ssh is an add-on product that must be selected or compiled from source.
  • rsync. Rsync is a fabulous way to copy files across the network while minimizing traffic – even though it was not designed for raw speed.
  • ksh. This will surprise most commercial UNIX administrators. However, Linux does not come with ksh installed – and the emulation by GNU bash is weak. Now you can install either AT&T ksh-93 (the newest version!) or the old standby, pdksh (which is close to ksh-88).
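To illustrate the sources.list templating mentioned above, here is a minimal sketch – the file name, macro names, and repository lines are my own invention for the example, not the original template:

dnl sources.m4 -- expand with: m4 sources.m4 > sources.list
define(`RELEASE', `karmic')dnl
define(`REPO', `deb http://archive.ubuntu.com/ubuntu/ RELEASE$1 $2
deb-src http://archive.ubuntu.com/ubuntu/ RELEASE$1 $2')dnl
REPO(, `main restricted universe')
REPO(`-updates', `main restricted universe')
REPO(`-security', `main restricted universe')

Change the RELEASE definition at the next upgrade and every repository line regenerates consistently.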

OpenVMS is a different animal – some of the things that should be installed by default there are perl, ruby, java, SMH, and ssh. I’m not sure whether perl or ssh is installed by default, but they should be. OpenVMS should also ship with compliant NFS v3 and v4 support out of the box – without making it difficult to connect to other NFS servers.

What programs do you think should be in the base install?


Using make and rsync for Data Replication

24 Wednesday Feb 2010

Posted by ddouthitt in Scripting, ServiceGuard

≈ 1 Comment

Tags

make, rsync, ssh

When maintaining a cluster environment (such as HP Serviceguard), there are often directories and configurations which need to be maintained on two different local disks (on different machines). Using make and rsync (with ssh) is an excellent way to do this.

The rsync command allows you to replicate local data onto the remote side, copying only what is necessary. This is not necessarily the fastest method, but it is the most efficient: rsync was designed for efficiency over slow links, not for speed over fast ones. Configure rsync to use ssh automatically by exporting RSYNC_RSH from the Makefile (the export keyword is GNU make syntax; without it, rsync never sees the variable), then use rsync to copy the files over:

export RSYNC_RSH=/usr/bin/ssh -i /path/to/mykey

rsync -av $(LOCAL_FILES) remoteserver:$(PWD)

To automate this properly, an ssh key will have to be created using ssh-keygen and transferred to the other host. The private key (/path/to/mykey in this example) is used by ssh in the background during rsync processing; with the key in place, no interactive login is necessary.
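A minimal sketch of the key setup, reusing the placeholder path from above (the empty passphrase keeps it non-interactive – weigh that tradeoff for your environment):

ssh-keygen -t rsa -N "" -f /path/to/mykey
ssh-copy-id -i /path/to/mykey.pub remoteserver

After this, ssh -i /path/to/mykey remoteserver should log in without prompting – and so will rsync.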

For best results, make the first target in the file an “all” target that explains the available targets, and create a “copy” target that does the relevant rsync.

I recommend copying only relevant files, not the entire directory: this way, some files can be retained only on one node – this is good for log files and for temporary files.

For example:

LOCAL_FILES=*.pkg *.ctl *.m4
# export (GNU make) puts RSYNC_RSH into the environment that rsync sees
export RSYNC_RSH=/usr/bin/ssh -i /path/to/mykey

.PHONY: all copy

all:
	@echo "To copy files, use the copy target..."

copy:
	rsync -av $(LOCAL_FILES) remoteserver:$(PWD)

(Remember that recipe lines in a Makefile must begin with a tab.)

Make sure to verify the code before you use it in normal operation. Use the rsync option -n to perform a dry run which affects nothing. Also make sure that the same files are not being updated on both hosts: a one-way rsync will silently overwrite the remote changes, and things might get interesting (and unfortunate…)
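The dry run is convenient as its own target – a sketch, with “check” being merely a name I chose:

check:
	rsync -avn $(LOCAL_FILES) remoteserver:$(PWD)

Then make check previews exactly what make copy would transfer.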

After performing the update, the Makefile can trigger a reconfiguration or a reload of the daemons to put the configuration in place.
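For example, a hypothetical reload step might chain onto the copy – the daemon path and command here are placeholders, so substitute whatever your cluster software actually requires:

reload: copy
	$(RSYNC_RSH) remoteserver '/sbin/init.d/mydaemon reload'

Since RSYNC_RSH already holds the full ssh command with the key, reusing it here keeps the remote call non-interactive as well.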

Downtime Reports: How Did They Respond?

23 Tuesday Feb 2010

Posted by ddouthitt in Disaster recovery

≈ 2 Comments

Tags

jdorganizer, jeri dansky, wordpress, wordpress.com

I like reading downtime reports, because they show what can happen and how people and departments respond to the crisis. Two sites experienced downtime over the weekend – one very well known and one not.

WordPress.com went down over the weekend, disrupting thousands of blogs, including VIP subscribers. According to the report, the data hosting company had an unscheduled change take place in a router, resulting in wordpress.com responding to only a fraction of the incoming requests. This meant that wordpress.com was not down, just inaccessible to 90% of incoming traffic. The failover mechanism was not activated, presumably because the host itself was not down – the server was running fine; only its ability to serve up web pages was hampered.

This suggests the following improvement areas (speaking overall):

  • Use some sort of change control – and test changes when made. This unscheduled change very likely affected not just wordpress.com, but many others as well.
  • Monitor not just the server, but the paths into the server – everything between the customer and the server.
  • Failover mechanisms should be sensitive not just to server performance, but to anything that affects the presentation of web pages to the public (or whatever service is being offered).
  • Relying on a single hosting provider means that any problem arising at that provider affects your service in its entirety; relying on multiple providers in a cluster configuration means that if one provider drops, your service continues (though slightly degraded).

The other site that went down was jdorganizer.com (the web site for Jeri Dansky: Professional Organizer). Since she was a system administrator before becoming a professional organizer, she knows IT. As a user, she had to respond to the outage she experienced (again, caused by the data hosting provider).

Jeri explains on her blog what happened, and how she responded as a user of services. She lists the things she learned from the experience, in particular preparing a disaster plan and reviewing it.

Another thing she did was to switch providers when she no longer trusted hers to provide reliable services; being of a technical bent, she was able to make the switch and configure things reasonably easily. She had someone check availability and fixed the problems that arose.

Both of these experiences provide a window into how companies and other users of hosting services can respond when things fail. In both of these cases, the providers failed: the response from the users of the hosting provider services can help us to learn what to do if and when it happens to us.

Kudos to the WordPress.com team for keeping the blogs running, and kudos to both for being willing to tell us what happened (in delightfully complete technical detail…).

Upgrading OpenSUSE to New Release

21 Sunday Feb 2010

Posted by ddouthitt in OpenSUSE

≈ Leave a comment

Tags

live upgrade

A live upgrade of OpenSUSE was not officially supported until OpenSUSE 11.2 (the most recent release). However, a live upgrade can be done.

The steps are presented with clarity in this article on the OpenSUSE wiki. Quite concisely, they are these:

  • Change from the old repositories to the new open-source distribution repositories (and refresh).
  • Update zypper – the update tool that takes the place of apt or rpm or yum.
  • Add non-open source repositories (and refresh).
  • Reboot.
  • Add the updates repository (and refresh).

I would add one more step: update the system with the most current RPMs using zypper up – although this may already be done. It can’t hurt to do it.
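My rough reading of that procedure, expressed as zypper commands – the repository alias and URL are illustrative only, so check the wiki article for the exact repositories for your release:

zypper rr old-repo-oss    # remove each old repository (aliases will vary)
zypper ar http://download.opensuse.org/distribution/11.2/repo/oss/ repo-oss
zypper ref                # refresh
zypper dup                # the distribution upgrade itself
zypper up                 # my extra step: bring the RPMs fully current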

Upgrading from OpenSUSE 10.3 to 11.0 requires some special steps; so could upgrading 11.0 to 11.1 if you are running KDE. See the article for details.

A live upgrade from OpenSUSE 11.1 to 11.2 is supported.

If you want to get started with OpenSUSE (an excellent distribution!), go to opensuse.org and try it.

m4: 5 Places You Should Use It

21 Sunday Feb 2010

Posted by ddouthitt in Scripting, Tips

≈ Leave a comment

Tags

m4, make

The utility m4 is underutilized and underappreciated, yet when paired with make it can be indispensable. The mailer sendmail made m4 legendary, and behind GNU autotools lies a lot of serious m4 code. Why not use it for other purposes as well?

Here are some areas in which m4 can be used to ease your work:

  • cfengine. You can template the configuration of a “standard file” or a “standard directory” as m4 macros and save yourself a lot of typing (and errors). You could also define a standard “shell script” configuration and use m4 to create it every time.
  • Nagios. Nagios configurations benefit greatly from heavy use of m4: macros can generate large configurations and keep standard configurations consistent.
  • DHCP server. A DHCP server can be set up to assign specific addresses to specific hosts; the configuration, however, can be tedious if there are more than a couple of hosts. Macros can be used to simplify static host configurations (see the sketch after this list).
  • rpmrc. The /etc/rpmrc file is used to configure RPM at package build time. Though less commonly touched, m4 can be used to create an rpmrc file tailored to the CPU in the machine – this configuration will then be used when RPMs are built.
  • HTML generation. When creating hand-crafted HTML, m4 macros can simplify the creation of the standard tags, especially taking care of the beginning and ending tags transparently with only the content visible.
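As an example of the DHCP case, here is a minimal sketch – the macro name and hosts are invented, while the generated output is the usual ISC dhcpd host declaration:

dnl hosts.m4 -- expand with: m4 hosts.m4 >> dhcpd.conf
define(`STATIC_HOST',
`host $1 {
    hardware ethernet $2;
    fixed-address $3;
}')dnl
STATIC_HOST(`printer1', `00:11:22:33:44:55', `192.168.1.10')
STATIC_HOST(`laser2', `00:11:22:33:44:66', `192.168.1.11')

Each one-line invocation expands into a complete, correctly punctuated host block.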

m4 makes templating easy. Anywhere that you need to set up a configuration with multiple sizable elements, m4 can help. A single m4 macro can expand to numerous configuration lines, and a couple dozen m4 lines can result in an extensive configuration set.

Using m4 macros in this manner prevents errors (there is less typing) and keeps all matching configuration elements identical (no botched copies).

m4 also adds “include” file capabilities to any configuration file where it is used. This permits common configurations to be reused everywhere, even though the configuration file may not support include files directly.
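The include itself is a one-liner; in a template it might look like this (the file name is hypothetical):

dnl shared definitions used by every templated configuration
include(`common-macros.m4')dnl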

Neither m4 nor make is guaranteed to be in the base install (as I noted recently), but both are an easy addition – and together they will set you on your way to automatic configuration. Try it today!

HP’s Wind-Cooled Data Center in Wynyard Opens

17 Wednesday Feb 2010

Posted by ddouthitt in Data Centers, Hardware

≈ Leave a comment

Tags

eds, Energy Efficiency, hewlett-packard, hp, pue, wynyard

This is extremely interesting news, and has been covered widely in the business and technology press. HP designed and built a data center in Wynyard Park, England (near Billingham) which uses wind for nearly all its cooling needs.

EDS (purchased by HP) announced the building of the data center early in 2009, and the technology involved was already making news. Data Center Knowledge had an article on it; ComputerWorld’s Patrick Thibodeau also had a very nice in-depth article on the planned data center. ComputerWorld followed up with an equally comprehensive article when the data center opened recently.

Another extensive and illuminating article was written by Andrew Nusca at SmartPlanet.

What is so interesting about the Wynyard data center?

  • It is wind-cooled, and uses a 12-foot plenum (with the equipment located on the floor above).
  • All racks are white, instead of black: this requires 40% less lighting in the data center.
  • Rainwater will be captured and filtered, then used to maintain the appropriate humidity.
  • The facility is calculated to have a PUE of 1.2, one of the lowest ever; new energy-efficient data centers typically have a PUE of 1.5 or so. (PUE, or Power Usage Effectiveness, is total facility power divided by the power drawn by the IT equipment alone – so 1.0 would be perfect.)
  • HP estimates they could save as much as $4.16 million in power annually.

These are indeed exciting times for data center technology.

The Advent of NoSQL

14 Sunday Feb 2010

Posted by ddouthitt in Caché, Cloud Computing, Conferences, Open Source

≈ Leave a comment

Tags

databases, non-relational, nosql, nosql live

The concept of “NoSQL” (that is, non-relational databases) is more of a phenomenon than you might think. The NoSQL Live conference will take place on March 11, 2010, put on by the people behind MongoDB, a non-relational database.

In June 2009, a number of folks gathered in San Francisco to discuss the various NoSQL technologies (such as Cassandra, Voldemort, CouchDB, MongoDB, and HBase). Johan Oskarsson has an article about the meeting, with videos and presentations from the presenters.

ComputerWorld took note of the event, discussing NoSQL and how Amazon.com and Google are using non-relational databases for their data stores. Likewise, Facebook converted to non-relational databases.

Digg posted a nice article that talks about their conversion from MySQL to Cassandra, showing how they came to the point of considering non-relational databases.

Possibly the oldest non-relational database is none other than MUMPS (or M); implementations include GT.M (open source) and InterSystems Caché. Long before relational databases came on the scene, MUMPS was running and saving data – and it continues to this day, working hard in finance and healthcare settings.

Over at nosql-database.org, they claim to be the ultimate guide to the non-relational universe. That may be true; certainly they have an extensive list of links to NoSQL articles, and a list of NoSQL events.

The NoSQL world has been covered by Dave Rosenberg, who noted the upcoming NoSQL Live event in his discussion of real-world use of non-relational databases. Dave had reported earlier about the pervasiveness of non-relational databases in the cloud.

Now to go read some more about NoSQL…

Managing Olympic Servers

13 Saturday Feb 2010

Posted by ddouthitt in Industry

≈ Leave a comment

Tags

alvarsson, atos origin, ina fried, it, olympics, vancouver

The Olympics are this week – and we’ll ignore the copyright shenanigans – but there have been some interesting articles about the massive demands that the games place on IT equipment and staff.

The company providing the IT services is Atos Origin, and Magnus Alvarsson is their leader on the spot. CNET’s Ina Fried interviewed Magnus on February 8, and followed up with details of the IT infrastructure required on February 10.

There are a number of unique problems they face. One is that certain media outlets require old equipment (such as ISDN lines) to send their data back home. Another is that voice, data, and video will all traverse the network using IP – the first time the Olympics has done this.

I always enjoy reading about others’ IT challenges and how they meet them.

IBM Introduces Power7

12 Friday Feb 2010

Posted by ddouthitt in Career, Hardware, Industry

≈ Leave a comment

Tags

ibm, power, power7, processors, tukwila, x86

On Monday, IBM introduced the Power7 processor to go up against the new Itanium Tukwila, officially introduced by Intel the same day. The general consensus among those reviewing these chips (such as CNET’s Brooke Crothers) is that the Power7 is much better than the Itanium chip. Indeed, the Tukwila chip had been delayed for two years.

This new Power chip will provide twice the processing power of its predecessor with four times the energy efficiency, according to IBM. The Power7 offers eight cores with four threads each, giving 32 processing threads per chip.

However, one notable absence is Sun: no new UltraSparc processor was announced. Of course, with Sun’s recent financial difficulties plus the buyout of Sun by Oracle, there may just be too much going on at the moment. Yet, will a new UltraSparc come too late?

In the meantime, analysts are noting that Unix servers (such as those running Power7, UltraSparc, and Itanium) are declining, while x86 servers are increasing in power and capability, with the Nehalem-EX (otherwise known as Beckton) due out soon.

What this means for system administrators is that Linux on x86 could be the biggest growth area for careers, in contrast to Unix (such as HP-UX, Solaris, and AIX).

Energy Star Program for Data Centers

08 Monday Feb 2010

Posted by ddouthitt in Data Centers, Energy Efficiency

≈ Leave a comment

Tags

energy star, epa, Google, ibm, pue

The EPA announced that they are expanding the Energy Star Program to include data centers; the measurements are expected to be finalized in June 2010.

The EPA is hoping that the new Energy Star rating will become a selling point for data centers. The new rating is based largely (but not completely) on the PUE, or Power Usage Effectiveness. William Kosik wrote an article in the September 2007 issue of Engineered Systems Magazine that explains PUE quite well and in detail.

Google talks about their efforts for power-efficient computing in their data centers in some depth; it’s very interesting.

IBM also announced just recently that they are building a new data center in Research Triangle Park where they will test the effects of various temperature levels in the data center – and will cool it with outside air as well.

This is definitely an exciting time for data center power research; it seems that there is something new every day.
