10 Programming Languages Worth Checking Out [from H3RALD]

This article (10 Programming Languages Worth Checking Out) over at H3RALD is very interesting. If you seek out new things to learn, and new computer languages to program in, then this article should pique your interest.

The languages listed are: Haskell, Erlang, Io, PLT Scheme, Clojure, Squeak, OCaml, Factor, Lua, and Scala. There is also a “To get you started…” section for each language with pertinent links for learning more.

I was surprised to find that at least six of these languages have significantly caught my attention already. I find Lua to be absolutely beautiful and a delight to program in (my PalmPilot has PLua loaded all the time). Squeak is just Smalltalk-80 kept alive – and Smalltalk has been of interest to me ever since I learned of it decades ago. Haskell and Erlang are interesting, too – but I’ve not followed that up with learning yet.

Now Scala and Clojure have my attention. Unfortunately, Clojure almost seems to take the simplicity of Common Lisp and trade it for complexity. I don’t find the “complaints” against Common Lisp to be valid; I’d rather see Common Lisp implemented in Java than a Lisp derivative.

I expect I’ll be talking more about Scala as time goes on – this language has caught me good.

A vulnerability walk-through

The FreeBSD kernel recently had an issue in the kenv(2) system call, and this article describes very well what it is – and why it is bad. The vulnerability itself is not terribly severe, but the problem it exposes is a common one, and it shows that all user data must be vetted before it is used: a programmer must treat all user data as suspect.

In fact, there have been studies done by Professor Barton Miller at the University of Wisconsin showing that both commercial and open source programs (in a variety of operating systems) are vulnerable (to differing extents) to a constant barrage of random data.

If your code is to be secure, you absolutely must treat user data as hostile and unknown: any trust placed in the user will be abused by someone, either accidentally or purposefully. If by accident, the user will think your software broken and unreliable; if by purpose, your system (or someone else’s!) could be compromised.

Two excellent books on this topic (from two different angles) are these: Hacking: The Art of Exploitation (by Jon Erickson) and Secure Coding: Principles and Practices (by Mark Graff and Kenneth van Wyk). The first will show you how broken code can be taken advantage of; the second will show you how not to write broken code.

Using options in Perl programs (with Getopt)

The utility getopt (or the builtin getopts) reads command line options for your program. The bash and ksh shells have getopts built in; getopt is a separate program.
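For shell scripts themselves, the getopts builtin covers the same ground; here is a minimal sketch parsing hypothetical -v, -i, and -s arg options (the flag names and the `set --` arguments are invented for illustration):

```shell
# Parse -v and -i as flags, -s as an option taking an argument.
set -- -v -s hello            # stand-in for the real command-line arguments
vflag=0 iflag=0 sarg=""
while getopts "vs:i" opt; do
    case $opt in
        v) vflag=1 ;;
        i) iflag=1 ;;
        s) sarg=$OPTARG ;;    # getopts puts the option's argument in OPTARG
    esac
done
echo "$vflag $iflag $sarg"    # prints: 1 0 hello
```

The option string "vs:i" reads just like the Perl one below: a trailing colon means the preceding option takes an argument.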

To use this capability from Perl, use the Getopt library: either Getopt::Std or Getopt::Long. Most of the time you’ll probably want to use Getopt::Long just for its flexibility.

To start using Getopt::Std, use something like this initial fragment:

use strict;
use Getopt::Std;
my %options = ();
getopts("vs:i", \%options);

This will give you the options -v, -i, and -s arg. After this fragment executes, a hash entry is defined for each option that was present: the value is 1 for the flags (-v and -i), or the given argument for -s. For example, $options{v} might be set to 1, and $options{s} could be “arg”.

Using Getopt::Long isn’t much more difficult:

use Getopt::Long;
my ($sflag, $verbose, $file, $interval, $auto);
GetOptions("s"          => \$sflag,
           "verbose!"   => \$verbose,
           "file=s"     => \$file,
           "interval=i" => \$interval,
           "auto:i"     => \$auto);

This set of options shows most of the features of GetOptions(). The -verbose option is a toggle (as noted by the ‘!’ at the end of the option name), and the alternate can be specified as -noverbose. For the -file option, a string argument is required (specified by the ‘=s’ on the end of the option specification). The ‘=i’ (as exemplified by the -interval option) means that an integer argument is required, and the ‘:i’ for the -auto option means an integer argument is optional. Float values (real numbers) are also possible by using the ‘f’ flag (such as “real=f” – option -real requiring a float argument).

Automation: Live and Breathe It!

Automation should be second nature to a system administrator. I have a maxim that I try to live by: “If I can tell someone how to do it, I can tell a computer how to do it.” I put this into practice by automating everything I can.

Why is this so important? If you craft every machine by hand, then you wind up with a number of problems (or possible problems):

  • Each machine is independently configured, and each machine is different. No two machines will be alike – which means instead of one machine replicated one hundred times, you’ll have one hundred different machines.
  • Problems that exist on a machine may or may not exist on another – and may or may not get fixed when found. If machine alpha has a problem, how do you know that machine beta or machine charlie don’t have the same problem? How do you know the problem is fixed on all machines? You don’t.
  • How do you know all required software is present? You don’t. It might be present on machine alpha, but not machine delta.
  • How do you know all software is up to date and at the same revision? You don’t. If machine alpha and machine delta both have a particular software, maybe it is the same one and maybe not.
  • How do you know if you’ve configured two machines in the same way? Maybe you missed a particular configuration requirement – which will only show up later as a problem or service outage.
  • If you have to recover any given machine, how do you know it will be recovered to the same configuration? Often, the configuration may or may not be backed up – so then it has to be recreated. Are the same packages installed? The same set of software? The same patches?

To avoid these problems and more, automation should be part of every system wherever possible. Automate the configuration – setup – reconfiguration – backups – and so forth. Don’t miss anything – and if you do, add the automation as soon as you find out about it.
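To make the drift problem concrete, here is a sketch of checking two machines’ package lists against each other. The hostnames and package lists below are invented; in practice each list would come from the machine’s package manager (something like `ssh alpha 'rpm -qa | sort' > alpha.pkgs`):

```shell
# Stand-in package lists for machines "alpha" and "delta"
printf 'bash\ncoreutils\nperl\n' > alpha.pkgs
printf 'bash\ncoreutils\n'       > delta.pkgs

# diff exits non-zero when the lists differ – that's our drift signal
if ! diff alpha.pkgs delta.pkgs > drift.txt; then
    echo "drift detected between alpha and delta"
fi
```

Run from cron against every host, even a crude check like this answers “how do you know?” with evidence instead of hope.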

Scripting languages like Perl, Tcl, Lua, and Ruby are all good for this.

Other tools that help tremendously in this area are automatic installation tools: Red Hat Kickstart (as well as Spacewalk), Solaris Jumpstart, HP’s Ignite-UX, and OpenSUSE Autoyast. These systems can, if configured properly, automatically install a machine unattended.

When combined with a tool like cfengine or puppet, these automatic installations can be nearly complete – from turning the system on for the very first time to full operation without operator intervention. This automated install not only improves reliability, but can free up hours of your time.

Setting Goals

If you haven’t any goals, then what are you striving for? How will you know when you get there? What will you have accomplished?

With goals, we can focus on getting the results we want. A goal must be several things:

  • Specific – is your goal specific and clear enough?
  • Measurable – can you measure concretely when the goal is achieved?
  • Achievable – is the goal achievable? (or is it just a dream?)
  • Realistic – is the goal a realistic goal?
  • Timely – is there a time limit on the goal?

The SMART acronym helps you to remember this. The acronym itself is not important; the important thing is the goals you set.

I urge you to sit down and write out some goals and then the specific next actions (using GTD of course!) that you will do to achieve those goals. What steps will you take to accomplish these goals?

COBOL: “Reports of my death are greatly exaggerated.”

Having programmed in COBOL, I can say it’s not as bad as people think it is – and it remains a powerful force in the business world.  Thousands of lines of COBOL are being used every day.  (Of course, no one ever says thousands of lines of COBOL are being developed every day, but that’s another kettle of fish….)

Having also worked in RPG, and in APL (slightly), I can say without reservation that COBOL is not so bad.  The worst that people can say is that it takes 300 pages to write a program that C can do in one line – and they’re right.  Oh, well – can’t win ’em all eh?

Turns out that back in September, Fujitsu updated their three COBOL compilers: one for .Net, one for Windows, and one for UNIX (including Solaris SPARC!).  It also turns out that they have released (again) a previous version for personal educational use – and at no cost!  I caught wind of this via esotechnica.  It may well be that Fujitsu has provided the easiest way to get started in COBOL anywhere.

If I’d’ve had Fujitsu COBOL on Microsoft Windows XP (for instance) back during my COBOL class days, they’d probably have called it cheating (heh!).

If you want to stay with open source projects, you could always use tinyCOBOL, OpenCOBOL, or GNU COBOL… although I think GNU COBOL is dead (the project was to create a COBOL compiler front-end for GCC).  TinyCOBOL and OpenCOBOL appear to be quite active (I actually packaged the tinyCOBOL RPMs for a while).

Anybody ever use COBOL for system administration?  I doubt it – but who am I to say?  Maybe that important network manager is written in COBOL and we just don’t know it…

Quickly creating large files

I’m surprised how many people never think to do this… but it makes the job quite easy.

If you need a large text file, perhaps with 1,000s of lines (or even bigger) – just use doubling to your advantage! For example, create 10 lines. Then use vi (or other editor) to copy the entire file to itself – now 20 lines. If you remember how a geometric progression goes, you’ll have your 1,000s of lines rather fast:

  1. 10 lines…
  2. 20 lines…
  3. 40 lines…
  4. 80 lines…
  5. 160 lines…
  6. 320 lines…
  7. 640 lines…
  8. 1280 lines…
  9. 2560 lines…
  10. 5120 lines…

Ten steps and we’re past 5,000 lines; one more doubling tops 10,000. In the right editor (vi, emacs, etc.) this could be a macro for even faster doubling. This doubling could also be done at the command line – though note that appending a file to itself (cat file.txt >> file.txt) will either fail or loop forever, so write to a temporary file instead:

cat file.txt file.txt > file.tmp && mv file.tmp file.txt

Combined with shell history, that should double nicely – though using an editor would be more efficient (fewer disk reads and writes).
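The whole progression can also be scripted as a loop; a sketch that doubles a seed file until it passes 10,000 lines (filenames are arbitrary):

```shell
# Start from 10 seed lines and double until past 10,000
seq 10 > file.txt
while [ "$(wc -l < file.txt)" -lt 10000 ]; do
    cat file.txt file.txt > file.tmp   # write to a new file, never append to the source
    mv file.tmp file.txt
done
wc -l < file.txt                       # 10 doublings later: 10240 lines
```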

When writing code, often programmers will want to set things off with a line of asterisks, hash marks, dashes, or equals signs. Since I use vi, I like to type in five characters, then copy those five into 10, then copy those 10 and place the result three times. There you have 40 characters just like that.
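If the editor is out of reach, the shell can produce the same sort of separator; a small sketch (the character and width here are arbitrary):

```shell
# Print a 40-character separator line: %.0s consumes each argument
# from seq without printing it, emitting one '#' per argument.
printf '#%.0s' $(seq 40); echo
```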

If only a certain number of characters is needed, use dd:

dd if=/dev/urandom of=myfile.dat bs=1024 count=10

With this command (and bs=1024), the count is in kilobytes. Thus, the example will create a 10K file. (Note /dev/urandom rather than /dev/random: the latter can block waiting for entropy, and may return short reads.) Using the Korn shell, one can use this command to get megabytes:

dd if=/dev/urandom of=myfile.dat bs=$(( 1024 * 1024 )) count=100

This command will create a 100M file (since bs=1048576 and count=100).

If you want files filled with nulls, substitute /dev/zero for /dev/urandom in the previous commands. (Not /dev/null – reading from /dev/null returns end-of-file immediately, leaving you with an empty file.)
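As a quick sanity check of the dd recipe (the filename here is just an example):

```shell
# Create a 10K file of null bytes, then verify the byte count
dd if=/dev/zero of=zeros.dat bs=1024 count=10 2>/dev/null
wc -c < zeros.dat    # 10240 bytes
```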

You could use a word dictionary for words, or a Markov chain for pseudo-prose. In any case, if you only want a certain size, do this:

~/bin/datasource | dd of=myfile.dat bs=$(( 1024 * 1024 )) count=100

This will give you a 100M file of whatever your datasource is pumping out (with no if= given, dd reads from standard input).

OpenSolaris on a MacBook

OpenSolaris is very interesting, and since the introduction of dtrace and ZFS it has enthralled many. I tried to install it onto my HP Compaq E300 laptop (which proved unsuitable), and then onto an HP Compaq 6910p laptop. There the networking was unsupported: neither the ethernet nor the wireless drivers were included with OpenSolaris Express (Developer Edition).

In any case, I expect I might just be shopping for a laptop in the next year – and it’s nice to see that OpenSolaris does run on the Apple MacBook.  This article goes into detail about how the writer got it to work, and each of the steps that were taken to make it happen.  Paul Mitchell from Sun discusses dual-partitioning a MacBook in this context as well.  Alan Perry (also from Sun) had done the same thing with a Mac Mini, and Paul extended it to the MacBook.  Both entries are detailed and have to do with MacOS X and Solaris dual-booting.

On a different note, check out the graph of library calls from dtrace in this article.  From what I’ve heard of dtrace, it’s the ultimate when it comes to debugging…

Five Reasons an Administrator Should be a Programmer Too

There are many reasons to be a programmer, but system administrators have unique reasons for having programming skills. Programming is not just useful when cobbling together shell scripts or Perl scripts, but in many other areas as well.

Consider the case of an application that stores a default directory somewhere and changes it automatically. What happens when that saved directory becomes a removable disk and is never changed again? The problem shows up as a long wait while the system discovers there is no disk present, before finally presenting a list of directories to choose from when saving a new file. The fix is for the program to recognize the same conditions under which it switches its default – and to choose a new default when the saved directory disappears.

There are many reasons for a system administrator to pick up programming skills:

  1. Programming skills translate into better scripting and further automation of system administration duties. A programmer can put together powerful scripts in Korn shell, Perl, Ruby, or other languages – and can combine them in novel and powerful ways.
  2. Understanding programming helps administrators to understand program failures. As above, a program failure can be understood – and thus solved – more easily when the troubleshooter can think like the programmer who designed the program.
  3. With the advent of open source, it is possible to solve particularly intractable problems through source modification. A program can be modified to add new logging capabilities, breakpoints, selections, and other details; debugging can happen at the source level.
  4. Programs can be adapted or enhanced for local needs. If there are special requirements, an open source program can be modified to meet them. This leads to enhanced environments that fulfill the needs of the customer.
  5. Programming is a refreshing change from standard administration work. Putting time in on a personal programming project can refresh your spirits and recharge your batteries. If it is on an open source project, so much the better.