Expanding your desktop across operating systems

When you use Synergy, it connects one computer (and desktop) to another. Your mouse and keyboard will flow seamlessly from one desktop to the next. Any number of desktops can be combined, although programs remain confined to the machines they run on.

Synergy is different from multiscreen desktops – a standard multiscreen desktop stretches a single operating system environment across multiple screens or displays. For most users, that is what’s wanted. However, if you are using multiple systems for different purposes, Synergy lets you join their separate displays together.

When you move your mouse from one desktop (Mac OS X, for instance) to another, it is like moving from one computer to the next. In some ways, it is like a multi-screen software KVM (Keyboard-Video-Mouse) switch. The server runs on the system with the keyboard and mouse, and the clients run on the other systems. Each system has its own monitor, and can be placed (virtually) anywhere through proper configuration of the server: the screens could be placed one on top of the other, for example, or side by side. If one display is disconnected, it is simply skipped – with three screens in a row, if the middle one loses its connection to the server, the mouse jumps over it as it moves from one system to the other.

Recently, I had the server running on Mac OS X, a client on Fedora Core 5, and a client under Solaris 8. The mouse could then be moved to the left side of the Mac OS X display, and it would appear on the right on the Fedora Core 5 display. Continuing to move the mouse, it would eventually wind up on the Solaris 8 display. The only drawbacks are the network delay and differing mouse speeds. I’ve grown addicted to it – try it today!
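That three-screen layout can be captured in the server’s synergy.conf; the hostnames below (mac, fedora, solaris) are hypothetical stand-ins, and this is a minimal sketch of the format rather than my actual configuration:

```
section: screens
	mac:
	fedora:
	solaris:
end

section: links
	mac:
		left = fedora
	fedora:
		right = mac
		left = solaris
	solaris:
		right = fedora
end
```

The server is then started with synergys -c synergy.conf on the machine that owns the keyboard and mouse, and each client connects with synergyc followed by the server’s hostname.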

What’s Your Favorite Operating System?

I was asked this question recently. Everyone likely has an answer: Red Hat Linux, Debian GNU/Linux, Solaris… My answer surprised the questioner: UNIX and UNIX-workalikes. This includes FreeBSD… and Red Hat… and Solaris… and HP-UX… and AIX… and so forth. When I first became interested in UNIX, not one of the aforementioned products existed. The first UNIX system I got my hands on (briefly) was Eunice (look it up 🙂) and the next (a few years later) was Microport System V (for the IBM AT).

Perhaps you might think Solaris is better than Linux – or NetBSD is better than OpenBSD. I suggest it doesn’t matter. Each UNIX (or UNIX-like) environment has its pluses and minuses. Individual choices are personal and enterprise choices are practical – in either case, which is truly better doesn’t matter.

If your enterprise is using Oracle, for example, the choice of which UNIX system you use is dramatically reduced: which system will Oracle support? You won’t be using Oracle on FreeBSD unless you forgo the Oracle maintenance contract. Choices like this continually appear in the enterprise. Perhaps the new version of Red Hat Enterprise Linux has everything you want – but Oracle doesn’t yet support that version.

Alternatively, which system you use for your own desktop is a personal choice: whichever one feels better to you is the “better” one. UNIX is, at its heart, unified – that is, it is a single environment – but it provides a wide choice of user interfaces, user programs, and even technical items such as filesystems and virtual memory management schemes. Use whichever one seems better.

What do I use on my personal desktop? Mac OS X. However, in line with the ideas posited above, I’ve just expanded my “desktop” with Synergy, linking my “other” desktop (first Fedora Core 5, now BeleniX with OpenSolaris core) to my Mac OS X desktop. More about Synergy later.

So next time someone tells you what their favorite operating environment is – why not find out what it is they’re so excited about? You might find something exciting yourself.

The decTop $100 Computer!

Lifehacker has an article on a product called the decTop. It is billed as an Internet-browsing appliance, but is apparently a complete (and upgradable) computer as well. Sounds like the perfect hacker computer.

It does seem to be slowish by modern standards, and if my experience with 128M is any indication, it won’t run the most current distributions. There are some excellent discussions on how to install Ubuntu 6.06 onto it: one from Jonathon Scott and one from Ray over at Librenix. Juan Romero Pardines from the NetBSD Project has put NetBSD onto the decTop. Someone else put AstLinux onto a decTop – and added great pictures of the internals as well.

Over at Docunext there is a great series on the decTop, including pictures of the guts and of the locked drive (apparently no longer locked in current versions). There is also a set of tips on getting Debian to work on the decTop, as well as the author’s experiences in running the decTop on solar power.

The system advertises an Ethernet connection, but it is, in fact, a USB-Ethernet dongle. This, combined with USB 1.1 speeds, makes the Ethernet connection very, very slow. Everything hooks into the USB ports – the keyboard and mouse as well as the Ethernet connection. These two facts appear to be among the worst drawbacks of the device.

There also appears to be no wireless support at all – the Internet browsing devices I’ve seen all use wireless connectivity as their main connection method – so this appears to be more of a desktop device, rather than a portable device. It is fanless, which means near absolute quiet. Who knows, maybe they’d make a good cluster (heh).

I must admit, when I first heard the name, I thought it might be a miniature of one of these instead. Silly me.

Installing Fedora Core or Knoppix on a Compaq Armada E500

The saga continues….

Fedora Core 5

Installing Fedora Core 5 turned out to be very slow. At first I thought perhaps this was because of the CDROM speed (24X) or because of the network speed (10Mb/s).

This, at least, had been the hypothesis – then the system crashed. Switching to the message console with Alt-F4 showed these errors:

<3>Out of Memory: Kill process 349 (loader) score 2647 and children.
<3>Out of Memory: Killed process 524 (anaconda).

Anaconda is the system installation process, written in Python. So, of course, when it is killed, installation stops – though it most likely stopped after the loader was killed.

This reminds me of running yum under my CentOS 3.8 laptop with 48M of memory – it too became unusable due to memory constraints. APT-RPM never had these problems. Is Python being a hog?


Knoppix

Knoppix 3.3 refused to see my PS/2 mouse – at least, the trackpad on the laptop is supposed to be a PS/2-style mouse. Nothing worked.

Knoppix 3.9 worked fine, but it was very slow (10–15 seconds or more to load Konqueror; just starting up took 30–60 seconds), and the hard drive installer was labeled a very early version. Knoppix 3.9 also gave up the professional backdrop and graphics, and gave up WindowMaker besides. Why give up WindowMaker but retain twm, for instance?

A Rant….

It used to be that UNIX systems worked on machines for years – even through several upgrades. With this machine I can only wonder. 128M of memory is a substantial amount – why do current systems require such a ghastly amount of memory? Are UNIX and Linux taking after Windows? Are we going to need hardware upgrades every time a new version is released?

I often wonder what the developers and testers of new systems (whether Windows, Solaris, or whatever) are using. For example, on my desk I’ve a Pentium 4 with 256M of memory – and this is pretty much the fastest Intel machine I’ve got. Do you think a Solaris developer is using a machine like this? Or a FreeBSD developer? Or a Red Hat developer?

I tend to think of the developers as spending their hard-earned money on the biggest, fastest, fanciest machines they can get – then programming for them – and then forcing the rest of us to tag along. It’s not avarice, just lack of forethought.

One side rant: whatever the benefits or disadvantages of Python are as a language, it often seems to take a lot more memory than Perl or Ruby or Korn shell – and yet it is what everybody is using. I can’t run yum under CentOS 3.x because I’ve only 48M of memory – and I’m talking about a text-mode environment. I can’t run Anaconda (Fedora Core 5) because I’ve only 128M of memory. One hundred twenty-eight megabytes.

Do we need gigabytes of memory just to install now?

Installing Kubuntu 7.04 on a Compaq Armada E500


My goal is to use a variety of different UNIX and Linux systems on this Compaq Armada E500 laptop. I plan to use each system as a primary desktop for a while, then save a copy of the disk to another system (a Mac Mini PowerPC G4). Systems that are likely to be installed include Ubuntu, Kubuntu, CentOS, Solaris, OpenSolaris, Fedora Core, OpenBSD, and FreeBSD. Perhaps that is why I gave the laptop the host name chameleon.


This is a nice machine, and came with Windows 2000. I installed all of the proper applications for use under Windows (Firefox, Thunderbird, Eraser, drivers) – and then saved the entire disk off to another location. To do this, I ran the following on the laptop (substituting the destination machine’s hostname for desthost):

nc desthost 28088 < /dev/hda

and on the destination I ran:

nc -p 28088 -l | gzip -c - > hda.gz

This transmits the hard drive data in the clear and uncompressed across the network, but the source machine (a Pentium III) is a lot slower than the destination (a PowerPC G4) – or at least I would assume so – so I let the destination do the compression. This network is also used by no one but me, and is not normally Internet-connected (dial-up!).
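As a sanity check of the pipeline (minus the network leg), here is a small rehearsal in which a scratch file stands in for /dev/hda and a plain redirect stands in for nc – an illustrative sketch, not the commands from the session above:

```shell
# Create a 64 KB scratch file to stand in for the raw disk device
dd if=/dev/urandom of=disk.img bs=1024 count=64 2>/dev/null

# Receiving side of the pipeline: compress the stream as it arrives
gzip -c < disk.img > hda.gz

# Restoring later is simply the reverse: decompress back to a raw image
gzip -dc hda.gz > restored.img

# Verify that the image survives the round trip bit-for-bit
cmp disk.img restored.img && echo "image round-trips intact"
```

Restoring to real hardware would reverse the original transfer: gzip -dc hda.gz piped into nc on the Mac Mini, and nc writing to /dev/hda on the laptop (booted from a live CD so the disk is not in use).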

Installation of Kubuntu

I thought installation would be simple – but it was not to be. It turns out the CDROM is ghastly slow, and the Kubuntu live distribution runs quite slowly, which made the other problems seem much worse. In fact, Gnoppix (the GNOME version of Knoppix) suffers from the same problem. Knoppix is the only CD-based system so far that does not seem slow. Knoppix is also the only one to offer multiple window managers – so you can skip KDE and use Icewm instead; both were very nice on this laptop.

The plan, again, is to install fairly recent distributions in sequence, use each one daily (at least for a time), and save the results off to the Mac Mini G4. After a while I will settle on one, and keep the others for some other time (or some other system).

The first attempt at installing Kubuntu 7.04 went well for a little while, but then seemed to freeze at 94% during a hardware scan. I’d used no special kernel parameters, and let it go for several hours – which it needed. I watched it go from 90% to 91% on up to 94% before it froze (or appeared to).

The second attempt at installation went worse. This time I added kernel parameters (at the beginning of the boot command line) – dma noacpi noapic – and attempted to install. At first I used the automatic partition setup (swap plus root); this didn’t work. Then I switched to manual partitioning with the same layout, using the xfs filesystem. The installer stated that GRUB often crashes when /boot is xfs, and returned me to the partitioning screen – with no hint of how to use LILO instead, as it recommended.

So I changed the setup so that /boot was ext3, and used xfs for root – then jfs – then ext3. The result was always the same: creating the filesystem on the partition failed (no reason given). I later found that the device files /dev/hda3 and /dev/hda4 were missing; I recreated them, with no apparent effect on the install program.

So again I rebooted – and waited.

I keep thinking, too, that the hard drive light on this machine must indicate any disk access (as opposed to hard disk access alone) – it seems to keep pace with the CDROM. But who knows for sure.

And the graphics in Kubuntu are stunning – I always did like the look of KDE, and Kubuntu followed suit with the same fabulous artwork. Thanks, fellas!

Now if only it would boot faster (though the hardware must have something to do with that). An error pops up on boot – “The process for the file protocol died unexpectedly.” This has happened each time, though I’d forgotten about it until now. It is not a promising sign – either for the installation or for the reliability of Kubuntu itself.

I rechecked the requirements for Kubuntu (and Ubuntu) to be on the safe side – it turns out both require 256M of RAM (!). It would appear that Linux has been steadily becoming bloated, just like other operating systems out there. My system “only” has 128M of RAM – that’s all, just “only” 128M.

If Windows 2000 can run, why not Ubuntu? Sigh. It was then that I decided that I would not be installing Kubuntu 7 (or Ubuntu 7) on this laptop.

Next attempt: Solaris 8.

BARcamp Chicago!

Got back from BARcamp Chicago Sunday night. It was a good time, and had a lot of good workshops. Met some good people, and used the nice high-speed bandwidth (but had to bypass the slow DNS!).

If you want an excellent DNS service, fast and unrestricted, use OpenDNS. This service also offers phishing protection, abbreviations, and spell-correction.

At BARcamp, some folks went to sleep (several brought sleeping bags) – and some did not (like yours truly…).

There were talks on Testing, the Bayes Theorem, Groovy, LISP, the rPath Linux distribution and Conary, and more. There was also the “InstallFest” – Linux installs made easy, with help on hand. Even so, my machine was maxed out with CentOS 3 (a Red Hat source-compiled distro), even though I did upgrade it to CentOS 3.8. My machine was probably memorable, as it had to be the oldest machine present (a Pentium-150 IBM Thinkpad) – and had no graphical interface, at least on the machine itself. The graphical interface on the Thinkpad 760XL is rather odd: the full screen is used by “stretching” the actual display to full size; otherwise, the image only takes up about 75% of the LCD display space.

It was interesting to see (at BARcamp) that the Mountain Dew disappeared and was hard to get at the end, while there was plenty (plenty!) of Red Bull left. We know which is favored….

Next up is the Chicago Linux Group (which also hosts the Chicago Lisp Group), as well as the Madison LOPSA chapter meeting.