Installing Fedora Core or Knoppix on a Compaq Armada E500

The saga continues….

Fedora Core 5

Installing Fedora Core 5 turned out to be very slow. At first I thought perhaps this was because of the CD-ROM speed (24X) or because of the network speed (10 Mb/s).
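(If I want to test the slow-media theory, a rough check – assuming dd and the CD device node are available, say from the installer’s Alt-F2 shell console – is to time a raw read from the CD; a 24X drive tops out around 3.6 MB/s:)

time dd if=/dev/cdrom of=/dev/null bs=1M count=64

Sixty-four megabytes divided by the elapsed time gives the actual throughput.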

This, at least, had been the hypothesis… then the system crashed. Pressing Alt-F4 to switch to the kernel message console showed this:

<3>Out of Memory: Kill process 349 (loader) score 2647 and children.
<3>Out of Memory: Killed process 524 (anaconda).

Anaconda is the system installer, written in Python. So, of course, when it is killed, installation stops – though installation most likely halted earlier, when the loader was killed.
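If I try this again, one workaround worth attempting – a sketch, not something I have verified on this machine – is to force a text-mode install at the boot prompt, which should need less memory than the graphical installer, and to keep an eye on memory from the shell console (Alt-F2) while it runs:

boot: linux text

free -m     (watch free memory and swap from the Alt-F2 shell)
ps ax       (see what the installer is running)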

This reminds me of running yum on my CentOS 3.8 laptop with 48M of memory – it too became unusable due to memory constraints. APT-RPM never had these problems. Is Python being a memory hog?
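For the record, the apt-rpm workflow is the usual apt one (assuming a configured repository); this is what ran comfortably where yum would not:

apt-get update                (refresh the package lists)
apt-get install <package>     (install an RPM plus its dependencies)
apt-get upgrade               (update everything installed)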

Knoppix

Knoppix 3.3 refused to see my PS/2 mouse – at least, the trackpad on the laptop is supposed to be a PS/2 style mouse. Nothing worked.

Knoppix 3.9 worked fine, but it was very slow: Konqueror took 10-15 seconds or more to load, and just starting the system up took something like 30-60 seconds. The hard drive installer was labeled a very early version. Knoppix 3.9 also gave up the professional backdrop and graphics, and dropped WindowMaker besides. Why give up WindowMaker yet retain twm, for instance?

A Rant….

It used to be that UNIX systems worked on machines for years – even through several upgrades. With this machine I can only wonder. 128M of memory is substantial – why do current systems require such a ghastly amount? Are UNIX and Linux taking after Windows? Are we going to need a hardware upgrade every time a new version is released?

I often wonder what the developers and testers of new systems (whether Windows, Solaris, or whatever) are using. For example, on my desk I’ve a Pentium 4 with 256M of memory – and this is pretty much the fastest Intel machine I’ve got. Do you think a Solaris developer is using a machine like this? Or a FreeBSD developer? Or a Red Hat developer?

I tend to think of the developers as spending their hard-earned money on the biggest, fastest, fanciest machines they can get – then programming for them – and then forcing the rest of us to keep up. It’s not avarice, just lack of forethought.

One side rant: whatever the benefits or disadvantages of Python as a language, it often seems to take a lot more memory than Perl or Ruby or the Korn shell – and yet it is what everybody is using. I can’t run yum under CentOS 3.x because I’ve only 48M of memory – and I’m talking about a text-mode environment. I can’t run Anaconda (Fedora Core 5) because I’ve only 128M of memory. One hundred twenty-eight megabytes.
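A crude way to see the difference – just a sketch, and the numbers depend heavily on the interpreter version and build – is to have each interpreter report its own resident set size:

python -c 'import os; os.system("ps -o rss= -p %d" % os.getpid())'
perl -e 'system("ps -o rss= -p $$");'
ksh -c 'ps -o rss= -p $$'

Each prints its RSS in kilobytes; Python’s baseline is typically the largest of the three.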

Do we need gigabytes of memory just to install now?
