The decTop $100 Computer!

Lifehacker has an article on a product called the decTop. It is billed as an Internet-browsing appliance, but is apparently a complete (and upgradable) computer as well. Sounds like the perfect hacker computer.

It does seem to be slowish by modern standards, and if my experience with 128M is any indication, it won’t run the most current distributions. There are some excellent discussions on how to install Ubuntu 6.06 onto it: one from Jonathon Scott and one from Ray over at Librenix. Juan Romero Pardines from the NetBSD Project has put NetBSD onto the decTop. Someone else put AstLinux onto a decTop – and added great pictures of the internals as well.

Over at Docunext there is a great series on the decTop, including pictures of the guts and of the locked drive (apparently no longer locked in current versions). There is also a set of tips on getting Debian to work on the decTop, as well as the author’s experiences in running the decTop on solar power.

The system advertises an Ethernet connection, but it is, in fact, a USB-to-Ethernet dongle. Combined with USB 1.1, this makes the Ethernet connection very, very slow. Everything, including the keyboard and mouse as well as the Ethernet connection, hooks into the USB ports. These two facts appear to be some of the worst drawbacks of the device.
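To put a rough number on that – this is my own back-of-the-envelope arithmetic, not a measurement of the decTop – USB 1.1 full speed tops out at 12 Mbit/s for the whole bus, and that bus is shared with the keyboard and mouse:

```python
# Back-of-the-envelope throughput for a USB 1.1 Ethernet dongle.
# The 12 Mbit/s figure is the USB 1.1 full-speed bus rate; the 65%
# efficiency is my assumption for protocol overhead and bus sharing,
# not anything measured on a decTop.

USB11_BUS_MBITS = 12.0   # USB 1.1 full-speed rate, shared by all devices
EFFICIENCY = 0.65        # assumed overhead from framing and bus sharing

effective_mbits = USB11_BUS_MBITS * EFFICIENCY
effective_mbytes = effective_mbits / 8.0

print("~%.1f Mbit/s, roughly %.1f MB/s" % (effective_mbits, effective_mbytes))
```

Even under generous assumptions, that is well under a tenth of 100 Mbit Ethernet, which fits the "very, very slow" impression.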

There also appears to be no wireless support at all – the Internet browsing devices I’ve seen all use wireless connectivity as their main connection method – so this appears to be more of a desktop device, rather than a portable device. It is fanless, which means near absolute quiet. Who knows, maybe they’d make a good cluster (heh).

I must admit, when I first heard the name, I thought it might be a miniature of one of these instead. Silly me.

Installing Fedora Core or Knoppix on a Compaq Armada E500

The saga continues….

Fedora Core 5

Installing Fedora Core 5 turned out to be very slow. At first I thought perhaps this was because of the CDROM speed (24X) or because of the network speed (10Mb/s).

This, at least, had been the hypothesis… then the system crashed. Switching to the Alt-F4 console to look at the error messages revealed this:

<3>Out of Memory: Kill process 349 (loader) score 2647 and children.
<3>Out of Memory: Killed process 524 (anaconda).
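For anyone hitting the same wall, one way to keep an eye on memory during the install is from the shell Anaconda provides on the Alt-F2 console. Here is a minimal sketch of parsing /proc/meminfo, written in Python since that is what the installer ships; the sample figures below are made up for illustration, not taken from the Armada:

```python
# Parse the "Key:   value kB" lines of Linux's /proc/meminfo.
def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            info[key.strip()] = int(fields[0])  # values are in kB
    return info

# Illustrative figures only (a 128M machine nearly out of memory);
# on a live system you would read open("/proc/meminfo").read() instead.
sample = "MemTotal:   126128 kB\nMemFree:      2204 kB\nSwapFree:        0 kB\n"
m = parse_meminfo(sample)
print("free: %d kB of %d kB" % (m["MemFree"], m["MemTotal"]))
```

When MemFree and SwapFree both approach zero, the kernel's OOM killer starts picking victims by score, exactly as in the messages above.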

Anaconda is the system installer, written in Python. So, of course, when it is killed, installation stops – though it most likely stopped earlier, when the loader was killed.

This reminds me of running yum on my CentOS 3.8 laptop with 48M of memory – it too became unusable due to memory constraints. APT-RPM never had these problems. Is Python being a memory hog?

Knoppix

Knoppix 3.3 refused to see my PS/2 mouse – at least, the trackpad on the laptop is supposed to be a PS/2 style mouse. Nothing worked.

Knoppix 3.9 worked fine, but it was very slow: just starting it up took something like 30-60 seconds, and loading Konqueror took 10-15 seconds or more. The hard drive installer was labeled a very early version. Knoppix 3.9 also gave up the professional backdrop and graphics, and dropped WindowMaker besides. Why give up WindowMaker and retain twm, for instance?

A Rant….

It used to be that UNIX systems worked on machines for years – even through several upgrades. With this machine I can only wonder. 128M of memory is substantial – why do current systems require a ghastly amount of it? Are UNIX and Linux taking after Windows? Are we going to need hardware upgrades every time a new version is released?

I often wonder what the developers and testers of new systems (whether Windows, Solaris, or whatever) are using. For example, on my desk I’ve a Pentium 4 with 256M of memory – and this is pretty much the fastest Intel machine I’ve got. Do you think a Solaris developer is using a machine like this? Or a FreeBSD developer? Or a Red Hat developer?

I tend to think of the developers as spending their hard-earned money on the biggest, fastest, fanciest machines they can get – then programming for them – and then forcing the rest of us to come along. It’s not avarice, just lack of forethought.

One side rant: whatever the benefits or disadvantages of Python are as a language, it often seems to take a lot more memory than Perl or Ruby or Korn shell – and yet it is what everybody is using. I can’t run yum under CentOS 3.x because I’ve only 48M of memory – and I’m talking about a text-mode environment. I can’t run Anaconda (Fedora Core 5) because I’ve only 128M of memory. One hundred twenty-eight megabytes.

Do we need gigabytes of memory just to install now?
