DOS Partitions (fdisk) and the 2TB Limit

If you are trying to create disk volumes over two terabytes (2TB), you’ll find that fdisk won’t let you do it. The problem lies not with fdisk, but with the old PCDOS disk label (the MBR partition table) that disks have used for the last three decades or so. Back in 1981, when the IBM PC was introduced, a disk of over two terabytes would have seemed inconceivable.

Thus, we struggle with the limitations of PCDOS disk labels today.

Some (newer?) versions of fdisk report the problem with large drives, giving this warning:

WARNING: The size of this disk is 8.0 TB (8001389854720 bytes).
DOS partition table format can not be used on drives for volumes
larger than (2199023255040 bytes) for 512-byte sectors. Use parted(1) and GUID
partition table format (GPT).
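That 2199023255040 figure falls directly out of the on-disk format: the DOS partition table stores a partition’s starting sector and sector count as 32-bit values, so with 512-byte sectors the addressable limit works out to just under 2TiB. A quick sketch of the arithmetic:

```shell
# The DOS (MBR) partition table stores LBA offsets and lengths in
# 32-bit fields, so at most (2^32 - 1) sectors are addressable.
max_sectors=$(( (1 << 32) - 1 ))
max_bytes=$(( max_sectors * 512 ))   # 512-byte sectors
echo "$max_bytes"                    # 2199023255040 -- the number in the warning
```

Note that drives with 4K sectors push this limit up to roughly 16TB, which is why the fdisk warning qualifies its figure with “for 512-byte sectors.”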

To get around the size limitation, there is only one solution: dump the PCDOS disk label for another label. The usual recommendation is the GPT (the GUID Partition Table) created by Intel. The GPT has a much larger limit, making partitions over 2TB feasible.

However, the Linux utility fdisk does not work with drives that use GPT; thus, you have to use a different partitioning tool. The usual recommendation to Linux users is GNU parted. GNU parted handles multiple partition formats, including GPT. Documentation is available in multiple formats (PDF is one).

The steps to creating a large partition with parted are simple:

  1. Create a disklabel (partitioning) on disk.
  2. Create a partition of the appropriate size.
  3. Create a filesystem (if needed).
  4. Mount.
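The four steps above can also be sketched non-interactively with parted’s -s (script) flag. The commands below practice on a sparse image file rather than a real disk – parted will happily operate on a regular file – and the device name and mount point are placeholders to substitute with your own:

```shell
# Practice target: a sparse file standing in for the disk.
truncate -s 100M disk.img

# Step 1: create the GPT disklabel (destroys any existing table).
parted -s disk.img mklabel gpt

# Step 2: one partition from 1MiB to the end of the disk
# (starting at 1MiB keeps the alignment sane -- see below).
parted -s disk.img mkpart primary xfs 1MiB 100%
parted -s disk.img print

# Steps 3 and 4, run against the real device (root required):
#   mkfs.xfs /dev/sdc1
#   mount /dev/sdc1 /mnt/bigdisk
```

Practicing against an image file first is a cheap way to verify the commands before pointing them at a disk where mklabel will destroy the existing partition table.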

First, create the GPT disklabel – in other words, create the partitioning tables to support GPT:

# parted /dev/sdc
GNU Parted 2.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Ext Hard  Disk (scsi)
Disk /dev/sdc: 8001GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes

Then after this, create a partition:

(parted) mkpart primary xfs 0 8001.4GB
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? c

This is what happens when the partition is not aligned properly on the appropriate boundary. Traditionally, sectors were 512 bytes; now drives are moving to 4K sectors. GPT also apparently uses a significant portion of the beginning of the disk.

To get around the alignment problem, you can use a start position of 1MB and an end position 1MB from the end:

(parted) mkpart primary xfs 1 -1
(parted) p
Model: Ext Hard  Disk (scsi)
Disk /dev/sdc: 8001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  8001GB  8001GB               primary

Parted supports a wide variety of units (and defaults to megabytes), all of which can be specified during numeric input – such as for start and end points. Using an end point of “100%” is probably just as good as using “-1” (1MB from the end).

Jamie McClelland has a nice article about 2TB drives and alignment. Don’t blindly trust the tools to get the partitioning right; specify the appropriate numbers and units in order to force correct alignment.

GNU parted also supports an option that at least suggests it will use the best alignment when it can:

parted --align optimal

Again, don’t blindly trust it: check the numbers.
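Checking the numbers is simple arithmetic: a partition start is properly aligned when its exact byte offset divides evenly by the boundary, and 1MiB alignment automatically implies 4K alignment. The “1049kB” parted printed above is parted’s rounded display of 1048576 bytes, so:

```shell
# A start offset is aligned if it is an exact multiple of the boundary.
# 1MiB alignment implies 4KiB alignment (1048576 / 4096 = 256).
start_bytes=1048576   # the "1049kB" parted displays, in exact bytes
mib=1048576
if [ $(( start_bytes % mib )) -eq 0 ]; then
    echo "aligned to 1MiB"
fi
```

Recent versions of parted can also run this check for you with the align-check command – e.g. “(parted) align-check optimal 1” – if your build includes it.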

Intel’s New Upgradeable CPU: Not a New Idea – But is it a Good One?

There has been some discussion about the new processor from Intel which comes with some features disabled and unlockable only by purchasing an unlock code from Intel. Peter Bright has an excellent write-up on the idea of an upgradeable processor.

If you administer mainframes or enterprise servers, you’ve likely seen this idea before. HP Superdomes, for example, can be purchased with deactivated processors, which can then be turned on temporarily or purchased outright at a later date. IBM’s System z mainframes offer a similar capability, often called something like Capacity on Demand.

The main question is whether consumers will find this desirable; it is possible that the idea will not sell. In my experience, system “upgrades” are usually done by replacing the system completely.

It is also probably a better idea to increase system memory than to upgrade to a faster, more capable processor. More memory means more can be done without going to disk – important, since disk is the slowest element.

Largest Data Center Consolidation Ever…

The United States federal government recently announced that there was to be a reduction in the number of data center facilities. The United States CIO Vivek Kundra sent a memo to agencies announcing the move and required preparations for it.

In 1999, there were 432 facilities; eleven years later, the number has grown to more than 1,100.

Reasons given for the massive reduction include costs and energy efficiency. With the changes in the federal government that happen every four years (otherwise known as “electing a new president”), it should not be a surprise that the consolidation is to happen by 2012.

With the economy as it is currently, any massive change like this will affect many sectors. Data center providers that have inefficient facilities will find themselves losing a major customer if federal agencies leave for other providers.

This shift to more efficient data centers, on the other hand, can spur the building of new data centers: this will affect the building trades in a positive way, and quite likely shift the related data center providers towards a more positive outlook.

Intel is also going through a data center consolidation; they have an entire web site dedicated to the process, with valuable information.

A Book Review: “Green IT”

The book Green IT: Reduce Your Information System’s Environmental Impact While Adding to the Bottom Line by Velte, Velte, and Elsenpeter is extremely interesting. Unlike some other books that might go in this direction, this is not a book of theory, nor of political change, nor of persuasion. This is a book for IT staff about how to create a “green” data center and more.

Because of the nature of IT, going “green” can mostly be summed up in one word: electricity. A vast amount of what makes an IT department “green” consists of using less electricity wherever possible. This includes such areas as the corporate data center, the corporate desktops, and much more.

However, the book also gives significant attention to the other big environmental impact of computing: paper. There are a lot of ways to reduce paper use, and this book seems to cover all of them.

The book is in five parts: part I explains why to implement conservation in IT; part II talks about consumption; part III discusses what we as IT users can do individually to help the environment; part IV covers several corporate case studies; and part V expounds on the process of becoming “green” and how to stay that way.

It would have been nice to see more information about how the authors exemplified their suggestions during the creation of the book. The only hint of any environmentally sound practice is the recycled-paper logo on the back cover (100% post-consumer fiber). That leaves open several questions: did they use thin clients? Did they work from home? Did they use soy ink? And lastly, where is the e-book?

There is a web site set up for the book, but its current breadth is disappointingly anemic. Some of the best Green IT web sites are Dell Earth, Intel, and IBM’s Green IT and Energy, the Environment, and IBM sites.

It was interesting to note that HP’s Eco Solutions web site is “heavy” compared to the others – that is, it requires much more processing power to display and much more time to download, which translates into more power consumed just to view it. In addition, IBM and HP are #1 and #2 on Computerworld’s list of Top Green-IT Vendors, while Dell is #6. HP also topped Newsweek’s 2009 list of Greenest Big Companies in America (with IBM, Intel, and Dell also in the top 5).