Bringing the Network up to (Gigabit) Speed

When looking at increasing network speed in the enterprise, there are a lot of things to consider – and missing any one of them can result in a slowdown in part or all of the network.

It is easy enough to migrate slowly by replacing pieces with others that support all of the relevant standards (such as 10/100/1000 switches). However, such a migration can bog down, leaving old equipment in place that slows everyone down.

First, determine whether the infrastructure can handle the new equipment. Is the “copper” of a sufficient grade to handle the increased demands? If not, the cables will have to be replaced, perhaps with Cat-6 or better, or even with fiber if your needs warrant it. Check for undue electromagnetic interference as well; fiber is immune to interference that would degrade a copper run.

After the cabling is ready, check the rest of the infrastructure, all of it; it is easy to miss a single device. Also check the capabilities of each piece. For example, can the switch handle full gigabit speeds on all ports at once? You might be surprised at the answer.

Once the equipment is in place, make sure that all clients are actually connecting at gigabit speeds. Most switches have indicators that show whether a port has negotiated a gigabit link.
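
On Linux clients, for instance, ethtool will show the negotiated speed and duplex (a small sketch: it assumes a Linux machine and an interface named eth0; other operating systems have their own equivalents):

# ethtool eth0 | grep -E 'Speed|Duplex'
        Speed: 1000Mb/s
        Duplex: Full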

Make especially sure that servers are running at full speed, as a slowdown there will affect everyone who uses that server. This matters even more for firewalls, since every connection passing through them feels the impact.
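
Negotiating at a gigabit and actually moving data at a gigabit are two different things. If a tool such as iperf is installed on both ends (an assumption here; any network throughput tester will do), a quick end-to-end check looks roughly like this, with server01 standing in for the server's name. On the server:

# iperf -s

Then, from a client:

# iperf -c server01 -t 30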

Lastly, don’t forget the telco equipment. If the link to the router that terminates your T1 or other Internet circuit is still running at an old speed, it can slow Internet access for the entire enterprise.

One more thing: an upgrade such as this is a perfect time to get more advanced equipment in house. Just be conscious of the corporate budget. It also helps to present improvements that the executives can see and experience personally rather than nebulous benefits that only the IT staff will notice.

Good luck in your speed improvement project!

Expanding OpenVMS Memory

When you expand OpenVMS memory, there are a number of other parameters you may wish to revisit. If you increase your memory dramatically, you will certainly have to change these SYSGEN parameters. You can also look each parameter up using HELP:

HELP SYS VCC_MAX_CACHE

(Here SYS is an abbreviation of the HELP topic SYS_PARAMETERS.)
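
It is also worth checking a parameter's current, default, minimum, and maximum values before changing anything. A quick sketch using SYSGEN, run from a suitably privileged account:

$ MCR SYSGEN
SYSGEN> SHOW GBLPAGES
SYSGEN> SHOW MAXPROCESSCNT
SYSGEN> EXIT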

Some parameters to consider changing are the following:

  • GBLPAGES. If you don’t increase this, you’ll get warning messages when you try to take advantage of all that memory. In short, this parameter sets the number of global pages the system can keep track of; if you try to use more, it becomes the limiting factor.
  • GBLPAGFIL. This limits the number of global pages that can be backed by the page file; increase it so the page file can take all of the pages it might be called upon to reserve.
  • VCC_MAX_CACHE. If you have not tuned your file cache (XFC), you may find half of your memory taken by the cache. This is almost certainly not what you want; modify this parameter to cap the amount of memory the cache is allowed to take. The cache still grows and shrinks dynamically in any case, but capping it keeps it from claiming memory you intended for other uses.
  • MAXPROCESSCNT. This sets the maximum number of process slots, in essence the maximum process count (which is what the parameter is called, after all). If you have a lot more memory, you’ll want to use it to run more processes, and that is no good if you hit the limit and can’t start any more.
  • BALSETCNT. If you set MAXPROCESSCNT, you should set BALSETCNT to the same amount minus two – and never higher.

These changes can be made in the SYS$SYSTEM:MODPARAMS.DAT file; the AUTOGEN procedure then reads that file and configures the system. The MODPARAMS.DAT file uses a simple format; for our purposes, you can use something like this:

ADD_GBLPAGES=1000
ADD_GBLPAGFIL=1000
VCC_MAX_CACHE=2048
ADD_MAXPROCESSCNT=1024
ADD_BALSETCNT=1024

In place of ADD_* you can also use MAX_* or MIN_*. You can see more examples in HELP AUTOGEN MODPARAMS.DAT. AUTOGEN is described in the HELP; be careful using it! You don’t want to muck up the system so badly that you have to reboot or reinstall.
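
A cautious way to run it is to stop short of changing anything, read the report AUTOGEN writes, and only then apply the new parameters. This is a sketch using the standard AUTOGEN phase names; verify it against the HELP and the system management documentation before running it on a production system:

$ @SYS$UPDATE:AUTOGEN SAVPARAMS TESTFILES FEEDBACK
$ TYPE SYS$SYSTEM:AGEN$PARAMS.REPORT
$ @SYS$UPDATE:AUTOGEN GENPARAMS SETPARAMS FEEDBACK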

The kconfig utility (HP-UX)

The kconfig utility allows you to save a complete set of kernel tunables, ready for use in configuring other systems or for returning to an older configuration. These kernel configurations can be saved, copied, deleted, and restored using kconfig.

For example, consider an HP-UX virtual machine host that was pressed into service early as a general-purpose host. How do you return to the original installation kernel configuration? Use the original configuration automatically created during installation, “last_install”.
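
To go back to it, load that saved configuration. As a sketch (the -l option is assumed here to be the load operation; confirm against the kconfig(1M) manual page on your release, and note that a reboot may be required for some changes):

# kconfig -l last_install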

For another example, consider a host configured for the applications you use. Save the configuration and it can be replicated elsewhere with a single command and perhaps a reboot.

For a current list, use the kconfig command by itself:

# kconfig
Configuration   Title
backup          Automatic Backup
ivm             Virtual Machine Configuration
last_install    Created by last OS install

The kernel configuration can be exported to a file:

# kconfig -e ivm ivm.txt

…and later imported (possibly on a new machine):

# kconfig -i ivm.txt

The current configuration can be saved to a particular name (such as ivm):

# kconfig -s ivm

All of the usual manipulations are possible, as mentioned before: copy, delete, rename, save, load, and so forth. The manual page is kconfig(1m) and should be available on your HP-UX 11i v2 or v3 system.
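
Putting those operations together, replicating a saved configuration from one host to another might look something like this (a sketch: hostb is a placeholder, scp and remote root access are assumed, and only the -e and -i options shown above are used; the imported configuration still has to be loaded on the target):

# kconfig -e ivm /tmp/ivm.txt
# scp /tmp/ivm.txt hostb:/tmp/ivm.txt
# ssh hostb kconfig -i /tmp/ivm.txt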

Automation: Live and Breathe It!

Automation should be second nature to a system administrator. I have a maxim that I try to live by: “If I can tell someone how to do it, I can tell a computer how to do it.” I put this into practice by automating everything I can.

Why is this so important? If you craft every machine by hand, then you wind up with a number of problems (or possible problems):

  • Each machine is independently configured, and each machine is different. No two machines will be alike – which means instead of one machine replicated one hundred times, you’ll have one hundred different machines.
  • Problems that exist on one machine may or may not exist on another, and may or may not get fixed when found. If machine alpha has a problem, how do you know that machine beta or machine charlie doesn’t have the same problem? How do you know the problem is fixed on all machines? You don’t.
  • How do you know all required software is present? You don’t. It might be present on machine alpha, but not machine delta.
  • How do you know all software is up to date and at the same revision? You don’t. If machine alpha and machine delta both have a particular piece of software installed, maybe it is the same version and maybe it isn’t.
  • How do you know if you’ve configured two machines in the same way? Maybe you missed a particular configuration requirement – which will only show up later as a problem or service outage.
  • If you have to recover any given machine, how do you know it will be recovered to the same configuration? Often, the configuration may or may not be backed up – so then it has to be recreated. Are the same packages installed? The same set of software? The same patches?

To avoid these problems and more, automation should be a part of every system wherever possible. Automate the configuration, setup, reconfiguration, backups, and so forth. Don’t miss anything, and if you do, add the automation as soon as you know about it.
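
As a small illustration (a sketch only: the ssh access, the host names, and the use of RPM for package queries are assumptions; substitute your own inventory and packaging tools), a script like this compares each machine's installed packages against a saved baseline so that drift becomes visible instead of invisible:

#!/bin/sh
# Compare each host's installed package list against a known-good baseline.
BASELINE=/var/tmp/package-baseline.txt    # created once from a reference machine
for host in alpha beta charlie delta; do
    ssh "$host" "rpm -qa | sort" > "/var/tmp/packages.$host"
    if ! diff -q "$BASELINE" "/var/tmp/packages.$host" > /dev/null; then
        echo "$host differs from the baseline:"
        diff -u "$BASELINE" "/var/tmp/packages.$host"
    fi
done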

Scripting languages like Perl, Tcl, Lua, and Ruby are all good for this.

Other tools that help tremendously in this area are automatic installation tools: Red Hat Kickstart (as well as Spacewalk), Solaris JumpStart, HP’s Ignite-UX, and openSUSE AutoYaST. These systems can, if configured properly, install a machine completely unattended.

When combined with a tool like cfengine or puppet, these automatic installations can be nearly complete – from turning the system on for the very first time to full operation without operator intervention. This automated install not only improves reliability, but can free up hours of your time.
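
As a taste of what such a tool's configuration looks like, here is a minimal Puppet manifest sketch (the httpd package and service names are placeholders, not tied to any particular site):

# Make sure the web server package is installed and its service is running.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}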

Are You Ready for the Onslaught? (or Scaling Your Environment)

Is your environment ready for the onslaught that you may or may not be aware is coming your way?

One commonly known example of this is what is called “the Slashdot effect.” This is what happens when the popular site Slashdot (or others like it) links to a small site. The combined effect of thousands of people attempting to view the site all at once can bring it to its knees – or fill up the traffic quota in a hurry.

Other situations may be the introduction of a popular product (the launches of the iPhone and of Halo 3 come to mind), or a popular conference (such as EAA’s AirVenture, which had some overloading problems).

Examine what happens each time a request is made. Does it result in multiple database queries? If there are x requests, and each results in y queries, there will be x*y database queries; the database sees y times the load the front end does, and every additional request adds y more queries.

Or let’s say each request results in a login that may be held open for 5 minutes. If you get x requests per second, then in 5 minutes you’ll have 300x connections if none drop; at a modest 20 requests per second, that is 6,000 simultaneous connections. Do you have the buffers and resources for this?

Check your kernel tunables, and run real-world tests to see. Examine every aspect of the system to see what resources it will take. Check TCP buffers for network connections, the number of TTYs allowed, and anything else you can think of. Go end to end, from client to server to back-end software and back.
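
On a Linux server, for instance (an assumption; other systems expose similar tunables under different names), a few of the relevant limits can be inspected like this:

# sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
# sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# ulimit -n
# cat /proc/sys/fs/file-max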

Some of the options for relieving the pressure are caching proxies, clustering, rewriting software, enlarging buffers, and others.

James Hamilton has already collected a large number of articles about how the big players have handled such scaling problems (although focused on database response), including names such as Flickr, Twitter, Amazon, Technorati, Second Life, and others. Do go check his article out!