ApacheBench: Proper Usage

When benchmarking a web server, it is easy to fall into the trap of benchmarking against a small file (such as the default Debian "It works!" index.html page). I decided to write about this pitfall so that my friends and readers can get more realistic benchmark results.

I’ve found the following general guidelines are a good idea to follow when running a benchmark on a web server:

  1. You should benchmark with gzip enabled, since that will more realistically simulate what’s going on between most browsers and your web server.
  2. You should benchmark from another machine (preferably remote).
  3. You should benchmark against a real page on your site, not a very small test file (for example benching with robots.txt is a bad idea).

With Apache Bench ("ab"), you would do this by adding the following switch: -H "Accept-Encoding: gzip,deflate"

An example “ab” command would look like this:

ab -k -n 1000 -c 100 -H "Accept-Encoding: gzip,deflate" "http://test.com/page/"

This command simulates 100 concurrent users, each performing 10 requests, for a total of 1000 requests. The -k switch instructs Apache Bench to use HTTP KeepAlive.

If your sample web page weighs 250 KB uncompressed (just the HTML body), that is a lot of data to transfer between your web server and the machine from which you are running the benchmark. The problem is that the network interface (or other transfer medium) will probably choke well before you reach the maximum requests per second. You may then find yourself confused, spending time trying to tweak nginx or Varnish, when in fact you are simply hitting the limit of your network card, or even a speed cap enforced by your hosting company (some limit your network port at the switch level, others apply rate limiting at the firewall level).
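As a rough back-of-the-envelope check (the 250 KB page size and the 100 Mbit/s link are illustrative numbers, not measurements from any specific host):

```shell
# 100 Mbit/s is about 12,500,000 bytes/s; a 250 KB page is 250,000 bytes.
# The link alone caps throughput at roughly:
echo $(( 12500000 / 250000 ))   # requests per second
```

If ab reports numbers in that neighborhood, you are likely benchmarking your network, not your web server.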

In the real world, however, the page would probably be compressed anyway (especially if your server config enables gzip, as most do by default). Such a page might compress to 10% of its original size (in this case, 25 KB), which will give you a much nicer and more realistic "Requests Per Second" number.
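A quick local sketch of the effect (the 250 KB mock page and its exact compression ratio are illustrative; real HTML typically compresses to 10-30% of its size):

```shell
# build a ~250 KB mock HTML page out of repeated markup
yes '<div class="row"><p>Hello, benchmark world!</p></div>' | head -c 250000 > page.html
gzip -kf page.html             # -k keeps the original, writes page.html.gz
wc -c page.html page.html.gz   # compare uncompressed vs compressed size
```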

A proper, realistic benchmark should test a page that goes all the way through to the database and performs one or more queries. This allows you to make sure that you have proper caching enabled at the various levels of your web applications.

An important note about KeepAlive

With Apache: If you are experiencing very high traffic (many concurrent users), it may be a good idea to keep the KeepAlive timeout rather small, say between 3 and 5 seconds in most reasonable cases. This lets the web server free worker threads for new connections, rather than keeping them tied to a user who may have already finished loading the page. If not enough threads are free, new users will have to wait until some are released; and if new users arrive faster than threads are freed up, they will all be blocked and never reach your page.

On the other hand, you may not want to turn KeepAlive off entirely, especially if your pages pull in many images, CSS files, and JavaScript files, because then each such resource would force the web server to spawn another worker thread, which can cause a load spike under heavy traffic. So it's a fine balance: you want KeepAlive enabled, but you don't want the timeout to be too long, especially in the age of broadband, where a properly compressed page can finish transferring to the user in less than 3 seconds.
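In Apache's terms, that advice looks roughly like this (a sketch of the relevant directives, not a tuned recommendation; the timeout follows the 3-5 second range above):

```apache
# Keep persistent connections, but drop idle ones quickly to free workers
KeepAlive On
KeepAliveTimeout 5
# Cap on requests served over one persistent connection (100 is the default)
MaxKeepAliveRequests 100
```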

With Nginx: nginx handles connections with an event loop rather than a thread per connection, so I personally feel that keepalive can be kept at 60 seconds with nginx without a meaningful cost in memory or worker processes.
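The equivalent nginx setting would be (a sketch; the 60 seconds mirrors the suggestion above):

```nginx
http {
    # idle keepalive connections are cheap in nginx's event-driven model
    keepalive_timeout 60s;
}
```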


VMWare Perl SDK on Ubuntu Lucid (10.04 LTS)

I have recently taken an interest in a system called OpenQRM for managing virtual machines and appliances, as well as cloud machines (public and private). One of the plugins bundled with OpenQRM lets you manage VMware ESX servers: it scans the network and auto-discovers VMware servers. However, unless you install the VMware Perl SDK, it will fail to connect to the remote VMware host.

I had some trouble installing the Perl SDK, and decided to share what I’ve done so others may have an easier time installing it on Ubuntu Lucid.

As a preparation step, install the following packages:

sudo apt-get install libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libdata-dump-perl libsoap-lite-perl perl-doc libssl-dev libuuid-perl liburi-perl libxml-libxml-perl

Once that's done, you need to download the actual SDK from VMware.com (you need an account to access the download section). Once you are logged in, the URL to access the SDK is:


When I ran the installer for the first time, it seemed as if it had finished installing. There was a warning about HTTP and FTP proxies not being defined, but I ignored it at first, thinking the install had succeeded and was just complaining a bit. It turns out that it actually fails to install if those proxy variables are not defined.

To circumvent this, I just set them with empty values, and that did the trick:

export http_proxy=
export ftp_proxy=

Then unpack the archive and run the installer script (vmware-install.pl, inside the extracted directory):

tar xvzf VMware-vSphere-Perl-SDK-5.0.0-422456.i386.tar.gz
cd vmware-vsphere-cli-distrib
sudo ./vmware-install.pl

This time, the installer will install any missing Perl modules via CPAN, and after a few minutes (depending on how fast your system is), it will complete the installation.

Now, when you add a VMware ESX system within OpenQRM, it will establish a connection to the VMware host without a problem and add it to the appliances database.


From Russia with Love?!

When I was very young, Russia was this "grey" and "evil" entity. Having lived in countries mainly under Western influence, this is no surprise. The impression was that the government was not very good for the people, as in, not very democratic.

However this is my third or fourth time to Russia, and what I discovered has changed how I think about countries and governments in general.

The first thing that shocked me was how popular virtual money is. When you are in the wallet business, you learn that in Russia the most popular form of payment is "Webmoney", but the reality is that many Russian companies have wallets! What really matters is that you can walk a short distance from your home and convert your real cash into virtual cash, with which you can then pay for services online. To support this, a vast network of money-collection terminals exists, with fierce competition in some areas. The machines only take cash and print a receipt.

In some apartment buildings the machine is in the lobby so you can go downstairs in your PJ’s and convert money to virtual value without braving the elements (visualize the Moscow winter to realize how practical this is!).

What makes this business thrive in Russia and Ukraine? What is the government doing or NOT doing, which allows wallets to be so popular? Is it the lack of trust in Russians banks? Is it some Russian cultural trait?

I welcome your feedback on this one.

Stay away from UltraHosting

Today I have been burned with UltraHosting.

They want me to pay $75 just to boot a rescue CD on my machine, which failed to boot a new kernel around 24 hours ago (it failed because they initially allocated only 46 MB to my /boot partition, which is not enough for modern kernels). So basically, my data is being held hostage for a $75 ransom until next month, when I am entitled to another 15 minutes of free support. Only then will I be able to ask them to spend the 2 minutes it takes to boot a rescue CD, so that I can SSH into the box and get my data.

Besides, their support staff takes hours to respond, their billing staff even longer, and they are not helpful at all, leaving you on edge, biting your fingernails. I mean, around 18 hours just to receive the answer that they won't help me unless I pay the $75...?

Why anyone works with them is beyond me, especially when there are so many superior competitors in the market with remote boot and KVM-over-IP features (with those features I could have fixed my own server in less than 10 minutes - been there, done that!).

Goodbye Ultrahosting… and “good luck” surviving as a hosting company…