Following on from my recent attempts at benchmarking Ruby VMs, I decided to test all the server options available for the MRI (1.9.3p0) platform against each other. This is still just running the simple "Hello World" Sinatra app, which uses an ERB template, so there is slightly more processing involved than simply outputting a string.
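I haven't reproduced the app here, but the per-request template work looks roughly like this (a minimal sketch using Ruby's stdlib ERB; the actual Sinatra app and template from the benchmark aren't shown in this post):

```ruby
require 'erb'

# Roughly what a Sinatra erb view does per request: compile the template
# (or fetch a cached copy), then evaluate it against the current binding.
template = ERB.new("<h1>Hello <%= name %>!</h1>")

name = "World"
html = template.result(binding)
puts html  # => <h1>Hello World!</h1>
```

Sinatra caches compiled templates by default, so the per-request cost is mostly evaluation rather than parsing, but it's still measurably more work than writing a static string to the socket.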
- Thin 1.3.1, running in production
- Puma 0.9.3, running in production
- Mongrel 1.2.0.pre2, running in production
- WEBrick 1.3.1, running in production
- Passenger 3.0.11 standalone, running in production
- Unicorn 4.1.1, 1 worker process, running in production
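For reference, each server was started along these lines (a sketch only; exact flags vary between versions, and the port and `config.ru` filename are assumptions, not taken from my actual setup):

```shell
# Start each server against the same Sinatra app on port 3000.
thin start -e production -p 3000
puma -e production -p 3000 config.ru
rackup -s mongrel -E production -p 3000   # Mongrel via rackup
rackup -s webrick -E production -p 3000   # WEBrick via rackup
passenger start -e production -p 3000
unicorn -E production -p 3000             # worker_processes defaults to 1
```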
The results below are for `ab -n 10000 -c 1` (requests per second; higher is better):
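The exact invocation, pointed at whichever server was listening (the URL here is assumed):

```shell
# 10,000 requests, one at a time (no concurrency)
ab -n 10000 -c 1 http://127.0.0.1:3000/
```

ApacheBench's summary reports both the "Requests per second" figure and a percentile table of response times, which is where the 98th-percentile numbers in the next chart come from.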
Additionally, we need to care about how long the slowest requests take, so I've taken the 98th-percentile response times from the ab testing and plotted them here (lower is better):
I'm quite surprised by this, if I'm honest. Unicorn certainly has a great reputation, and these results illustrate how strongly it performs. Thin also has a great reputation, and is the other standout performer in this test. However, I'm really surprised by how far behind Mongrel is; for a server that is so trusted and widely used, delivering under half the req/sec that Unicorn offers is astonishing. Thin and Unicorn are also far better on response times, with 98% of requests completing in around 2ms, whereas WEBrick, for example, is far worse at well over 10ms. In a real-world application, this is an essential measure of how performant the server will be, and how satisfied its users will be.
Running the tests with `-c 1` doesn't give some of these servers a chance to shine, as many of them can drastically increase throughput by forking worker processes or using evented I/O. In part 2, we'll look at increasing the request concurrency, and how that affects the results.