"Curiosity is the very basis of education and if you tell me that curiosity killed the cat, I say only the cat died nobly." - Arnold Edinborough

I have previously talked about some of the most common nginx questions; not surprisingly, one of them is how to optimize nginx for high performance. This isn’t terribly surprising, since most new nginx users are migrating over from Apache and are used to tuning settings and performing voodoo magic to make sure their servers perform as well as possible.

Well, I’ve got some bad news for you: you can’t really optimize nginx very much. There are no magic settings that will reduce your load by half or make PHP run twice as fast. Thankfully, the good news is that nginx doesn’t require any tuning because it is already optimized out of the box. The biggest optimization happened when you decided to use nginx and ran that apt-get install, yum install or make install. (Please note that repositories are often out of date; the wiki install page usually has a more up-to-date repository.)

That said, there are a lot of options in nginx that affect its behaviour, and not all of their default values are completely optimized for high-traffic situations. We also need to consider the platform that nginx runs on and optimize our OS, as there are limitations in place there as well.
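To give an idea of what that means in practice, here is a minimal sketch of the directives people usually look at first; the values are placeholders chosen to illustrate the syntax, not recommendations.

    # Commonly tuned directives in nginx.conf -- values are illustrative only
    worker_processes     4;          # typically one worker per CPU core
    worker_rlimit_nofile 20000;      # raise the file descriptor limit available to each worker

    events {
        worker_connections 10000;    # how many connections each worker may hold open
    }

The OS side matters too; on Linux, for example, the open file descriptor limit (ulimit -n) ultimately caps how many connections a worker can actually hold open.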

So in short, while we cannot optimize the load time of individual connections, we can ensure that nginx has the ideal environment for handling high-traffic situations. Of course, by high traffic I mean several hundred requests per second; the vast majority of people don’t need to mess around with this, but if you are curious or want to be prepared, then read on.

First of all, we need to consider the platform to use, as nginx is available on Linux, MacOS, FreeBSD, Solaris and Windows, as well as some more esoteric systems. They all implement high-performance event-based polling methods; sadly, nginx only supports four of them. I tend to favour FreeBSD out of the four, but you should not see huge differences, and it’s more important that you are comfortable with your OS of choice than that you pick the most optimized OS possible.

In case you hadn’t guessed it already, the odd one out is Windows. Nginx on Windows is really not an option for anything you’re going to put into production. Windows has a different way of handling event polling and the nginx author has chosen not to support it; as a result nginx falls back to using select(), which isn’t overly efficient, and your performance will suffer quite quickly.
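For reference, nginx normally autodetects the best polling method for the platform it runs on, but it can be pinned explicitly with the use directive in the events block; a minimal sketch:

    events {
        # nginx picks the best available method on its own; pinning it is rarely needed
        use epoll;    # Linux; kqueue on FreeBSD and MacOS, eventport on Solaris
    }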

Read More

This is part two in my caching series. Part one covered the concept behind full-page caching as well as potential problems to keep in mind. This part will focus on implementing the concept in actual PHP code. By the end of it you’ll have a working implementation that can cache full pages and invalidate them intelligently when an update happens.

Requirements

I’ll provide a fully functional framework with the simple application I used to get my benchmark figures. You’ll need the following software to be able to run it.

  • Nginx. I’m not sure which exact version is required, but I generally use and recommend the latest development version.
  • PHP 5.3.0. I recommend at least 5.3.3 so you’ll have PHP-FPM for your FastCGI process management.
  • MySQL
  • Memcached

The Framework

I’ll be referencing the code on github instead of pasting it in this post to keep the size down, so you will probably want to download it.

Read More

Recently a post of mine was linked on yCombinator and, for some reason, a lot of the comments talked about the efficiency of WordPress. While it’s technically not related to the subject of the linked post, I just want to point out that the performance of WordPress is pretty horrible regardless of whether you use Apache or nginx.

My friend Karl Blessing and I recently talked about WordPress caching plugins. He uses WP SuperCache and I use W3 Total Cache, and he subsequently decided to do some WordPress caching benchmarks on the different methods. He’s done an awesome job and generated some pretty graphs for you to look at.

What I took away from the whole thing is that W3 Total Cache and WP SuperCache can offer similar performance if you’re willing to do static file caching; however, W3 Total Cache can offer a cleaner solution by caching in Memcached, if you’re willing to sacrifice a bit of performance. The benefit of this, and why I use this method, is that you don’t need complicated rules in your nginx (or Apache) configuration files.

Edit: Part 2 is now available.

This is the first entry in a short series I’ll do on caching in PHP. During this series I’ll explore some of the options available for caching in PHP and provide a unique (I think) solution that I feel works well for achieving high performance without sacrificing real-time data.

Caching in PHP is usually done on a per-object basis: people cache a query result or some CPU-intensive calculation to avoid redoing that expensive work on every request. This can get you a long way; I have an old site which uses this method and gets 105 requests per second on really old hardware.
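As a minimal sketch of what per-object caching typically looks like (the key name, TTL, query and connection details here are made up for illustration):

    <?php
    // Per-object caching sketch -- key, TTL and query are illustrative only
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);

    $key = 'articles:front_page';            // hypothetical cache key
    $articles = $memcached->get($key);

    if ($articles === false) {
        // Cache miss: run the expensive query and store the result for five minutes
        $pdo  = new PDO('mysql:host=127.0.0.1;dbname=blog', 'user', 'password');
        $stmt = $pdo->query('SELECT id, title, body FROM articles ORDER BY id DESC LIMIT 10');
        $articles = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $memcached->set($key, $articles, 300);
    }
    // ...$articles is now ready to render...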

An alternative, used for example in the Super Cache WordPress plug-in, is to cache the full-page data. This essentially means that you only create a page once. It introduces the problem of stale data, which people usually solve either by checking whether the cached data is still valid or by using a TTL caching mechanism and accepting some stale data.
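A bare-bones version of that TTL approach might use output buffering to capture the rendered page; the key, TTL and included script here are again just placeholders.

    <?php
    // Full-page TTL caching sketch -- key, TTL and script name are illustrative only
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);

    $key  = 'page:' . $_SERVER['REQUEST_URI'];
    $page = $memcached->get($key);

    if ($page !== false) {
        echo $page;                        // serve the cached copy, possibly slightly stale
        exit;
    }

    ob_start();                            // capture everything the page renders
    require 'render_page.php';             // hypothetical script that builds the page
    $page = ob_get_clean();

    $memcached->set($key, $page, 60);      // accept up to 60 seconds of staleness
    echo $page;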

The method I propose is a spin on full-page caching. I’m a big fan of nginx and tend to use it to solve a lot of my problems; this case is no exception. Nginx has a built-in Memcached module, so we can store a page in Memcached and have nginx serve it – never touching PHP at all. This essentially turns a full PHP and MySQL request into a simple nginx-to-Memcached lookup.
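As a rough sketch of the nginx side (the key layout, addresses and fallback location here are my own assumptions for illustration, not the configuration from the full post), the Memcached module can be wired up so that a cache hit is served straight from Memcached and a miss falls through to PHP:

    server {
        listen      80;
        server_name example.com;                          # placeholder

        location / {
            set            $memcached_key $request_uri;   # key layout is an assumption
            memcached_pass 127.0.0.1:11211;
            default_type   text/html;
            error_page     404 502 504 = @php;            # cache miss or error: hand off to PHP
        }

        location @php {
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME /var/www/site/index.php;   # placeholder path
            fastcgi_pass   127.0.0.1:9000;
        }
    }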

Read More