For the better part of yesterday and most of today I’ve been working on optimizing my websites. I had determined — using a method I’ll show you shortly — that if just one person accessed all of my websites consecutively, my server would crash. Just one person. Here are some details about what I’m running:
At the time of writing, I was running a VPS with 600MB of guaranteed RAM. The aforementioned websites consisted of approximately 15 WordPress sites, three of which had an anecdotally “intensive” number of plugins installed. The server was running PHP5/fcgi, and I had enabled XCache support (but had no idea if it was actually running).
My sites aren’t popular. Based on my analytics, I’d estimate about 50 hits per day against my server. Yet, once every few days my server was being automatically restarted because it was using too much memory. When a client started seeing the same thing with a single WordPress site on their own server, I decided to start poking around.
First, I started by looking at the currently running processes and memory usage on my server. I SSH’d in and — after reading some DreamHost documentation — found these commands:
top
watch free -m
The first gave me a running list of the processes on my server. The second gave a continuously updated, easy-to-read display of the total, used, and free memory available to my virtual server. So I started opening websites to see what happened. Basically, new processes were spawned every time I refreshed or opened a website, and they stayed there, running, for quite some time. I then found this command:
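To put a number on how much memory those lingering processes hold, a one-liner along these lines can help (php-cgi is a typical FastCGI process name under a PHP5/fcgi setup; check what ps actually shows on your server):

```shell
# Sum the resident memory (RSS, reported in KB) of all php-cgi
# processes and report the total in MB. If no php-cgi processes
# are running, this prints "php-cgi memory: 0 MB".
ps -eo comm=,rss= | awk '$1 == "php-cgi" { total += $2 } END { printf "php-cgi memory: %d MB\n", total/1024 }'
```

Watching that number while refreshing a site makes it obvious how quickly those per-request processes add up against a 600MB ceiling.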
ab -n 1000 -c 20 -t 5 http://websitesthatdontsuck.com/
This let me simulate high traffic: in this case, 1000 requests, 20 at a time, with the whole run capped at 5 seconds (ab’s -t flag limits the total benchmark time, not the wait for a single response). Suffice it to say, my server didn’t handle it well. If I recall, about 10 requests completed, with the remainder failing.
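ab prints a fairly long report after each run, but a few summary lines tell most of the story. Here is a sketch of pulling them out of a saved report; the file contents below are a stand-in with illustrative numbers, roughly matching the failure pattern described above:

```shell
# A real run would be saved with something like:
#   ab -n 1000 -c 20 -t 5 http://example.com/ > ab.out
# This stand-in report uses illustrative numbers only:
cat > ab.out <<'EOF'
Complete requests:      10
Failed requests:        990
Requests per second:    2.04 [#/sec] (mean)
EOF

# The summary lines worth watching:
grep -E '^(Complete requests|Failed requests|Requests per second)' ab.out
```

Failed requests should be zero on a healthy server; anything close to the total, as here, means the box is falling over under load.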
So I had established the problem. What did I do to fix it? W3 Total Cache.