After seeing that my Amazon t2.micro EC2 instance would happily crash itself by quickly using all available memory when just a few requests came in, I realised I needed a solution that did not involve me having to reboot the instance and bring the containers back up with so little traffic. The following outlines what I did to reduce the memory footprint to 160MB. This set-up does not lend itself to a high-traffic site, but it is useful for a blog averaging a few hits a minute.
Restricting memory allocation
The first step is to use Docker's
-m option to limit the memory available to each container. You need to set limits on both the WordPress and MySQL containers so incoming requests don't cause the containers (mainly WordPress) to grab all the memory and crash Docker and/or bash. I started with 500MB/200MB for WordPress/MySQL respectively. I gradually got this down to 96MB/64MB, but was not able to do so until I had optimised the Docker images as detailed below: prior to optimisation, the containers were using about 300MB (WordPress) and 170MB (MySQL) of memory. I tried 64MB for WordPress but found that the site became sluggish, so settled on 96MB.
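As a sketch of this arrangement (service names and credentials are placeholders, not my exact configuration), the same limits can be expressed in a docker-compose.yml via mem_limit, which maps to Docker's -m option:

```yaml
# Illustrative docker-compose.yml fragment; image names and
# credentials are placeholders. The mem_limit values are the
# ones the containers ended up capped at.
mysql:
  image: mysql
  mem_limit: 64m
  environment:
    MYSQL_ROOT_PASSWORD: example

wordpress:
  image: wordpress
  mem_limit: 96m
  links:
    - mysql
  ports:
    - "80:80"
```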
Optimising the images
Next was to optimise the MySQL image. Thankfully Morgan Tocker has a low-memory config here, so I threw that in and saw MySQL drop from about 170MB to about 130MB. Turning off
performance_schema then provided a staggering further reduction, with the footprint dropping to under 64MB.
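A minimal sketch of the kind of my.cnf overrides involved (the values below are illustrative examples, not a tuned recommendation; disabling performance_schema is the key line):

```ini
# Illustrative my.cnf fragment for a low-memory MySQL.
[mysqld]
# The big win: the performance schema pre-allocates a large
# amount of memory at startup, so turn it off entirely.
performance_schema = off

# Shrink the main buffers well below their defaults.
innodb_buffer_pool_size = 16M
innodb_log_buffer_size  = 1M
key_buffer_size         = 4M
max_connections         = 20
tmp_table_size          = 1M
max_heap_table_size     = 1M
```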
Finally, on to the WordPress image, where apache2 is the real culprit. This article by Nguyễn Đình Quân has some excellent tips for tweaking the whole stack, but I just went with the
mpm_prefork settings. Additionally, this post by Patrick McKenzie convinced me to switch off
KeepAlive, which trades memory for CPU: with containers this small, the memory limit will be reached long before the CPU limit is.
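As a sketch (the worker counts are illustrative; the right numbers depend on how much RAM each Apache child actually uses on your stack), the relevant Apache 2.4 settings look something like:

```apache
# Illustrative mpm_prefork.conf for a container capped
# around 96MB; tune the numbers to your child-process size.
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    MaxRequestWorkers       10
    MaxConnectionsPerChild  1000
</IfModule>

# KeepAlive holds worker processes open between requests,
# saving CPU at the cost of memory, so switch it off here.
KeepAlive Off
```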
The result is a WordPress instance that doesn't crash under a default siege run; instead, request times simply increase as the WordPress container reaches its memory limit. Extending the memory allowance for the WordPress container directly affects the number of requests served per minute: setting it to 400MB, for example, resulted in roughly 4x the throughput. Finally, I'm using an nginx container as a reverse proxy, but I'm not sure whether this affects the performance of the setup.
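For reference, a minimal sketch of such an nginx reverse-proxy configuration (the upstream name and port are assumptions based on a linked wordpress container, not my exact config):

```nginx
# Illustrative nginx config proxying to the wordpress
# container; "wordpress" resolves via Docker's link DNS.
server {
    listen 80;

    location / {
        proxy_pass http://wordpress:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```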
The following docker images extend the default wordpress and mysql images to include the optimisations above: