A couple of weeks ago I got an email from Amazon Web Services informing me that one of my EC2 instances was running on degraded hardware and was due to be retired. Technically I’m prepared for such an eventuality thanks to Docker, but it was still going to take a few hours, since I’d need to:

  • Bring up a new instance and install/upgrade Docker and Git
  • Bring up my containers - wordpress, mysql, mysql-backup, nginx (I have deploy scripts for this)
  • Restore the Wordpress backup and upgrade Wordpress (ick)

This process takes far too long, and needed either further automation or a new strategy. I chose the latter: I’ve dropped Wordpress, MySQL and EC2 in favour of Jekyll, GitHub and S3. The result is both (AWS) cost savings and a decrease in deployment complexity.

Wordpress is heavy

Since Wordpress runs on PHP I could make it do anything I like, but on every Wordpress site I’ve ever run I’ve spent a lot of time up front fiddling with themes and installing plugins, and then used it purely as a content host. That carries the overhead of requiring both a PHP-enabled host and a database (and therefore backups). Once set up, the only real need for dynamic pages is comments, and that requirement can be offloaded to Disqus.

Jekyll is not

Jekyll is a Ruby gem that generates static blogs and websites from HTML and/or Markdown. You keep the source and content of the blog in version control, generate a static site with Jekyll, and push the result to a host. Because Amazon S3 can be configured to serve static websites I can host it there, which means I no longer need to pay for an EC2 instance, and S3 pricing is far cheaper. There are two options for importing Wordpress into Jekyll - the Wordpress importer, which extracts the data directly from your MySQL database, and the WordPress.com importer, which reads the XML export produced by the Wordpress admin tools.
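To give a sense of how little there is to a Jekyll site: each post is just a Markdown file with a small YAML front matter block, and `jekyll build` renders the lot into a static `_site/` directory. A minimal, illustrative post (the filename, title and date here are made up):

```yaml
# _posts/2015-01-10-goodbye-wordpress.md
---
layout: post
title: "Goodbye Wordpress"
date: 2015-01-10
---
```

Everything below the second `---` is the post body, written in plain Markdown.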

I only realised I could use either after I’d burnt some time getting my MySQL container running locally, and the output of the WordPress.com importer turned out to be better anyway:

  • Wordpress import
    • Imported nav links to pages correctly
    • Left extraneous closing tags throughout my posts
    • Every post needed its author field updated from a JSON object
  • WordPress.com import
    • Imported pages, but not links to those pages
    • Every post needed its author field updated from a JSON object
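For reference, the WordPress.com route is a one-liner once the jekyll-import gem is installed - a sketch, assuming your admin-tools export is saved as wordpress.xml in the current directory:

```shell
gem install jekyll-import
ruby -r rubygems -e 'require "jekyll-import";
    JekyllImport::Importers::WordpressDotCom.run({
      "source" => "wordpress.xml"
    })'
```

The importer writes the converted posts into a `_posts/` directory ready for Jekyll to build.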

CDN for speed

Because the site is no longer dynamic, it can be plonked onto a content delivery network - because more speed is more better, yo. I’ve added CloudFront, which is a relatively easy setup, and my S3 deploy scripts now additionally make a call to CloudFront to invalidate the cache on each update.
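The whole deploy therefore boils down to three commands. A sketch of such a script, assuming the AWS CLI is configured - the bucket name and distribution ID are placeholders for your own values:

```shell
#!/bin/sh
# Hypothetical values - substitute your own bucket and distribution ID.
BUCKET=s3://example.com
DISTRIBUTION_ID=EXXXXXXXXXXXXX

# Regenerate the static site into _site/
jekyll build

# Mirror _site/ to the S3 bucket, removing files deleted locally
aws s3 sync _site/ "$BUCKET" --delete

# Tell CloudFront to drop its cached copies so edits appear immediately
aws cloudfront create-invalidation \
  --distribution-id "$DISTRIBUTION_ID" \
  --paths "/*"
```

Invalidating "/*" is the blunt-but-simple option; invalidating only changed paths would be cheaper on a large site.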

Directories over subdomains

I’ve got some projects to distribute via the site this year, and from my reading, directories rather than subdomains seem to be better for SEO. As such, during the switchover I’m also dropping blog.atqu.in in favour of atqu.in/blog. I configured nginx on another EC2 instance I have to redirect all incoming requests from the old blog.atqu.in/path/to/post URLs to atqu.in/blog/path/to/post.
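One way to express that redirect in nginx - a sketch of the server block, not my exact config:

```nginx
server {
    listen 80;
    server_name blog.atqu.in;

    # $request_uri keeps the leading slash, so
    # blog.atqu.in/path/to/post becomes atqu.in/blog/path/to/post.
    # 301 (permanent) so search engines transfer ranking to the new URLs.
    return 301 http://atqu.in/blog$request_uri;
}
```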

And with that done, I terminated my EC2 instance, dropped my EBS volume, and deleted my database backup bucket.