
Enterprise-scale WordPress hosting on AWS


It’s no secret that we at Dynamo6 love WordPress. It’s a great tool for our clients, providing a huge range of functionality via a modern, easy-to-use interface. From a development point of view, it’s simple to set up and incredibly flexible, allowing us to create all manner of innovative, engaging online experiences. But what if you need to host something more substantial than a standard brochure site? What if you’re running a complex web application or an online store? What if your site needs to be highly available and scale automatically to meet spikes in traffic? What if it’s not one site, but a network of connected sites managed from a single installation? Don’t worry – WordPress can handle all this too. We just need to create the right hosting environment by leveraging a couple of extra services from our friends at Amazon Web Services (AWS).

Building a better hosting platform

The quickest way to start hosting a site on AWS is to create a new EC2 instance via the console then deploy your code manually. But we want to automate things as much as possible; it saves us time and reduces the chance of human error. Ideally, once everything is configured we want the ability to update our site or scale up our infrastructure at the click of a button – so we need a service with a bit more brains and a bit more brawn.

AWS OpsWorks

In short, AWS OpsWorks is a management and automation platform which allows you to automate how servers are configured, deployed, and managed across your Amazon EC2 fleet. Before we jump into the details it’s worth getting to grips with a few key OpsWorks concepts:

  • Stacks: a stack is a set of EC2 instances and other AWS resources (e.g. RDS databases, load balancers, etc.) that live inside a single VPC and work together with a common purpose. When you first create your stack you specify a set of defaults that all instances inside the stack will inherit, such as the preferred operating system. Having these settings configured centrally is our first big win: if a setting needs to be changed, you only need to change it in one place
  • Layers: a layer is a software service which is made available to all instances in the stack. For example, we need Apache and PHP installed on all our instances, so we’d add these as a new layer
  • Recipes: a recipe is a set of instructions, written in Ruby, which denotes how layers and apps are deployed and configured. Multiple recipes grouped together are referred to as a cookbook. If you want to delve beyond the basic cookbooks provided in OpsWorks you’ll need to read up on Chef at www.chef.io
  • Instances: these are the underlying EC2 machines, created by OpsWorks to host your sites
  • Apps: an app is simply a collection of files which you wish to deploy to your instances. In our case that is the website source code which lives in our secure online repository. When you add a new app you have the option of adding an SSH key along with a repository URL. Then it is simply a matter of hitting deploy …
  • Deployments: a deployment is a time-stamped snapshot of an app. OpsWorks always maintains one ‘current’ deployment on each instance which contains the most recent copy of your code. However, it also retains copies of the previous 4 deployments to enable one-click rollback if the latest code turns out to be buggy or unreliable. In fact, that number is configurable, so if you really want to, you can keep the previous 20 deployments on your instances – so long as they have enough storage!
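The concepts above map directly onto the OpsWorks API. Here is a minimal Python sketch of how a stack, a layer, an app and a deployment fit together – the dicts mirror the parameters you would pass to boto3’s OpsWorks client (create_stack, create_layer, create_app, create_deployment), and every name, ARN and URL is an illustrative placeholder, not taken from a real account:

```python
# Sketch of the OpsWorks object model: stack -> layer -> app -> deployment.
# All values below are placeholders for illustration only.

stack = {
    "Name": "wordpress-production",
    "Region": "us-east-1",
    "DefaultOs": "Amazon Linux 2",  # stack-wide default inherited by every instance
    "ServiceRoleArn": "arn:aws:iam::123456789012:role/aws-opsworks-service-role",
}

layer = {
    "Type": "php-app",          # built-in layer that provides Apache + PHP
    "Name": "PHP App Server",
    "Shortname": "php-app",
}

app = {
    "Name": "wordpress-site",
    "Type": "php",
    "AppSource": {              # code pulled from a private Git repository
        "Type": "git",
        "Url": "git@example.com:acme/wordpress-site.git",
        "SshKey": "<deploy key goes here>",  # placeholder
    },
}

deployment = {
    "Command": {"Name": "deploy"},  # the one-click deploy of the app above
}

print(sorted(stack))
```

Centralising the defaults on the stack is what makes the later automation possible: every instance OpsWorks creates picks up the same OS, layers and apps without any manual setup.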

Configuring and customising

Let’s think about our goals and how we can use OpsWorks to achieve them.

High Availability

We need the site to stay up, come hell or high water. First off, we’ll set our stack to Auto Heal. This means that if any instance becomes unresponsive, OpsWorks will automatically spin up a new instance, deploy all our apps onto it, and then reroute traffic from the old, troubled box to the new, clean one. Furthermore, we can mitigate the impact of a catastrophic outage in an AWS data centre by creating our instances in different availability zones (AZs). If one centre goes offline completely, the load balancer will simply reroute all traffic to the remaining instance at the other location.
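The multi-AZ arrangement can be sketched as two always-on instances defined with boto3-style create_instance parameters. In OpsWorks, omitting AutoScalingType makes an instance a “24/7” one; the zone names and instance type below are example values, not prescriptions:

```python
# Sketch: two always-on ("24/7") instances spread across two availability
# zones, mirroring boto3's create_instance parameters. Zone names and the
# instance type are illustrative placeholders.
ZONES = ["us-east-1a", "us-east-1b"]

always_on = [
    {"InstanceType": "t3.medium", "AvailabilityZone": az}
    for az in ZONES
]

# If either AZ goes dark, the load balancer still has a healthy
# target in the other zone.
print([i["AvailabilityZone"] for i in always_on])
```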

Cope with traffic spikes

Being too popular is a good problem to have, but it’s still a problem we must deal with. For example, if your site is mentioned on national news, your traffic will likely skyrocket. You could add more instances pre-emptively, but you never really know how many you will need until the traffic hits. Plus, every new instance you bring online will cost you money whether it’s in use or not. Thankfully, OpsWorks has us covered once again. So far we’ve added two “24/7” instances, which are always on, always responding to requests. Now we’ll add two more, one in each AZ, but this time we’ll set them to be “Load-based” instances. These only come online when certain conditions are met; for example, when the existing servers hit 75% CPU usage. The values and thresholds can easily be fine-tuned via the console; you can even hook them up to respond to CloudWatch alarms.
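The scaling rules follow the shape of OpsWorks’ set_load_based_auto_scaling call. This sketch uses the article’s 75% CPU figure for scale-up; the scale-down threshold and wait times are assumed example values, not recommendations:

```python
# Sketch mirroring the UpScaling/DownScaling structure of OpsWorks'
# set_load_based_auto_scaling API. The 75% scale-up threshold comes from
# the article; the other values are illustrative assumptions.
scaling_config = {
    "EnableLoadBasedAutoScaling": True,
    "UpScaling": {
        "InstanceCount": 1,       # load-based instances to start per event
        "CpuThreshold": 75.0,     # % CPU that triggers a scale-up
        "ThresholdsWaitTime": 5,  # minutes the threshold must hold first
    },
    "DownScaling": {
        "InstanceCount": 1,
        "CpuThreshold": 30.0,     # assumed scale-down threshold
        "ThresholdsWaitTime": 10,
    },
}

print(scaling_config["UpScaling"]["CpuThreshold"])
```

Keeping the down-scaling threshold well below the up-scaling one avoids instances flapping on and off as load hovers around a single value.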

Painless updates

What about deploying new features or website updates – can OpsWorks help us streamline this process and prevent downtime? Of course it can :) With the best will in the world, unit testing and QA can only go so far – sometimes code goes bad when it goes live. We’ve already seen that we can roll back a deployment with a single click via the console, but what if we’re deploying a major update which we’d like to double-check in the production environment, privately, without exposing customers to any potential errors? This is actually quite straightforward – we just spin up a new instance via the console, then remove it from the load balancer. The instance remains publicly available via its IP address, but no traffic is sent to it. We then deploy the latest copy of our code to that instance only and test it by amending our local hosts file. If everything looks good and we’re happy to push the update live for everyone, we simply do another deployment to all the remaining instances. The new test instance can then be shut down again.
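The three deployments in that workflow can be sketched as parameter dicts for boto3’s OpsWorks create_deployment call. The instance ID is a hypothetical placeholder:

```python
# Sketch of the update workflow as create_deployment parameter dicts.

# 1. Deploy the new code to the detached test instance only.
test_deploy = {
    "Command": {"Name": "deploy"},
    "InstanceIds": ["<test-instance-id>"],  # hypothetical placeholder ID
}

# 2. Once verified via the amended hosts file, deploy to every remaining
#    instance (omitting InstanceIds targets the whole stack).
full_deploy = {"Command": {"Name": "deploy"}}

# 3. If something slips through anyway, a single call restores the
#    previous deployment snapshot.
roll_back = {"Command": {"Name": "rollback"}}

print(test_deploy["Command"]["Name"], roll_back["Command"]["Name"])
```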

Data persistence

At this point you may well be thinking, ‘what about my data – where does it live?’ That’s a great question. Every WordPress site, big or small, requires a MySQL database to store its config and content. We recommend using Amazon’s Relational Database Service (RDS); this is a managed service that allows you to create (amongst other things) MySQL databases which are automatically backed up and replicated across multiple AZs.
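The RDS settings that matter here can be sketched with the parameter names boto3’s create_db_instance uses – MySQL engine, automated backups and Multi-AZ replication for failover. The identifier and values are illustrative placeholders:

```python
# Sketch of the relevant create_db_instance parameters for a WordPress
# database: MySQL, automated backups, and a synchronous standby in a
# second AZ. All values are illustrative placeholders.
db_instance = {
    "Engine": "mysql",
    "DBInstanceIdentifier": "wordpress-db",   # placeholder name
    "MultiAZ": True,                          # standby replica in another AZ
    "BackupRetentionPeriod": 7,               # days of automated backups
}

print(db_instance["Engine"], db_instance["MultiAZ"])
```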

We’re nearly there now, but there’s one more major issue left that we must address. By default, when you log into WordPress and upload a file to the Media Library, that file is written to a folder on the server and its details are added to the database. But if we have multiple servers processing our upload requests, the file will only be created on one of them. The simplest solution is to tell WordPress to store assets elsewhere. There are several plugins out there which will automatically offload your media files to an S3 bucket, for example. More recently we’ve also experimented with storing the media on a network file location which is available to all instances. Once again, AWS provides the ability to do this natively using their Elastic File System. This can be set to attach to any new instance that is brought online using a custom recipe.
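The S3 approach boils down to a URL rewrite: an uploaded file keeps its familiar /wp-content/uploads/... path but is served from a bucket rather than any one instance’s disk. A minimal sketch of that mapping, with an assumed bucket URL (the plugins handle the actual upload as well):

```python
from urllib.parse import urlparse

# Sketch of the URL rewrite an S3-offload plugin performs. The bucket
# and site domains below are illustrative placeholders.
BUCKET_URL = "https://media-example.s3.amazonaws.com"

def offload_url(local_url: str) -> str:
    """Map a local Media Library URL to its S3 equivalent."""
    path = urlparse(local_url).path   # keep the /wp-content/uploads/... path
    return BUCKET_URL + path

print(offload_url("https://example.com/wp-content/uploads/2024/01/hero.jpg"))
# -> https://media-example.s3.amazonaws.com/wp-content/uploads/2024/01/hero.jpg
```

Because every instance rewrites to the same bucket, it no longer matters which server handled the original upload.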

Any other business?

That’s it, in a nutshell. We’ve created a highly available, auto-scaling hosting environment with one-click deployments and rollbacks that can handle as much traffic as you’re able to throw at it. We’ve only really scratched the surface in this article; OpsWorks provides a huge amount of functionality, especially when you begin to dabble with the custom cookbooks that are available online.

If you need a hosting environment that is reliable and stable, with high availability and great performance, Contact Us.