Getting started with Rancher 1.3.0

January 11, 2017 | Containers, Rancher

I’ve been playing with Docker and the tooling around its ecosystem for close to three years, but one of the things that’s always held me back was how hard the tools were to use. By itself, Docker doesn’t really do much to enable you to build highly reliable and scalable systems. To fill this void, many high-profile projects have sprung up. You’ve probably heard the names Kubernetes, Mesos and Swarm, but most people are unaware of the very awesome, and very easy-to-use, tool Rancher.

What is “Rancher”?

If you’ve ever tried using something like Kubernetes for a simple website, you’re quickly overwhelmed by how complicated it is. Don’t get me wrong, these tools definitely have their place. If you have large, complex infrastructure that’s hard to scale, monitor and deploy, you definitely need Kubernetes or Mesos.

On the other hand, if you’re just looking to set up a few WordPress or Drupal sites and manage them easily, then you should definitely look at Rancher.

The best way to describe Rancher is that it’s a simplified and powerful Docker orchestration tool. It enables you to easily create, manage, scale and monitor any type of containerized workload.

Installing Rancher 1.3.0

To kick the tires on Rancher, you will need to install it on a Linux machine, since it does not install on Windows or Mac. If you don’t have a Linux machine handy, we have a Terraform script that you can run here, which will bring up an HA Rancher environment in AWS.
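If you do have a Linux box with Docker installed and just want a quick single-node evaluation setup, Rancher’s docs describe starting the server as a single container (the tag below pins the 1.3.0 release this post covers):

```shell
# Start a standalone Rancher 1.3.0 server, listening on port 8080.
# This is fine for evaluation; use an HA setup for production.
sudo docker run -d --restart=unless-stopped \
  -p 8080:8080 \
  rancher/server:v1.3.0
```

Give it a minute or two to initialize, then browse to port 8080 on that host.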

Once it’s up, you should be able to connect to it at its https address. You’ll be greeted with this…

The very first thing you should do is enable authentication!

Now that your Rancher server is secure, let’s poke around. There’s a lot here, but it’s a very easy system to navigate.

Rancher is composed of Environments and Stacks. You can think of an environment as something like development, staging or production and a stack would contain a set of services.

To really understand Rancher, let’s deploy a simple WordPress site. Start by creating a new environment. Navigate to the upper left and click Manage Environments.

Click the Add Environment button and name it wordpress. The default Environment Template is Cattle which is Rancher’s own orchestration system. As you can see, Rancher also supports Kubernetes, Mesos, Swarm and Windows. Leave it as Cattle.

As you use Rancher more, you can also assign users to have varying levels of control over your environments. You could use this to give your developers access to a development and QA environment, but only give your operations staff access to staging and production.

Click the Create button.

In order to use your new environment, you need to select it in the environment dropdown in the upper left. When you do, you’ll be asked to add your first Stack. However, before you can do that, you must spin up an EC2 instance to run your containers. Right now, you have a Rancher server, but you have no worker nodes. You will see a bar at the top like this

To add a worker node, either click the Add a host link, or click the Infrastructure menu item at the top and select Hosts. When you have worker nodes running, you will see them listed here, but for now it will be empty. Click the Add Host button.

In order for your worker nodes to communicate with your Rancher server, they need to know where to find it. If you used our Terraform template to bring up your Rancher server, it should have populated the Rancher server’s address automatically. If not, make sure you enter a valid, publicly addressable URL that your nodes can use to talk to Rancher.

Click Save.

One of the really nice features of Rancher is its ability to run on any type of infrastructure. Your containerized workloads can run on AWS, Google Cloud, Azure or bare metal. This allows you to spread your services across cloud providers and make better use of resources.

For this example, we will use the Amazon EC2 option. It will ask for the Region, Access Key and Secret Key. Enter the appropriate values, and give your IAM user enough access to launch EC2 instances. Do NOT use your AWS root credentials for this!

Click Next: Authenticate & select network.

Select the Availability Zone and VPC/Subnet for the worker node. It does not have to be on a public subnet, but there are some restrictions if you do put it on a private subnet. For one, it will not be directly accessible from the public internet, so web servers will need to sit behind an Elastic Load Balancer or an Application Load Balancer. For this example, select a public subnet and click Next: Select a Security Group.

Rancher can manage a security group for you which exposes the ports required for the worker node, and will create this group if needed. If this is your first time, it may offer to create this group for you. If you’d like to use your own security group, please make sure it exposes the correct ports. More information can be found here.

Click Set instance options.

The important fields on this page are:

  • Name: name your new instance rancher00
  • Quantity: the number of instances to bring up. Leave it at the default for now.
  • Instance Type: t2.small
  • AMI: leave empty. By default, Rancher will use an Ubuntu AMI, which is fine for this demo. RancherOS is also a good choice. Read more here.
  • SSH user: leave as ubuntu

Click Create.

It will take a few minutes to bring up the new instance and provision it with the Rancher agent, but you will soon see this

You can see that Rancher hosts start off running a bunch of stuff even before you’ve started adding your own applications. Click on the Stack menu item and select All.

These are all created by Rancher when you bring up a new host. Notice the number of services and containers per stack. Each stack can contain multiple services, and each service can contain multiple containers. A service only consists of a single Docker image, but you can scale your services up and down by adding and removing containers.
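That stack/service/container relationship is exactly what a rancher-compose.yml expresses. A minimal sketch (the service name here is hypothetical, but it must match a service of the same name in the stack’s docker-compose.yml):

```yaml
# rancher-compose.yml -- Rancher-specific settings for a stack.
# "scale" is the number of containers Rancher keeps running
# for the matching service in docker-compose.yml.
wordpress:
  scale: 2
```

Bumping scale up or down is how you add or remove containers without touching the service definition itself.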

Let’s create our WordPress application!

Creating a containerized WordPress site

Click the Add Stack button and give it the name WordPress. Don’t worry about docker-compose.yml or rancher-compose.yml. You can use those to upload the definition for an entire stack, and it makes it super simple to duplicate existing environments.
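As a preview of where we’re headed, the entire stack we’re about to build by hand could be described in a docker-compose.yml roughly like this (a sketch; the service names, images and password match the steps below):

```yaml
# docker-compose.yml -- the WordPress stack built in this post.
mysql:
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: wordpress

wordpress:
  image: wordpress:latest
  links:
    # Link the database in under the name "mysql", the hostname
    # the official wordpress image looks for by default.
    - mysql:mysql
```

Uploading a file like this is how you’d duplicate an existing environment in one shot instead of clicking through the UI.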

Click Create.

You’ll now have an empty stack where we can create the services required for our WordPress site. Click the Add Service button.

On this screen, you can completely configure a single service within your stack. Here’s where you’ll specify your Docker image, its version, any service links, port mappings, volumes, environment variables, health checks, etc. This is a very important screen, so I’d suggest that you look around.

For now, let’s create our MySQL service. For the sake of the demo, we will not be backing our MySQL service or our WordPress service with persistent disk storage, but in later blog posts I’ll show you how to use EFS to give your services persistent storage.

Enter mysql for the name and specify the image as mysql:5.7. At the bottom of the screen, add an environment variable called MYSQL_ROOT_PASSWORD and set its value to wordpress. Click Create.
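If you’ve used plain Docker before, the service we just defined is roughly equivalent to running:

```shell
# Plain-Docker equivalent of the mysql service defined above:
# the mysql:5.7 image only needs MYSQL_ROOT_PASSWORD to start.
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=wordpress \
  mysql:5.7
```

The difference is that Rancher will supervise the container for you, restarting or rescheduling it if it dies.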

Soon you will have a full MySQL server running on your worker node. To verify that it’s running, let’s look at its logs. Click the mysql link to view the running containers for the service

You can see that there’s only one container running. Click the ellipsis to the right of the container and select View Logs.

You should see the MySQL server’s logs indicating that it’s up and running.

Great! So, how do other services access it? Good question.

By default, your services won’t expose ports on the host that they’re running on, so they’ll only be accessible to other Docker containers, and only if we link them. This is very good from a security standpoint, and it also lets you run multiple services that use the same port.

For example, MySQL’s port is 3306, but it won’t be visible on the worker node. This means we can run many copies of MySQL on this same worker node without any problems!

Why would we do that? You could host several WordPress sites on the same worker node, each with their own copy of MySQL. It really helps for isolation as well as getting maximum usage out of your AWS resources.
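You can see the same behavior with plain Docker: as long as no -p flag publishes port 3306 to the host, any number of MySQL containers can coexist on one machine (container names below are just examples):

```shell
# Two MySQL containers on one host with no port conflict --
# each container's 3306 is only visible inside its own
# network namespace, not on the host.
docker run -d --name mysql-site1 -e MYSQL_ROOT_PASSWORD=pw1 mysql:5.7
docker run -d --name mysql-site2 -e MYSQL_ROOT_PASSWORD=pw2 mysql:5.7
```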

Next, let’s bring up our WordPress service. Close the log window and click on the WordPress link next to the service dropdown. This will bring you back to your WordPress stack, which only contains MySQL for right now.

Click Add Service again. Set the name field to wordpress and the image to wordpress:latest. Since WordPress needs MySQL, we will need to add that as a Service Link. Click the + next to Service Links.

Click the Destination Service dropdown and select your mysql service. There’s no need to specify an As Name since we want to use the name mysql. The WordPress team was clever enough to make their Docker image look for a linked MySQL server with the link name mysql.

Click Create.

In a few seconds your WordPress container will be up and running. To verify, click the wordpress service and view its container logs. It is now installed and running!

Great! So how do I view it in a web browser? Good question.

The last piece of this puzzle is a little magic called the Rancher Load Balancer. It’s a very powerful built-in service that dynamically updates based on the health and location of your linked containers. It’s based on HAProxy, and its configuration is updated and reloaded every time a linked container comes up or goes down. In essence, it gives you a very powerful, easy-to-use entry point to a containerized web service.

Close the wordpress log and click the WordPress link to get back to the stack view, which should now show 2 services.

Click the disclosure triangle next to Add Service and select Add Load Balancer.

The really nice thing about the Rancher Load Balancer is that it supports Layer 4 and Layer 7, meaning that you can route on host, port and path. You can use a single load balancer to manage multiple domains, hosts within those domains, paths on those domains, and send them all to different services.

In our case, we just want to catch all requests to port 80 on the worker node and send them to our WordPress service. Set the name to wordpress-lb, click the existing service rule and set the port to 80 and the Target to our wordpress service.

Click Create.

In a few seconds, you’ll have a load balancer listening on port 80 of your worker node, which will send all HTTP requests to our new WordPress service. You’ll need to find the public IP address of your worker node by logging into your AWS console.

Open a browser and go to http://{your worker node’s public IP}. If you did everything correctly, you should see the WordPress installation screen.
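If you prefer the command line, you can check that the load balancer is answering before opening a browser (substitute your worker node’s public IP):

```shell
# Ask for just the response headers; a 200 or a redirect to
# wp-admin/install.php means the load balancer and WordPress
# service are both working.
curl -I http://<worker-node-public-ip>/
```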

There’s plenty more that can be done. Here are just a few things you can do to make this setup better and easier to manage:

  • Use the Let’s Encrypt catalog item to automatically fetch a free SSL certificate, add it to Rancher’s certificate store and update any load balancer that uses it. It will also automatically refresh the certificate before it expires and reload any load balancers that use it.
  • Use the Route53 catalog item to automatically add DNS A records pointing at your Rancher worker nodes. This helps you to scale out by adding EC2 instances.
  • Use the Janitor catalog item to clean up stopped containers. This prevents your workers’ disks from filling up.
  • Use the EFS catalog item to create persistent disk volumes for your services using Amazon EFS.


In my next post, I will walk through the process of setting up a MySQL server with persistent disk storage using Amazon EFS.

If you’d like to find out more about Rancher, I encourage you to visit their site at


