Previously I’ve discussed the basics of init containers and shown how to deploy WordPress locally. But if you’ve already got a domain handy and are ready to move your WordPress site to a staging or production environment, please continue. I’ll use Digital Ocean, but you could use any hosting provider you like so long as it lets you deploy a few VPS instances of your own during the setup below. I’ve been using this method to host Chicago Gang History for over three years.
This guide assumes you are not using a "managed" K8s solution or cloud provider and that you want to create your own cloud from VPS instances you manage yourself.
When you are finished you will have a hardened WordPress site deployed to Kubernetes using Ansible, capable of handling up to 80K users per month.
WordPress Migration from Pantheon to K8s. Saved client hundreds per month in hosting charges as website hit 80K visitors per month.
:: Kubernetes / K3s / Helm / Redis / MariaDB / Ansible / WordPress / Route 53 / Sendinblue
Ported second generation Chicago Gang History website from Pantheon to a multi-node K3s cluster on Digital Ocean, saving Zach over $400 a month in fees after an unexpected price hike from his hosting provider.
Back in 2017 I decided to move my passion project After Dark off GitHub so I could have better repo usage insights. I was surprised to learn how much faster a self-hosted VCS was compared to GitHub. Not only was GitHub limiting the useful metrics I could capture, it was actually slowing down my development!
Which brings me back to one of, if not the, most important concepts I learned as a developer after watching a talk given by Paul Irish at Fluent Conf 2012.
When I discovered Pantheon in early 2017 I thought I’d found a hidden gem. The honeymoon ended when Pantheon hiked costs 40% (while taking away Redis) after about six months on their platform. That was a bummer, but not a deal breaker.
Fast-forward three years and Pantheon struck again. Only this time, instead of another 40% increase, they went for the whole cookie jar with a jaw-dropping 1185.71% increase to $450 per month and a 10-day lead on the bill.
With little time to react to Pantheon’s change I did the most reasonable thing I could think of: let the site go down while I learned to move it to Kubernetes.
My requirements:
- Get site back up-and-running with the least amount of effort
- Eliminate visitor-based pricing imposed by hosting company
- Use minimum possible resources to run WordPress at scale
- Restore Redis cache Pantheon used to offer with $25 hosting
The rest of this post describes how I moved the Chicago Gang History WordPress website off Pantheon and onto Kubernetes. If you follow this guide, you can retrace my footsteps to migrate from Pantheon to Kubernetes too. By the end you’ll have a 3-node WordPress cluster on Digital Ocean for $30 a month.
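To give a sense of the starting point, here’s a rough sketch of provisioning those three nodes with Digital Ocean’s doctl CLI. The Droplet names, region, size slug, image, and SSH key variable are my assumptions; adjust them for your own account and budget.

```bash
# Sketch only: provision three small Droplets to serve as cluster nodes.
# Names, region, size slug, image, and SSH key ID are assumptions --
# substitute values that match your own Digital Ocean account.
doctl compute droplet create k3s-server k3s-agent-1 k3s-agent-2 \
  --region nyc1 \
  --size s-1vcpu-2gb \
  --image ubuntu-20-04-x64 \
  --ssh-keys "$DO_SSH_KEY_ID" \
  --wait
```

Three Droplets at roughly $10 apiece is where the $30 a month figure comes from.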
Last week Pantheon dealt the final blow to the website I drove from 100 visitors up to 80,000 per month. By the time I heard the death knell we had 10 days’ advance notice that the price of hosting was increasing 1025% to $450/month.
I quickly spun up a Plesk instance on Digital Ocean and installed WordPress on a $10/month VPS, but realized Plesk was too bloated for our needs and probably not going to cut the mustard in the scale department should traffic decide to climb.
After initially attempting to deploy WordPress using the Bitnami Helm chart via the App Marketplace in Rancher 2.5, I found the chart difficult to use, kept looking, and eventually found an alternative chart on a self-hosted VCS.
Like the Bitnami chart, the independent chart includes optional database set-up. Unlike the Bitnami chart, however, the self-hosted chart also includes a Redis object cache and OpenID Connect authentication, and it builds a hardened WordPress Pod from scratch using WP-CLI and Ansible inside an Init Container. In this tutorial I’m going to show you how to install it on macOS with K3D.
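As a preview of where the K3D portion is headed, the general shape is: create a local cluster, add the chart repository, and install the chart with Helm. The repository URL, chart name, and release name below are placeholders, not the chart’s actual coordinates; the walkthrough covers the real ones.

```bash
# Sketch: a throwaway local cluster to test the chart against.
# Maps the cluster's ingress port 80 to localhost:8080.
k3d cluster create wordpress --agents 2 -p "8080:80@loadbalancer"

# Placeholder repository URL and chart name -- swap in the actual
# self-hosted chart referenced above.
helm repo add wordpress-chart https://charts.example.com
helm repo update
helm install my-site wordpress-chart/wordpress \
  --namespace wordpress --create-namespace
```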
In this post we’re going to take a quick look at how to run Rancher in a Kubernetes cluster locally on macOS for development and testing purposes.
There are several different ways to run Kubernetes for local development. In this guide I’m going to focus on just one way: K3D.
K3D is a lightweight wrapper to run Rancher Labs’ K3s in Docker. K3s is a certified Kubernetes distribution for edge and IoT applications with a small resource footprint and ARMv7 support. Like KinD, K3D uses a container runtime as opposed to a virtual machine, saving precious resources. Unlike KinD, K3D supports the ARM architecture and requires about 16x less RAM.
When you’re finished you’ll have a functional K3s Kubernetes cluster running on your Mac with Rancher UI for cluster management. This guide assumes you’ve never run Kubernetes before and, therefore, also serves as a practical starting point, though I won’t be going into detail about the nuts and bolts of Kubernetes.
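For the impatient, the whole exercise boils down to something like the following, assuming Docker, K3D, and Helm are already installed. The cluster name, hostname, and port mapping are assumptions for local use; the detailed steps follow.

```bash
# Sketch: create a lightweight local K3s cluster with K3D,
# exposing the load balancer on localhost:443 for the Rancher UI.
k3d cluster create rancher --agents 1 -p "443:443@loadbalancer"

# Rancher relies on cert-manager to issue its TLS certificates.
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Install Rancher itself; rancher.localhost is an assumed local hostname.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.localhost
```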
Rio is a MicroPaaS for Kubernetes designed to run using minimal resources. Rio provides automatic DNS and HTTPS, load balancing, routing, metrics and more. Use it to remove the chore of creating and managing a secure IT infrastructure.
K3s is a lightweight, certified Kubernetes distribution capable of running on constrained hardware and therefore ideal for local, edge and IoT substrates. K3s was originally developed for Rio but proved useful enough to stand on its own.
Today I’m going to show you how to easily set up k3s and Rio on a MacBook running Manjaro Linux and use them to create a self-hosted, git-based continuous delivery pipeline to serve your own website.
If you’re not yet familiar with Kubernetes, no problem. Please let this gentle introduction serve as your practical guide. When you’re finished you’ll have a better understanding of the concepts and tools used in container orchestration and a shiny new website you can use to demonstrate your skills.
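As a rough preview of the pipeline we’ll build, and assuming you’re running the commands on the Manjaro machine itself, the setup amounts to a couple of upstream install scripts plus one Rio command. The git URL at the end is a placeholder for your own website’s repository.

```bash
# Install K3s via its official install script (runs as a systemd service).
curl -sfL https://get.k3s.io | sh -

# Point kubectl and rio at K3s' default kubeconfig (may need sudo to read).
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Install the Rio CLI, then deploy Rio's components into the cluster.
# get.rio.io and `rio install` follow Rio's documented setup at the time of writing.
curl -sfL https://get.rio.io | sh -
rio install

# Run a workload straight from a git repository -- Rio builds and deploys it.
# The URL is a placeholder for your own site's repo.
rio run https://github.com/example/my-website
```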