I've always been fond of the phrase "cattle, not pets", which refers to the idea that computer servers should be treated as cattle (i.e. you should have no problem killing one off), as opposed to pets. One of the most important changes over the 35+ years I've been a professional developer has been the rise of automation. Back in the day, if you wanted to set up a new server, you did it manually: carefully installing all the software and other dependencies, then even more carefully configuring them. Since you didn't want to have to do all that work again (and often you couldn't re-create a server even if you wanted to, because of all the minor, undocumented tweaks that invariably accumulated over time), these servers were treated as precious pets. But today, with the rise of technologies such as Ansible and containers, servers are disposable - if one fails, just throw it away and run a script to create a new one.
This approach introduces some new considerations (e.g. managing a fleet of servers, re-creating them when they fail, etc.), giving rise to a new class of software known as container orchestrators. The king of these is Kubernetes, and since I recently did a bit of work with it, I wanted to set up a local instance for testing. While there are tools like minikube that let you set up a local cluster on your PC, there's nothing like a proper test environment that mirrors a real production environment as closely as possible.
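For comparison, here's roughly what the quick minikube route looks like (a sketch; the exact behaviour depends on your minikube version and which driver - Docker, a hypervisor, etc. - it finds on your machine):

```shell
# Start a single-node local cluster.
minikube start

# minikube configures kubectl for you; confirm the node is up.
kubectl get nodes

# Tear the cluster down when you're done experimenting.
minikube delete
```

That's fine for quick experiments, but it hides most of the real setup work - which is exactly what we want to walk through here.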
The trend these days is, of course, to do everything in the cloud, so there's no shortage of information on how to set things up using e.g. AWS or GCP, but rather less on how to set up a bare-metal local cluster. We'll remedy that here with a set of instructions for setting up a local Kubernetes cluster that has:
- a single VM that provides the control plane
It's possible to have a single server manage the control plane and act as a node (i.e. run containers), but in the interests of making this cluster as "real" as possible, we'll separate them out.
- two more VMs that will act as nodes (these will actually run the containers)
We want two of these, so that we can test things like distributing workloads over multiple servers, automatic failover if a server goes down, etc.
- dynamically-provisioned disk space
This gives us persistent disk storage, even as servers come and go.
- a local Docker registry to store images
So that we don't have to put them on Docker Hub.
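To make the goal concrete: once the tutorial is complete, the cluster should consist of one control-plane VM and two worker VMs, which you'd verify along these lines (a sketch - the node names will be whatever we choose when creating the VMs in the later sections):

```shell
# List the cluster's nodes; once everything is registered we
# expect one control-plane node and two workers.
kubectl get nodes

# Spread a test workload across the workers to exercise
# scheduling; "hello" and the nginx image are just examples.
kubectl create deployment hello --image=nginx --replicas=2
kubectl get pods -o wide   # the NODE column shows where each pod landed
```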
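And as a taste of the local-registry bullet: the standard `registry:2` image from Docker Hub can itself be run as a container, after which images are pushed by tagging them with the registry's address (sketched below with the default `localhost:5000` address and a hypothetical `my-app` image; the tutorial's registry section sets up the real addressing):

```shell
# Run the standard registry image, listening on port 5000.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag a local image with the registry's address, then push it.
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
```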
Tutorial index
- Setting up the common infrastructure
In which we set up the infrastructure required by all the cluster VMs.
- Setting up DNS and SSH
In which we set up DNS and SSH access for the cluster VMs.
- Creating the node VM's
In which we clone the base VM to create the node VMs.
- Initializing the cluster
In which we initialize the cluster, and register the node VMs.
- Provisioning dynamic storage
In which we set up dynamic provisioning of persistent volumes.
- Setting up a local Docker registry
In which we set up a local Docker registry to store our images.
- Wrapping up
In which we wrap things up with a larger example.