Treating servers as cattle[1]In other words, servers that you're happy to kill off. is fine in theory, but it introduces the need for persistent storage, i.e. some way to store stuff on disk (e.g. in a database, or files for a website) that will always be there, even as the servers come and go.
Kubernetes lets applications request persistent disk space, and you need to set up at least one storage class to provision these requests. While it's possible to set up a no-provisioner storage class that uses space on a node's local file system, the volume won't be available if that node goes down, and this kind of storage class doesn't support dynamic provisioning either.
Clusters will typically use space in the cloud, but since we want to keep everything local, we'll set up a storage class that provisions space on an NFS mount. The underlying file system would normally be kept on a server outside the cluster, but our control plane VM is not doing much, so we'll just put it there.
Set up NFS
On the vm-kcontrol server, first start and enable the NFS server, and prepare the directory that will hold the shared data:
sudo systemctl start nfs-server
sudo systemctl enable nfs-server
sudo mkdir -p /srv/nfs/kdata
sudo chmod -R 777 /srv/nfs/    # don't do this in a production environment!
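You can quickly check that the service is up and will come back after a reboot:
systemctl is-active nfs-server     # should print "active"
systemctl is-enabled nfs-server    # should print "enabled"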
To export the directory, add the following line to /etc/exports:
/srv/nfs/kdata *(rw,sync,no_subtree_check,no_root_squash,insecure)
and run the following command:
sudo exportfs -rv
You can check that the directory is now available for mounting by running showmount -e.
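The output should look something like this (the hostname reflects where you run the command from):
Export list for vm-kcontrol:
/srv/nfs/kdata *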
Finally, open up the firewall:
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --reload
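To confirm that the rules took effect, list the services allowed in the default zone; nfs, mountd and rpc-bind should all appear:
sudo firewall-cmd --list-services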
Set up the storage class provisioner
Next, we'll install a storage class provisioner that works with NFS. We'll do that using Helm, so we need to install that first.
On the machine you're managing Kubernetes from[2]In other words, not necessarily the vm-kcontrol server (although it can be)., download and unpack the Helm binary.
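For example (a minimal sketch, assuming a Linux amd64 machine; the version number is only illustrative, so check the Helm releases page for the current one):
curl -LO https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar -xzf helm-v3.14.0-linux-amd64.tar.gz
sudo install linux-amd64/helm /usr/local/bin/helm
Then install nfs-subdir-external-provisioner: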
helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=vm-kcontrol \
    --set nfs.path=/srv/nfs/kdata \
    --set storageClass.onDelete=delete
Note that when volume directories are no longer required, the default behavior is to archive them, which is not really necessary in a test environment, so we configure them to be deleted instead.
Finally, make it the default provisioner:
kubectl patch storageclass nfs-client -p \
    '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
You can confirm everything's been set up by running kubectl get storageclass.
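You should see the nfs-client class flagged as the default, something along these lines (the provisioner name and the remaining columns depend on the chart version and release name):
NAME                   PROVISIONER                                      ...
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner    ...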
Test the storage class provisioner
We'll test things by creating a persistent volume with an HTML file in it, then serve that file using nginx. We should be able to run an nginx pod on any node, or even reboot the entire cluster, and that file will continue to be served.
First, we need to create the volume by creating a persistent volume claim:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-nfs-test
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Mi
EOF
We can confirm that the volume has been created by running kubectl get pvc, and if you check the /srv/nfs/kdata/ directory on vm-kcontrol, you'll see that a new sub-directory has been created for the volume.
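For example (the directory name is generated from the namespace, the claim name and the volume name, so the exact value will differ):
ls /srv/nfs/kdata/
default-nginx-nfs-test-pvc-<generated-id>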
Inside that directory, create a file called index.html and put some content in it (this is the file that nginx will serve).
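Something like this will do, where <volume-dir> is a placeholder for whatever the provisioner called the sub-directory:
echo '<h1>Hello from NFS</h1>' | sudo tee /srv/nfs/kdata/<volume-dir>/index.html    # substitute the real directory name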
Next, we want to create an nginx deployment that mounts this newly-created volume at /usr/share/nginx/html/, so that when we connect to nginx, we will see the index.html we just created:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-nfs-test
  name: nginx-nfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-nfs-test
  template:
    metadata:
      labels:
        app: nginx-nfs-test
    spec:
      volumes:
        - name: nginx-nfs-test
          persistentVolumeClaim:
            claimName: nginx-nfs-test
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nginx-nfs-test
              mountPath: /usr/share/nginx/html
              readOnly: true
EOF
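Once the deployment is applied, you can check that both replicas are running, and see which nodes they landed on:
kubectl get pods -l app=nginx-nfs-test -o wide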
Finally, we create a service to make nginx available to the outside:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nfs-test
spec:
  type: NodePort
  selector:
    app: nginx-nfs-test
  ports:
    - protocol: TCP
      port: 80
      nodePort: 30333
EOF
If you open vm-knode1:30333 in a browser, you should see the index.html file you created earlier. Even if you reboot the nodes, this will still be the case, demonstrating that the storage is persistent even after the pods have been deleted and re-created.
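You can run the same check from the command line; for example:
curl http://vm-knode1:30333/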