Wrapping up
Friday 4th March 2022 9:11 PM

We now have a basic, but functional, Kubernetes cluster running locally. We'll wrap things up by writing and deploying a slightly more involved example.

This webapp will store a series of messages in a Postgres database, and constantly show the latest ones in the UI. You will be able to add messages from a browser connected to one node, and have them show up in a browser connected to another node. This will continue to work even as nodes are brought up and down, or even as the entire cluster is rebooted.

Set up Postgres

Postgres will, of course, need some persistent disk space for its database, so we'll start off by provisioning that:

cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgres
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
EOF
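
If you want to check on the claim, kubectl will show its status (depending on your storage class, it may stay Pending until a pod actually uses it):

kubectl get pvc postgres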

We'll use Helm to install Postgres:

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install postgres \
    bitnami/postgresql \
    --set persistence.existingClaim=postgres \
    --set volumePermissions.enabled=true

The output will explain how to get the root password, and how to connect to the server (either by jumping into a container and running psql there, or by running psql from outside the cluster).
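
The exact commands appear in the chart's output, but they will look something like this (the secret and service names here assume the release was installed as above):

# extract the auto-generated password for the "postgres" superuser
export POSTGRES_PASSWORD=$( kubectl get secret postgres-postgresql \
    -o jsonpath="{.data.postgres-password}" | base64 -d )

# jump into a throwaway container and run psql there
kubectl run postgres-client --rm -it --restart=Never \
    --image bitnami/postgresql \
    --env PGPASSWORD=$POSTGRES_PASSWORD \
    --command -- psql --host postgres-postgresql -U postgres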

Finally, the firewall must be opened on the control plane and on each node:

sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --reload

Set up the webapp

Connect to Postgres, and create a database, table and user for our webapp:

CREATE DATABASE k8s_test ;
\c k8s_test
CREATE TABLE message ( caption VARCHAR(80), tstamp TIMESTAMP WITH TIME ZONE ) ;

CREATE USER k8s_test WITH PASSWORD 'password' ;
GRANT ALL PRIVILEGES ON TABLE message TO k8s_test ;
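
As a quick sanity check, connect as the new user and make sure you can insert and read back a row:

INSERT INTO message VALUES ( 'hello', NOW() ) ;
SELECT * FROM message ;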

Download and unpack this ZIP file somewhere. If you have a Postgres instance available, you can set things up as described above and run the webapp locally. Configuration settings are passed in via environment variables[1], so you'll need to run it like this:

FLASK_PORT=8080 DB_HOST=localhost DB_NAME=k8s_test DB_USERNAME=k8s_test DB_PASSWORD=password ./k8s_test.py

Then connect to it in a browser at localhost:8080.
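
The actual code is in the ZIP file, but the heart of a webapp like this is small enough to sketch. The following is illustrative only: the table layout matches the one created above, everything else is made up.

#!/usr/bin/env python3
# Minimal sketch of the webapp (illustrative, not the actual code from the ZIP).
import os
import flask
import psycopg2

app = flask.Flask( __name__ )

def get_db_conn():
    # configuration settings are passed in via environment variables
    return psycopg2.connect(
        host = os.environ[ "DB_HOST" ],
        dbname = os.environ[ "DB_NAME" ],
        user = os.environ[ "DB_USERNAME" ],
        password = os.environ[ "DB_PASSWORD" ]
    )

@app.route( "/" )
def index():
    # show the most recent messages (one throwaway connection per request,
    # to keep the sketch short)
    conn = get_db_conn()
    with conn, conn.cursor() as curs:
        curs.execute( "SELECT caption, tstamp FROM message ORDER BY tstamp DESC LIMIT 10" )
        rows = curs.fetchall()
    conn.close()
    return flask.render_template_string(
        "<form method='POST' action='/add'><input name='caption'> <button>Add</button></form>"
        "<ul>{% for row in rows %}<li>{{ row[0] }} ({{ row[1] }})</li>{% endfor %}</ul>",
        rows = rows
    )

@app.route( "/add", methods=[ "POST" ] )
def add_message():
    # store a new message (the connection context manager commits for us)
    conn = get_db_conn()
    with conn, conn.cursor() as curs:
        curs.execute(
            "INSERT INTO message VALUES ( %s, NOW() )",
            ( flask.request.form[ "caption" ], )
        )
    conn.close()
    return flask.redirect( "/" )

if __name__ == "__main__":
    app.run( host="0.0.0.0", port=int( os.environ[ "FLASK_PORT" ] ) )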

Run the webapp using Docker

Once you've got it running locally, build and run it as a Docker container:

docker build --tag k8s-test .

docker run --rm \
    --name k8s-test \
    --publish 8080:8080 \
    --env DB_HOST=... \
    --env DB_NAME=k8s_test \
    --env DB_USERNAME=k8s_test \
    --env DB_PASSWORD=password \
    --env FLASK_PORT=8080 \
    k8s-test

Note that you will need to point to the Postgres server using its IP address[2], or a name that will resolve from inside the container.
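
If Postgres is running on the dev box itself, recent versions of Docker (20.10+) can map a name onto the host's gateway address, which saves you from hard-coding an IP:

docker run --rm \
    --name k8s-test \
    --publish 8080:8080 \
    --add-host host.docker.internal:host-gateway \
    --env DB_HOST=host.docker.internal \
    --env DB_NAME=k8s_test \
    --env DB_USERNAME=k8s_test \
    --env DB_PASSWORD=password \
    --env FLASK_PORT=8080 \
    k8s-test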

Deploy the webapp into the cluster

To deploy the webapp into our Kubernetes cluster, we first need to tag the Docker image, and push it to our registry:

docker tag k8s-test vm-kcontrol:6060/k8s-test
docker push vm-kcontrol:6060/k8s-test
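
The registry's HTTP API gives a quick way to confirm the push worked; k8s-test should be listed in the response:

curl http://vm-kcontrol:6060/v2/_catalog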

The webapp needs a few configuration settings, so we'll store these as secrets:

cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: k8s-test
    type: Opaque
    stringData:
      DB_USERNAME: k8s_test
      DB_PASSWORD: password
EOF
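
Secret values are only base64-encoded, not encrypted, so it's easy to check what was stored:

kubectl get secret k8s-test -o jsonpath="{.data.DB_USERNAME}" | base64 -d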

Now we can deploy our webapp into the cluster:

cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: k8s-test
      name: k8s-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: k8s-test
      template:
        metadata:
          labels:
            app: k8s-test
        spec:
          containers:
          - image: vm-kcontrol:6060/k8s-test
            name: k8s-test
            envFrom:
            - secretRef:
                name: k8s-test
            env:
            - name: DB_HOST
              value: "postgres-postgresql"
            - name: DB_NAME
              value: "k8s_test"
            - name: FLASK_PORT
              value: "8080"
EOF
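
Check that both replicas come up, and see which nodes they were scheduled onto:

kubectl rollout status deployment k8s-test
kubectl get pods -l app=k8s-test -o wide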

Note that we connect to Postgres using the name postgres-postgresql, but if you're having trouble getting DNS to work inside the pods[3], there is a fallback: Kubernetes injects environment variables into each container that describe the Postgres service, and so the code checks both DB_HOST and POSTGRES_POSTGRESQL_SERVICE_HOST[4] to figure out where to connect. In this case, delete the DB_HOST setting from the YAML above, and the code will fall back to the server specified in POSTGRES_POSTGRESQL_SERVICE_HOST.
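
The fallback itself is tiny; the sketch below shows the idea (again, illustrative, not necessarily the exact code in the ZIP):

import os

# prefer the explicit DB_HOST setting, but fall back to the service
# host variable that Kubernetes injects into the container for us
db_host = os.environ.get( "DB_HOST" ) or os.environ.get( "POSTGRES_POSTGRESQL_SERVICE_HOST" )
if not db_host:
    raise RuntimeError( "Can't figure out where Postgres is." )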

Finally, we create a service to expose the webapp to the outside:

cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-test
    spec:
      type: NodePort
      selector:
        app: k8s-test
      ports:
      - protocol: TCP
        port: 8080
        nodePort: 30999
EOF
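
A quick look at the service shows the port mapping; the PORT(S) column should read 8080:30999/TCP i.e. port 30999 on every node forwards to port 8080 in the pods:

kubectl get service k8s-test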

Open two browser windows, one at vm-knode1:30999 and the other at vm-knode2:30999, and try adding a message in one. It will be stored in the database, and show up in the other window. You should be able to bring nodes up and down, or even reboot the entire cluster, and the messages will persist in the Postgres database.

References

[1] Since this is how settings are passed into a Docker container.
[2] Even if you have Postgres running locally on your dev box, you won't be able to reference it via localhost, since localhost inside the container means the container itself, not the dev box running outside the container.
[3] This will almost certainly be happening because of the firewall.
[4] This will be an IP address, and so won't need DNS to work.