We now have a VM that has been set up with the common infrastructure that all servers in a Kubernetes cluster need, but there are a few more things we can do that, while not essential, are useful.
Set up DNS
Each server will have a fixed IP address, and to avoid having to set up DNS, we just add them to /etc/hosts (this is a bit of a hack, but OK for our little test environment):
192.168.50.70 vm-kcontrol
192.168.50.71 vm-knode1
192.168.50.72 vm-knode2
By doing this in the base image that will be used for all the VMs in the cluster, each server will be reachable, by name, from every other server.
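For example, the entries can be appended with tee and the resolution checked with getent; a minimal sketch, any editor works just as well:

# Append the cluster hosts to /etc/hosts in the base image
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.50.70 vm-kcontrol
192.168.50.71 vm-knode1
192.168.50.72 vm-knode2
EOF

# Confirm that the names resolve via /etc/hosts
getent hosts vm-knode1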
Set up SSH

The cluster is managed via the control plane, so you usually won't need to SSH into these VMs, but while setting them up (or investigating problems) it's definitely useful to be able to do so. While the cluster can be managed from a machine within the cluster, it's usually done remotely, so on the machine that you will be managing the cluster from, create an SSH key pair and save it somewhere, e.g. ~/.ssh/vm-kubernetes:

ssh-keygen -t ed25519

Then send it to the remote machine:

ssh-copy-id -i ~/.ssh/vm-kubernetes.pub \
    taka@192.168.50.70
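If you want to log in to the worker nodes with the same key, the copy can be repeated for each address; a quick sketch, assuming the taka account from above exists on every VM:

# Copy the public key to each server in the cluster
for ip in 192.168.50.70 192.168.50.71 192.168.50.72; do
    ssh-copy-id -i ~/.ssh/vm-kubernetes.pub "taka@$ip"
done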
Add the following to ~/.ssh/config:
Host kcontrol
    IdentityFile ~/.ssh/vm-kubernetes
    HostName vm-kcontrol
You should now be able to SSH into the vm-kcontrol server using the specified private key file, i.e. without having to use a password (if you are using a different account name, you will also need to include a User directive):
ssh kcontrol
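The worker nodes can be given similar entries so that they are also reachable by a short name; a sketch, assuming vm-knode1 and vm-knode2 resolve from the managing machine (otherwise use their IP addresses as the HostName):

Host knode1
    IdentityFile ~/.ssh/vm-kubernetes
    HostName vm-knode1

Host knode2
    IdentityFile ~/.ssh/vm-kubernetes
    HostName vm-knode2

After that, ssh knode1 and ssh knode2 work in the same way.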