Letting VMs talk to Kubernetes

Kamesh Sampath
5 min read · Jan 9, 2022


A very common scenario in an everyday cloud native developer’s life is making disparate systems and services talk to each other. A classic example: how do you call a Kubernetes service from a VM without using LoadBalancer-type Kubernetes services, i.e. using only the Cluster-IP or Pod-IP?

This kind of communication need is common in typical production deployments, where the traffic between Kubernetes nodes and non-Kubernetes nodes is handled with sophisticated techniques such as VPCs and VPNs. In this story we will explore how to do the same on a developer box.

Demo Setup

Before we deploy the demo, here are the few tools that you need: multipass, kubectl, jq and, optionally, direnv.

Download or clone the demo sources from my GitHub repository,

git clone https://github.com/kameshsampath/vm-to-kubernetes-demo
cd vm-to-kubernetes-demo

For the rest of this story we will refer to the demo sources folder as $DEMO_HOME. If you have direnv, the environment variables DEMO_HOME, KUBECONFIG_DIR and KUBECONFIG are automatically set for you via the .envrc file in the sources. If you don’t use direnv, set them as shown below before proceeding further,

export DEMO_HOME="${PWD}"
export KUBECONFIG_DIR="${PWD}/.kube"
export KUBECONFIG="${KUBECONFIG_DIR}/config"

For this demo we will be using k3s as our Kubernetes platform, set up on a multipass Ubuntu VM. Before we create the VM and deploy k3s, let us understand the settings that we will be using for the k3s Kubernetes cluster (see the cloud-init sketch after this list),

  • --cluster-cidr=172.16.0.0/24 this gives the Pod network a /24 range, comfortably above the kubelet’s default limit of 110 Pods per node. In our case this will be a single-node cluster.
  • --service-cidr=172.18.0.0/20 this setting allows us to create up to 4096 services on this Kubernetes cluster.
  • Finally, we pass --disable=traefik, as we will not need or deploy any LoadBalancer services as part of this demo.
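
For reference, the relevant part of the cloud-init config might look roughly like this. This is a hypothetical sketch; the actual configs/k3s-cni-calico file in the repository is the source of truth, and the --flannel-backend=none flag is an assumption, since we install Calico ourselves later:

#cloud-config
runcmd:
  # install k3s with our CIDRs, disable traefik, and skip the bundled
  # flannel CNI so that Calico can be installed afterwards
  - curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-cidr=172.16.0.0/24 --service-cidr=172.18.0.0/20 --disable=traefik --flannel-backend=none" sh -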

Let us now create the VM and deploy k3s onto it. To make the process simpler I use cloud-init, which does the setup for us while the VM is created.

multipass launch \
--name cluster1 \
--cpus 4 --mem 8g --disk 20g \
--cloud-init $DEMO_HOME/configs/k3s-cni-calico

Once we have the VM running, we can check its details using the command multipass info cluster1. As k3s was deployed as part of the VM creation, let’s pull the kubeconfig to the host machine,

mkdir -p "$KUBECONFIG_DIR"
chmod -R 700 "$KUBECONFIG_DIR"
export CLUSTER1_IP=$(multipass info cluster1 --format json | jq -r '.info.cluster1.ipv4[0]')
# Copy kubeconfig
multipass exec cluster1 sudo cat /etc/rancher/k3s/k3s.yaml > "$KUBECONFIG_DIR/config"
# use $CLUSTER1_IP instead of localhost/127.0.0.1
# (on BSD/macOS sed, write `sed -i '' -E ...` instead of `sed -i -E ...`)
sed -i -E "s|127.0.0.1|${CLUSTER1_IP}|" "$KUBECONFIG_DIR/config"
# rename the cluster, context and user from the default name `default` to `cluster1` for clarity
sed -i -E "s|(^.*:\s*)(default)|\1cluster1|g" "$KUBECONFIG_DIR/config"
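
As a quick sanity check (not part of the demo sources), the renamed context should now point at the VM’s IP; the node will report NotReady until we install a network plugin in the next step:

kubectl config get-contexts
kubectl --context=cluster1 get nodes -owide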

If you run kubectl get pods --all-namespaces, it will show all the pods in the Pending state; that’s because we don’t have any network plugin configured for our Kubernetes cluster. I use the Calico network plugin for this demo, but you can use any Kubernetes network plugin of your choice as long as it has the ability to define host gateway routes.

Let us deploy the Calico network plugin to bring the pods to life,

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl rollout status -n tigera-operator deploy/tigera-operator --timeout=180s

Now let us create the Calico installation to match the Pod settings described earlier; we also enable IPv4 forwarding so that pods/services can communicate with the outside world via Calico’s host gateway.
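
A minimal sketch of what manifests/calico-cr.yaml could contain, assuming the Tigera operator’s Installation API; the blockSize of 28 is an assumption that matches the /28 Pod route we will add later, and the file in the demo repository remains the source of truth:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # let pods/services reach the outside world via the host
    containerIPForwarding: Enabled
    ipPools:
      - cidr: 172.16.0.0/24   # matches --cluster-cidr
        blockSize: 28         # assumption: one /28 Pod block per node
        natOutgoing: Enabled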

kubectl create -f $DEMO_HOME/manifests/calico-cr.yaml

Now if you do kubectl get pods --all-namespaces, you will notice all the pods coming to life, along with a few new Calico pods.
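
If you want to watch them converge, plain kubectl will do:

kubectl get pods --all-namespaces --watch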

To complete our story on the Kubernetes side, let us deploy an nginx pod and service, which we will use for our connectivity test from the VM,

kubectl --context=cluster1 run nginx --image=nginx --expose --port 80 --dry-run=client -o yaml | kubectl --context=cluster1 apply -f -
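
You can verify that both objects exist before moving on; note that the pod IP comes from the 172.16.0.0/24 cluster-cidr and the service CLUSTER-IP from the 172.18.0.0/20 service-cidr:

kubectl --context=cluster1 get pod nginx -owide
kubectl --context=cluster1 get svc nginx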

The goal of this story is to make a virtual machine communicate with Kubernetes services without using LoadBalancer services; in our case, we need to talk to the nginx service.

Let us create a new multipass VM called vm1,

multipass launch --name vm1 \
--cpus 2 --mem 4g --disk 20g \
--cloud-init $DEMO_HOME/configs/workload-vm

Let us copy the cluster1 kubeconfig into vm1. It’s not mandatory, but it helps to run kubectl commands from within vm1.
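
If the workload-vm cloud-init does not already create the target directory (an assumption; check the config in the repository), create it before the transfer:

multipass exec vm1 -- mkdir -p /home/ubuntu/.kube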

multipass transfer "$KUBECONFIG" vm1:/home/ubuntu/.kube/config

The rest of the demo commands need to be executed from within the VM, so let us shell into vm1 with multipass exec vm1 bash.

Once inside vm1, run the following commands to get the nginx service IP (CLUSTER-IP) and its pod IP (POD_IP),

export NGINX_SVC_IP=$(kubectl get svc nginx -ojsonpath='{.spec.clusterIP}')
export NGINX_POD_IP=$(kubectl get pod nginx -ojsonpath='{.status.podIP}')

Let us try connecting to the nginx service using its service IP $NGINX_SVC_IP or its pod IP $NGINX_POD_IP. You will notice that both commands time out, as we don’t yet have a route to reach the Kubernetes service/pod.

curl -vv --connect-timeout 3 --max-time 5 $NGINX_SVC_IP
curl -vv --connect-timeout 3 --max-time 5 $NGINX_POD_IP

When we set up Calico, it created the host routes on cluster1, and we enabled IP forwarding in its settings. Adding routes from vm1 to cluster1 for the cluster-cidr and service-cidr should therefore let us communicate with the nginx service using its $NGINX_SVC_IP or $NGINX_POD_IP.

# cluster1's node IP as reported by Kubernetes (the INTERNAL-IP column)
export CLUSTER1_IP=$(kubectl get nodes -owide --no-headers | awk 'NR==1{print $6}')
# 172.16.0.0/28 is the Pod address block Calico allocated on the node
sudo ip route add 172.16.0.0/28 via $CLUSTER1_IP
# 172.18.0.0/20 is the service-cidr
sudo ip route add 172.18.0.0/20 via $CLUSTER1_IP

Let us check the IP routes using the command sudo ip route show. It should now have routes to the cluster-cidr and service-cidr via the cluster1 VM’s IP (192.168.64.90 in this example), while all other traffic still leaves via the VM’s default gateway 192.168.64.1.

default via 192.168.64.1 dev enp0s2 proto dhcp src 192.168.64.92 metric 100
172.16.0.0/28 via 192.168.64.90 dev enp0s2
172.18.0.0/20 via 192.168.64.90 dev enp0s2
192.168.64.0/24 dev enp0s2 proto kernel scope link src 192.168.64.92
192.168.64.1 dev enp0s2 proto dhcp scope link src 192.168.64.92 metric 100
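
You can also ask the kernel which route it will pick for a given destination, for example the nginx service IP:

ip route get "$NGINX_SVC_IP"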

Now when we try the cURL commands again, they should give us a successful response such as HTTP/1.1 200 OK.

curl -I --connect-timeout 3 --max-time 5 $NGINX_SVC_IP
curl -I --connect-timeout 3 --max-time 5 $NGINX_POD_IP

Just to summarise what we did,

  • Set up the multipass VMs cluster1 and vm1
  • Set up a k3s Kubernetes cluster on cluster1 with the Calico network plugin
  • As Calico sets up host routes on the Kubernetes host VM, i.e. cluster1, traffic from vm1 can be routed to Kubernetes services using their service IP (CLUSTER-IP) or pod IP
  • Finally, we added routes in vm1 via the cluster1 host IP, allowing us to communicate directly with Kubernetes pods/services using the service IP or pod IP
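
When you are done with the demo, you can tear everything down. A hypothetical cleanup, assuming you no longer need the VMs:

# remove the added routes from vm1 (optional; deleting the VM removes them anyway)
multipass exec vm1 -- sudo ip route del 172.16.0.0/28
multipass exec vm1 -- sudo ip route del 172.18.0.0/20
# delete and purge both VMs
multipass delete --purge vm1 cluster1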

