{"id":35625,"date":"2018-08-15T21:36:41","date_gmt":"2018-08-15T18:36:41","guid":{"rendered":"https:\/\/www.altoros.com\/blog\/?p=35625"},"modified":"2018-09-06T13:06:12","modified_gmt":"2018-09-06T10:06:12","slug":"kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash","status":"publish","type":"post","link":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/","title":{"rendered":"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash"},"content":{"rendered":"<h3><span class=\"ez-toc-section\" id=\"Exploring_the_internals_of_Kubernetes_networking\"><\/span>Exploring the internals of 
Kubernetes networking<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>When I was preparing Kubernetes training courses, I found an area that induced a lot of interest, but at the same time, was very difficult to explain\u2014<em>the internals of Kubernetes networking<\/em>. Everybody wants to know so many things:<\/p>\n<ul>\n<li style=\"margin-bottom: 6px;\">How <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/\" rel=\"noopener\" target=\"_blank\">pods<\/a> deployed to different physical nodes can communicate directly with each other using IP addresses allocated from a single subnet<\/li>\n<li style=\"margin-bottom: 6px;\">How Kubernetes services work<\/li>\n<li style=\"margin-bottom: 6px;\">How load balancing is implemented<\/li>\n<li style=\"margin-bottom: 6px;\">How network policies are implemented<\/li>\n<li style=\"margin-bottom: 6px;\">How much overhead Kubernetes overlay networking adds<\/li>\n<\/ul>\n<p>It is difficult to answer all of these questions, because in order to really understand them you need to be aware of various low-level networking concepts and tools\u2014such as NAT, the OSI network layers, iptables, VFS, VLANs, and more. People can get especially puzzled when they need to choose one of the available <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/networking\/#how-to-implement-the-kubernetes-networking-model\" rel=\"noopener\" target=\"_blank\">networking solutions for Kubernetes<\/a>. As you can see, the list is a large one.<\/p>\n<p>Most of the mentioned solutions include <em>container network interface<\/em> (CNI) plug-ins. These are the cornerstones of Kubernetes networking, and it is essential to understand them to make an informed decision about which networking solution to choose. It is also useful to know some details about the internals of your preferred networking solution. 
This way, you will be able to choose what Kubernetes networking features you need, analyze networking performance \/ security \/ reliability, and troubleshoot low-level issues. <\/p>\n<p>The purpose of this blog post is to help you understand Kubernetes networking in greater detail and assist you with all of the above-mentioned tasks. So, here is the plan:<\/p>\n<ul>\n<li style=\"margin-bottom: 6px;\">We will start with the discussion of the Kubernetes network model and how CNI plug-ins fit into it.<\/li>\n<li style=\"margin-bottom: 6px;\">Then, we will try to write a simple CNI plug-in responsible for implementing a Kubernetes overlay network, as well as for allocating and configuring network interfaces in pods.<\/li>\n<li style=\"margin-bottom: 6px;\">We will also deploy a Kubernetes cluster and configure it to use our plug-in.<\/li>\n<li style=\"margin-bottom: 6px;\">Along the way, we will discuss all the relevant networking concepts required to understand how the plug-in works.<\/li>\n<\/ul>\n<p>Now let\u2019s jump directly to the first topic.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"The_model_of_a_Kubernetes_network\"><\/span>The model of a Kubernetes network<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>The main idea behind the design of the Kubernetes network model is that you should be able to directly move your workload from virtual machines (VMs) to Kubernetes containers without any changes to your apps. This imposes three fundamental requirements:<\/p>\n<ul>\n<li>All the containers can communicate with each other directly without NAT.<\/li>\n<li>All the nodes can communicate with all containers (and vice versa) without NAT.<\/li>\n<li>The IP that a container sees itself as is the same IP that others see it as.<\/li>\n<\/ul>\n<p>The main challenge in implementing these requirements is that containers can be placed on different nodes. 
It is relatively easy to create a virtual network on a single host (what Docker does), but spreading this network across different virtual machines or physical hosts is not a trivial task. The following diagram illustrates how the Kubernetes network model is usually implemented.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-1024x402.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-35628\" \/><\/a><small>Example of a typical Kubernetes network<\/small><\/center><\/p>\n<p>The idea is that we allocate a subnet for each container host and then set up some kind of routing between the hosts to forward container traffic appropriately. This is something you will be able to implement by yourself after reading this post.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"CNI_plug-ins\"><\/span>CNI plug-ins<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Another problem with the Kubernetes network model is that there is no single standard implementation of it. Instead, the preferred implementation greatly depends on an environment where your cluster is deployed. That\u2019s why the Kubernetes team decided to externalize the approach and redirect the task of implementing the network model to <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/extend-kubernetes\/compute-storage-net\/network-plugins\/\" rel=\"noopener\" target=\"_blank\">a CNI plug-in<\/a>.<\/p>\n<p>A CNI plug-in is responsible for allocating network interfaces to the newly created containers. Kubernetes first creates a container without a network interface and then calls a CNI plug-in. The plug-in configures container networking and returns information about allocated network interfaces, IP addresses, etc. 
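<\/p>\n<p>To make this calling convention concrete, here is a minimal sketch you can run anywhere. The stand-in plug-in below is purely illustrative (the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/tmp<\/code> path and the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">fake-cni<\/code> name are made up for the demo); it only echoes back what it was given, much as a real plug-in reports its results:<\/p>\n

```shell
# A stand-in "plug-in" for illustration only: it reads its network
# configuration from stdin and reports the requested operation, which the
# caller passes in the CNI_COMMAND environment variable.
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/fake-cni <<'EOF'
#!/bin/bash
config=$(cat)                                        # config arrives on stdin
name=$(echo "$config" | grep -o '"name": *"[^"]*"')  # crude field extraction
echo "command=$CNI_COMMAND $name"                    # result goes to stdout
EOF
chmod +x /tmp/cni-demo/fake-cni

# Invoke it the way a container runtime would:
echo '{"cniVersion": "0.3.1", "name": "mynet", "type": "fake-cni"}' |
  CNI_COMMAND=ADD CNI_CONTAINERID=example CNI_IFNAME=eth0 /tmp/cni-demo/fake-cni
# command=ADD "name": "mynet"
```

\n<p>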
The parameters that Kubernetes sends to a CNI plug-in, as well as the structure of the response, must satisfy the <a href=\"https:\/\/github.com\/containernetworking\/cni\/blob\/main\/SPEC.md\" rel=\"noopener\" target=\"_blank\">CNI specification<\/a>, but the plug-in itself may do whatever it needs to do its job.<\/p>\n<p>Now that you have a basic understanding of CNI plug-ins, we can proceed to investigate the CNI plug-in interface and implement our own. Before we can do this, we need to deploy a test cluster to experiment with.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Using_kubeadm_to_deploy_Kubernetes\"><\/span>Using kubeadm to deploy Kubernetes<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Developed by the Kubernetes team, <a href=\"https:\/\/www.altoros.com\/blog\/a-multitude-of-kubernetes-deployment-tools-kubespray-kops-and-kubeadm\/\">kubeadm<\/a> can be used to configure and run Kubernetes components on a virtual machine or a physical host. There are many Kubernetes installers, but <em>kubeadm<\/em> is the most flexible one as it allows the use of your own network plug-in.<\/p>\n<p>Before working with <em>kubeadm<\/em>, we must provision and configure VMs that will constitute our Kubernetes cluster. Because I wanted to make this guide easily reproducible, I decided to use a managed IaaS platform for this task: Google Cloud Platform (GCP). If you want to follow this guide and you don\u2019t have a GCP account, you can easily <a href=\"https:\/\/cloud.google.com\/free\/\" rel=\"noopener\" target=\"_blank\">register<\/a> a free trial account. 
It comes with a $300 credit, which is more than enough for this guide.<\/p>\n<p>To learn more about <em>kubeadm<\/em>, check out the <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/\" rel=\"noopener\" target=\"_blank\">official documentation<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Preparing_the_GCP_infrastructure\"><\/span>Preparing the GCP infrastructure<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>With a GCP account ready, the first thing you need to do is open the <a href=\"https:\/\/cloud.google.com\/shell\/docs\/how-cloud-shell-works\" rel=\"noopener\" target=\"_blank\">Cloud Shell<\/a>. Cloud Shell is a VM that is automatically created for you by GCP. It already contains a lot of useful tools, such as the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">gcloud<\/code> CLI, and allows you to issue <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">gcloud<\/code> commands without authentication. You can learn how to open the Cloud Shell from the <a href=\"https:\/\/cloud.google.com\/shell\/docs\/using-cloud-shell\" rel=\"noopener\" target=\"_blank\">official guide<\/a>.<\/p>\n<p>Next, you have to create a new GCP network.<\/p>\n<pre>\r\n$ gcloud compute networks create k8s\r\n<\/pre>\n<p>This command creates a new GCP network and allocates a subnet in each of the GCP regions. It also creates some default routes to direct traffic between the created subnets and redirect all the external traffic to the default Internet gateway. 
You can examine the created network, subnets, and routes on the \u201cVPC network -> VPC networks\u201d and \u201cVPC network -> Routes\u201d pages.<\/p>\n<p>Note that you could use the default network instead of creating a new one, though having the same infrastructure as in this tutorial helps you avoid unexpected surprises.<\/p>\n<p>By default, the GCP firewall allows only <a href=\"https:\/\/www.webopedia.com\/definitions\/egress-traffic\/\" rel=\"noopener\" target=\"_blank\">egress traffic<\/a>, so if you want to reach your network from the outside, you must create some firewall rules. The following command creates a firewall rule to open all the ports for all the protocols on all VMs in the Kubernetes network. (Yes, I know that it is terribly insecure, but it is okay for a few simple experiments.)<\/p>\n<pre>\r\n$ gcloud compute firewall-rules create k8s-allow-all \\\r\n    --network k8s \\\r\n    --action allow \\\r\n    --direction ingress \\\r\n    --rules all \\\r\n    --source-ranges 0.0.0.0\/0 \\\r\n    --priority 1000\r\n<\/pre>\n<p>Now we can start creating our cluster\u2014the simplest possible one. For this purpose, we just need a single master VM and a single worker VM. We can create both VMs using the following two commands:<\/p>\n<pre>\r\n$ gcloud compute instances create k8s-master \\\r\n    --zone us-central1-b \\\r\n    --image-family ubuntu-1604-lts \\\r\n    --image-project ubuntu-os-cloud \\\r\n    --network k8s \\\r\n    --can-ip-forward\r\n<\/pre>\n<pre>\r\n$ gcloud compute instances create k8s-worker \\\r\n    --zone us-central1-b \\\r\n    --image-family ubuntu-1604-lts \\\r\n    --image-project ubuntu-os-cloud \\\r\n    --network k8s \\\r\n    --can-ip-forward\r\n<\/pre>\n<p>Here, we are just creating two VMs (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">k8s-master<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">k8s-worker<\/code>) from the Ubuntu v16.04 image. 
The important parameter here is <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">--can-ip-forward<\/code>. This parameter configures IP forwarding on the network interface of a VM, allowing it to receive and forward network packets whose destination IP address differs from its own. This is a requirement, because each VM should accept packets with the destination IP set to a container IP rather than the IP of the virtual machine.<\/p>\n<p>Note that alternatively you can enable IP forwarding on a VM using the following command: <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">sysctl -w net.ipv4.ip_forward=1<\/code>.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Installing_Kubernetes_with_kubeadm\"><\/span>Installing Kubernetes with kubeadm<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>After successfully creating both master and worker VMs, you have to SSH to each of them. In GCP, you can SSH to a VM just by clicking on the SSH button on the \u201cCompute Engine -> VM instances\u201d page. 
Consult the <a href=\"https:\/\/cloud.google.com\/compute\/docs\/instances\/connecting-to-instance\" rel=\"noopener\" target=\"_blank\">official docs<\/a> for more details.<\/p>\n<p>Next, you have to execute the following sequence of commands on both the master and the worker.<\/p>\n<ol>\n<li>Install some prerequisite packages<\/li>\n<pre>\r\n$ sudo apt-get update\r\n$ sudo apt-get install -y docker.io apt-transport-https curl jq nmap iproute2\r\n<\/pre>\n<p>The most important package here is Docker as Kubernetes uses it under the hood to run containers.<\/p>\n<\/ol>\n<ol start=\"2\">\n<li>Install kubeadm, kubelet, and kubectl<\/li>\n<pre>\r\n$ sudo su\r\n$ curl -s https:\/\/packages.cloud.google.com\/apt\/doc\/apt-key.gpg | apt-key add -\r\n$ cat > \/etc\/apt\/sources.list.d\/kubernetes.list &#60;&#60;EOF\r\ndeb http:\/\/apt.kubernetes.io\/ kubernetes-xenial main\r\nEOF\r\n$ apt-get update\r\n$ apt-get install -y kubelet kubeadm kubectl\r\n<\/pre>\n<p>These commands add the Kubernetes apt repository and install three binaries:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/a-multitude-of-kubernetes-deployment-tools-kubespray-kops-and-kubeadm\/\">kubeadm<\/a>. The tool that we will be using to run and configure all Kubernetes components.<\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kubelet\/\" rel=\"noopener\" target=\"_blank\">kubelet<\/a>. The primary Kubernetes node agent. It talks to Docker and runs pods on a node. All the other Kubernetes components (such as an API server, etcd, kube-proxy, etc.) are run as pods by <b>kubelet<\/b>.<\/li>\n<li><a href=\"https:\/\/www.altoros.com\/visuals\/kubernetes-kubectl-cli-cheat-sheet\/\">kubectl<\/a>. The Kubernetes CLI, which we use to issue commands to the Kubernetes API.<\/li>\n<\/ul>\n<\/ol>\n<p>Next, we need to start the cluster. 
To do so, execute the following command only from the master VM:<\/p>\n<pre>\r\n$ sudo kubeadm init --pod-network-cidr=10.244.0.0\/16\r\n<\/pre>\n<p>This command does the following:<\/p>\n<ul>\n<li style=\"margin-bottom: 6px;\">Creates the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">systemd<\/code> service for kubelet and starts it.<\/li>\n<li style=\"margin-bottom: 6px;\">Generates manifest files for all Kubernetes system components and configures kubelet to run them. The system component manifests are ordinary Kubernetes YAML files, which you usually use with kubectl. You can find them in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/kubernetes\/manifests\/<\/code> folder.<\/li>\n<li style=\"margin-bottom: 6px;\">Most importantly, it configures Kubernetes to use the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/16<\/code> CIDR range for the pod overlay networking. For now, this value isn\u2019t used by anything; it is just stored in the etcd database for later use.<\/li>\n<\/ul>\n<p>The command above generates a lot of output. At the end of the output, you should see the command that you can use to join the newly created cluster. It should look like this:<\/p>\n<pre>\r\nYou can now join any number of machines by running the following on each node\r\nas root:\r\n\r\n  kubeadm join 10.128.0.2:6443 --token 4s3sqa.z0jeax6iydntib5b --discovery-token-ca-cert-hash sha256:ac43930987f2c40386a172fe796fd22d905480671959d65044d66c1180c39f13\r\n<\/pre>\n<p>Now, copy the generated command and execute it on the worker VM. This command is similar to the previous one, but instead of configuring and running system components, it instructs the kubelet on the worker VM to connect to the previously deployed API server and join the cluster. 
<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Testing_the_cluster\"><\/span>Testing the cluster<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Now, it&#8217;s time to check whether our cluster is working properly. The first thing we need to do is to configure kubectl to connect to the newly created cluster. In order to do this, run the following commands from the master VM:<\/p>\n<pre>\r\n$ mkdir -p $HOME\/.kube\r\n$ sudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\r\n$ sudo chown $(id -u):$(id -g) $HOME\/.kube\/config\r\n<\/pre>\n<p>Now, you should be able to use kubectl from the master VM. Let\u2019s use the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">kubectl get nodes<\/code> command to check the status of the cluster nodes.<\/p>\n<pre>\r\n$ kubectl get nodes\r\nNAME         STATUS     ROLES     AGE       VERSION\r\nk8s-master   NotReady   master    25m       v1.11.1\r\nk8s-worker   NotReady   &lt;none&gt;    9s        v1.11.1\r\n<\/pre>\n<p>As you can see from the output, both master and worker nodes are currently in the \u201cNotReady\u201d state. This is expected, because we haven\u2019t configured any networking plug-in yet. If you try to deploy a pod at this time, your pod will forever hang in the \u201cPending\u201d state, because the Kubernetes scheduler will not be able to find any \u201cReady\u201d node for it. However, kubelet runs all the system components as ordinary pods. How, then, was it able to do so if the nodes are not ready?<\/p>\n<p>The answer is that none of the system components are deployed to the pod overlay network\u2014all of them use the host network instead. 
If you check the definition of any of the system components in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/kubernetes\/manifests\/<\/code> folder, you will see that all of them have the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">hostNetwork: true<\/code> property in their spec. The result is that all of the system components share the network interface with the host VM. For these pods, we don\u2019t need any networking plug-in.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Configuring_a_CNI_plug-in\"><\/span>Configuring a CNI plug-in<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Now we&#8217;re at the interesting part. We are going to deploy our custom CNI plug-in to both master and worker VMs and see how it works. But before we do this, let\u2019s see which subnets are allocated from the pod network range to the master and worker nodes. We can find out using the following two commands:<\/p>\n<pre>\r\n$ kubectl describe node k8s-master | grep PodCIDR\r\nPodCIDR:                     10.244.0.0\/24\r\n\r\n$ kubectl describe node k8s-worker | grep PodCIDR\r\nPodCIDR:                     10.244.1.0\/24\r\n<\/pre>\n<p>As you can see from the output, the whole pod network range (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/16<\/code>) has been divided into small subnets, and each of the nodes received its own subnet. 
This means that the master node can use any of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0<\/code>&#8211;<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.255<\/code> IPs for its containers, and the worker node uses <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.0<\/code>&#8211;<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.255<\/code> IPs.<\/p>\n<p>If you don\u2019t understand how I made these calculations or are not familiar with the CIDR range notation, you can read this <a href=\"https:\/\/en.wikipedia.org\/wiki\/Classless_Inter-Domain_Routing\" rel=\"noopener\" target=\"_blank\">article<\/a>.<\/p>\n<p>Now, we are ready to start configuring our plug-in. The first thing you should do is create the plug-in configuration. Save the following file as <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/cni\/net.d\/10-bash-cni-plugin.conf<\/code>.<\/p>\n<pre class=\"brush: xml; title: ; notranslate\" title=\"\">\r\n{\r\n        &quot;cniVersion&quot;: &quot;0.3.1&quot;,\r\n        &quot;name&quot;: &quot;mynet&quot;,\r\n        &quot;type&quot;: &quot;bash-cni&quot;,\r\n        &quot;network&quot;: &quot;10.244.0.0\/16&quot;,\r\n        &quot;subnet&quot;: &quot;&lt;node-cidr-range&gt;&quot;\r\n}\r\n<\/pre>\n<p>This must be done on both master and worker nodes. Don\u2019t forget to replace <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">&lt;node-cidr-range&gt;<\/code> with <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/24<\/code> for the master and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.0\/24<\/code> for the worker. 
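<\/p>\n<p>For instance, the finished file on the worker node would look like the snippet below. (The <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/tmp<\/code> path is only so you can inspect the result anywhere; on the node itself the file belongs in <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/cni\/net.d\/<\/code>.)<\/p>\n

```shell
# The finished configuration for the worker node, with the placeholder
# replaced by the worker's pod subnet (10.244.1.0/24).
cat > /tmp/10-bash-cni-plugin.conf <<'EOF'
{
        "cniVersion": "0.3.1",
        "name": "mynet",
        "type": "bash-cni",
        "network": "10.244.0.0/16",
        "subnet": "10.244.1.0/24"
}
EOF
grep -c '"subnet": "10.244.1.0/24"' /tmp/10-bash-cni-plugin.conf
# 1
```

\n<p>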
It is also very important that you put the file into the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/cni\/net.d\/<\/code> folder. kubelet uses this folder to discover CNI plug-ins.<\/p>\n<p>The first three parameters in the configuration (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cniVersion<\/code>, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">name<\/code>, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">type<\/code>) are mandatory and are documented in the CNI specification. <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cniVersion<\/code> is used to determine the CNI version used by the plug-in, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">name<\/code> is just the network name, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">type<\/code> refers to the file name of the CNI plug-in executable. The <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">network<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">subnet<\/code> parameters are our custom parameters; they are not mentioned in the CNI specification, and later we will see how exactly they are used by the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">bash-cni<\/code> network plug-in.<\/p>\n<p>The next thing to do is prepare a network bridge on both master and worker VMs. The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Bridging_(networking)\" rel=\"noopener\" target=\"_blank\">network bridge<\/a> is a special device that aggregates network packets from multiple network interfaces. Later, whenever requested, our CNI plug-in will add network interfaces from all containers to the bridge.<\/p>\n<p>This allows containers on the same host to freely communicate with each other. 
The bridge can also have its own MAC and IP addresses, so each container sees the bridge as another device plugged into the same network. We reserve the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.1<\/code> IP address for the bridge on the master VM and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.1<\/code> for the bridge on the worker VM. The following commands can be used to create and configure the bridge with the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cni0<\/code> name:<\/p>\n<pre>\r\n$ sudo brctl addbr cni0\r\n$ sudo ip link set cni0 up\r\n$ sudo ip addr add &lt;bridge-ip&gt;\/24 dev cni0\r\n<\/pre>\n<p>These commands create the bridge, enable it, and then assign an IP address to it. The last command also implicitly creates a route, so that all traffic with the destination IP belonging to the pod CIDR range, local to the current node, will be redirected to the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cni0<\/code> network interface. (As mentioned before, all the other software communicates with a bridge as though it were an ordinary network interface.) You can view this implicitly created route by running the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ip route<\/code> command from both master and worker VMs:<\/p>\n<pre>\r\n$ ip route | grep cni0\r\n10.244.0.0\/24 dev cni0  proto kernel  scope link  src 10.244.0.1\r\n\r\n$ ip route | grep cni0\r\n10.244.1.0\/24 dev cni0  proto kernel  scope link  src 10.244.1.1\r\n<\/pre>\n<p>Now, let\u2019s create the plug-in itself. 
The plug-in&#8217;s executable file must be placed in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/opt\/cni\/bin\/<\/code> folder, its name must be exactly the same as the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">type<\/code> parameter in the plug-in configuration (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">bash-cni<\/code>), and its contents can be found in <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/bash-cni\" rel=\"noopener\" target=\"_blank\">this GitHub repo<\/a>. (After you put the plug-in in the correct folder, don\u2019t forget to make it executable by running <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">sudo chmod +x bash-cni<\/code>.) This should be done on both master and worker VMs.<\/p>\n<p>The <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/opt\/cni\/bin<\/code> folder stores all the CNI plug-ins. It contains a lot of default plug-ins, but we are not going to use them now, and we will discuss them later.<\/p>\n<p>Now, let\u2019s examine the plug-in&#8217;s source code. I won&#8217;t copy it here; instead, you can open it in a separate browser window, and I will guide you through this code and explain its most important parts. The script starts with the following two lines:<\/p>\n<pre>\r\nexec 3>&1 # make stdout available as fd 3 for the result\r\nexec &>> \/var\/log\/bash-cni-plugin.log\r\n<\/pre>\n<p>Here, we redirect both <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdout<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stderr<\/code> to a file and preserve the original <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdout<\/code> file descriptor as <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">&3<\/code>. 
This is required, because all CNI plug-ins are expected to read their input and write output using <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdin<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdout<\/code>. So, you can tail <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/var\/log\/bash-cni-plugin.log<\/code> to see the logs of the plug-in, and the plug-in also uses <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">'echo \u201c\u201d >&3'<\/code> whenever it needs to output something to the original <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdout<\/code>.<\/p>\n<p>Let\u2019s skip the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">allocate_ip<\/code> function for now and take a look at the main switch case.<\/p>\n<pre>\r\ncase $CNI_COMMAND in\r\nADD)\r\n    \u2026\r\nDEL)\r\n    \u2026\r\nGET)\r\n    \u2026\r\nVERSION)\r\n    \u2026\r\nesac\r\n\r\n<\/pre>\n<p>As you might infer from this definition, our CNI plug-in supports four commands\u2014<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ADD<\/code>, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">DEL<\/code>, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">GET<\/code>, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">VERSION<\/code>. The caller of a CNI plug-in (kubelet in our case) must initialize the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">CNI_COMMAND<\/code> environment variable, which contains the desired command. 
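<\/p>
<p>The dispatch, together with the logging trick described above, can be tried out in isolation. Below is a minimal standalone sketch (the temporary log file and the hard-coded <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">VERSION<\/code> invocation are stand-ins for the real kubelet call): everything the handler prints normally goes to the log, while only what is written to <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">&3<\/code> reaches the caller.<\/p>

```shell
#!/bin/bash
# Standalone sketch: LOG stands in for /var/log/bash-cni-plugin.log,
# and we invoke the handler ourselves instead of kubelet doing it.
LOG=$(mktemp)

run_plugin() {
    CNI_COMMAND=$1
    exec 3>&1            # keep the caller's stdout reachable as fd 3
    exec &>>"$LOG"       # from here on, stdout and stderr go to the log
    echo "handling command: $CNI_COMMAND"   # this line lands in the log
    case $CNI_COMMAND in
    VERSION)
        echo '{"cniVersion": "0.3.1"}' >&3  # only fd 3 reaches the caller
        ;;
    esac
}

result=$(run_plugin VERSION)
echo "caller received: $result"
```

<p>The caller sees only the JSON written to <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">&3<\/code>; the diagnostic line ends up in the log file.<\/p>
<p>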
The most important command is <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ADD<\/code>. It is executed each time a container is created and is responsible for allocating a network interface to that container. Let\u2019s take a look at how it works.<\/p>\n<p>The command starts by retrieving the needed values from the plug-in configuration.<\/p>\n<pre>\r\n    network=$(echo \"$stdin\" | jq -r \".network\")\r\n    subnet=$(echo \"$stdin\" | jq -r \".subnet\")\r\n    subnet_mask_size=$(echo $subnet | awk -F  \"\/\" '{print $2}')\r\n<\/pre>\n<p>When this command is executed, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdin<\/code> contains exactly the same content as we put into the plug-in configuration (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/etc\/cni\/net.d\/10-bash-cni-plugin.conf<\/code>). Here, we get the values of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">network<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">subnet<\/code> variables. We also parse the subnet mask size. 
For the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/24<\/code> subnet, the size will be 24.<\/p>\n<p>Next, we allocate an IP address for the container.<\/p>\n<pre>\r\n    all_ips=$(nmap -sL $subnet | grep \"Nmap scan report\" | awk '{print $NF}')\r\n    all_ips=(${all_ips[@]})\r\n    skip_ip=${all_ips[0]}\r\n    gw_ip=${all_ips[1]}\r\n    reserved_ips=$(cat $IP_STORE 2> \/dev\/null || printf \"$skip_ip\\n$gw_ip\\n\") # reserving 10.244.0.0 and 10.244.0.1 \r\n    reserved_ips=(${reserved_ips[@]})\r\n    printf '%s\\n' \"${reserved_ips[@]}\" > $IP_STORE\r\n    container_ip=$(allocate_ip)\r\n<\/pre>\n<p>First, we create a list of all the available IPs (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">all_ips<\/code>) using the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">nmap<\/code> program. Then, we read the list of IPs that are already taken (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">reserved_ips<\/code>) from the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/tmp\/reserved_ips<\/code> file (the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">IP_STORE<\/code> variable points to this file). We always skip the first IP in the subnet (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0<\/code>) and assume that the next IP (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.1<\/code>) will be a gateway for all containers. 
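<\/p>
<p>The allocation logic itself can be sketched in pure Bash. In the simplified, standalone variant below, the IP list is hard-coded instead of being produced by <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">nmap<\/code>, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">IP_STORE<\/code> points to a temporary file rather than <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/tmp\/reserved_ips<\/code>:<\/p>

```shell
#!/bin/bash
# Simplified, standalone sketch of the allocation step; the IP list and the
# store file are stand-ins for the values the real plug-in works with.
IP_STORE=$(mktemp)
all_ips=(10.244.0.0 10.244.0.1 10.244.0.2 10.244.0.3 10.244.0.4)
printf '%s\n' 10.244.0.0 10.244.0.1 10.244.0.2 > "$IP_STORE"  # already taken

allocate_ip() {
    local ip
    for ip in "${all_ips[@]}"; do
        if ! grep -qx "$ip" "$IP_STORE"; then
            echo "$ip" >> "$IP_STORE"   # remember that the IP is now taken
            echo "$ip"                  # hand it back to the caller
            return 0
        fi
    done
    return 1                            # the subnet is exhausted
}

container_ip=$(allocate_ip)
echo "allocated: $container_ip"
```

<p>Here, the first free IP is <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.3<\/code>, and calling the function again would return <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.4<\/code>, because each allocation is persisted to the store.<\/p>
<p>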
Remember that the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cni0<\/code> bridge will be our gateway, and we already assigned the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.1<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.1<\/code> IP addresses to the bridges on the master and worker VMs.<\/p>\n<p>Finally, we call the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">allocate_ip<\/code> function, which just iterates over <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">all_ips<\/code> and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">reserved_ips<\/code>, finds the first non-reserved IP, and updates the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/tmp\/reserved_ips<\/code> file. This is the job usually done by IPAM (IP address management) CNI plug-ins. We allocate the IP address in exactly the same way as the host-local IPAM CNI plug-in does (though our implementation is, of course, greatly simplified). There are other ways of allocating IP addresses, for example, obtaining them from a DHCP server.<\/p>\n<p>The next two lines might look strange to you.<\/p>\n<pre>\r\n    mkdir -p \/var\/run\/netns\/\r\n    ln -sfT $CNI_NETNS \/var\/run\/netns\/$CNI_CONTAINERID\r\n<\/pre>\n<p>The CNI spec tells the caller (in our case, kubelet) to create a network namespace and pass it in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">CNI_NETNS<\/code> environment variable. 
Here, we are creating a symlink that points to the network namespace and is located in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">\/var\/run\/netns\/<\/code> folder (this is where the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ip netns<\/code> tool expects to find namespaces). After those commands are executed, we will be able to run commands inside the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">CNI_NETNS<\/code> namespace using <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ip netns exec $CNI_CONTAINERID<\/code>.<\/p>\n<p>Then, we create a pair of network interfaces.<\/p>\n<pre>\r\n    rand=$(tr -dc 'A-F0-9' &#60; \/dev\/urandom | head -c4)\r\n    host_if_name=\"veth$rand\"\r\n    ip link add $CNI_IFNAME type veth peer name $host_if_name \r\n<\/pre>\n<p>The interfaces are created as an interconnected pair. Packets transmitted to one of the devices in the pair are immediately received on the other device. <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">CNI_IFNAME<\/code> is provided by the caller and specifies the name of the network interface that will be assigned to the container (usually, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">eth0<\/code>). The name of the second network interface is generated dynamically.<\/p>\n<p>The second interface remains in the host namespace and should be added to the bridge. That is what we do in the next two lines.<\/p>\n<pre>\r\n    ip link set $host_if_name up \r\n    ip link set $host_if_name master cni0 \r\n<\/pre>\n<p>This interface will be responsible for receiving network packets that appear in the bridge and are intended for the container.<\/p>\n<p>Let me pause for a moment and explain the general idea behind bridging. 
Here are some analogies I found useful:<\/p>\n<ul>\n<li>A bridge is analogous to a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Network_switch\" rel=\"noopener\" target=\"_blank\">network switch<\/a>.<\/li>\n<li>A container is analogous to a device that we plug into the switch.<\/li>\n<li>A network interface pair is analogous to a wire: one end we plug into the device and the other into the switch.<\/li>\n<\/ul>\n<p>Any device will always have an IP address, and we will also allocate an IP for the container network interface. The port where we plug in the other end of the wire doesn\u2019t have its own IP, and we don\u2019t allocate an IP for the host interface either. However, a port in a switch always has its own MAC address (check this <a href=\"https:\/\/networkengineering.stackexchange.com\/questions\/10899\/why-each-and-every-single-port-on-layer-2-switches-need-to-have-its-own-mac-add\" rel=\"noopener\" target=\"_blank\">StackExchange thread<\/a> if you are curious why this is the case), and similarly our host interface has a MAC.<\/p>\n<p>Both a network switch and a bridge maintain a list of the MAC addresses reachable via each of their ports. They use this list to figure out which port an incoming packet should be forwarded to. This way, they avoid flooding everybody else with unnecessary traffic. Some switches (layer 3 switches) can assign an IP to one of their ports and use this port to connect to external networks. That is what we are doing with our bridge, as well. We\u2019ve configured an IP on the bridge, which all the containers will use as their gateway to communicate with the outside world.<\/p>\n<p>Okay, now let\u2019s go back to our script. 
The next step is more or less obvious\u2014we need to configure the container interface.<\/p>\n<pre>\r\n    ip link set $CNI_IFNAME netns $CNI_CONTAINERID\r\n    ip netns exec $CNI_CONTAINERID ip link set $CNI_IFNAME up\r\n    ip netns exec $CNI_CONTAINERID ip addr add $container_ip\/$subnet_mask_size dev $CNI_IFNAME\r\n    ip netns exec $CNI_CONTAINERID ip route add default via $gw_ip dev $CNI_IFNAME \r\n<\/pre>\n<p>First, we move the interface to the new network namespace. (After this step, nobody in the host namespace will be able to communicate directly with the container interface. All communication must go through the host end of the pair.) Then, we assign the previously allocated container IP to the interface and create a default route that redirects all traffic to the default gateway (which is the IP address of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cni0<\/code> bridge).<\/p>\n<p>That&#8217;s it. All configuration is done now. The only thing left is to return the information about the created network interface to the caller. This is what we do in the <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/bash-cni#L65-L82\" rel=\"noopener\" target=\"_blank\">last statement of the ADD command<\/a>. (Note how we use the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">&3<\/code> descriptor to print the result to the original <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">stdout<\/code>.)<\/p>\n<p>The other three CNI commands are much simpler and not as important for us. The <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/bash-cni#L86\" rel=\"noopener\" target=\"_blank\">DEL<\/a> command just removes the IP address of the container from the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">reserved_ips<\/code> list. 
Note that we don\u2019t have to delete any network interfaces, because they will be deleted automatically after the container namespace is removed. <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/bash-cni#L94\" rel=\"noopener\" target=\"_blank\">GET<\/a> is intended to return the information about a previously created container, but it is not used by kubelet, so we don\u2019t implement it at all. <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/bash-cni#L99\" rel=\"noopener\" target=\"_blank\">VERSION<\/a> just prints the supported CNI versions.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Testing_the_plugin\"><\/span>Testing the plug-in<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Now, if you execute the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">kubectl get node<\/code> command, you should see that both nodes have switched to the &#8220;Ready&#8221; state. So, let\u2019s try to deploy an application and see how it works. But before we&#8217;re able to do this, we should \u201cuntaint\u201d the master node. By default, the scheduler will not put any pods on the master node, because it is \u201ctainted.\u201d But we want to test cross-node container communication, so we need to deploy some pods on the master, as well as on the worker. The taint can be removed using the following command.<\/p>\n<pre>\r\n$ kubectl taint nodes k8s-master node-role.kubernetes.io\/master-\r\nnode\/k8s-master untainted\r\n<\/pre>\n<p>Next, let\u2019s use <a href=\"https:\/\/github.com\/s-matyukevich\/bash-cni-plugin\/blob\/master\/01_gcp\/test-deployment.yml\" rel=\"noopener\" target=\"_blank\">this test deployment<\/a> to validate our CNI plug-in. <\/p>\n<pre>\r\n$ kubectl apply -f https:\/\/raw.githubusercontent.com\/s-matyukevich\/bash-cni-plugin\/master\/01_gcp\/test-deployment.yml\r\n<\/pre>\n<p>Here, we are deploying four simple pods. 
Two go on the master and the remaining two on the worker. (Pay attention to how we use the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">nodeSelector<\/code> property to specify where each pod should be deployed.) On both master and worker nodes, we have one pod running NGINX and one pod running the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">sleep<\/code> command. Now, let\u2019s run <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">kubectl get pod<\/code> to make sure that all pods are healthy and then get the pods\u2019 IP addresses using the following command:<\/p>\n<pre>\r\n$ kubectl describe pod | grep IP\r\nIP:                 10.244.0.4\r\nIP:                 10.244.1.3\r\nIP:                 10.244.0.6\r\nIP:                 10.244.1.2\r\n<\/pre>\n<p>In your case, the result might be different.<\/p>\n<p>Next, you have to get inside the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">bash-master<\/code> pod.<\/p>\n<pre>\r\n$ kubectl exec -it bash-master bash\r\n<\/pre>\n<p>From inside of the pod, you can ping various addresses to verify network connectivity. 
My result was as follows.<\/p>\n<pre>\r\n$ ping 10.128.0.2 # can ping own host \r\nPING 10.128.0.2 (10.128.0.2) 56(84) bytes of data.\r\n64 bytes from 10.128.0.2: icmp_seq=1 ttl=64 time=0.110 ms\r\n\r\n$ ping 10.128.0.3 # can\u2019t ping different host \r\nPING 10.128.0.3 (10.128.0.3) 56(84) bytes of data.\r\n\r\n$ ping 10.244.0.6 # can\u2019t ping a container on the same host\r\nPING 10.244.0.6 (10.244.0.6) 56(84) bytes of data.\r\n\r\n$ ping 10.244.1.3 # can\u2019t ping a container on a different host\r\nPING 10.244.1.3 (10.244.1.3) 56(84) bytes of data.\r\n\r\n$ ping 108.177.121.113 # can\u2019t ping any external address\r\nPING 108.177.121.113 (108.177.121.113) 56(84) bytes of data.\r\n<\/pre>\n<p>As you can see, the only thing that actually works is container-to-host communication. There&#8217;s still a lot of work for us to do.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Fixing_container-to-container_communication\"><\/span>Fixing container-to-container communication<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>When I first saw this result, I was very puzzled. While we haven\u2019t done anything for cross-host communication and external access to work, container-to-container communication on the same host should work for sure! So, I spent half a day investigating different properties of bridges and virtual network interfaces and sniffing the Ethernet traffic between containers using <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">tcpdump<\/code>. However, the issue turned out to be completely unrelated to our setup. 
In order to explain it, let me first show you the content of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">iptables FORWARD<\/code> chain.<\/p>\n<pre>\r\n$ sudo iptables -S FORWARD\r\n-P FORWARD DROP\r\n-A FORWARD -m comment --comment \"kubernetes forwarding rules\" -j KUBE-FORWARD\r\n-A FORWARD -j DOCKER-ISOLATION\r\n-A FORWARD -o docker0 -j DOCKER\r\n-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT\r\n-A FORWARD -i docker0 ! -o docker0 -j ACCEPT\r\n-A FORWARD -i docker0 -o docker0 -j ACCEPT\r\n<\/pre>\n<p>Here, you can see all the rules in the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">FORWARD<\/code> chain. This chain is applied to all the packets that are forwarded elsewhere instead of being passed to a local process. (When it comes to <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">iptables<\/code> rules, the Linux kernel treats interfaces in non-root network namespaces as if they were external.) Then you may ask why the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">iptables<\/code> rules are applied to container-to-container traffic at all, even though this traffic should not cross the bridge. This is a special Linux feature, which you can inspect more closely <a href=\"https:\/\/serverfault.com\/questions\/162366\/iptables-bridge-and-forward-chain\" rel=\"noopener\" target=\"_blank\">here<\/a>. You may also wonder why it is possible for us to ping the host machine from the container. 
In this case, the destination of the request is local to the host, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">iptables<\/code> applies the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">INPUT<\/code> chain to the request instead of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">FORWARD<\/code> chain.<\/p>\n<p>Now, if you carefully examine the content of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">FORWARD<\/code> chain and the nested chains (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">KUBE-FORWARD<\/code>, <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">DOCKER-ISOLATION<\/code>, and <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">DOCKER<\/code>), you will see that there is no single rule that applies to a request going from one container to another. In this case, the default chain policy\u2014<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">DROP<\/code>\u2014is applied. If you are interested, the default chain policy was set by Docker to enhance security.<\/p>\n<p>In order to fix the issue, we need to add forwarding rules that allow traffic within the whole pod CIDR range to be forwarded freely. You should execute the two commands below on both master and worker VMs. This should fix the issues with communication between containers located on the same host.<\/p>\n<pre>\r\n$ sudo iptables -t filter -A FORWARD -s 10.244.0.0\/16 -j ACCEPT\r\n$ sudo iptables -t filter -A FORWARD -d 10.244.0.0\/16 -j ACCEPT\r\n<\/pre>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Fixing_external_access_using_NAT\"><\/span>Fixing external access using NAT<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>The fact that our container can\u2019t reach the Internet should come as no surprise. 
Our containers are located in a private subnet (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/24<\/code>). Whenever a network packet leaves a container, it has a source IP that belongs to this subnet. Because the subnet is private, the router will drop the packet on its way to the Internet. And even if it doesn\u2019t, and the packet successfully reaches its destination, there will be no way to send a response back to the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.X<\/code> address.<\/p>\n<p>In order to fix this, we should set up <a href=\"https:\/\/en.wikipedia.org\/wiki\/Network_address_translation\" rel=\"noopener\" target=\"_blank\">network address translation<\/a> (NAT) on the host VM. NAT is a mechanism that replaces the source IP address in an outgoing packet with the IP address of the host VM. The mapping to the original source address is kept in the host\u2019s connection-tracking table. When a response arrives at the host VM, the destination address is translated back, and the packet is forwarded to the container network interface. You can easily set up NAT using the following two commands:<\/p>\n<pre>\r\n$ sudo iptables -t nat -A POSTROUTING -s 10.244.0.0\/24 ! -o cni0 -j MASQUERADE\r\n<\/pre>\n<pre>\r\n$ sudo iptables -t nat -A POSTROUTING -s 10.244.1.0\/24 ! -o cni0 -j MASQUERADE\r\n<\/pre>\n<p>The first command should be executed on the master VM, and the second on the worker. Pay attention that here we are NATing only packets with a source IP belonging to the local pod subnet that are not about to be sent to the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">cni0<\/code> bridge. 
<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">MASQUERADE<\/code> is an <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">iptables<\/code> target, which can be used instead of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">SNAT<\/code> target (a source NAT), when the external IP of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">inet<\/code> interface is not known at the moment of writing the rule.<\/p>\n<p>After you set up NAT, you should be able to access external addresses, as well as other VMs, in the cluster.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Communication_between_containers_on_different_VMs\"><\/span>Communication between containers on different VMs<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Finally, there is only one issue left. Containers on different VMs can\u2019t talk to each other. If you think about it, this makes a perfect sense. If we are sending a request from the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.4<\/code> container to the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.1.3<\/code> container, we never specified that the request should be routed through the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.128.0.3<\/code> VM. Usually, in such cases, we can rely on the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">ip route<\/code> command to setup some additional routes for us. If we carried out this experiment on some bare-metal servers directly connected to each other, we could do something like this:<\/p>\n<pre>\r\n$ ip route add 10.244.1.0\/24 via 10.128.0.3 dev ens4 # run on master \r\n$ ip route add 10.244.0.0\/24 via 10.128.0.2 dev ens4 # run on worker\r\n<\/pre>\n<p>This, however, requires the direct layer 2 connectivity between VMs. 
In GCP, the above-listed commands are not going to work. Instead, we can rely on native GCP features to set up routing for us. Just execute the following two commands from the Cloud Shell VM:<\/p>\n<pre>\r\n$ gcloud compute routes create k8s-master --destination-range 10.244.0.0\/24 --network k8s --next-hop-address 10.128.0.2\r\n\r\n$ gcloud compute routes create k8s-worker --destination-range 10.244.1.0\/24 --network k8s --next-hop-address 10.128.0.3\r\n<\/pre>\n<p>The first command creates a route in GCP to forward all packets with a destination IP from the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.244.0.0\/24<\/code> range to the master VM (<code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">10.128.0.2<\/code>). The second command does the same thing for the worker VM. After this, containers located on different VMs should be able to communicate.<\/p>\n<p>Now, all types of communication should work just fine. As a final step, take one more look at the Kubernetes networking solution we\u2019ve just implemented.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Configured-Kubernetes-Network-Model.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Configured-Kubernetes-Network-Model-1024x613.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-35642\" \/><\/a><small>The Kubernetes network after configuration<\/small><\/center><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Conclusion\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>We have done a great job writing our own CNI plug-in and implementing it in a Kubernetes cluster, but our plug-in is still far from perfect. It doesn\u2019t cover a lot of scenarios, and we haven\u2019t discussed a lot of important details related to Kubernetes networking. 
The following list contains items that, from my point of view, would be nice to discuss and implement:<\/p>\n<ul>\n<li style=\"margin-bottom: 6px;\">In the real world, nobody writes a CNI plug-in from scratch like we did; everybody uses the <a href=\"https:\/\/github.com\/containernetworking\/plugins\" rel=\"noopener\" target=\"_blank\">default CNI plug-ins<\/a> instead and delegates all the work to them. We can rewrite our plug-in to do so, as well.<\/li>\n<ul>\n<li style=\"margin-bottom: 6px;\">We can replace our IP management with the <a href=\"https:\/\/github.com\/containernetworking\/plugins\/tree\/master\/plugins\/ipam\/host-local\" rel=\"noopener\" target=\"_blank\">host-local IPAM plug-in<\/a>. <\/li>\n<li style=\"margin-bottom: 6px;\">We can also replace the rest of our plug-in with the <a href=\"https:\/\/github.com\/containernetworking\/plugins\/tree\/master\/plugins\/main\/bridge\" rel=\"noopener\" target=\"_blank\">bridge CNI plug-in<\/a>. It actually works in exactly the same way as ours.<\/li>\n<\/ul>\n<li style=\"margin-bottom: 6px;\">We did a lot of manual work to prepare both master and worker VMs. Real CNI plug-ins, for instance, <a href=\"https:\/\/github.com\/containernetworking\/plugins\/tree\/master\/plugins\/meta\/flannel\" rel=\"noopener\" target=\"_blank\">Flannel<\/a>, usually rely on a special agent to do that. It would be nice to examine how it works.<\/li>\n<li style=\"margin-bottom: 6px;\">Usually, real CNI plug-ins are deployed in Kubernetes itself. They utilize Kubernetes features, such as service accounts, to get access to the API. We can examine some of the plug-in manifests to see how this works.<\/li>\n<li style=\"margin-bottom: 6px;\">We have seen that there are a lot of different ways to set up routing between cluster VMs. We did it using GCP routes and discussed how it might be done using a Linux routing table. 
It would be cool to discuss or implement some other options, such as <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">vxlan<\/code> or UDP encapsulation.<\/li>\n<li style=\"margin-bottom: 6px;\">Our plug-in doesn\u2019t support Kubernetes network policies or mesh networking. It might be worth taking a look at the plug-ins that do and checking out how they work.<\/li>\n<li>Kubernetes has a lot of other networking-related components besides CNI plug-ins, such as <a href=\"https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-proxy\/\" rel=\"noopener\" target=\"_blank\">kube-proxy<\/a>, which implements Kubernetes services, and CoreDNS. We can discuss these, as well.<\/li>\n<\/ul>\n<p>I could continue this list, but I am afraid it would become too long. Right now, I am considering writing a second part of this series, so I would appreciate any feedback and suggestions of topics you want me to cover. I hope that this post was insightful and helped you understand Kubernetes much better.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Further_reading\"><\/span>Further reading<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<ul>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/a-multitude-of-kubernetes-deployment-tools-kubespray-kops-and-kubeadm\/\">A Multitude of Kubernetes Deployment Tools: Kubespray, kops, and kubeadm<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/managing-multi-cluster-workloads-with-google-kubernetes-engine\/\">Managing Multi-Cluster Workloads with Google Kubernetes Engine<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/enabling-persistent-storage-for-docker-and-kubernetes-on-oracle-cloud\/\">Enabling Persistent Storage for Docker and Kubernetes on Oracle Cloud<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Exploring the internals of Kubernetes networking<\/p>\n<p>When I was preparing Kubernetes training courses, I found an area that induced a lot 
of interest, but at the same time, was very difficult to explain\u2014the internals of Kubernetes networking. Everybody wants to know so much:<\/p>\n<p>How pods deployed to different physical nodes can communicate [&#8230;]<\/p>\n","protected":false},"author":94,"featured_media":35769,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[214],"tags":[873,570,912],"class_list":["post-35625","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tutorials","tag-cloud-native","tag-containers","tag-kubernetes"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash | Altoros<\/title>\n<meta name=\"description\" content=\"Learn how to create a container network interface plug-in, configure and test it, as well as enable external access and communication between containers.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash | Altoros\" \/>\n<meta property=\"og:description\" content=\"Exploring the internals of Kubernetes networking When I was preparing Kubernetes training courses, I found an area that induced a lot of interest, but at the same time, was very difficult to explain\u2014the internals of Kubernetes networking. 
Everybody wants to know so much: How pods deployed to different physical nodes can communicate [...]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/\" \/>\n<meta property=\"og:site_name\" content=\"Altoros\" \/>\n<meta property=\"article:published_time\" content=\"2018-08-15T18:36:41+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2018-09-06T10:06:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"640\" \/>\n\t<meta property=\"og:image:height\" content=\"360\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"author\" content=\"Siarhei Matsiukevich\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Siarhei Matsiukevich\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"29 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/\"},\"author\":{\"name\":\"Siarhei Matsiukevich\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/#\\\/schema\\\/person\\\/5c29ff93db657e3cf6552d5e642003d9\"},\"headline\":\"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash\",\"datePublished\":\"2018-08-15T18:36:41+00:00\",\"dateModified\":\"2018-09-06T10:06:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/\"},\"wordCount\":5023,\"commentCount\":20,\"image\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2018\\\/08\\\/Kubernetes-Network-Model-Configuration-Management-v5.gif\",\"keywords\":[\"Cloud-Native\",\"Containers\",\"Kubernetes\"],\"articleSection\":[\"Tutorials\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/\",\"url\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/\",\"name\":\"Kubernetes Networking: How to Write Your Own CNI 
Plug-in with Bash | Altoros\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2018\\\/08\\\/Kubernetes-Network-Model-Configuration-Management-v5.gif\",\"datePublished\":\"2018-08-15T18:36:41+00:00\",\"dateModified\":\"2018-09-06T10:06:12+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/#\\\/schema\\\/person\\\/5c29ff93db657e3cf6552d5e642003d9\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2018\\\/08\\\/Kubernetes-Network-Model-Configuration-Management-v5.gif\",\"contentUrl\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2018\\\/08\\\/Kubernetes-Network-Model-Configuration-Management-v5.gif\",\"width\":640,\"height\":360},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/\"},{\"@type\":\"ListIte
m\",\"position\":2,\"name\":\"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/\",\"name\":\"Altoros\",\"description\":\"Insight\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/#\\\/schema\\\/person\\\/5c29ff93db657e3cf6552d5e642003d9\",\"name\":\"Siarhei Matsiukevich\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/06\\\/Sergey-Matyukevich-150x150.jpg\",\"url\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/06\\\/Sergey-Matyukevich-150x150.jpg\",\"contentUrl\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/06\\\/Sergey-Matyukevich-150x150.jpg\",\"caption\":\"Siarhei Matsiukevich\"},\"description\":\"Siarhei Matsiukevich is a Cloud Engineer and Go Developer at Altoros. With 6+ years in software engineering, he is an expert in cloud automation and designing architectures for complex cloud-based systems. An active member of the Go community, Siarhei is a frequent contributor to open-source projects, such as Ubuntu and Juju Charms.\",\"url\":\"https:\\\/\\\/www.altoros.com\\\/blog\\\/author\\\/siarhei-matsiukevich\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash | Altoros","description":"Learn how to create a container network interface plug-in, configure and test it, as well as enable external access and communication between containers.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/","og_locale":"en_US","og_type":"article","og_title":"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash | Altoros","og_description":"Exploring the internals of Kubernetes networking When I was preparing Kubernetes training courses, I found an area that induced a lot of interest, but at the same time, was very difficult to explain\u2014the internals of Kubernetes networking. Everybody wants to know so much: How pods deployed to different physical nodes can communicate [...]","og_url":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/","og_site_name":"Altoros","article_published_time":"2018-08-15T18:36:41+00:00","article_modified_time":"2018-09-06T10:06:12+00:00","og_image":[{"width":640,"height":360,"url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif","type":"image\/gif"}],"author":"Siarhei Matsiukevich","twitter_misc":{"Written by":"Siarhei Matsiukevich","Est. 
reading time":"29 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#article","isPartOf":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/"},"author":{"name":"Siarhei Matsiukevich","@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/5c29ff93db657e3cf6552d5e642003d9"},"headline":"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash","datePublished":"2018-08-15T18:36:41+00:00","dateModified":"2018-09-06T10:06:12+00:00","mainEntityOfPage":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/"},"wordCount":5023,"commentCount":20,"image":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#primaryimage"},"thumbnailUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif","keywords":["Cloud-Native","Containers","Kubernetes"],"articleSection":["Tutorials"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/","url":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/","name":"Kubernetes Networking: How to Write Your Own CNI Plug-in with Bash | 
Altoros","isPartOf":{"@id":"https:\/\/www.altoros.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#primaryimage"},"image":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#primaryimage"},"thumbnailUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif","datePublished":"2018-08-15T18:36:41+00:00","dateModified":"2018-09-06T10:06:12+00:00","author":{"@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/5c29ff93db657e3cf6552d5e642003d9"},"breadcrumb":{"@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#primaryimage","url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif","contentUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/08\/Kubernetes-Network-Model-Configuration-Management-v5.gif","width":640,"height":360},{"@type":"BreadcrumbList","@id":"https:\/\/www.altoros.com\/blog\/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.altoros.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Kubernetes Networking: How to Write Your Own CNI Plug-in with 
Bash"}]},{"@type":"WebSite","@id":"https:\/\/www.altoros.com\/blog\/#website","url":"https:\/\/www.altoros.com\/blog\/","name":"Altoros","description":"Insight","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.altoros.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/5c29ff93db657e3cf6552d5e642003d9","name":"Siarhei Matsiukevich","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2016\/06\/Sergey-Matyukevich-150x150.jpg","url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2016\/06\/Sergey-Matyukevich-150x150.jpg","contentUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2016\/06\/Sergey-Matyukevich-150x150.jpg","caption":"Siarhei Matsiukevich"},"description":"Siarhei Matsiukevich is a Cloud Engineer and Go Developer at Altoros. With 6+ years in software engineering, he is an expert in cloud automation and designing architectures for complex cloud-based systems. 
An active member of the Go community, Siarhei is a frequent contributor to open-source projects, such as Ubuntu and Juju Charms.","url":"https:\/\/www.altoros.com\/blog\/author\/siarhei-matsiukevich\/"}]}},"_links":{"self":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/35625","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/users\/94"}],"replies":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/comments?post=35625"}],"version-history":[{"count":76,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/35625\/revisions"}],"predecessor-version":[{"id":36468,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/35625\/revisions\/36468"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/media\/35769"}],"wp:attachment":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/media?parent=35625"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/categories?post=35625"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/tags?post=35625"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}