Static Pods
Static Pods are Kubernetes Pods that are run by the kubelet on a single node and are not managed by the Kubernetes cluster itself. This means that whilst the Pod can appear within Kubernetes, it can't make use of a variety of Kubernetes functionality (such as the Kubernetes token or ConfigMap resources). The static Pod approach is required for kubeadm because of the sequence of actions kubeadm performs when bootstrapping a cluster. Ideally, we want kube-vip to be part of the Kubernetes cluster, but we also need kube-vip to provide an HA virtual IP as part of the installation itself.
The sequence of events for building a highly available Kubernetes cluster with kubeadm and kube-vip is as follows:
- Generate a kube-vip manifest in the static Pods manifest directory (see the Generating a Manifest section below).
- Run kubeadm init with the --control-plane-endpoint flag using the VIP address provided when generating the static Pod manifest (a sketch follows this list).
- The kubelet will parse and execute all manifests, including the kube-vip manifest generated in step one and the other control plane components, including kube-apiserver.
- kube-vip starts and advertises the VIP address.
- The kubelet on this first control plane will connect to the VIP advertised in the previous step.
- kubeadm init finishes successfully on the first control plane.
- Using the output from the kubeadm init command on the first control plane, run the kubeadm join command on the remainder of the control planes.
- Copy the generated kube-vip manifest to the remainder of the control planes and place it in their static Pods manifest directory (default of /etc/kubernetes/manifests/).
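As a sketch of the kubeadm init step, assuming the example VIP of 192.168.0.40 used later in this guide (the optional --upload-certs flag uploads the control plane certificates so the other control planes can join without copying them manually):
sudo kubeadm init --control-plane-endpoint "192.168.0.40:6443" --upload-certs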
kube-vip as HA, Load Balancer, or both
The functionality of kube-vip depends on the flags used to create the static Pod manifest. By passing in --controlplane we instruct kube-vip to provide and advertise a virtual IP to be used by the control plane. By passing in --services we tell kube-vip to provide load balancing for Kubernetes Service resources created inside the cluster. With both enabled, kube-vip will manage a virtual IP address that is passed through its configuration for a highly available Kubernetes cluster. It will also watch Services of type LoadBalancer and, once their service.metadata.annotations["kube-vip.io/loadbalancerIPs"] or spec.loadBalancerIP is updated (typically by a cloud controller, optionally including the one provided by kube-vip for on-prem scenarios), it will advertise this address using BGP/ARP. In this example, we will use both when generating the manifest.
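For illustration, a minimal Service of type LoadBalancer that requests a specific address via the annotation might look like the following; the name demo-app, the ports, and the address 192.168.0.50 are assumptions for the example, not values from this guide:
apiVersion: v1
kind: Service
metadata:
  name: demo-app                                  # hypothetical application
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.0.50"   # assumed spare address on the node network
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
Once the address is set, kube-vip advertises it via ARP or BGP just as it does the control plane VIP.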
Generating a Manifest
To make the various functionality within kube-vip easier to consume, we can use the kube-vip container itself to generate our static Pod manifest. We do this by running the kube-vip image as a container and passing in the flags for the capabilities we want to enable.
Set configuration details
We use environment variables to predefine the inputs we will supply to kube-vip.
Set the VIP address to be used for the control plane:
export VIP=192.168.0.40
Set the INTERFACE name to the name of the interface on the control plane(s) which will announce the VIP. In many Linux distributions this can be found with the ip a command.
export INTERFACE=ens160
Get the latest version of the kube-vip release by parsing the GitHub API. This step requires that jq and curl are installed.
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
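To confirm the value that was parsed:
echo "$KVVERSION"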
To set the version manually instead, export the desired release tag:
export KVVERSION=v0.5.0
Creating the manifest
With the input values now set, we can pull and run the kube-vip image, supplying it the desired flags and values. Once the static Pod manifest is generated for your desired method (ARP or BGP), if running multiple control plane nodes, ensure it is placed in each control plane's static manifest directory (by default, /etc/kubernetes/manifests).
Depending on the container runtime, use one of the two aliased commands to create a kube-vip command which runs the kube-vip image as a container.
For containerd, run the below command:
alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
For Docker, run the below command:
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
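Optionally, sanity-check the alias by printing the help for the subcommand used below; any output confirms the image can be pulled and run (depending on your runtime setup, this may require sudo):
kube-vip manifest pod --help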
ARP
With the inputs and alias command set, we can run the kube-vip container to generate a static Pod manifest which will be directed to a file at /etc/kubernetes/manifests/kube-vip.yaml. As such, this is assumed to run on the first control plane node.
This configuration will create a manifest that starts kube-vip providing control plane VIP and Kubernetes Service management using the leaderElection method and ARP. When this instance is elected as the leader, it will bind the vip to the specified interface. This is the same behavior for Services of type LoadBalancer.
Note: When running these commands on a to-be control plane node, sudo access may be required, along with pre-creation of the /etc/kubernetes/manifests/ directory.
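For example, to create the directory ahead of time:
sudo mkdir -p /etc/kubernetes/manifests/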
kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
Example ARP Manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: ens192
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: svc_enable
      value: "true"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.0.40
    image: ghcr.io/kube-vip/kube-vip:v0.4.0
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_TIME
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
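After kubeadm init completes, one way to confirm that the elected leader has bound the VIP is to inspect the interface on each control plane (a quick check, assuming the VIP and INTERFACE variables from earlier; only the current leader should show the address):
ip addr show $INTERFACE | grep "$VIP"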
BGP
This configuration will create a manifest that starts kube-vip providing control plane VIP and Kubernetes Service management. Unlike ARP, all nodes in the BGP configuration will advertise virtual IP addresses.
Note that we bind the address to lo as we don't want multiple devices that have the same address on public interfaces. We can specify all the peers in a comma-separated list in the format address:AS:password:multihop. For example, 192.168.0.10:65000::false means peer address 192.168.0.10, AS 65000, no password, and multihop disabled.
export INTERFACE=lo
kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --bgp \
    --localAS 65000 \
    --bgpRouterID 192.168.0.2 \
    --bgppeers 192.168.0.10:65000::false,192.168.0.11:65000::false | tee /etc/kubernetes/manifests/kube-vip.yaml
Example BGP Manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "false"
    - name: port
      value: "6443"
    - name: vip_interface
      value: lo
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: bgp_enable
      value: "true"
    - name: bgp_routerid
      value: 192.168.0.2
    - name: bgp_as
      value: "65000"
    - name: bgp_peeraddress
    - name: bgp_peerpass
    - name: bgp_peeras
      value: "65000"
    - name: bgp_peers
      value: 192.168.0.10:65000::false,192.168.0.11:65000::false
    - name: address
      value: 192.168.0.40
    image: ghcr.io/kube-vip/kube-vip:v0.3.9
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_TIME
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
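Once the BGP sessions are established and routes have propagated, an end-to-end check from any machine that can reach the VIP is to query the API server's version endpoint (this assumes the default RBAC rules, which expose /version anonymously):
curl -k https://192.168.0.40:6443/version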