Support a kubelet on the master node for pluggable CNI (calico, canal, etc) #403
What this PR does / why we need it:
This PR demonstrates deploying kubelets on the master nodes, laying the foundation for using Kubernetes itself to install system-level drivers and addons such as pluggable CNI drivers. It includes pluggable CNI drivers for Calico, Canal, and Flannel.
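With kubelets on the masters, a CNI driver can be deployed as an ordinary Kubernetes addon rather than a BOSH job. A minimal sketch of that flow, assuming hypothetical local manifest files rather than anything shipped in this PR:

```bash
# Install the chosen CNI as a daemonset addon (manifest names are illustrative).
kubectl apply -f calico.yaml          # or canal.yaml / flannel.yaml

# The daemonset (typically in kube-system) should schedule one CNI pod per node,
# masters included.
kubectl -n kube-system get daemonset
```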
How can this PR be verified?
Kubo-ci tests are forthcoming. The configurations verified manually were (a rough verification sketch follows this list):
- Just master kubelets
- With flannel running as a daemonset
- With flannel + calico running as a daemonset
- With calico CNI running as a daemonset (requires intra-cluster L3; should work on GCP, vSphere, OpenStack, and AWS, but not on Azure without tweaking the manifest for the new VXLAN support)
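A rough verification sketch for any of the scenarios above; the pod name, image, and grep pattern are assumptions, not output from kubo-ci:

```bash
# Masters should now register as nodes alongside the workers.
kubectl get nodes -o wide

# The chosen CNI should run as a daemonset with a pod on every node, masters included.
kubectl -n kube-system get daemonset
kubectl -n kube-system get pods -o wide | grep -E 'calico|canal|flannel'

# Quick smoke test that pod networking still works end to end.
kubectl run smoke --image=nginx --restart=Never
kubectl get pod smoke -o wide
```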
Is there any change in kubo-release?
Yes, to enable taints/labels on the master node: cloudfoundry-incubator/kubo-release#333
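The idea is that masters register with a taint (and a role label) so regular workloads stay off them, while the CNI daemonsets carry a matching toleration and still get scheduled there. A sketch for inspecting this, using a placeholder node name and without assuming the exact keys chosen in kubo-release:

```bash
# Show the labels and taints the master kubelets registered with.
kubectl get nodes --show-labels
kubectl describe node <master-node> | grep -A2 -i taints
```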
Is there any change in kubo-ci?
Integration/conformance tests for this configuration variant (and a few pluggable CNIs) are forthcoming.
Does this affect upgrade, or is there any migration required?
For the master kubelet ops-file, this only adds extra worker nodes (the masters themselves) to the cluster.
Removing BOSH flannel and adding a pluggable CNI requires a BOSH VM recreate and may lead to some cluster network partitions, depending on which CNI driver you are switching to.
Swapping pluggable CNIs probably requires `kubectl delete`-ing the CNI daemonset and `bosh deploy --recreate`-ing the VMs to clean up any `ip link` cruft when swapping the CNI ops-file; a rough sketch follows.
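A rough sketch of that sequence, with hypothetical deployment, daemonset, and ops-file names:

```bash
# 1. Remove the old CNI daemonset (plus any RBAC/config objects it shipped with).
kubectl -n kube-system delete daemonset flannel      # daemonset name is illustrative

# 2. Redeploy with the new CNI ops-file and recreate the VMs so stale
#    ip link / bridge devices left by the previous CNI are cleaned up.
bosh -d cfcr deploy cfcr.yml \
  -o ops-files/use-calico-cni.yml \
  --recreate
```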
Which issue(s) this PR fixes:
N/A
Release note: