[k8s-keystone-auth] kubeadm init phase fails when webhook not ready #2575
Comments
Well, sounds like a static pod is the way to go, but as we can read in the docs:
I see that SA is only needed to fetch the ConfigMap from K8s API. I think that you can use
@dulek Ok that sounds promising. I'll give that a try. Thank you!
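The tail of that suggestion is not preserved above; presumably it refers to the webhook's `--keystone-policy-file` flag, which loads the authorization policy from a local file instead of a ConfigMap and so removes the need for a ServiceAccount. A minimal static pod sketch under that assumption (image tag, paths, and Keystone URL are placeholders, not values from this thread):

```yaml
# /etc/kubernetes/manifests/k8s-keystone-auth.yaml -- hypothetical sketch.
# Assumes --keystone-policy-file so the webhook needs no API access at startup.
apiVersion: v1
kind: Pod
metadata:
  name: k8s-keystone-auth
  namespace: kube-system
spec:
  hostNetwork: true          # listen on the host so kube-apiserver can reach it
  containers:
    - name: k8s-keystone-auth
      image: registry.k8s.io/provider-os/k8s-keystone-auth:v1.30.0   # placeholder tag
      args:
        - ./bin/k8s-keystone-auth
        - --tls-cert-file=/etc/kubernetes/pki/webhook/tls.crt
        - --tls-private-key-file=/etc/kubernetes/pki/webhook/tls.key
        - --keystone-url=https://keystone.example.com/v3              # placeholder
        - --keystone-policy-file=/etc/kubernetes/keystone-policy.json
        - --listen=127.0.0.1:8443
      volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki/webhook
          readOnly: true
        - name: policy
          mountPath: /etc/kubernetes/keystone-policy.json
          readOnly: true
  volumes:
    - name: pki
      hostPath:
        path: /etc/kubernetes/pki/webhook
        type: Directory
    - name: policy
      hostPath:
        path: /etc/kubernetes/keystone-policy.json
        type: File
```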
I've encountered the same issue running the k8s-keystone-auth application as a static pod. The static pod starts, but the pod logs are full of errors: it appears that even though the pod runs, it is not actually listening for connections on the host.
So far the only solution that has worked for me is to create the cluster without the `--authorization-*` webhook arguments and add them after `kubeadm init` has run. Create the following kustomize definition:
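The commenter's actual definition was not preserved in this thread; the following is a minimal sketch of the idea, with illustrative paths, argument indexes, and flag values:

```yaml
# kustomization.yaml -- illustrative sketch, not the commenter's actual definition.
# Patches the kubeadm-generated kube-apiserver static pod manifest to enable the
# Keystone authorization webhook.
resources:
  - kube-apiserver.yaml   # copied out of /etc/kubernetes/manifests before building
patches:
  - target:
      kind: Pod
      name: kube-apiserver
    patch: |-
      # Replace the existing --authorization-mode flag rather than adding a second
      # one: kube-apiserver defaults to Node,RBAC and rejects a duplicated flag.
      # The index (2 here) is illustrative and depends on the generated manifest.
      - op: replace
        path: /spec/containers/0/command/2
        value: --authorization-mode=Node,RBAC,Webhook
      - op: add
        path: /spec/containers/0/command/-
        value: --authorization-webhook-config-file=/etc/kubernetes/webhooks/keystone-authz.yaml
```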
Add the following to the `postKubeadmCommands`:
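The exact commands are likewise missing above; a sketch of what the `KubeadmControlPlane` excerpt might look like, assuming the kustomization lives at a placeholder path on the host:

```yaml
# KubeadmControlPlane excerpt -- illustrative paths; assumes kustomize is
# available on the control-plane image. Writing the result back into
# /etc/kubernetes/manifests makes the kubelet restart kube-apiserver with
# the webhook flags added.
spec:
  kubeadmConfigSpec:
    postKubeadmCommands:
      - cp /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/kustomize/kube-apiserver.yaml
      - kustomize build /etc/kubernetes/kustomize -o /etc/kubernetes/manifests/kube-apiserver.yaml
```

Because `pre`/`postKubeadmCommands` also run on upgrades, the patch is reapplied whenever kubeadm regenerates the manifest, as discussed below.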
With this setup the webhook arguments are only applied once the control plane is already up.
I think the issue with this is that when/if the cluster is upgraded, the configuration will get wiped, no?
The pre/postKubeadmCommands run for an upgrade the same as they do for cluster initialization, so kustomize modifies the kube-apiserver.yaml manifest to add the auth webhook.
Perfect, alright, so that sounds like a not-ideal but functional solution.
it does not add the --authorization-* arguments until after kubeadm init runs. Once kubeadm init has finished, run kustomize to add the arguments to the kube-apiserver.yaml manifest. ref: kubernetes/cloud-provider-openstack#2575
* update patch versions and add zuul CI jobs for new versions
* Use cloud images as base
* Update versions to build cleanly
* use kustomize to enable keystone webhook after kubeadm init: it does not add the --authorization-* arguments until after kubeadm init runs. Once kubeadm init has finished, run kustomize to add the arguments to the kube-apiserver.yaml manifest. ref: kubernetes/cloud-provider-openstack#2575
* fix lint error and add 1.29 and 1.30 jobs
* append webhook authz mode only to avoid duplication with defaults: the api-server sets Node and RBAC as default authz modes in its command args and does not allow the mode to be specified more than once
* fix typo
* fix lint error
* make a workaround for cilium conformance test failures (cilium/cilium#29913, kubernetes/kubernetes#120069, cilium/cilium#9207)
* fix flake8 errors

Signed-off-by: Mohammed Naser <mnaser@vexxhost.com>
Co-authored-by: okozachenko1203 <okozachenko@vexxhost.com>
Co-authored-by: Mohammed Naser <mnaser@vexxhost.com>
Co-authored-by: Oleksandr K. <okozachenko1203@gmail.com>
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

`kubeadm` is unable to bootstrap the admin user when `--authorization-mode` has `Webhook` at cluster init.

For background, we're running a ClusterAPI-based platform for deploying clusters and have been using the following configuration to set up the `kube-apiserver` to use the k8s-keystone-auth webhook. This is applied when `kubeadm init` runs and is bootstrapping the control plane, and has the net effect of creating `/etc/kubernetes/manifests/kube-apiserver.yaml` with the necessary arguments for the auth webhook.

Once the control plane has been bootstrapped, we are using a Helm chart to deploy `k8s-keystone-auth`, similar to the example code. Up until Kubernetes v1.28.9 this has worked fine even though the webhook initially can't reach the `k8s-keystone-auth` Pod.

In v1.29, however, `kubeadm init` fails, presumably because the webhook is not responding and also because the `admin` user is no longer in `system:masters` and cannot authenticate (see kubernetes/kubernetes#121305).

What you expected to happen:
How to reproduce it:

Create a v1.29 cluster with the following `kube-apiserver` arguments when `kubeadm init` first initialises the control plane:
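The reporter's argument list did not survive above; a representative sketch of such a kubeadm `ClusterConfiguration`, with placeholder webhook kubeconfig paths (the authentication flag is included only as a common companion setting, not confirmed by the report):

```yaml
# kubeadm ClusterConfiguration excerpt -- representative values only.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC,Webhook
    authorization-webhook-config-file: /etc/kubernetes/webhooks/keystone-authz.yaml
    authentication-token-webhook-config-file: /etc/kubernetes/webhooks/keystone-authn.yaml
  extraVolumes:
    - name: webhooks
      hostPath: /etc/kubernetes/webhooks
      mountPath: /etc/kubernetes/webhooks
      readOnly: true
      pathType: Directory
```

Anything else we need to know?: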
I have a workaround in our ClusterAPI setup in which I do not add the `--authorization-*` arguments until after `kubeadm init` has run. Once `kubeadm init` has finished, I run `kustomize` to add the arguments to the `kube-apiserver.yaml` manifest. It seems to work, but we're concerned about a potential race condition and also wondering if there is a cleaner approach.
Your docs suggest using static pods; however, I am not sure how to set up the needed ServiceAccount, ClusterRoleBindings, etc. It seems like the pod would also just not run until that is in place. So far my attempts to use a static pod at cluster init have also failed.
Environment: