Cluster self-healing does not work with Cilium CNI #723
This issue is currently awaiting triage. If the repository maintainers determine this is a relevant issue, they will accept it by applying the appropriate label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
It would be good to have more detail from the cloud-controller-manager and kubelet logs; please also add the name of the affected node.
I have two instances and deleted both in one command for the self-healing test. After the nodes came back up, I checked network connectivity and the cluster network did not work. CloudController logs: Kubelet logs:
The Cilium logs seem to be the key to understanding the problem.
Create a simple cluster with kOps (https://kops.sigs.k8s.io/getting_started/gce/#creating-a-cluster)
k8s: v1.29.3
Recreate the cluster nodes with the gcloud CLI:
$ gcloud compute instances delete --zone=us-central1-a control-plane-us-central1-a-vjvg nodes-us-central1-a-pfb4
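The steps above can be sketched end to end. This is a hedged sketch following the kOps GCE quickstart linked above; the state-store bucket and cluster name (`example-kops-state`, `example.k8s.local`) are illustrative placeholders, not values from the issue, and only the final `gcloud` command is taken verbatim from the report.

```shell
# Sketch of the reproduction (placeholder bucket/cluster names):
export KOPS_STATE_STORE=gs://example-kops-state/   # hypothetical state bucket

# Create a Cilium-networked cluster on GCE, matching the reported k8s version
kops create cluster \
  --name=example.k8s.local \
  --zones=us-central1-a \
  --networking=cilium \
  --kubernetes-version=1.29.3
kops update cluster --name=example.k8s.local --yes

# Delete both instances in one command to trigger self-healing
# (instance names here are the ones from the issue):
gcloud compute instances delete --zone=us-central1-a \
  control-plane-us-central1-a-vjvg nodes-us-central1-a-pfb4
```

The managed instance groups that kOps creates should then recreate both VMs automatically.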
Expected result: the cluster returns to normal operation.
Actual result: after the nodes come back, the cluster network is unhealthy.
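To make "cluster network unhealthy" concrete, a few standard checks can confirm the state once the recreated nodes register (this assumes the `cilium` CLI and `kubectl` are installed; the `k8s-app=cilium` label is the one used by the standard Cilium DaemonSet):

```shell
# Are the recreated nodes registered and Ready?
kubectl get nodes -o wide

# Is the Cilium agent running on each node?
kubectl -n kube-system get pods -l k8s-app=cilium

# Cilium's own health summary and an end-to-end datapath check:
cilium status --wait
cilium connectivity test
```

In the failing case one would expect the connectivity test (or the agent pods themselves) to report errors even though the nodes show Ready.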
In the case of the in-tree cloud provider, with the feature gate DisableCloudProviders: false and the --cloud-provider=gce flag set on the kubelet, kube-apiserver, and kube-controller-manager, cluster self-healing works properly.
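The working in-tree configuration described above corresponds roughly to the following flags. This is a sketch only: flag placement varies by install method (systemd units for the kubelet, static pod manifests for the control-plane components), and the trailing ellipses stand for each component's other, unrelated flags.

```shell
# In-tree GCE cloud provider wiring that self-heals correctly (illustrative):
kubelet                 --cloud-provider=gce --feature-gates=DisableCloudProviders=false ...
kube-apiserver          --cloud-provider=gce --feature-gates=DisableCloudProviders=false ...
kube-controller-manager --cloud-provider=gce --feature-gates=DisableCloudProviders=false ...
```

By contrast, the failing setup presumably runs with the external cloud-controller-manager (the kOps default on recent Kubernetes), which is where the reported logs come from.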