
Support for "node-role" labelling #1941

Closed
centr0nix opened this issue Jun 15, 2022 · 13 comments
Labels
feature New feature or request

Comments

@centr0nix

Tell us about your request
Add support for Node Role labelling

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We would like to be able to add node-role labels as needed. PR https://github.com/aws/karpenter/pull/1882 adds node.kubernetes.io to the label domain exception list, but from what I can see in the current Karpenter version it does not add node-role.kubernetes.io.
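
For illustration, here's a minimal sketch of what we'd like to write, assuming the v1alpha5 Provisioner API (the worker role name is just an example); Karpenter's restricted label-domain validation rejects this today:

```yaml
# What we would like to write (currently rejected by Karpenter's
# label-domain validation). "worker" is an example role name.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: worker
spec:
  labels:
    node-role.kubernetes.io/worker: ""
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
```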

Are you currently working around this issue?
We aren't.


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@centr0nix centr0nix added the feature New feature or request label Jun 15, 2022
@felix-zhe-huang
Contributor

felix-zhe-huang commented Jun 15, 2022

Hi @centr0nix,

Thank you very much for your feedback. We do have some concerns about allowing node-role.kubernetes.io because it is a well-documented reserved label domain.

To make matters more complicated, labels provided in the Karpenter provisioner are injected into kubelet, but node-role.kubernetes.io is not a label that kubelet is allowed to set. That means allowing node-role.kubernetes.io would be another anti-pattern, just like kops.k8s.io.

We would love to learn a bit more about your use cases so that we can come up with a better way to enable this. Would you mind sharing more details, especially:

  • Other than the k8s components, do you have any other components that consume those labels?
  • Do you need Karpenter to dynamically generate the label values?
  • Are the labels used in the provisioner's labels field or in its requirements? (See the sketch below for the distinction.)
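
For reference, a sketch of the distinction that last question is getting at, as I understand it, against the v1alpha5 Provisioner API (the team label is a made-up example):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: example
spec:
  # spec.labels are written onto every node this provisioner creates
  # (and passed through to kubelet):
  labels:
    team: my-team        # made-up example
  # spec.requirements constrain node properties and feed Karpenter's
  # scheduling simulation rather than directly labelling the node:
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
```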

@olemarkus
Contributor

Those labels must be managed through the Kubernetes installer; that's why they are not allowed to be set through kubelet flags. So Karpenter is the wrong place for this. Karpenter only needs to know about these labels to do scheduling simulation correctly.

@grosser

grosser commented Sep 8, 2022

FYI, we set a custom node-type label and then have an external helper pod set the node-role label once the node joins the cluster. Not pretty, but it works 🤷
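
For anyone curious, a rough sketch of that kind of helper, assuming a one-shot Job (all names here are hypothetical; the ServiceAccount needs RBAC to get, list, and patch nodes, and a real version would watch for new nodes rather than run once):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: label-node-roles                 # hypothetical name
spec:
  template:
    spec:
      serviceAccountName: node-labeler   # hypothetical; needs nodes get/list/patch
      restartPolicy: OnFailure
      containers:
        - name: labeler
          image: bitnami/kubectl:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Mirror a custom node-type label into the reserved
              # node-role.kubernetes.io domain for each labelled node.
              for node in $(kubectl get nodes -l node-type -o name); do
                role=$(kubectl get "$node" -o jsonpath='{.metadata.labels.node-type}')
                kubectl label --overwrite "$node" "node-role.kubernetes.io/${role}="
              done
```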

@oliverblaha

I think this is still a valid use case.

Of course it would be possible to use custom labels to achieve the same result; however, node-role.kubernetes.io/ is the de-facto standard for assigning a role to nodes. The key node-role.kubernetes.io/control-plane is a well-known label and taint (originally from kubeadm, but also used by others) for control plane nodes. The <role> part of node-role.kubernetes.io/<role> is, for example, also shown in the node's role column in k9s.

Here it is emphasized that the label is set after creating the node, which would indeed suggest that this might be out of scope for Karpenter.

I believe that a node's role is a valid criterion for node affinity, and for that to work, Karpenter must be aware of the label at the provisioner level, correct @olemarkus?
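
For concreteness, here is the kind of pod spec I mean, as a minimal sketch with a hypothetical worker role. Karpenter can only satisfy this affinity if it knows at provisioning time that the node will carry the label:

```yaml
# Pod spec fragment: node affinity on a node-role label ("worker" is
# an example role, not something defined by this issue).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/worker
              operator: Exists
```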

Basically, the current situation forces users who want to label (and potentially taint) nodes by their role to either:

  • use non-standard labels (and potentially taints, if wanting to match the naming), or
  • add labels themselves later (overhead), while losing the ability to have Karpenter match the provisioners to those labels, or
  • some mix of the above.

Thoughts?

@olemarkus
Contributor

Karpenter must know that the node will receive such node labels, but it cannot directly set them. I haven't tracked Karpenter development and behavior in a while, but at some point, when you specified kubernetes.io labels on the provisioner, Karpenter became aware of them for scheduling simulation purposes without trying to apply them to nodes.

@oliverblaha

@bwagner5 @felix-zhe-huang, what do you think? Isn't it a valid use case to have node roles assigned by Karpenter directly? Is there some workaround? (One that still allows using those labels to match the proper provisioner.)

@tmoreadobe

Just poking around here: has anyone found a workaround for this one, or an app they use? I have been trying to find a potential solution, but some options conflict with Karpenter and some are DaemonSets, which is not what we want. Any thoughts, please?

@sftim
Contributor

sftim commented Mar 1, 2024

@2rs2ts

2rs2ts commented Oct 2, 2024

Since I have been doing Kubernetes the Hard Way for my entire time using K8s and have only recently been introduced to Karpenter, I've found the lack of this support truly puzzling. Sure, in Kubernetes the Hard Way we add the labels afterward, but we do it in an ExecStartPost in a systemd unit, and we get the role from info injected into the Launch Template, so it's all automated.

This label, along with kubernetes.io/role=<role>, is used to identify node roles in kubectl output, and the docs actually encourage you to set this label for your other nodes. I understand that Kubernetes and other CNCF projects can be of many minds when it comes to stuff like this, which is why kubelet may prohibit setting initial labels that the docs encourage you to add to nodes later. But if you take a step back and look at the big picture, you can see that there are enough people of sound mind saying that YES, you should add these labels! The only prohibited label should be node-role.kubernetes.io/control-plane, per those docs, since it's part of kubeadm's official contract.

@sftim
Contributor

sftim commented Oct 2, 2024

> the docs actually encourage you to set this label for your other nodes

Not quite. The docs might be wrong on this, but at the time of writing, the docs encourage you to label your control plane nodes as control plane (and nothing else).

If you use Karpenter to manage your control plane nodes, uh, I don't think this is going to be fun - but you can get them labelled some way.

Seriously, though, the docs might be wrong. I'm an approver for the Kubernetes docs, and there's mixed views about what was intended here within the project. I don't think the docs at the time of writing capture the coders' intent when they added various bits of support to core Kubernetes.

What should the docs say? I don't have a lot of capacity to look into this. However, if someone has more time than I do, feel free to write up the story in terms of “the coders intended this, here's what we should document” - and file that against k/website.

Projects like Karpenter may rely on the K8s docs to work out what behavior is appropriate, and we can't really blame them.

@2rs2ts

2rs2ts commented Oct 2, 2024

> at the time of writing, the docs encourage you to label your control plane nodes as control plane (and nothing else).

Fair point, it says nothing of non-control plane nodes, which makes sense since the label key documented is specifically node-role.kubernetes.io/control-plane.

If surmising intent is the problem, then I suppose we can use the following clues:

  • kubectl get node shows the value of the node role label as a column
  • the above has a placeholder value for nodes that lack the label
  • basically every cluster on earth has non-control plane nodes
  • nothing in the control plane prevents you from setting this label
  • however, kubelet does not let you use it as a startup label

From where I stand, the intent seems overwhelmingly in favor of allowing users to define meaningful roles and to use those to determine what kind of node is what; but, like you said, there are dissidents (i.e., kubelet). Just kubectl's behavior alone is enough to show that there are valid use cases for letting users set this label, let alone all the stuff in the ecosystem that reads it. Observability platforms, for example, will likely use this label to visualize the behavior of different node groups, since it's closer to a "universal" thing than what Karpenter provides.

I guess I can see a future where sig-node says "stop letting people set this" to sig-apimachinery, and then whatever SIG is in charge of kubectl (I can't remember) just has to pivot their plans; in that future, Karpenter users who got used to being able to set the label will have their cheese moved. So I can see how Karpenter, as a project, might want to disallow anything that could ever potentially become moved cheese, especially uses that seem to be of little consequence, like this one.

However, at my company we use this label for a lot, and not being able to set it could be quite crippling depending on how far we take our usage of Karpenter, so seem is really the key word here. In our case, if the API starts blocking attempts to set kubernetes.io labels, we'd have to change our setup either way, and that would include Karpenter if we use it... but we would feel like the K8s project moved our cheese in that case, not Karpenter. Hopefully that alleviates some anxiety about users blaming Karpenter for having to change their labels if K8s one day dictates it...

@sftim
Contributor

sftim commented Nov 22, 2024

/reopen

@sftim
Contributor

sftim commented Nov 22, 2024

Maybe we should focus on kubernetes-sigs/karpenter#1046 instead, though?
