feat: add kube_ingress_status metric #2433
base: main
Conversation
/triage accepted
internal/store/ingress.go (Outdated)
"kube_ingress_status", | ||
"Ingress status.", | ||
metric.Gauge, | ||
basemetrics.STABLE, |
Perhaps this should start as an EXPERIMENTAL metric? See https://github.com/kubernetes/kube-state-metrics/blob/main/docs/developer/guide.md#add-new-metrics
cc @CatherineF-dev
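(For illustration, a minimal sketch of the suggested change, assuming the NewFamilyGeneratorWithStability signature and the wrapIngressFunc helper used elsewhere in internal/store/ingress.go; the empty string is the deprecated-version argument:)

*generator.NewFamilyGeneratorWithStability(
	"kube_ingress_status",
	"Ingress status.",
	metric.Gauge,
	basemetrics.ALPHA, // start EXPERIMENTAL (ALPHA) rather than STABLE, per the contributor guide
	"",
	wrapIngressFunc(func(i *networkingv1.Ingress) *metric.Family {
		ms := []*metric.Metric{} // populated from i.Status.LoadBalancer.Ingress
		return &metric.Family{Metrics: ms}
	}),
),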
Fixed here c88e383
internal/store/ingress.go (Outdated)
for _, ingress := range i.Status.LoadBalancer.Ingress {
	for _, port := range ingress.Ports {
		ms = append(ms, &metric.Metric{
			LabelKeys: []string{"ip", "hostname", "port", "protocol"},
Do you need both the ip and the hostname?
Wondering because IPs can keep changing, so cardinality might explode. Just using the hostname might suffice?
At first, I just wanted to include all the fields, but as you mentioned, the hostname will suffice.
Made the changes here: 622505e
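(A minimal sketch of what the hostname-only variant could look like, assuming the loop from the diff above and a strconv import; the exact LabelValues wiring is illustrative:)

for _, ingress := range i.Status.LoadBalancer.Ingress {
	for _, port := range ingress.Ports {
		ms = append(ms, &metric.Metric{
			// Drop "ip" and keep only the hostname to bound cardinality.
			LabelKeys:   []string{"hostname", "port", "protocol"},
			LabelValues: []string{ingress.Hostname, strconv.Itoa(int(port.Port)), string(port.Protocol)},
			Value:       1,
		})
	}
}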
If we're removing IPs, could we end up with duplicate metrics if multiple LB VIPs serve the same hostname (not sure how likely this is, maybe with multiple regionally deployed LBs)?
I don't think we should include IPs (due to the cardinality concern that Richa pointed out).
> If we're removing IPs, could we end up with duplicate metrics if multiple LB VIPs serve the same hostname (not sure how likely this is, maybe with multiple regionally deployed LBs)?
Won't we have unique instance IDs?
By instance IDs, do you mean something in this object: https://pkg.go.dev/k8s.io/api/networking/v1#Ingress?
I don't know if that is the case; I assume there must be some reason why the object is so convoluted...
https://pkg.go.dev/k8s.io/api/networking/v1#IngressLoadBalancerStatus contains a list of https://pkg.go.dev/k8s.io/api/networking/v1#IngressLoadBalancerIngress (both IP and Hostname are optional; without having looked into the code, I assume you need to set one or the other?),
which then contains a list of https://pkg.go.dev/k8s.io/api/networking/v1#IngressPortStatus.
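For reference, a trimmed sketch of that nesting, based on the k8s.io/api/networking/v1 types (json tags and most documentation omitted; field comments are paraphrased):

// Trimmed from k8s.io/api/networking/v1; v1 here is k8s.io/api/core/v1.
type IngressLoadBalancerStatus struct {
	Ingress []IngressLoadBalancerIngress // one entry per load-balancer ingress point
}

type IngressLoadBalancerIngress struct {
	IP       string              // optional: set when the ingress point is an IP
	Hostname string              // optional: set when the ingress point is a DNS name
	Ports    []IngressPortStatus // optional: ports exposed by this ingress point
}

type IngressPortStatus struct {
	Port     int32       // the port number of the ingress port
	Protocol v1.Protocol // "TCP", "UDP", or "SCTP"
	Error    *string     // optional: a CamelCase reason when the port is in an error state
}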
I think we should keep both ip and hostname, otherwise this metric won't make much sense, since we won't be able to distinguish between the load balancers.
If cardinality is a concern, we can always ship this metric disabled by default, but here I don't think it is a problem: the number of timeseries for this metric is limited by the number of Endpoints in the cluster. That is just the theoretical worst-case scenario; in practice it will only be a subset of that, since it only includes the endpoints of the load balancer. And since the number of Endpoints in a cluster is limited by scalability limits, this metric will be bounded.
Disclaimer: I am not too knowledgeable about Ingress, so for all I know it might be bounded by the number of Services instead of Endpoints, which is even better. It all depends on where the (ip, hostname) pair comes from.
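For illustration (hypothetical label names and values), a single load-balancer ingress point with one port would then contribute one series along these lines:

kube_ingress_status{namespace="default",ingress="example",ip="203.0.113.10",hostname="lb-1.example.com",port="443",protocol="TCP"} 1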
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: isuyyy, richabanker
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/assign @dgrisonnet
@@ -202,6 +202,28 @@ func ingressMetricFamilies(allowAnnotationsList, allowLabelsList []string) []gen
			}
		}),
	),
	*generator.NewFamilyGeneratorWithStability(
		"kube_ingress_status",
"kube_ingress_status", | |
"kube_ingress_status", |
I'm a bit torn on this name. Maybe:
kube_ingress_status_loadbalancer (just wondering whether, if additional fields get added to _status later, they might be difficult to include under this name)
{
	Port:     8888,
	Protocol: "TCP",
	Error:    nil,
Should we expose this error somehow as well?
I assume it can potentially cause high cardinality, but it would be interesting to know whether there's an error on the IngressPortStatus.
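A hedged sketch of one way this could look, assuming the hostname/port loop from the diff above; the "error" label name and the empty-string default are illustrative, not part of this PR:

errValue := "" // empty label for healthy ports keeps the series set small
if port.Error != nil {
	errValue = *port.Error // CamelCase reason, e.g. a provider-specific error code
}
ms = append(ms, &metric.Metric{
	LabelKeys:   []string{"hostname", "port", "protocol", "error"},
	LabelValues: []string{ingress.Hostname, strconv.Itoa(int(port.Port)), string(port.Protocol), errValue},
	Value:       1,
})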
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
What this PR does / why we need it:
Collects metrics for ingress status; this information is needed when we use an ALB.
How does this change affect the cardinality of KSM: (increases, decreases or does not change cardinality)
Increases.
Which issue(s) this PR fixes:
Fixes #1366