Comments (10)
If this is just a warning that is printed within the first few minutes after restarting Prometheus, then this is expected and nothing to worry about.
from kubernetes-mixin.
Unfortunately the log message is constant, not just within the first few minutes after a restart. Should I be worried about this?
The only warnings I get after startup are:
caller=manager.go:389 component="rule manager" group=alertmanager.rules msg="Evaluating rule failed" rule="alert: AlertmanagerConfigInconsistent\nexpr: count_values by(service) (\"config_hash\", alertmanager_config_hash{job=\"prometheus-operator-alertmanager\"})\n / on(service) group_left() label_replace(prometheus_operator_spec_replicas{controller=\"alertmanager\",job=\"prometheus-operator-operator\"},\n \"service\", \"alertmanager-$1\", \"name\", \"(.*)\") != 1\nfor: 5m\nlabels:\n severity: critical\nannotations:\n message: The configuration of the instances of the Alertmanager cluster `{{$labels.service}}`\n are out of sync.\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{container_name!=\"\",image!=\"\",job=\"kubelet\"}[5m]))\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{container_name!=\"\",image!=\"\",job=\"kubelet\"})\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n label_replace(kube_pod_labels{job=\"kube-state-metrics\"}, \"pod_name\", \"$1\", \"pod\",\n \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"})\n * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
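The error means the right-hand side of the `* on(...) group_left(...)` join has more than one series per matching label set. As a diagnostic (this query is my suggestion based on the rule expressions above, not something from the thread), you can check whether `kube_pod_labels` has duplicate series per pod:

```promql
# Returns (namespace, pod) pairs that have more than one kube_pod_labels
# series — typically caused by kube-state-metrics being scraped more than
# once under the same job, which breaks the one-to-many join.
count by (namespace, pod) (kube_pod_labels{job="kube-state-metrics"}) > 1
```

If this returns any results, the Prometheus targets page should show which scrape configurations are producing the duplicates.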
Seeing a similar recurring warning after upgrading to prometheus-operator release 1.7.0, Prometheus v2.5.0.
level=warn ts=2019-01-16T15:05:48.970506555Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{container_name!=\"\",image!=\"\",job=\"kubelet\"}[5m]))\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.974490062Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{container_name!=\"\",image!=\"\",job=\"kubelet\"})\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.976632336Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"})\n * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.97984908Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n label_replace(kube_pod_labels{job=\"kube-state-metrics\"}, \"pod_name\", \"$1\", \"pod\",\n \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
Did you find a solution, @faheem-cliqz?
We are seeing the exact same issue in our cluster with the following recording rules provided by kubernetes-mixin (using kube-prometheus).
Versions:
- prometheus-operator: v0.28.0
- prometheus: v2.5.0
- kubernetes-mixin:
{
  "name": "kubernetes-mixin",
  "source": { "git": { "remote": "https://github.com/kubernetes-monitoring/kubernetes-mixin", "subdir": "" } },
  "version": "ccb787a44f2ebdecbb346d57490fa7e49981b323"
}
k8s logs -l "prometheus=kube-prometheus" -c prometheus | grep "Evaluating rule failed" | gcut -d' ' -f1,2,3,4,5,6,7,8,9 --complement | sort -u | cut -d":" -f3
node_cpu_saturation_load1
node_memory_utilisation
container_cpu_usage_seconds_total
container_memory_usage_bytes
kube_pod_container_resource_requests_cpu_cores
kube_pod_container_resource_requests_memory_bytes
node_cpu_utilisation
node_disk_saturation
node_disk_utilisation
node_memory_bytes_available
node_memory_bytes_total
node_memory_swap_io_bytes
node_net_saturation
node_net_utilisation
node_num_cpu
All of the above rules have the error:
"many-to-many matching not allowed: matching labels must be unique on one side"
Raw Log
rule="record: 'node:node_cpu_saturation_load1:'\nexpr: sum by(node) (node_load1{job=\"node-exporter\"} * on(namespace, pod) group_left(node)\n node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: 'node:node_memory_utilisation:'\nexpr: 1 - sum by(node) ((node_memory_MemFree_bytes{job=\"node-exporter\"} + node_memory_Cached_bytes{job=\"node-exporter\"}\n + node_memory_Buffers_bytes{job=\"node-exporter\"}) * on(namespace, pod) group_left(node)\n node_namespace_pod:kube_pod_info:) / sum by(node) (node_memory_MemTotal_bytes{job=\"node-exporter\"}\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{container_name!=\"\",image!=\"\",job=\"kubelet\"}[5m]))\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{container_name!=\"\",image!=\"\",job=\"kubelet\"})\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n label_replace(kube_pod_labels{job=\"kube-state-metrics\"}, \"pod_name\", \"$1\", \"pod\",\n \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"})\n * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_cpu_utilisation:avg1m\nexpr: 1 - avg by(node) (rate(node_cpu_seconds_total{job=\"node-exporter\",mode=\"idle\"}[1m])\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_disk_saturation:avg_irate\nexpr: avg by(node) (irate(node_disk_io_time_weighted_seconds_total{device=~\"(sd|xvd|nvme).+\",job=\"node-exporter\"}[1m])\n / 1000 * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_disk_utilisation:avg_irate\nexpr: avg by(node) (irate(node_disk_io_time_seconds_total{device=~\"(sd|xvd|nvme).+\",job=\"node-exporter\"}[1m])\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_memory_bytes_available:sum\nexpr: sum by(node) ((node_memory_MemFree_bytes{job=\"node-exporter\"} + node_memory_Cached_bytes{job=\"node-exporter\"}\n + node_memory_Buffers_bytes{job=\"node-exporter\"}) * on(namespace, pod) group_left(node)\n node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_memory_bytes_total:sum\nexpr: sum by(node) (node_memory_MemTotal_bytes{job=\"node-exporter\"} * on(namespace,\n pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_memory_swap_io_bytes:sum_rate\nexpr: 1000 * sum by(node) ((rate(node_vmstat_pgpgin{job=\"node-exporter\"}[1m]) + rate(node_vmstat_pgpgout{job=\"node-exporter\"}[1m]))\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_net_saturation:sum_irate\nexpr: sum by(node) ((irate(node_network_receive_drop_total{device=\"eth0\",job=\"node-exporter\"}[1m])\n + irate(node_network_transmit_drop_total{device=\"eth0\",job=\"node-exporter\"}[1m]))\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_net_utilisation:sum_irate\nexpr: sum by(node) ((irate(node_network_receive_bytes_total{device=\"eth0\",job=\"node-exporter\"}[1m])\n + irate(node_network_transmit_bytes_total{device=\"eth0\",job=\"node-exporter\"}[1m]))\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:)\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
rule="record: node:node_num_cpu:sum\nexpr: count by(node) (sum by(node, cpu) (node_cpu_seconds_total{job=\"node-exporter\"}\n * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
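All of the node-level rules above join against `node_namespace_pod:kube_pod_info:`, so duplicates in that one recording rule would make every one of them fail. A query like the following (my suggestion, not from the thread) can confirm whether that is the case:

```promql
# The node:* rules join node-exporter metrics to nodes via
# node_namespace_pod:kube_pod_info:. If any (namespace, pod) pair has more
# than one series here, all of those joins become many-to-many and fail.
count by (namespace, pod) (node_namespace_pod:kube_pod_info:) > 1
```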
I am also experiencing this error on some clusters.
Has anyone found a way to pinpoint which pods are causing the error? Increasing the Prometheus log level to debug doesn't seem to help.
FWIW, my issue was due to Prometheus discovering redundant services in another namespace.
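To check for that kind of redundant discovery, one option (assuming the usual kube-prometheus target relabeling, which keeps `namespace`, `service`, and `instance` labels on `up`) is to list where the duplicated exporter is actually being scraped from:

```promql
# Lists every scrape target for kube-state-metrics. More endpoints than the
# expected number of replicas suggests an extra Service in another namespace
# is selecting the same pods.
sum by (namespace, service, instance) (up{job="kube-state-metrics"})
```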
Any updates on this issue?
I'm getting the same issue when deploying "kube-prometheus-stack" version 48.3.1 using Helm on Google Kubernetes Engine (GKE).