
Comments (10)

bowei commented on September 26, 2024

The hits/misses are cumulative counts since the start of the process. You will have to apply a rate-of-change transform in Grafana/Graphite (see derivative here: http://graphite.readthedocs.io/en/latest/functions.html).
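For illustration, a Graphite expression applying such a transform could look like the following (a sketch; the metric path kube_dns.dnsmasq.hits is a placeholder for wherever the counter lands in your backend):

```
nonNegativeDerivative(kube_dns.dnsmasq.hits)
```

nonNegativeDerivative is usually preferred over plain derivative here because it suppresses the negative spike a counter reset would otherwise produce.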

djsly commented on September 26, 2024

Thanks. Even after applying the derivative through the InfluxDB logic, the results were still weird.

After some more digging, I realized that we have a Telegraf process that queries /metrics through a Kubernetes service, since we are running multiple replicas of the kube-dns pods. Each query therefore lands on a different replica, which is why the returned values jump around instead of increasing monotonically.

Do you think that a flag or tag should be added to the received metric to allow proper identification of the pod?

time	collector	gauge	host	url
2017-03-28T18:55:10Z	"dnsmasq"	1702388	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:55:20Z	"dnsmasq"	4224872	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:55:30Z	"dnsmasq"	1702437	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:55:40Z	"dnsmasq"	4225222	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:55:50Z	"dnsmasq"	4224948	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:00Z	"dnsmasq"	4224976	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:10Z	"dnsmasq"	1702536	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:20Z	"dnsmasq"	1702562	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:30Z	"dnsmasq"	1702587	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:40Z	"dnsmasq"	4225075	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:56:50Z	"dnsmasq"	4225391	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:57:00Z	"dnsmasq"	1702666	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"
2017-03-28T18:57:10Z	"dnsmasq"	4225151	"telegraf-prometheus-4180396350-zfmww"	"http://kube-dns:10054/metrics"

bowei commented on September 26, 2024

The usual way to use the metric is to aggregate (sum) the counter across the whole cluster. I do remember that Prometheus allows adding the pod name to the key-value pairs identifying a metric; I'm not sure about Telegraf.
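As an illustration (a sketch; the exact metric name exposed by the kube-dns sidecar may differ from kubedns_dnsmasq_hits), a Prometheus query that folds the per-pod counters into one cluster-wide rate:

```
sum(rate(kubedns_dnsmasq_hits[5m]))
```

Because the sum drops the per-pod labels, the result stays stable no matter how many replicas are scraped.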

bowei commented on September 26, 2024

The metrics are meant to be collected from the pod, rather than via the service VIP.
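For example, a Prometheus scrape job can discover the kube-dns pods individually and label each series with the pod name. The sketch below assumes the pods carry the k8s-app: kube-dns label and expose metrics on port 10054; adjust both for your cluster:

```yaml
scrape_configs:
  - job_name: kube-dns
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods labeled k8s-app=kube-dns (assumed label).
      - source_labels: [__meta_kubernetes_pod_label_k8s_app]
        regex: kube-dns
        action: keep
      # Scrape the metrics port directly on each pod IP, not the service VIP.
      - source_labels: [__address__]
        regex: '([^:]+)(?::\d+)?'
        replacement: '$1:10054'
        target_label: __address__
      # Attach the pod name so replicas can be told apart.
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```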

djsly commented on September 26, 2024

That's what I just realized... so one collector per replica.
Thanks for your input @bowei.
Things are starting to make more sense now :)

djsly commented on September 26, 2024

Ohhh, I now remember why we couldn't do this... InfluxDB is running inside the Kubernetes cluster, so we need kube-dns to resolve the address we push the metrics to :)

Therefore, running the collector from within the same pod as kube-dns prevented us from resolving the destination address...

I think that adding a new metric/tag with the pod name could help when querying from outside the pod. What do you think?

bowei commented on September 26, 2024

Confused -- what is the issue? You can point the collector to localhost:53 if you are modifying the kube-dns pod spec to include your collector. I am fairly certain the pod name is the hostname.
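If a per-pod tag is still wanted when pushing, one option (a sketch, not something from this thread; the container and image names are placeholders) is to hand the pod name to the collector container through the Downward API and attach it to every pushed point:

```yaml
# Hypothetical sidecar container added to the kube-dns pod spec.
- name: metrics-collector
  image: example.org/metrics-collector:latest
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```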

djsly commented on September 26, 2024

That's for the source of /metrics, but we need to push the metrics into a time-series database. The database also lives inside Kubernetes, and its IP/address is unknown.

bowei commented on September 26, 2024

Sorry, I wasn't clear -- you can point the collector's resolv.conf to localhost:53 and use kube-dns. Or run the collector from a different pod.

djsly commented on September 26, 2024

OK, I see, I misunderstood; I thought that localhost:53 meant localhost:53/metrics.

Running the collector from a different pod is what we are doing, but with multiple replicas it doesn't work very nicely. I will revisit running the collector within the same pod to see if we can easily update the resolv.conf from the manifest directly...
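One possible way to do that in newer Kubernetes versions (a sketch; the dnsConfig field did not exist at the time of this thread) is to override the pod's DNS settings in the manifest so a sidecar collector in the kube-dns pod resolves through the local dnsmasq:

```yaml
# Pod-level DNS override: resolve through the dnsmasq listening on
# 127.0.0.1 inside the same pod. The search domains are assumptions.
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 127.0.0.1
    searches:
      - svc.cluster.local
      - cluster.local
```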
