

Prometheus Pushgateway


The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.

Non-goals

First of all, the Pushgateway is not capable of turning Prometheus into a push-based monitoring system. For a general description of use cases for the Pushgateway, please read When To Use The Pushgateway.

The Pushgateway is explicitly not an aggregator or distributed counter but rather a metrics cache. It does not have statsd-like semantics. The metrics pushed are exactly the same as you would present for scraping in a permanently running program. If you need distributed counting, you could either use the actual statsd in combination with the Prometheus statsd exporter, or have a look at the prom-aggregation-gateway. With more experience gathered, the Prometheus project might one day be able to provide a native solution, separate from or possibly even as part of the Pushgateway.

For machine-level metrics, the textfile collector of the Node exporter is usually more appropriate. The Pushgateway is intended for service-level metrics.

The Pushgateway is not an event store. While you can use Prometheus as a data source for Grafana annotations, tracking something like release events has to happen with some event-logging framework.

A while ago, we decided not to implement a “timeout” or TTL for pushed metrics because almost all proposed use cases turned out to be anti-patterns that we strongly discourage. You can follow a more recent discussion on the prometheus-developers mailing list.

Run it

Download binary releases for your platform from the release page and unpack the tarball.

If you want to compile the Pushgateway yourself from source, you need a working Go setup. Then use the provided Makefile (type make).

For the most basic setup, just start the binary. To change the address to listen on, use the --web.listen-address flag (e.g. "0.0.0.0:9091" or ":9091"). By default, Pushgateway does not persist metrics. However, the --persistence.file flag allows you to specify a file in which the pushed metrics will be persisted (so that they survive restarts of the Pushgateway).
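As a sketch, an invocation that listens on all interfaces and persists pushed metrics could look like the following (the file path is a placeholder; --persistence.interval controls how often the file is written, with a default of 5m):

```shell
./pushgateway \
  --web.listen-address=":9091" \
  --persistence.file="/var/lib/pushgateway/metrics" \
  --persistence.interval=5m
```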

Using Docker

You can deploy the Pushgateway using the prom/pushgateway Docker image.

For example:

docker pull prom/pushgateway

docker run -d -p 9091:9091 prom/pushgateway

Use it

Configure the Pushgateway as a target to scrape

The Pushgateway has to be configured as a target to scrape by Prometheus, using one of the usual methods. However, you should always set honor_labels: true in the scrape config (see below for a detailed explanation).
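A minimal scrape configuration might look like the following sketch (the job name and target address are placeholders):

```yaml
scrape_configs:
  - job_name: pushgateway
    honor_labels: true
    static_configs:
      - targets: ['pushgateway.example.org:9091']
```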

Libraries

Prometheus client libraries should have a feature to push the registered metrics to a Pushgateway. Usually, a Prometheus client passively presents metrics for scraping by a Prometheus server. A client library that supports pushing has a push function, which needs to be called by the client code. It then actively pushes the metrics to a Pushgateway, using the API described below.

Command line

Using the Prometheus text protocol, pushing metrics is so easy that no separate CLI is provided. Simply use a command-line HTTP tool like curl. Your favorite scripting language most likely has built-in HTTP capabilities you can leverage here as well.

Note that in the text protocol, each line has to end with a line-feed character (aka 'LF' or '\n'). Ending a line in other ways, e.g. with 'CR' aka '\r', 'CRLF' aka '\r\n', or just the end of the packet, will result in a protocol error.

Pushed metrics are managed in groups, identified by a grouping key of any number of labels, of which the first must be the job label. The groups are easy to inspect via the web interface.

For the implications of special characters in label values, see the URL section below.

Examples:

  • Push a single sample into the group identified by {job="some_job"}:

      echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job
    

    Since no type information has been provided, some_metric will be of type untyped.

  • Push something more complex into the group identified by {job="some_job",instance="some_instance"}:

      cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job/instance/some_instance
      # TYPE some_metric counter
      some_metric{label="val1"} 42
      # TYPE another_metric gauge
      # HELP another_metric Just an example.
      another_metric 2398.283
      EOF
    

    Note how type information and help strings are provided. Those lines are optional, but strongly encouraged for anything more complex.

  • Delete all metrics in the group identified by {job="some_job",instance="some_instance"}:

      curl -X DELETE http://pushgateway.example.org:9091/metrics/job/some_job/instance/some_instance
    
  • Delete all metrics in the group identified by {job="some_job"} (note that this does not include metrics in the {job="some_job",instance="some_instance"} group from the previous example, even if those metrics have the same job label):

      curl -X DELETE http://pushgateway.example.org:9091/metrics/job/some_job
    
  • Delete all metrics in all groups (requires enabling the admin API via the command-line flag --web.enable-admin-api):

      curl -X PUT http://pushgateway.example.org:9091/api/v1/admin/wipe
    

About the job and instance labels

The Prometheus server will attach a job label and an instance label to each scraped metric. The value of the job label comes from the scrape configuration. When you configure the Pushgateway as a scrape target for your Prometheus server, you will probably pick a job name like pushgateway. The value of the instance label is automatically set to the host and port of the target scraped. Hence, all the metrics scraped from the Pushgateway will have the host and port of the Pushgateway as the instance label and a job label like pushgateway. The conflict with the job and instance labels you might have attached to the metrics pushed to the Pushgateway is solved by renaming those labels to exported_job and exported_instance.

However, this behavior is usually undesired when scraping a Pushgateway. Generally, you would like to retain the job and instance labels of the metrics pushed to the Pushgateway. That's why you have to set honor_labels: true in the scrape config for the Pushgateway. It enables the desired behavior. See the documentation for details.

This leaves us with the case where the metrics pushed to the Pushgateway do not feature an instance label. This case is quite common as the pushed metrics are often on a service level and therefore not related to a particular instance. Even with honor_labels: true, the Prometheus server will attach an instance label if no instance label has been set in the first place. Therefore, if a metric is pushed to the Pushgateway without an instance label (and without instance label in the grouping key, see below), the Pushgateway will export it with an empty instance label ({instance=""}), which is equivalent to having no instance label at all but prevents the server from attaching one.

About metric inconsistencies

The Pushgateway exposes all pushed metrics together with its own metrics via the same /metrics endpoint. (See the section about exposed metrics for details.) Therefore, all the metrics have to be consistent with each other: Metrics of the same name must have the same type, even if they are pushed to different groups, and there must be no duplicates, i.e. metrics with the same name and the exact same label pairs. Pushes that would lead to inconsistencies are rejected with status code 400.

Inconsistent help strings are tolerated, though. The Pushgateway will pick a winning help string and log about it at info level.

Legacy note: The help string of Pushgateway's own push_time_seconds metric has changed in v0.10.0. By using a persistence file, metrics pushed to a Pushgateway of an earlier version can make it into a Pushgateway of v0.10.0 or later. In this case, the above mentioned log message will show up. Once each previously pushed group has been deleted or received a new push, the log message will disappear.

The consistency check performed during a push is the same as the one that happens anyway during a scrape. In common use cases, scrapes happen more often than pushes, so the performance cost of the push-time check isn't relevant. However, if a large number of metrics on the Pushgateway is combined with frequent pushes, the push duration might become prohibitively long. In this case, you might consider using the command-line flag --push.disable-consistency-check, which saves the cost of the consistency check during a push but allows pushing inconsistent metrics. The check will still happen during a scrape, thereby failing all scrapes for as long as inconsistent metrics are stored on the Pushgateway. Setting the flag therefore puts you at risk of disabling the Pushgateway with a single inconsistent push.

About timestamps

If you push metrics at time t1, you might be tempted to believe that Prometheus will scrape them with that same timestamp t1. Instead, what Prometheus attaches as a timestamp is the time when it scrapes the Pushgateway. Why so?

In the world view of Prometheus, a metric can be scraped at any time. A metric that cannot be scraped has basically ceased to exist. Prometheus is somewhat tolerant, but if it cannot get any samples for a metric for 5 minutes, it will behave as if that metric does not exist anymore. Preventing that is actually one of the reasons to use a Pushgateway. The Pushgateway will make the metrics of your ephemeral job scrapable at any time. Attaching the time of pushing as a timestamp would defeat that purpose because 5 minutes after the last push, your metric would look as stale to Prometheus as if it could no longer be scraped at all. (Prometheus knows only one timestamp per sample; there is no way to distinguish a 'time of pushing' from a 'time of scraping'.)

As there aren't any use cases where attaching a different timestamp would make sense, and as many users were attempting to do so incorrectly (despite no client library supporting it), the Pushgateway rejects any pushes with timestamps.

If you think you need to push a timestamp, please see When To Use The Pushgateway.

To make it easier to alert on failed pushers or on pushers that have not run recently, the Pushgateway adds the metrics push_time_seconds and push_failure_time_seconds, containing the Unix timestamps of the last successful and the last failed POST/PUT to each group. They will override any pushed metric by the same name. A value of zero for either metric implies that the group has never seen a successful or a failed POST/PUT, respectively.

API

All pushes are done via HTTP. The interface is vaguely REST-like.

URL

The default port the Pushgateway listens on is 9091. The path looks like

/metrics/job/<JOB_NAME>{/<LABEL_NAME>/<LABEL_VALUE>}

<JOB_NAME> is used as the value of the job label, followed by any number of other label pairs (which might or might not include an instance label). The label set defined by the URL path is used as a grouping key. Any of those labels already set in the body of the request (as regular labels, e.g. name{job="foo"} 42) will be overwritten to match the labels defined by the URL path!

If job or any label name is suffixed with @base64, the following job name or label value is interpreted as a base64 encoded string according to RFC 4648, using the URL and filename safe alphabet. (Padding is optional, but a single = is required to encode an empty label value.) This is the only way to handle the following cases:

  • A job name or a label value that contains a /, because the plain (or even URI-encoded) / would otherwise be interpreted as a path separator.
  • An empty label value, because the resulting // or trailing / would disappear when the path is sanitized by the HTTP router code. Note that an empty job name is invalid. Empty label values are valid but rarely useful. To encode them with base64, you have to use at least one = padding character to avoid a // or a trailing /.

For other special characters, the usual URI component encoding works, too, but the base64 might be more convenient.

Ideally, client libraries take care of the suffixing and encoding.

Examples:

  • To use the grouping key job="directory_cleaner",path="/var/tmp", the following path will not work:

    /metrics/job/directory_cleaner/path//var/tmp
    

    Instead, use the base64 URL-safe encoding for the label value and mark it by suffixing the label name with @base64:

    /metrics/job/directory_cleaner/path@base64/L3Zhci90bXA
    

    If you are not using a client library that handles the encoding for you, you can use encoding tools. For example, there is a command line tool base64url (Debian package basez), which you could combine with curl to push from the command line in the following way:

    echo 'some_metric{foo="bar"} 3.14' | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/directory_cleaner/path@base64/$(echo -n '/var/tmp' | base64url)
    
  • To use a grouping key containing an empty label value such as job="example",first_label="",second_label="foobar", the following path will not work:

     /metrics/job/example/first_label//second_label/foobar
    

    Instead, use the following path including the = padding character:

    /metrics/job/example/first_label@base64/=/second_label/foobar
    
  • The grouping key job="titan",name="Προμηθεύς" can be represented “traditionally” with URI encoding:

    /metrics/job/titan/name/%CE%A0%CF%81%CE%BF%CE%BC%CE%B7%CE%B8%CE%B5%CF%8D%CF%82
    

    Or you can use the more compact base64 encoding:

    /metrics/job/titan/name@base64/zqDPgc6_zrzOt864zrXPjc-C
    
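If no dedicated base64url tool is at hand, the URL-safe encoding can also be produced with plain coreutils base64 and tr, as in this sketch (it yields the same values as the paths above):

```shell
# Encode, make the alphabet URL-safe ('+' -> '-', '/' -> '_'), strip '=' padding.
echo -n '/var/tmp' | base64 | tr '+/' '-_' | tr -d '='
# Prints: L3Zhci90bXA

echo -n 'Προμηθεύς' | base64 | tr '+/' '-_' | tr -d '='
# Prints: zqDPgc6_zrzOt864zrXPjc-C
```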

PUT method

PUT is used to push a group of metrics. All metrics with the grouping key specified in the URL are replaced by the metrics pushed with PUT.

The body of the request contains the metrics to push either as delimited binary protocol buffers or in the simple flat text format (both in version 0.0.4, see the data exposition format specification). Discrimination between the two variants is done via the Content-Type header. (Use the value application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited for protocol buffers, otherwise the text format is tried as a fall-back.)

The response code upon success is either 200, 202, or 400. A 200 response implies a successful push, either replacing an existing group of metrics or creating a new one. A 400 response can happen if the request is malformed or if the pushed metrics are inconsistent with metrics pushed to other groups or collide with metrics of the Pushgateway itself. An explanation is returned in the body of the response and logged on error level. A 202 can only occur if the --push.disable-consistency-check flag is set. In this case, pushed metrics are just queued and not checked for consistency. Inconsistencies will lead to failed scrapes, however, as described above.

In rare cases, it is possible that the Pushgateway ends up with an inconsistent set of metrics already pushed. In that case, new pushes are also rejected as inconsistent even if the culprit is metrics that were pushed earlier. Delete the offending metrics to get out of that situation.

If using the protobuf format, do not send duplicate MetricFamily proto messages (i.e. more than one with the same name) in one push, as they will overwrite each other.

Note that the Pushgateway doesn't provide any strong guarantees that the pushed metrics are persisted to disk. (A server crash may cause data loss. Or the Pushgateway is configured to not persist to disk at all.)

A PUT request with an empty body effectively deletes all metrics with the specified grouping key. However, in contrast to the DELETE request described below, it does update the push_time_seconds metrics.

POST method

POST works exactly like the PUT method but only metrics with the same name as the newly pushed metrics are replaced (among those with the same grouping key).

A POST request with an empty body merely updates the push_time_seconds metrics but does not change any of the previously pushed metrics.

DELETE method

DELETE is used to delete metrics from the Pushgateway. The request must not contain any content. All metrics with the grouping key specified in the URL are deleted.

The response code upon success is always 202. The delete request is merely queued at that moment. There is no guarantee that the request will actually be executed or that the result will make it to the persistence layer (e.g. in case of a server crash). However, the order of PUT/POST and DELETE requests is guaranteed, i.e. if you have successfully sent a DELETE request and then send a PUT, it is guaranteed that the DELETE will be processed first (and vice versa).

Deleting a grouping key without metrics is a no-op and will not result in an error.

Request compression

The body of a POST or PUT request may be gzip- or snappy-compressed. Add a header Content-Encoding: gzip or Content-Encoding: snappy to do so.

Examples:

echo "some_metric 3.14" | gzip | curl -H 'Content-Encoding: gzip' --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job
echo "some_metric 3.14" | snzip | curl -H 'Content-Encoding: snappy' --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job

Admin API

The Admin API provides administrative access to the Pushgateway and must be explicitly enabled by setting the --web.enable-admin-api flag.

URL

The default port the Pushgateway listens on is 9091. The path looks like:

/api/<API_VERSION>/admin/<HANDLER>

  • Available endpoints:

    HTTP method  API version  Handler  Description
    PUT          v1           wipe     Safely deletes all metrics from the Pushgateway.

  • For example, to wipe all metrics from the Pushgateway:

      curl -X PUT http://pushgateway.example.org:9091/api/v1/admin/wipe
    

Query API

The query API provides access to pushed metrics as well as build and runtime information.

URL

/api/<API_VERSION>/<HANDLER>

  • Available endpoints:

    HTTP method  API version  Handler  Description
    GET          v1           status   Returns build information, command-line flags, and the start time in JSON format.
    GET          v1           metrics  Returns the pushed metric families in JSON format.

  • For example:

      curl -X GET http://pushgateway.example.org:9091/api/v1/status | jq
      
      {
        "status": "success",
        "data": {
          "build_information": {
            "branch": "master",
            "buildDate": "20200310-20:14:39",
            "buildUser": "[email protected]",
            "goVersion": "go1.13.6",
            "revision": "eba0ec4100873d23666bcf4b8b1d44617d6430c4",
            "version": "1.1.0"
          },
          "flags": {
            "log.format": "logfmt",
            "log.level": "info",
            "persistence.file": "",
            "persistence.interval": "5m0s",
            "push.disable-consistency-check": "false",
            "web.enable-admin-api": "false",
            "web.enable-lifecycle": "false",
            "web.external-url": "",
            "web.listen-address": ":9091",
            "web.route-prefix": "",
            "web.telemetry-path": "/metrics"
          },
          "start_time": "2020-03-11T01:44:49.9189758+05:30"
        }
      }
      
      curl -X GET http://pushgateway.example.org:9091/api/v1/metrics | jq
      
      {
        "status": "success",
        "data": [
          {
            "labels": {
              "job": "batch"
            },
            "last_push_successful": true,
            "my_job_duration_seconds": {
              "time_stamp": "2020-03-11T02:02:27.716605811+05:30",
              "type": "GAUGE",
              "help": "Duration of my batch job in seconds",
              "metrics": [
                {
                  "labels": {
                    "instance": "",
                    "job": "batch"
                  },
                  "value": "0.2721322309989773"
                }
              ]
            },
            "push_failure_time_seconds": {
              "time_stamp": "2020-03-11T02:02:27.716605811+05:30",
              "type": "GAUGE",
              "help": "Last Unix time when changing this group in the Pushgateway failed.",
              "metrics": [
                {
                  "labels": {
                    "instance": "",
                    "job": "batch"
                  },
                  "value": "0"
                }
              ]
            },
            "push_time_seconds": {
              "time_stamp": "2020-03-11T02:02:27.716605811+05:30",
              "type": "GAUGE",
              "help": "Last Unix time when changing this group in the Pushgateway succeeded.",
              "metrics": [
                {
                  "labels": {
                    "instance": "",
                    "job": "batch"
                  },
                  "value": "1.5838723477166057e+09"
                }
              ]
            }
          }
        ]
      }
    

Management API

The Pushgateway provides a set of management APIs to ease automation and integration.

  • Available endpoints:

    HTTP method  Path        Description
    GET          /-/healthy  Returns 200 whenever the Pushgateway is healthy.
    GET          /-/ready    Returns 200 whenever the Pushgateway is ready to serve traffic.

  • The following endpoint is disabled by default and can be enabled via the --web.enable-lifecycle flag:

    HTTP method  Path     Description
    PUT          /-/quit  Triggers a graceful shutdown of the Pushgateway.

Alternatively, a graceful shutdown can be triggered by sending a SIGTERM to the Pushgateway process.

Exposed metrics

The Pushgateway exposes the following metrics via the configured --web.telemetry-path (default: /metrics):

  • The pushed metrics.
  • For each pushed group, a metric push_time_seconds and push_failure_time_seconds as explained above.
  • The usual metrics provided by the Prometheus Go client library, i.e.:
    • process_...
    • go_...
    • promhttp_metric_handler_requests_...
  • A number of metrics specific to the Pushgateway, as documented by the example scrape below.
# HELP pushgateway_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which pushgateway was built.
# TYPE pushgateway_build_info gauge
pushgateway_build_info{branch="master",goversion="go1.10.2",revision="8f88ccb0343fc3382f6b93a9d258797dcb15f770",version="0.5.2"} 1
# HELP pushgateway_http_push_duration_seconds HTTP request duration for pushes to the Pushgateway.
# TYPE pushgateway_http_push_duration_seconds summary
pushgateway_http_push_duration_seconds{method="post",quantile="0.1"} 0.000116755
pushgateway_http_push_duration_seconds{method="post",quantile="0.5"} 0.000192608
pushgateway_http_push_duration_seconds{method="post",quantile="0.9"} 0.000327593
pushgateway_http_push_duration_seconds_sum{method="post"} 0.001622878
pushgateway_http_push_duration_seconds_count{method="post"} 8
# HELP pushgateway_http_push_size_bytes HTTP request size for pushes to the Pushgateway.
# TYPE pushgateway_http_push_size_bytes summary
pushgateway_http_push_size_bytes{method="post",quantile="0.1"} 166
pushgateway_http_push_size_bytes{method="post",quantile="0.5"} 182
pushgateway_http_push_size_bytes{method="post",quantile="0.9"} 196
pushgateway_http_push_size_bytes_sum{method="post"} 1450
pushgateway_http_push_size_bytes_count{method="post"} 8
# HELP pushgateway_http_requests_total Total HTTP requests processed by the Pushgateway, excluding scrapes.
# TYPE pushgateway_http_requests_total counter
pushgateway_http_requests_total{code="200",handler="static",method="get"} 5
pushgateway_http_requests_total{code="200",handler="status",method="get"} 8
pushgateway_http_requests_total{code="202",handler="delete",method="delete"} 1
pushgateway_http_requests_total{code="202",handler="push",method="post"} 6
pushgateway_http_requests_total{code="400",handler="push",method="post"} 2

Alerting on failed pushes

It is in general a good idea to alert on push_time_seconds falling much farther behind the current time than expected. This will catch both failed pushes and pushers that are down completely.

To detect failed pushes much earlier, alert on push_failure_time_seconds > push_time_seconds.

Pushes can also fail because they are malformed. In this case, they never reach any metric group and therefore won't set any push_failure_time_seconds metrics. Those pushes are still counted as pushgateway_http_requests_total{code="400",handler="push"}. You can alert on the rate of this metric, but you have to inspect the logs to identify the offending pusher.
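The two alerting approaches above can be expressed as alerting rules; a sketch, in which the rule group name and the 25-hour threshold (chosen for a job expected to push at least daily) are assumptions:

```yaml
groups:
  - name: pushgateway.rules
    rules:
      # The last successful push is older than expected for a daily job.
      - alert: PushgatewayPushTooOld
        expr: time() - push_time_seconds > 25 * 3600
      # The most recent push attempt for a group failed.
      - alert: PushgatewayPushFailed
        expr: push_failure_time_seconds > push_time_seconds
```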

TLS and basic authentication

The Pushgateway supports TLS and basic authentication. This enables better control of the various HTTP endpoints.

To use TLS and/or basic authentication, you need to pass a configuration file using the --web.config.file parameter. The format of the file is described in the exporter-toolkit repository.

Note that the TLS and basic authentication settings affect all HTTP endpoints: /metrics for scraping, the API to push metrics via /metrics/..., the admin API via /api/..., and the web UI.
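As a sketch, a web configuration file enabling both TLS and basic authentication could look like this (the file names and the user are placeholders; the password value must be a bcrypt hash, which can be generated e.g. with htpasswd -nbBC 10):

```yaml
tls_server_config:
  cert_file: server.crt
  key_file: server.key
basic_auth_users:
  # The value must be a bcrypt hash of the password.
  admin: <bcrypt-hash-of-password>
```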

Development

The normal binary embeds the web files in the resources directory. For development purposes, it is handy to have a running binary use those files directly (so that you can see the effect of changes immediately). To switch to direct usage, add -tags dev to the flags entry in .promu.yml, and then make build. Switch back to "normal" mode by reverting the changes to .promu.yml and typing make assets.

Contributing

Relevant style guidelines are the Go Code Review Comments and the Formatting and style section of Peter Bourgon's Go: Best Practices for Production Environments.


pushgateway's Issues

Pushing several timeseries with the same metric name at the same time works incorrect

Consider an example php-script:

<?php
    exec("echo \"test{city_key=\\\"1\\\"} 1\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test{city_key=\\\"2\\\"} 2\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test{city_key=\\\"3\\\"} 3\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test{city_key=\\\"4\\\"} 4\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test{city_key=\\\"5\\\"} 5\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");

This will result that the only last time serie will be scraped by prometheus.

But if we do the same with different metric names, all time series are scrapped:

<?php
    exec("echo \"test_1{city_key=\\\"1\\\"} 1\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test_2{city_key=\\\"2\\\"} 2\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test_3{city_key=\\\"3\\\"} 3\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test_4{city_key=\\\"4\\\"} 4\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");
    exec("echo \"test_5{city_key=\\\"5\\\"} 5\" | /usr/local/bin/curl --max-time 1 --data-binary @- http://localhost:9091/metrics/job/test 2> /dev/null");

Allow persisting to S3

Currently the pushgateway is a stateful service, if it's state was checkpointed to S3 say every 5 minutes it could be treated at a stateless service for the cost of a few cent a month and a small window of potential data loss.

Deal with clash between pushgateway's own metrics and pushed metrics

After upgrading our Java client libraries to 0.0.8, we started getting problems with Prometheus scraping the /metrics page of PushGateway.

An error has occurred:

metric family with duplicate name injected: name:"process_start_time_seconds" help:"Start time of the process, in unixtime." type:GAUGE metric:<label:<name:"job" value:"internal-Gianni-PW-RHODRIK1" > label:<name:"instance" value:"PW-RHODRIK1" > gauge:<value:1.424863714816e+09 > > metric:<label:<name:"job" value:"internal-michal-wintermute" > label:<name:"instance" value:"wintermute" > gauge:<value:1.424803036927e+09 > > metric:<label:<name:"job" value:"internal-chris-PW-RENDER1-PC" > label:<name:"instance" value:"PW-RENDER1-PC" > gauge:<value:1.424863870177e+09 > > metric:<label:<name:"job" value:"internal-callum-PW-CALLUML1" > label:<name:"instance" value:"PW-CALLUML1" > gauge:<value:1.424864927792e+09 > > metric:<label:<name:"job" value:"internal-Virgil-VirgilNotebook" > label:<name:"instance" value:"VirgilNotebook" > gauge:<value:1.424865024272e+09 > > metric:<label:<name:"job" value:"internal-rok-dev-fcaa141b9c88" > label:<name:"instance" value:"dev-fcaa141b9c88" > gauge:<value:1.424861518748e+09 > > 

Eyeballing the main page of PushGateway, everything is fine, the data has values and each job-instance pair has a single value. The error is fairly cryptic :(

Any idea how to debug this further?

debug info: concurrent map read and map write

time="2016-06-29T20:34:30+08:00" level=info msg="Metrics persisted to 'persistence.file'." source="diskmetricstore.go:156"
fatal error: concurrent map read and map write

goroutine 268718 [running]:
runtime.throw(0xa1a5c0, 0x21)
/usr/local/go/src/runtime/panic.go:547 +0x90 fp=0xc820d21550 sp=0xc820d21538
runtime.mapaccess2(0x8ab2c0, 0xc82235d9e0, 0xc825baa7c0, 0x4e7dca, 0x58c000)
/usr/local/go/src/runtime/hashmap.go:343 +0x5a fp=0xc820d21598 sp=0xc820d21550
reflect.mapaccess(0x8ab2c0, 0xc82235d9e0, 0xc825baa7c0, 0xc82235d9e0)
/usr/local/go/src/runtime/hashmap.go:993 +0x35 fp=0xc820d215c8 sp=0xc820d21598
reflect.Value.MapIndex(0x8ab2c0, 0xc825baa398, 0x95, 0x821a60, 0xc825baa7c0, 0x98, 0x0, 0x0, 0x0)
/usr/local/go/src/reflect/value.go:1041 +0x14a fp=0xc820d21650 sp=0xc820d215c8
encoding/gob.(*Encoder).encodeMap(0xc820092000, 0xc820092038, 0x8ab2c0, 0xc825baa398, 0x95, 0xa8ab98, 0xc821490110, 0x0, 0x0)
/usr/local/go/src/encoding/gob/encode.go:381 +0x2cb fp=0xc820d21760 sp=0xc820d21650
encoding/gob.encOpFor.func3(0xc8200d0f30, 0xc8250e9580, 0x8ab2c0, 0xc825baa398, 0x95)
/usr/local/go/src/encoding/gob/encode.go:577 +0x10a fp=0xc820d217d0 sp=0xc820d21760
encoding/gob.(*Encoder).encodeStruct(0xc820092000, 0xc820092038, 0xc82216d3e0, 0x920ea0, 0xc825baa390, 0x99)
/usr/local/go/src/encoding/gob/encode.go:334 +0x3e6 fp=0xc820d218b0 sp=0xc820d217d0
encoding/gob.encOpFor.func4(0x0, 0xc8250e9540, 0x920ea0, 0xc825baa390, 0x99)
/usr/local/go/src/encoding/gob/encode.go:587 +0xb3 fp=0xc820d21900 sp=0xc820d218b0
encoding/gob.encodeReflectValue(0xc8250e9540, 0x920ea0, 0xc825baa390, 0x99, 0xc821490140, 0x0)
/usr/local/go/src/encoding/gob/encode.go:369 +0x147 fp=0xc820d21970 sp=0xc820d21900
encoding/gob.(*Encoder).encodeMap(0xc820092000, 0xc820092038, 0x8ab220, 0xc820129e60, 0x15, 0xa8ac00, 0xc821490140, 0x0, 0x0)
/usr/local/go/src/encoding/gob/encode.go:381 +0x329 fp=0xc820d21a80 sp=0xc820d21970
encoding/gob.encOpFor.func3(0xc8238b9e30, 0xc8250e94c0, 0x8ab220, 0xc820129e60, 0x15)
/usr/local/go/src/encoding/gob/encode.go:577 +0x10a fp=0xc820d21af0 sp=0xc820d21a80
encoding/gob.(*Encoder).encodeSingle(0xc820092000, 0xc820092038, 0xc82216d3c0, 0x8ab220, 0xc820129e60, 0x15)
/usr/local/go/src/encoding/gob/encode.go:307 +0x26c fp=0xc820d21b88 sp=0xc820d21af0
encoding/gob.(*Encoder).encode(0xc820092000, 0xc820092038, 0x8ab220, 0xc820129e60, 0x15, 0xc821035fc0)
/usr/local/go/src/encoding/gob/encode.go:709 +0x216 fp=0xc820d21be8 sp=0xc820d21b88
encoding/gob.(*Encoder).EncodeValue(0xc820092000, 0x8ab220, 0xc820129e60, 0x15, 0x0, 0x0)
/usr/local/go/src/encoding/gob/encoder.go:247 +0x6ed fp=0xc820d21d58 sp=0xc820d21be8
encoding/gob.(*Encoder).Encode(0xc820092000, 0x8ab220, 0xc820129e60, 0x0, 0x0)
/usr/local/go/src/encoding/gob/encoder.go:174 +0x72 fp=0xc820d21da8 sp=0xc820d21d58
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).persist(0xc82014cc30, 0x0, 0x0)
/go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:249 +0x21d fp=0xc820d21ea8 sp=0xc820d21da8
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).loop.func1.1()
/go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:150 +0x7d fp=0xc820d21f90 sp=0xc820d21ea8
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc820d21f98 sp=0xc820d21f90
created by time.goFunc
/usr/local/go/src/time/sleep.go:129 +0x3a

goroutine 1 [IO wait]:
net.runtime_pollWait(0x7f7aa9ab91a8, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8235a7e20, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8235a7e20, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8235a7dc0, 0x0, 0x7f7aa9ab9268, 0xc820a48000)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc82395a168, 0xc821e12c20, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc82395a168, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
net/http.(*Server).Serve(0xc820124180, 0x7f7aa9ab8228, 0xc82395a168, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:2117 +0x129
main.main()
/go/src/github.com/prometheus/pushgateway/main.go:108 +0x126a

goroutine 17 [syscall, 50 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:116 +0x132
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
/usr/local/go/src/os/signal/signal_unix.go:28 +0x37

goroutine 33 [select]:
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).loop(0xc82014cc30, 0x45d964b800)
/go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:166 +0x4a2
created by github.com/prometheus/pushgateway/storage.NewDiskMetricStore
/go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:79 +0x5a8

goroutine 94 [chan receive, 50 minutes]:
main.interruptHandler(0x7f7aa9ab8228, 0xc82395a168)
/go/src/github.com/prometheus/pushgateway/main.go:135 +0x192
created by main.main
/go/src/github.com/prometheus/pushgateway/main.go:107 +0x117b

goroutine 95 [select, 50 minutes, locked to thread]:
runtime.gopark(0xa8bbb0, 0xc82084b728, 0x99e480, 0x6, 0x18, 0x2)
/usr/local/go/src/runtime/proc.go:262 +0x163
runtime.selectgoImpl(0xc82084b728, 0x0, 0x18)
/usr/local/go/src/runtime/select.go:392 +0xa67
runtime.selectgo(0xc82084b728)
/usr/local/go/src/runtime/select.go:215 +0x12
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal1_unix.go:279 +0x358
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1

goroutine 268798 [IO wait]:
net.runtime_pollWait(0x7f7aa9ab8f68, 0x72, 0xc821ab0000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820175f70, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820175f70, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc820175f10, 0xc821ab0000, 0x1000, 0x1000, 0x0, 0x7f7aa9d4f050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc821bac780, 0xc821ab0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
net/http.(*connReader).Read(0xc82143c000, 0xc821ab0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:526 +0x196
bufio.(*Reader).fill(0xc821676420)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Peek(0xc821676420, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:132 +0xcc
net/http.(*conn).readRequest(0xc82008f600, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:702 +0x2e6
net/http.(*conn).serve(0xc82008f600)
/usr/local/go/src/net/http/server.go:1425 +0x947
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2137 +0x44e

goroutine 268701 [runnable]:
net/http.(*conn).serve(0xc82175c000)
/usr/local/go/src/net/http/server.go:1383
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2137 +0x44e

runtime environment:
version: 0.3.0
os: ubuntu 14.04
kernel: 3.2.0-23-generic

Improve debug logging

Currently, we don't use leveled logging at all in pushgateway.

The original idea was that the pushgateway is so simple that all the relevant information would be in the HTTP response.

As it turns out, clients tend to never display the body of an HTTP response with an error code.

For convenience, consider introducing leveled logging so that client-induced errors (invalid payload) are logged at a low level like INFO or DEBUG.
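The proposal amounts to routing log records by whose fault the error is. As a generic illustration (Python purely for brevity; the Pushgateway itself is written in Go, and `handle_push_error` is a made-up name, not an actual function in the code base):

```python
import logging

logger = logging.getLogger("pushgateway")

def handle_push_error(err: Exception, client_induced: bool) -> None:
    """Log push errors at a level matching their origin.

    Client-induced errors (e.g. an invalid payload) are already reported in
    the HTTP response, so they go to DEBUG; genuine server-side problems
    still go to ERROR.
    """
    if client_induced:
        logger.debug("rejected push: %s", err)
    else:
        logger.error("internal error during push: %s", err)
```

Running the server at INFO would then keep the log quiet, while DEBUG would surface every rejected payload for troubleshooting.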

Delete all metrics grouped by job only not working

I have the following data in pushgateway:

job="myjobtest_1464406264" instance="127.0.0.1_574910f84b454"
job="myjobtest_1464406264" instance="127.0.0.1_574910f8491c3"
job="myjobtest_1464406264" instance="127.0.0.1_574910f84ae0d"
job="myjobtest_1464406264" instance="127.0.0.1_574910f829084"

curl -X DELETE http://mypushgateway:9091/metrics/job/myjobtest_1464406264

My expectation is that all four will be deleted, based on the documentation ("Delete all metrics grouped by job only"), since they all have the same job name. But no data was deleted (0/4). Is this a bug?

build info of pushgateway that I used:
branch HEAD
commit add82b8
date 20150902-14:52:33
user bjoern@reasonableresolution
version 0.2.0
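One plausible explanation, consistent with how the Pushgateway groups pushed metrics: groups are keyed by the full set of grouping labels, so a DELETE on /metrics/job/&lt;job&gt; only removes the group whose key is exactly {job=&lt;job&gt;}; groups that additionally carry an instance label are untouched. A toy model of those semantics (not the actual implementation):

```python
# Toy model: Pushgateway metric groups are keyed by the full grouping-label set.
groups = {
    frozenset({("job", "myjobtest_1464406264"),
               ("instance", "127.0.0.1_574910f84b454")}): "metrics...",
    frozenset({("job", "myjobtest_1464406264"),
               ("instance", "127.0.0.1_574910f8491c3")}): "metrics...",
}

def delete_group(grouping_labels: dict) -> int:
    """Delete the group whose label set matches exactly; return count removed."""
    key = frozenset(grouping_labels.items())
    return 1 if groups.pop(key, None) is not None else 0
```

Deleting by job alone matches no stored group (every group also carries an instance label), which would reproduce the 0/4 outcome; deleting with both job and instance in the path removes one group.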

Grouping labels should be validated

While working on the Java client, I managed to create a label name of a=b, which is not valid. The grouping labels should be checked for validity in push.go. It appears that the Prometheus server is also not validating this on ingestion.
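For reference, the Prometheus data model restricts label names to `[a-zA-Z_][a-zA-Z0-9_]*`, with the `__` prefix reserved for internal use. A check of the kind requested for push.go could look like this (a sketch in Python, not the actual Go code):

```python
import re

# Valid Prometheus label names: [a-zA-Z_][a-zA-Z0-9_]*, and names starting
# with "__" are reserved for internal use.
LABEL_NAME_RE = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")

def is_valid_label_name(name: str) -> bool:
    return bool(LABEL_NAME_RE.match(name)) and not name.startswith("__")
```

Under this rule, the `a=b` label name from the report is rejected because `=` is not an allowed character.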

`go get github.com/prometheus/pushgateway` throws an error, but the end result works

More tales from the canary which managed to find all the weird things:

 % go get github.com/prometheus/pushgateway
# github.com/prometheus/pushgateway
../go/src/github.com/prometheus/pushgateway/main.go:77: undefined: Asset
../go/src/github.com/prometheus/pushgateway/main.go:77: undefined: AssetDir
../go/src/github.com/prometheus/pushgateway/main.go:77: undefined: AssetInfo
../go/src/github.com/prometheus/pushgateway/main.go:80: undefined: Asset
% echo $?
2
% cd github.com/prometheus/pushgateway
% make
mkdir -p /home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/prometheus/
ln -s /home/richih/work/go/src/github.com/prometheus/pushgateway /home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/prometheus/pushgateway
GOPATH=/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath /usr/bin/go get -d
touch dependencies-stamp
GOPATH=/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath /usr/bin/go get github.com/jteeuwen/go-bindata/...
/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath/bin/go-bindata -prefix=resources resources/...
GOPATH=/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath /usr/bin/go build -ldflags "-X main.buildVersion=0.2.0 -X main.buildRev=c9149bf -X main.buildBranch=master -X [email protected] -X main.buildDate=20160123-17:33:48" -o pushgateway
% make test
GOPATH=/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath /usr/bin/go get -d
touch dependencies-stamp
GOPATH=/home/richih/work/go/src/github.com/prometheus/pushgateway/.build/gopath /usr/bin/go test ./...
?       _/home/richih/work/go/src/github.com/prometheus/pushgateway     [no test files]
ok      _/home/richih/work/go/src/github.com/prometheus/pushgateway/handler     0.034s
ok      _/home/richih/work/go/src/github.com/prometheus/pushgateway/storage     0.212s

Cross compilation fails due to go-bindata not in GOPATH

When cross-compiling, the Makefile installs go-bindata in .build/gopath/bin/linux_386/go-bindata but tries to call it via .build/gopath/bin/go-bindata:

$ GOOS=linux GOARCH=386 make archive
GOPATH=/home/fish/dev/go/src/github.com/prometheus/pushgateway/.build/gopath /home/fish/dev/go-src/bin/go get github.com/jteeuwen/go-bindata/...
/home/fish/dev/go/src/github.com/prometheus/pushgateway/.build/gopath/bin/go-bindata -prefix=resources resources/...
make: /home/fish/dev/go/src/github.com/prometheus/pushgateway/.build/gopath/bin/go-bindata: Command not found
Makefile:33: recipe for target 'bindata.go' failed
make: *** [bindata.go] Error 127

Allow for multiple levels in path

Currently, the path for pushing is job/<jobname>/<labelname>/<labelvalue>. This allows for two primary keys when updating metrics.

It would be great to allow more than two primary keys by accepting multiple <labelname>/<labelvalue> levels.
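From the client side, the requested scheme could be sketched like this (`push_path` is a hypothetical helper, not part of any client library; values are URL-encoded since label values may contain arbitrary characters):

```python
from urllib.parse import quote

def push_path(job: str, labels: dict) -> str:
    """Build a push URL path with an arbitrary number of grouping-label pairs.

    Sketch of the requested scheme:
    /metrics/job/<job>/<name1>/<value1>/<name2>/<value2>/...
    """
    parts = ["/metrics/job", quote(job, safe="")]
    for name, value in sorted(labels.items()):  # sorted for a stable path
        parts.extend([name, quote(value, safe="")])
    return "/".join(parts)
```

The full label set in the path would then act as the "primary key" identifying the metric group to update.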

error when pushing some metrics

Hi

I'm trying to push metrics from one Linux machine to another (to federate data from multiple Prometheus servers all over the world into one main Prometheus server in the cloud).

I know it's not a good idea and I have read all the suggestions, but I must do it this way according to my security requirements.

For context, below are some examples of the issue I get.

First, I curl a metrics file from the local Prometheus server into a metrics.txt file, and then I push it to the Pushgateway (with PUT):

curl -g 'http://127.0.0.1:9090/federate?match[]={job="prometheus"}' -o metrics.txt
cat metrics.txt | curl -X PUT --data-binary @- http://testpushgatewayip:9091/metrics/job/test/instance/valpuppet01

With this I get a lot of errors (one by one) on some specific metrics.

So I edited the metrics.txt file to test it part by part. A lot of the metrics work fine, but I get a lot of this kind of error (more than 30 so far).

So I was wondering: is there an issue in the Pushgateway code, or is there a setting to just ignore validation and accept all the metrics easily?

Here are some examples of bad metrics:
Metrics:

# TYPE go_memstats_alloc_bytes untyped
go_memstats_alloc_bytes{group="cadvisor",nickname="localcache",job="prometheus",instance="cadvisor:8080",monitor="docker01"} 1.3971984e+07 1482227918852
go_memstats_alloc_bytes{job="prometheus",instance="localhost:9090",group="prometheus",nickname="prometheus",monitor="docker01"} 1.92002016e+08 1482227913696

Error:

An error has occurred:

expected gauge in metric go_memstats_alloc_bytes label:<name:"group" value:"cadvisor" > label:<name:"instance" value:"valpuppet01" > label:<name:"job" value:"test" > label:<name:"monitor" value:"docker01" > label:<name:"nickname" value:"localcache" > untyped:<value:1.3971984e+07 > timestamp_ms:1482227918852 

Metrics:

# TYPE go_gc_duration_seconds untyped
go_gc_duration_seconds{quantile="0.5",nickname="localcache",job="prometheus",instance="cadvisor:8080",group="cadvisor",monitor="docker01"} 0.000990441 1482227918852
go_gc_duration_seconds{job="prometheus",instance="cadvisor:8080",group="cadvisor",nickname="localcache",quantile="0.75",monitor="docker01"} 0.0027681560000000003 1482227918852
go_gc_duration_seconds{quantile="0.25",job="prometheus",instance="cadvisor:8080",group="cadvisor",nickname="localcache",monitor="docker01"} 0.000717085 1482227918852
go_gc_duration_seconds{instance="cadvisor:8080",quantile="0",group="cadvisor",nickname="localcache",job="prometheus",monitor="docker01"} 0.000361441 1482227918852
go_gc_duration_seconds{nickname="prometheus",job="prometheus",instance="localhost:9090",group="prometheus",quantile="0.5",monitor="docker01"} 0.0019649850000000003 1482227913696
go_gc_duration_seconds{quantile="0",nickname="prometheus",job="prometheus",instance="localhost:9090",group="prometheus",monitor="docker01"} 0.000223264 1482227913696
go_gc_duration_seconds{group="prometheus",quantile="0.25",nickname="prometheus",job="prometheus",instance="localhost:9090",monitor="docker01"} 0.0005924060000000001 1482227913696
go_gc_duration_seconds{nickname="localcache",job="prometheus",instance="cadvisor:8080",group="cadvisor",quantile="1",monitor="docker01"} 0.01865201 1482227918852
go_gc_duration_seconds{instance="localhost:9090",group="prometheus",quantile="0.75",nickname="prometheus",job="prometheus",monitor="docker01"} 0.0044686840000000005 1482227913696
go_gc_duration_seconds{quantile="1",group="prometheus",nickname="prometheus",job="prometheus",instance="localhost:9090",monitor="docker01"} 0.045553997000000006 1482227913696

Error:

An error has occurred:

expected summary in metric go_gc_duration_seconds label:<name:"group" value:"cadvisor" > label:<name:"instance" value:"valpuppet01" > label:<name:"job" value:"test" > label:<name:"monitor" value:"docker01" > label:<name:"nickname" value:"localcache" > label:<name:"quantile" value:"0" > untyped:<value:0.000361441 > timestamp_ms:1482227918852 
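One plausible reading of these errors: the /federate output marks everything as untyped, which clashes with the type the Pushgateway already associates with identically named metrics (its own Go runtime metrics like go_memstats_alloc_bytes are gauges and summaries). A crude client-side workaround, hedged as a sketch rather than a recommendation, is to prefix the federated metric names so they cannot clash (`federated_` is an arbitrary prefix):

```python
def prefix_metrics(text: str, prefix: str = "federated_") -> str:
    """Prefix every metric name in a text-format exposition dump.

    Handles "# TYPE name ..." / "# HELP name ..." comment lines and sample
    lines, which always start with the metric name.
    """
    out = []
    for line in text.splitlines():
        if line.startswith("#"):
            parts = line.split(" ", 3)
            if len(parts) >= 3 and parts[1] in ("TYPE", "HELP"):
                parts[2] = prefix + parts[2]
                line = " ".join(parts)
        elif line.strip():
            line = prefix + line  # sample lines begin with the metric name
        out.append(line)
    return "\n".join(out)
```

This avoids the name clash but does nothing about the general advice that federating via the Pushgateway is discouraged.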

Got protobuf decode error when sending protobuf type to pushgateway.

Hi,

I'm trying to post a protobuf type of a counter metric to pushgateway.

Here is the code:

label_pairs = { "status" => "200", "method" => "GET" }

prometheus_metric = ::Io::Prometheus::Client::MetricFamily.new do |metric_family|
  metric_family.name = "test"
  metric_family.type = ::Io::Prometheus::Client::MetricType::COUNTER
  metric_family.metric = [
    ::Io::Prometheus::Client::Metric.new do |metric|

      metric.label = label_pairs.map do |name, value|

        ::Io::Prometheus::Client::LabelPair.new do |label_pair|
          label_pair.name = name
          label_pair.value = value
        end
      end

      metric.counter = ::Io::Prometheus::Client::Counter.new(:value => 1)
    end
  ]
end

EM.run do
  df = ::EventMachine::HttpRequest.new("http://localhost:9091/metrics/job/test_job").post(
    :head => {
      "Content-Type" => "application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited"
    },
    :body => prometheus_metric.encode
  )

  df.callback do |http|
    puts "Response status: #{http.response_header.status}"
    puts http.response
    EM.stop
  end

  df.errback do |e|
    puts "Fail!"
    puts e.error

    EM.stop
  end
end

And this is the error:

Response status: 500
proto: io_prometheus_client.MetricFamily: wiretype end group for non-group

I'm using the protobuf file generated from this proto.

Has anyone ever seen this?
How can I resolve it?

Thank you in advance.

Cheers,
Bang
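A "wiretype end group for non-group" error is characteristic of a framing problem with `encoding=delimited`: each MetricFamily message must be preceded by its length encoded as a base-128 varint, and a bare `encode` typically produces the message without that prefix, so the decoder misreads the first bytes. A sketch of the framing (Python here for illustration; the fix in the Ruby snippet above would be to prepend the same varint):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def delimit(message: bytes) -> bytes:
    """Frame one encoded protobuf message for 'encoding=delimited'."""
    return encode_varint(len(message)) + message
```

The request body is then the concatenation of one delimited frame per MetricFamily.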

Basic setup of the pushgateway

Hi, I built the pushgateway from git and started the binary as described. My understanding was that one uses port 8080 for pushing the metrics via HTTP and port 9091 to let Prometheus pull them from the gateway. In my case, the curl examples are not working, and netstat shows no sign of anything listening on port 8080. I guess I'm doing something fundamentally wrong here. It would be nice if someone could help me out :/

Gob'ing protobufs goes wrong in subtle ways

In contrast to what might or might not have been my assumptions back then, if you use gob to encode protobuf messages, the normal marshaling and unmarshaling of protobuf is sidestepped (unsurprisingly, in hindsight…).

Here are the issues:

gob encounters a proto message and sees it as the struct it is in Go. That struct has pointers as fields, and if a protobuf field is unset, the corresponding pointer is nil. Naively read, the gob documentation tells you that a nil pointer cannot be encoded and will lead to a panic or error. For some reason, however, the implementation tolerates nil pointers in structs and simply ignores them during encoding.

This combines in an interesting fashion with another gob feature: if a field in a struct has its zero value (including the case where a pointer points to a zero value), gob will not encode it (as an optimization, I assume). During decoding of pointer fields in a struct, however, both cases (nil pointer as well as pointer to a zero value) result in a nil pointer, since both were ignored during encoding.

That has the interesting side effect that any protobuf field that happens to hold the Go zero value of its type comes out of the gob decoding as "unset" (in protobuf lingo).

There might be more issues with gob'ing protobuf messages. The original sin here is gob'ing protobuf messages in the first place. This has to be fixed (by introducing an intermediate interface that uses the protobuf Marshal and Unmarshal methods), which will unfortunately change the storage format once more, so some kind of transparent upgrade needs to be provided.
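The zero-value ambiguity described above can be illustrated with a toy encoder in any language: if the encoder drops fields that hold their zero value, the decoder can no longer distinguish "unset" from "set to zero" (a Python analogy, not the actual gob code):

```python
def encode(fields: dict) -> dict:
    """Toy encoder that, like gob, omits zero-valued fields."""
    return {k: v for k, v in fields.items() if v not in (None, 0, 0.0, "")}

def decode(encoded: dict, field: str):
    """Missing fields decode to None ('unset' in protobuf lingo)."""
    return encoded.get(field)
```

A counter explicitly set to 0 round-trips as "unset", which is exactly the class of corruption the issue describes.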

metrics expiry

It'd be nice if there were a configuration parameter to specify how long a metric should "live" inside the pushgateway before it expires.
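The maintainers have declined a built-in TTL (see the non-goals section of the README), so the usual workaround is client-side: track when each group was last pushed and delete stale groups yourself. A sketch of the bookkeeping half (the actual HTTP DELETE against /metrics/job/... is left as a comment):

```python
import time

def stale_groups(last_push: dict, ttl_seconds: float, now: float = None) -> list:
    """Return the grouping keys whose last push is older than the TTL.

    last_push maps a group identifier (e.g. a job/instance pair) to the
    unix time of its most recent push.
    """
    now = time.time() if now is None else now
    return [group for group, ts in last_push.items() if now - ts > ttl_seconds]

# for group in stale_groups(last_push, ttl_seconds=3600):
#     send DELETE to http://pushgateway:9091/metrics/job/<job>/instance/<instance>
```

This keeps the expiry policy where the maintainers argue it belongs: with whoever knows why the metrics went stale.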

Add documentation for adding release events

In various graphing tools (such as Grafana), it is useful to tag graphs with events, such as a release deployment. This way you can quickly see why a particular metric has suddenly jumped up / down.

The prometheus push gateway is an ideal tool for pushing a single event on a successful deployment, as the deployment is typically triggered from an ephemeral single-run task on a build agent.

With the available metrics in prometheus, it is unclear what the best method of storing this data is, and how exactly to push into the push gateway in such a way to allow annotation of events on metric graphs.

Right now, we're trying to push a metric as a counter, but this is appearing every minute in our graphing, presumably due to the push gateway being scraped every minute.

It would be nice if this (fairly common) use case was covered, even briefly, in the documentation. (even if the answer was: "We don't have great support for this yet, but you can use xyz in the meantime")
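One pattern that avoids the "counter appears every minute" problem, offered as an option rather than official guidance: push a single gauge set to the deployment's unix timestamp. Scraping it every minute then yields a flat line, and the Grafana annotation query looks for changes in the value rather than the metric's mere presence. The metric and job names below are arbitrary:

```python
import time

def deploy_payload(version: str, now: float = None) -> str:
    """Text-format exposition body recording the time of the last deploy."""
    ts = time.time() if now is None else now
    return (
        '# TYPE last_deploy_timestamp_seconds gauge\n'
        'last_deploy_timestamp_seconds{version="%s"} %d\n' % (version, ts)
    )

# POST this payload to http://pushgateway:9091/metrics/job/deployments
```

Each deploy overwrites the previous value, so the time series steps up exactly once per release.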

concurrent map read and map write

I use the pushgateway, and once every 2-3 days it fails with a panic:

fatal error: concurrent map read and map write

goroutine 1195432 [running]:
runtime.throw(0xa235e0, 0x21)
        /usr/local/go/src/runtime/panic.go:547 +0x97 fp=0xc082b3f550 sp=0xc082b3f538
runtime.mapaccess2(0x8b3a60, 0xc0824e9fb0, 0xc082c57310, 0x20d43d, 0x244000)
        /usr/local/go/src/runtime/hashmap.go:343 +0x61 fp=0xc082b3f598 sp=0xc082b3f550
reflect.mapaccess(0x8b3a60, 0xc0824e9fb0, 0xc082c57310, 0xc0824e9fb0)
        /usr/local/go/src/runtime/hashmap.go:993 +0x3c fp=0xc082b3f5c8 sp=0xc082b3f598
reflect.Value.MapIndex(0x8b3a60, 0xc082c56db8, 0x95, 0x82b200, 0xc082c57310, 0x98, 0x0, 0x0, 0x0)
        /usr/local/go/src/reflect/value.go:1041 +0x151 fp=0xc082b3f650 sp=0xc082b3f5c8
encoding/gob.(*Encoder).encodeMap(0xc082082000, 0xc082082038, 0x8b3a60, 0xc082c56db8, 0x95, 0xa936f0, 0xc082003be0, 0x0, 0x0)
        /usr/local/go/src/encoding/gob/encode.go:381 +0x2d2 fp=0xc082b3f760 sp=0xc082b3f650
encoding/gob.encOpFor.func3(0xc0820dcf30, 0xc08304d980, 0x8b3a60, 0xc082c56db8, 0x95)
        /usr/local/go/src/encoding/gob/encode.go:577 +0x111 fp=0xc082b3f7d0 sp=0xc082b3f760
encoding/gob.(*Encoder).encodeStruct(0xc082082000, 0xc082082038, 0xc082a24d20, 0x928560, 0xc082c56db0, 0x99)
        /usr/local/go/src/encoding/gob/encode.go:334 +0x3ed fp=0xc082b3f8b0 sp=0xc082b3f7d0
encoding/gob.encOpFor.func4(0x0, 0xc08304d940, 0x928560, 0xc082c56db0, 0x99)
        /usr/local/go/src/encoding/gob/encode.go:587 +0xba fp=0xc082b3f900 sp=0xc082b3f8b0
encoding/gob.encodeReflectValue(0xc08304d940, 0x928560, 0xc082c56db0, 0x99, 0xc082003c10, 0x0)
        /usr/local/go/src/encoding/gob/encode.go:369 +0x14e fp=0xc082b3f970 sp=0xc082b3f900
encoding/gob.(*Encoder).encodeMap(0xc082082000, 0xc082082038, 0x8b39c0, 0xc0821accf0, 0x15, 0xa93758, 0xc082003c10, 0x0, 0x0)
        /usr/local/go/src/encoding/gob/encode.go:381 +0x330 fp=0xc082b3fa80 sp=0xc082b3f970
encoding/gob.encOpFor.func3(0xc0826dce70, 0xc08304d900, 0x8b39c0, 0xc0821accf0, 0x15)
        /usr/local/go/src/encoding/gob/encode.go:577 +0x111 fp=0xc082b3faf0 sp=0xc082b3fa80
encoding/gob.(*Encoder).encodeSingle(0xc082082000, 0xc082082038, 0xc082a24d00, 0x8b39c0, 0xc0821accf0, 0x15)
        /usr/local/go/src/encoding/gob/encode.go:307 +0x273 fp=0xc082b3fb88 sp=0xc082b3faf0
encoding/gob.(*Encoder).encode(0xc082082000, 0xc082082038, 0x8b39c0, 0xc0821accf0, 0x15, 0xc082394480)
        /usr/local/go/src/encoding/gob/encode.go:709 +0x21d fp=0xc082b3fbe8 sp=0xc082b3fb88
encoding/gob.(*Encoder).EncodeValue(0xc082082000, 0x8b39c0, 0xc0821accf0, 0x15, 0x0, 0x0)
        /usr/local/go/src/encoding/gob/encoder.go:247 +0x6f4 fp=0xc082b3fd58 sp=0xc082b3fbe8
encoding/gob.(*Encoder).Encode(0xc082082000, 0x8b39c0, 0xc0821accf0, 0x0, 0x0)
        /usr/local/go/src/encoding/gob/encoder.go:174 +0x79 fp=0xc082b3fda8 sp=0xc082b3fd58
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).persist(0xc0821ab3b0, 0x0, 0x0)
        /go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:249 +0x224 fp=0xc082b3fea8 sp=0xc082b3fda8
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).loop.func1.1()
        /go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:150 +0x84 fp=0xc082b3ff90 sp=0xc082b3fea8
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc082b3ff98 sp=0xc082b3ff90
created by time.goFunc
        /usr/local/go/src/time/sleep.go:129 +0x41

goroutine 1 [IO wait]:
net.runtime_pollWait(0x379fe8, 0x72, 0xc0821221d8)
        /usr/local/go/src/runtime/netpoll.go:160 +0x67
net.(*pollDesc).Wait(0xc08248a770, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:73 +0x41
net.(*ioSrv).ExecIO(0xc0821366f8, 0xc08248a660, 0x9aa6b0, 0x8, 0xc082e85800, 0xc082d35800, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:183 +0x177
net.(*netFD).acceptOne(0xc08248a600, 0xc08212c2a0, 0x2, 0x2, 0xc08248a660, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:583 +0x26c
net.(*netFD).accept(0xc08248a600, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:613 +0x173
net.(*TCPListener).AcceptTCP(0xc082136708, 0xc082be7c10, 0x0, 0x0)
        /usr/local/go/src/net/tcpsock_posix.go:254 +0x54
net.(*TCPListener).Accept(0xc082136708, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/tcpsock_posix.go:264 +0x44
net/http.(*Server).Serve(0xc08213a380, 0x379068, 0xc082136708, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2117 +0x130
main.main()
        /go/src/github.com/prometheus/pushgateway/main.go:108 +0x1271

goroutine 6 [select]:
github.com/prometheus/pushgateway/storage.(*DiskMetricStore).loop(0xc0821ab3b0, 0x45d964b800)
        /go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:166 +0x4a9
created by github.com/prometheus/pushgateway/storage.NewDiskMetricStore
        /go/src/github.com/prometheus/pushgateway/storage/diskmetricstore.go:79 +0x5af

goroutine 17 [syscall, 2150 minutes]:
os/signal.signal_recv(0x0)
        /usr/local/go/src/runtime/sigqueue.go:116 +0x139
os/signal.loop()
        /usr/local/go/src/os/signal/signal_unix.go:22 +0x1f
created by os/signal.init.1
        /usr/local/go/src/os/signal/signal_unix.go:28 +0x3e

goroutine 131 [chan receive, 2150 minutes]:
main.interruptHandler(0x379068, 0xc082136708)
        /go/src/github.com/prometheus/pushgateway/main.go:135 +0x199
created by main.main
        /go/src/github.com/prometheus/pushgateway/main.go:107 +0x1182

goroutine 149 [IO wait]:
net.runtime_pollWait(0x379e68, 0x72, 0xc082691400)
        /usr/local/go/src/runtime/netpoll.go:160 +0x67
net.(*pollDesc).Wait(0xc0824771f0, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:73 +0x41
net.(*ioSrv).ExecIO(0xc0821366f8, 0xc0824770e0, 0x9a1ac0, 0x7, 0xa93fc0, 0x23c, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:183 +0x177
net.(*netFD).Read(0xc082477080, 0xc082839000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:482 +0x17e
net.(*conn).Read(0xc082058040, 0xc082839000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:172 +0xeb
net/http.(*connReader).Read(0xc082b50240, 0xc082839000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:526 +0x19d
bufio.(*Reader).fill(0xc082ac0300)
        /usr/local/go/src/bufio/bufio.go:97 +0x1f0
bufio.(*Reader).Peek(0xc082ac0300, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xd3
net/http.(*conn).readRequest(0xc0820a0280, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:702 +0x2ed
net/http.(*conn).serve(0xc0820a0280)
        /usr/local/go/src/net/http/server.go:1425 +0x94e
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2137 +0x455

goroutine 296 [IO wait]:
net.runtime_pollWait(0x379f28, 0x72, 0xc082f890e0)
        /usr/local/go/src/runtime/netpoll.go:160 +0x67
net.(*pollDesc).Wait(0xc08249aa70, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:73 +0x41
net.(*ioSrv).ExecIO(0xc0821366f8, 0xc08249a960, 0x9a1ac0, 0x7, 0xa93fc0, 0x460, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:183 +0x177
net.(*netFD).Read(0xc08249a900, 0xc08212b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:482 +0x17e
net.(*conn).Read(0xc082f707a8, 0xc08212b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:172 +0xeb
net/http.(*connReader).Read(0xc0821322c0, 0xc08212b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:526 +0x19d
bufio.(*Reader).fill(0xc082cfe180)
        /usr/local/go/src/bufio/bufio.go:97 +0x1f0
bufio.(*Reader).Peek(0xc082cfe180, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xd3
net/http.(*conn).readRequest(0xc082bf2180, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:702 +0x2ed
net/http.(*conn).serve(0xc082bf2180)
        /usr/local/go/src/net/http/server.go:1425 +0x94e
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2137 +0x455

goroutine 1071460 [IO wait]:
net.runtime_pollWait(0x379c28, 0x72, 0xc08277b5a0)
        /usr/local/go/src/runtime/netpoll.go:160 +0x67
net.(*pollDesc).Wait(0xc082d35af0, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:73 +0x41
net.(*ioSrv).ExecIO(0xc0821366f8, 0xc082d359e0, 0x9a1ac0, 0x7, 0xa93fc0, 0x768, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:183 +0x177
net.(*netFD).Read(0xc082d35980, 0xc083108000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:482 +0x17e
net.(*conn).Read(0xc082f70000, 0xc083108000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:172 +0xeb
net/http.(*connReader).Read(0xc082d1e080, 0xc083108000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:526 +0x19d
bufio.(*Reader).fill(0xc082390000)
        /usr/local/go/src/bufio/bufio.go:97 +0x1f0
bufio.(*Reader).Peek(0xc082390000, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xd3
net/http.(*conn).readRequest(0xc0820a0080, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:702 +0x2ed
net/http.(*conn).serve(0xc0820a0080)
        /usr/local/go/src/net/http/server.go:1425 +0x94e
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2137 +0x455

goroutine 866710 [IO wait]:
net.runtime_pollWait(0x379da8, 0x72, 0xc082976de8)
        /usr/local/go/src/runtime/netpoll.go:160 +0x67
net.(*pollDesc).Wait(0xc0839677f0, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:73 +0x41
net.(*ioSrv).ExecIO(0xc0821366f8, 0xc0839676e0, 0x9a1ac0, 0x7, 0xa93fc0, 0x784, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:183 +0x177
net.(*netFD).Read(0xc083967680, 0xc082bc8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/fd_windows.go:482 +0x17e
net.(*conn).Read(0xc082bc4008, 0xc082bc8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:172 +0xeb
net/http.(*connReader).Read(0xc08260c060, 0xc082bc8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:526 +0x19d
bufio.(*Reader).fill(0xc083958000)
        /usr/local/go/src/bufio/bufio.go:97 +0x1f0
bufio.(*Reader).Peek(0xc083958000, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xd3
net/http.(*conn).readRequest(0xc082bf2080, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:702 +0x2ed
net/http.(*conn).serve(0xc082bf2080)
        /usr/local/go/src/net/http/server.go:1425 +0x94e
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2137 +0x455

Guard state file correctness with a checksum

I'm not sure how I ended up in this situation, so I'll describe what I observed:

First, kernel OOM-killer activity on a machine dedicated to Prometheus, the pushgateway, and a couple of other exporters. It appears that the pushgateway is using several hundred MB of memory (a similar setup on another host, with roughly the same amount of events pushed to it, uses around 10MB).

Then taking a look at the /metrics endpoint, almost 900'000 lines returned, as opposed to around 500 on a "healthy" instance. Among them, a huge number of duplicates:

$ curl -s 0:9091/metrics | egrep '^conplicity_backupExitCode{instance="geoacorda-int-application-0",job="conplicity",volume="logging_agent"} 0$' | wc -l
2000

Next observation: the state file is around 80MB in size instead of the usual 50kB-60kB. Peeking at the end of this corrupt state file, it doesn't look the same as a healthy one, so I wonder if it might have been truncated by a transient disk-full condition or something:

$ cat -v pushgateway.state | tail
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^A^C^A^Hinstance^A^[geoacorda-int-application-1^@^A^Cjob^A
conplicity^@^A^Fvolume^A^Mlogging_agent^@^A^@^@^@^@^@

This is version 0.3.1 from the Docker image on Docker Hub (same symptom with 0.3.0 and :latest), started with: /bin/pushgateway -persistence.file=/state/pushgateway.state. I have the corrupt statefile available if you want to take a look at it. Reproducing the problem is as easy as loading this statefile into the pushgateway.

Thanks!

Metrics with the same name but different labels override each other

If I push the same metric to the Pushgateway with different values in its labels, the pushes override each other, using POST (and PUT): it seems that the metric is replaced no matter whether replace is set to false or true.
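For what it's worth, this is the expected behavior when the differing labels are not part of the grouping key: a push replaces everything under that key. Moving the distinguishing label into the URL keeps the groups separate. A sketch, assuming a Pushgateway listening on localhost:9091 and a version that supports label-based grouping keys (the curl calls are commented out since they need a running gateway):

```shell
# Two pushes to the same grouping key {job="some_job"}: the second replaces
# the first, even though the shard labels differ inside the payload.
printf 'some_metric{shard="a"} 1\n' > push_a.prom
printf 'some_metric{shard="b"} 2\n' > push_b.prom
# curl --data-binary @push_a.prom http://localhost:9091/metrics/job/some_job
# curl --data-binary @push_b.prom http://localhost:9091/metrics/job/some_job

# Moving the distinguishing label into the grouping key keeps both series:
# curl --data-binary @push_a.prom http://localhost:9091/metrics/job/some_job/shard/a
# curl --data-binary @push_b.prom http://localhost:9091/metrics/job/some_job/shard/b
```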

Document the "distributed counter" non-goal

Quite often, people think of the Pushgateway as a distributed counter. Then they realize it's not doing that, and file a feature request.

We should make it clear in the README.md that the PGW is not a distributed counter, and that this functionality might be implemented as a stand-alone tool or (much less likely) as an additional feature in the PGW.

Unable to run make

Attempting to run make on Ubuntu 14.04 or in an OS X prompt, I get the following error:

GOROOT=/home/vagrant/pushgateway/.deps/go GOPATH=/home/vagrant/pushgateway/.deps/gopath /home/vagrant/pushgateway/.deps/go/bin/go get -d
package bitbucket.org/ww/goautoneg
    imports code.google.com/p/goprotobuf/proto: unable to detect version control system for code.google.com/ path
make: *** [dependencies] Error 1
vagrant@vagrant-ubuntu-trusty-64:~/pushgateway$

I'm unfamiliar with Go, but I attempted to install the Go package from golang.org, which also didn't seem to help. Base Prometheus, however, installed fine on both my Vagrant machine and my MBP.

Allow timeout for metrics

In some scenarios, a client will stop pushing metrics because it has gone away. Currently, every metric needs to be deleted explicitly, or the last value will stick around forever. It would be good to be able to configure a timeout after which a metric is considered stale and removed. I think it would be best if the client could specify this.

Feature request: Gzip support

Hi.

Right now the pushgateway can't accept requests with Content-Encoding: gzip.

It would be nice if the pushgateway could accept gzipped pushes to reduce traffic.
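A client-side sketch of what such a gzipped push would look like. The curl call is commented out since it assumes a gateway at localhost:9091 that honors Content-Encoding: gzip, which is exactly the support this request asks for:

```shell
# Compress the metrics payload before sending it.
printf 'some_metric 3.14\n' > payload.prom
gzip -c payload.prom > payload.prom.gz

# The push itself would then declare the encoding:
# curl -H 'Content-Encoding: gzip' --data-binary @payload.prom.gz \
#   http://localhost:9091/metrics/job/some_job
```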

Dockerfile broken

[...]
Running hooks in /etc/ca-certificates/update.d....done.
 ---> 9c2a82406edb
Removing intermediate container 926c3b659c6f
Step 6 : ADD . /pushgateway
 ---> c061737f1fe8
Removing intermediate container 325ba4628026
Step 7 : RUN make bin/pushgateway
 ---> Running in 35d739a885ff
make: *** No rule to make target `bin/pushgateway'.  Stop.
INFO[0088] The command [/bin/sh -c make bin/pushgateway] returned a non-zero code: 2

Too many open files

I'm running its Docker image on ECS, and this error happens periodically:

2017/01/24 16:08:59 http: Accept error: accept tcp [::]:9091: accept4: too many open files; retrying in 1s
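Each open connection holds a file descriptor, so besides checking for a connection leak, raising the process's descriptor limit is the usual first mitigation. A sketch that raises the soft limit to the hard limit for the current shell (on ECS, the equivalent knob would be the task definition's ulimits setting, if I recall correctly):

```shell
# Show the current soft and hard limits for open files.
ulimit -Sn
ulimit -Hn

# Raise the soft limit up to the hard limit (always permitted
# without extra privileges).
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn
```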

Persisting does not happen properly

Hi Guys,

I have tried file-based persistence, but it does not seem to happen properly, even though the log says it is persisting:

INFO[96978] Done checkpointing in-memory metrics and chunks in 278.893052ms. file=persistence.go line=563
INFO[97278] Checkpointing in-memory metrics and chunks... file=persistence.go line=539
INFO[97278] Done checkpointing in-memory metrics and chunks in 235.068046ms. file=persistence.go line=563

Start command:

./pushgateway -persistence.file file -persistence.interval 1m0s

When I open the actual web page, I can see the data.

Please let me know if you need any further details, logs, etc.

-web.telemetry-path not honored for pushes

Hi,

I'm using the pushgateway behind a proxy that serves it under /push/.

For this I run:

/srv/pushgateway -web.telemetry-path=/push/metrics

This works just fine:

$ wget -O - http://localhost:9091/metrics | wc -l
2016-01-26 22:07:15 ERROR 404: Not Found.
0
$ wget -O - http://localhost:9091/push/metrics | wc -l
2016-01-26 22:07:21 (1.20 GB/s) - written to stdout [6777/6777]
118

But pushing does not work the same way:

$ echo "some_metric 3.14" | curl --data-binary @-  http://localhost:9091/metrics/job/some_job
$
$ echo "some_metric 3.14" | curl --data-binary @-  http://localhost:9091/push/metrics/job/some_job
404 page not found
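A workaround while the push handlers ignore -web.telemetry-path is to leave the gateway on its default paths and strip the prefix at the proxy instead. A sketch assuming nginx in front of the gateway:

```nginx
location /push/ {
    # The trailing slash on proxy_pass replaces the matched /push/ prefix,
    # so /push/metrics/job/x reaches the gateway as /metrics/job/x.
    proxy_pass http://127.0.0.1:9091/;
}
```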

Mark in the web UI if metrics are inconsistent, and consider better ways to represent it in /metrics

The JavaScript Prometheus client library incorrectly registers process_cpu_seconds_total as a gauge (because it wants to set it to a specific value, and the counter type doesn't support that).

Pushing this metric to the Pushgateway causes it to crash. The Pushgateway's /metrics then returns

expected counter in metric process_cpu_seconds_total label:<name:"instance" value:"tristans:8181" > label:<name:"job" value:"tristan-test" > gauge:<value:0.128 >

This is also the case with metrics other than those used by the Pushgateway itself, e.g. getting two jobs to post different types for a test metric, one posting a gauge and the other a counter, causes the Pushgateway's /metrics to return:

expected counter in metric metric_test label:<name:"instance" value:"tristans:8182" > label:<name:"job" value:"tristan-test" > gauge:<value:2 >

This should probably either be handled, or the erroneous metric should be rejected more gracefully when it is posted.

cloning + making + running = not working

Note! complete golang noob speaking...

Perhaps there is something more to it, but in that case I think that the readme needs some work :-)

I can make and run it, but the result is not pretty, and the logs are being filled with entries like the one below:

2016/01/13 16:07:53 http: panic serving [::1]:51642: runtime error: invalid memory address or nil pointer dereference
goroutine 102 [running]:
net/http.(*conn).serve.func1(0xc82049e2c0, 0xe00000, 0xc82049c020)
    /usr/local/go/src/net/http/server.go:1287 +0xb5
github.com/elazarl/go-bindata-assetfs.(*AssetFS).Open(0xc8203b7500, 0xc820450035, 0x13, 0x0, 0x0, 0x0, 0x0)
    /Users/jesblo/dev/gopath/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/elazarl/go-bindata-assetfs/assetfs.go:148 +0x22e
net/http.serveFile(0xba4758, 0xc820030000, 0xc8204d4000, 0xb64ce0, 0xc8203b7500, 0xc820450034, 0x14, 0x1)
    /usr/local/go/src/net/http/fs.go:359 +0x158
net/http.(*fileHandler).ServeHTTP(0xc82034b3b0, 0xba4758, 0xc820030000, 0xc8204d4000)
    /usr/local/go/src/net/http/fs.go:483 +0x19c
net/http.(Handler).ServeHTTP-fm(0xba4758, 0xc820030000, 0xc8204d4000)
    /Users/jesblo/dev/gopath/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/prometheus/client_golang/prometheus/http.go:61 +0x50
github.com/prometheus/client_golang/prometheus.InstrumentHandlerFuncWithOpts.func1(0xba4680, 0xc8201f2000, 0xc8204d4000)
    /Users/jesblo/dev/gopath/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/prometheus/client_golang/prometheus/http.go:158 +0x335
net/http.HandlerFunc.ServeHTTP(0xc820273e00, 0xba4680, 0xc8201f2000, 0xc8204d4000)
    /usr/local/go/src/net/http/server.go:1422 +0x3a
github.com/julienschmidt/httprouter.(*Router).Handler.func1(0xba4680, 0xc8201f2000, 0xc8204d4000, 0xc82000e220, 0x1, 0x1)
    /Users/jesblo/dev/gopath/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/julienschmidt/httprouter/router.go:237 +0x50
github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc820011c80, 0xba4680, 0xc8201f2000, 0xc8204d4000)
    /Users/jesblo/dev/gopath/src/github.com/prometheus/pushgateway/.build/gopath/src/github.com/julienschmidt/httprouter/router.go:299 +0x193
net/http.serverHandler.ServeHTTP(0xc82001bec0, 0xba4680, 0xc8201f2000, 0xc8204d4000)
    /usr/local/go/src/net/http/server.go:1862 +0x19e
net/http.(*conn).serve(0xc82049e2c0)
    /usr/local/go/src/net/http/server.go:1361 +0xbee
created by net/http.(*Server).Serve
    /usr/local/go/src/net/http/server.go:1910 +0x3f6

If I run godep save -r ./... and then make, everything seems to work as expected though... Is this expected?

Does not discard duplicate data

I tried pushing a bunch of historical data to pushgw:

# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{method="GET",quantile="0.5",source="web.1",status="200"} 3000 1438224000000
http_request_duration_microseconds{method="GET",quantile="0.9",source="web.1",status="200"} 3000 1438224000000
http_request_duration_microseconds{method="GET",quantile="0.99",source="web.1",status="200"} 3000 1438224000000
http_request_duration_microseconds_count{method="GET",source="web.1",status="200"} 1 1438224000000
http_request_duration_microseconds_sum{method="GET",source="web.1",status="200"} 3000 1438224000000

http_request_duration_microseconds{method="GET",quantile="0.5",source="web.1",status="200"} 4000 1438224720000
http_request_duration_microseconds{method="GET",quantile="0.9",source="web.1",status="200"} 4000 1438224720000
http_request_duration_microseconds{method="GET",quantile="0.99",source="web.1",status="200"} 4000 1438224720000
http_request_duration_microseconds_count{method="GET",source="web.1",status="200"} 1 1438224720000
http_request_duration_microseconds_sum{method="GET",source="web.1",status="200"} 4000 1438224720000

I get the same amount of output back (fetching http://gw:9091/metrics), but with all the timestamps set to the latest known for each series. Except that the _count and _sum have been deduplicated:

http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.5"} 3000 1438224720000
http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.9"} 3000 1438224720000
http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.99"} 3000 1438224720000

http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.5"} 4000 1438224720000
http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.9"} 4000 1438224720000
http_request_duration_microseconds{method="GET",source="web.1",status="200",quantile="0.99"} 4000 1438224720000

http_request_duration_microseconds_sum{method="GET",source="web.1",status="200"} 4000 1438224720000
http_request_duration_microseconds_count{method="GET",source="web.1",status="200"} 1 1438224720000

(Extra linebreaks are mine.)

Provide a HTTP Health Check

Kubernetes, Marathon, Consul, and others require health checks, often HTTP-based ones. While Prometheus and Alertmanager can somewhat use their status pages for this, that is quite painful with the pushgateway because its page contains all the metrics that have been pushed to it.

Could we consider adding either a separate, lightweight status page or a dedicated /health endpoint?

Add arbitrary grouping.

As discussed in #18, arbitrary grouping should be possible.

The job label will always be part of the group identifier. Any other labels can be added to it. All metrics pushed as part of a group will get the grouping labels forcefully set.

To push with arbitrary grouping, the REST API needs to be extended to something like:

/metric/jobs/some_job/groupby/{instance="foo",shard="bar"}

The old form /metric/jobs/some_job/instances/some_instance is just a special case of the above.

The DELETE semantics will change, as a DELETE will always affect one whole group. There is no hierarchy anymore; every group is top-level, no matter which labels define it. The UI will be changed accordingly.

Pushgateway 500 on histograms

It seems that the latest Pushgateway from the Docker image prometheus/pushgateway:latest doesn't support histograms. When you push data containing histograms, it barfs with an error 500. Additionally, the error message is not very helpful, and there are no logs from the Pushgateway.

Docker container versioning not implemented

Hi,
I am using the pushgateway and would like to use the official prom/pushgateway container from Docker Hub.
The versioning system is not implemented, and currently prom/pushgateway:latest contains pushgateway 0.2.0. Is it possible to push containers with a sane versioning scheme?

Inconsistent metric type is not validated on push, and the value is ignored during scrape

Hi,

I've observed the following behavior with version 0.3.1 of the push gateway: if a metric is pushed to the gateway and its type changes, the gateway allows the HTTP call with the inconsistent type to complete, but the newer value does not propagate to the Prometheus server, and this error message is dumped to the log:

2016-12-14_18:29:19.58876 time="2016-12-14T18:29:19Z" level=warning msg="Metric families 'name:\"emr_job_execution_duration_s\" type:UNTYPED metric:<label:<name:\"instance\" value:\"local\" > label:<name:\"job\" value:\"emr_batch\" > label:<name:\"script\" value:\"foo\" > label:<name:\"status\" value:\"0\" > untyped:<value:1000 > timestamp_ms:1481739244000 > ' and 'name:\"emr_job_execution_duration_s\" type:GAUGE metric:<label:<name:\"instance\" value:\"\" > label:<name:\"job\" value:\"emr\" > label:<name:\"script\" value:\"blah\" > gauge:<value:500 > timestamp_ms:1481740382000 > metric:<label:<name:\"instance\" value:\"localhost.localdomain\" > label:<name:\"job\" value:\"emr\" > label:<name:\"script\" value:\"blah\" > gauge:<value:200 > timestamp_ms:1481740310000 > ' are inconsistent, help and type of the latter will have priority. This is bad. Fix your pushed metrics!" source="diskmetricstore.go:114"

Surprisingly, this contradicts how the Prometheus server itself behaves: it allows metric type changes!

This also leaves people scratching their heads: why did my metric not appear in Prometheus?

I hope you'll be able to look into resolving the issue.

With regards, Ignat Zapolsky

Metrics from different instances replace each other

We've got an app on two servers. Each of them pushes data to the pushgateway. We found out that the metrics replace each other. It looks like the instance should be included in the grouping key, but it's not clear how to do that.

Can you clarify this problem? We use the Java simple API.
