
statsd_exporter's Introduction

statsd exporter

statsd_exporter receives StatsD-style metrics and exports them as Prometheus metrics.

Overview

The StatsD exporter is a drop-in replacement for StatsD. This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.

We recommend using the exporter only as an intermediate solution, and switching to native Prometheus instrumentation in the long term. While it is common to run centralized StatsD servers, the exporter works best as a sidecar.

Transitioning from an existing StatsD setup

The relay feature allows for a gradual transition.

Introduce the exporter by adding it as a sidecar alongside the application instances. In Kubernetes, this means adding it to the pod. Use the --statsd.relay.address flag to forward metrics to your existing StatsD UDP endpoint. Relaying forwards statsd events unmodified, preserving the original metric name and tags in any format.

+-------------+    +----------+                  +------------+
| Application +--->| Exporter +----------------->|  StatsD    |
+-------------+    +----------+                  +------------+
                          ^
                          |                      +------------+
                          +----------------------+ Prometheus |
                                                 +------------+
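For example, a sidecar invocation during the transition might look like this (addresses are illustrative; statsd.example.internal:8125 stands in for your existing StatsD endpoint):

statsd_exporter --statsd.listen-udp=:9125 \
        --statsd.relay.address=statsd.example.internal:8125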

Relaying from StatsD

To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a statsd_exporter process.

+----------+                         +-------------------+                        +--------------+
|  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
+----------+                         +-------------------+                        +--------------+

This allows trying out the exporter with minimal effort, but does not provide the per-instance metrics of the sidecar pattern.

Tagging Extensions

The exporter supports Librato, InfluxDB, DogStatsD, and SignalFX-style tags, which will be converted into Prometheus labels.

Librato-style tags must be appended to the metric name with a delimiting #, like so:

metric.name#tagName=val,tag2Name=val2:0|c

See the statsd-librato-backend README for a more complete description.

InfluxDB-style tags must be appended to the metric name with a delimiting comma, like so:

metric.name,tagName=val,tag2Name=val2:0|c

See this InfluxDB blog post for a larger overview.

DogStatsD-style tags are appended as a |#-delimited section at the end of the metric, like so:

metric.name:0|c|#tagName:val,tag2Name:val2

See Tags in the DogStatsD documentation for the concept description and Datagram Format. If you encounter problems, note that this tagging style is incompatible with the original statsd implementation.

For SignalFX dimensions, add the tags to the metric name in square brackets, like so:

metric.name[tagName=val,tag2Name=val2]:0|c

Be aware: If you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the exporter will consider this an error and the behavior is undefined. Also, tags without values (#some_tag) are not supported and will be ignored.

The exporter parses all tagging formats by default, but individual tagging formats can be disabled with command line flags:

--no-statsd.parse-dogstatsd-tags
--no-statsd.parse-influxdb-tags
--no-statsd.parse-librato-tags
--no-statsd.parse-signalfx-tags
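For example, to keep the other formats enabled but ignore InfluxDB-style comma tags (a minimal sketch; combine flags as needed):

statsd_exporter --no-statsd.parse-influxdb-tags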

By default, labels explicitly specified in configuration take precedence over labels from tags. To set the label from the statsd event tag, use honor_labels.

Building and Running

NOTE: Version 0.7.0 switched to the kingpin flags library. With this change, flag behaviour is POSIX-ish:

  • long flags start with two dashes (--version)
  • boolean long flags are disabled by prefixing with no (--flag-name is true, --no-flag-name is false)
  • multiple short flags can be combined (but there currently is only one)
  • flag processing stops at the first --
  • see --help for a full list of flags

Lifecycle API

The statsd_exporter has an optional lifecycle API (disabled by default) that can be used to reload or quit the exporter by sending a PUT or POST request to the /-/reload or /-/quit endpoints.
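For example, assuming the exporter is listening on the default web address and the lifecycle API has been enabled:

curl -X POST http://localhost:9102/-/reload
curl -X POST http://localhost:9102/-/quit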

Relay

The statsd_exporter has an optional mode that will buffer and relay incoming statsd lines to a remote server. This is useful to "tee" the data when migrating to using the exporter. The relay will flush the buffer at least once per second to avoid delaying delivery of metrics.

Tests

$ go test

Metric Mapping and Configuration

The statsd_exporter can be configured to translate specific dot-separated StatsD metrics into labeled Prometheus metrics via a simple mapping language. The config file is reloaded on SIGHUP.
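For example, to trigger a reload after editing the mapping file (the process lookup shown is illustrative):

kill -HUP "$(pidof statsd_exporter)"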

A mapping definition consists of a match expression for the StatsD metric in question, with *s acting as wildcards for each dot-separated metric component, a name for the resulting Prometheus metric, and an optional set of labels. The Prometheus metric is then constructed from the name and labels. $n-style references in the name and label values are replaced by the n-th wildcard match in the match expression, starting at 1. The first mapping rule that matches a StatsD metric wins.

Metrics that don't match any mapping in the configuration file are translated into Prometheus metrics without any labels and with any non-alphanumeric characters, including periods, translated into underscores.

In general, the different metric types are translated as follows:

StatsD gauge   -> Prometheus gauge

StatsD counter -> Prometheus counter

StatsD timer, histogram, distribution   -> Prometheus summary or histogram
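For example, unmapped events of each type translate roughly as follows (metric names and values are illustrative):

deploys.total:1|c        -> counter deploys_total
queue.depth:42|g         -> gauge queue_depth
request.latency:350|ms   -> summary request_latency (observed as 0.35 seconds)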

Glob matching

The default (and fastest) glob mapping style uses * to denote parts of the statsd metric name that may vary. These varying parts can then be referenced in the construction of the Prometheus metric name and labels.

An example mapping configuration:

mappings:
- match: "test.dispatcher.*.*.*"
  name: "dispatcher_events_total"
  labels:
    processor: "$1"
    action: "$2"
    outcome: "$3"
    job: "test_dispatcher"
- match: "*.signup.*.*"
  name: "signup_events_total"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"

This would transform these example StatsD metrics into Prometheus metrics as follows:

test.dispatcher.FooProcessor.send.success
 => dispatcher_events_total{processor="FooProcessor", action="send", outcome="success", job="test_dispatcher"}

foo_product.signup.facebook.failure
 => signup_events_total{provider="facebook", outcome="failure", job="foo_product_server"}

test.web-server.foo.bar
 => test_web_server_foo_bar{}

Each mapping in the configuration file must define a name for the metric. The metric's name can contain $n-style references to be replaced by the n-th wildcard match in the matching line. That allows for dynamic rewrites, such as:

mappings:
- match: "test.*.*.counter"
  name: "${2}_total"
  labels:
    provider: "$1"

Glob matching offers the best performance for common mappings.

Ordering glob rules

List more specific matches before wildcards, from left to right:

a.b.c
a.b.*
a.*.d
a.*.*

This avoids unexpected shadowing of later rules, and performance impact from backtracking.

Alternatively, you can disable mapping ordering altogether. With unordered mapping, at each hierarchy level the most specific match wins. This has the same effect as using the recommended ordering.
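A minimal sketch of disabling ordering, using the defaults block described under Global defaults below:

defaults:
  glob_disable_ordering: true
mappings:
# With ordering disabled, the more specific rule wins regardless of its position.
- match: "a.*.*"
  name: "a_generic"
- match: "a.b.c"
  name: "a_b_c"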

Regular expression matching

The regex mapping style uses regular expressions to match the full statsd metric name. Use it if the glob mapping is not flexible enough to pull structured data from the available statsd metric names. When no match_type parameter is specified, the default value of glob is assumed.

Regular expression matching is significantly slower than glob mapping as all mappings must be tested in order. Because of this, regex mappings are only executed after all glob mappings. In other words, glob mappings take preference over regex matches, irrespective of the order in which they are specified. Regular expression matches are always evaluated in order, and the first match wins.

The metric name can also contain references to regex matches. The mapping above could be written as:

mappings:
- match: "test\\.(\\w+)\\.(\\w+)\\.counter"
  match_type: regex
  name: "${2}_total"
  labels:
    provider: "$1"
- match: "(.*)\\.(.*)--(.*)\\.status\.(.*)\\.count"
  match_type: regex
  name: "request_total"
  labels:
    hostname: "$1"
    exec: "$2"
    protocol: "$3"
    code: "$4"

Be aware of YAML escape rules: a mapping like the following will not work, because \w is not a valid escape sequence inside a double-quoted YAML string.

mappings:
- match: "test\\.(\w+)\\.(\w+)\\.counter"
  match_type: regex
  name: "${2}_total"
  labels:
    provider: "$1"

Special match groups

When using regex, the match group 0 is the full match and can be used to attach labels to the metric. Example:

mappings:
- match: ".+"
  match_type: regex
  name: "$0"
  labels:
    statsd_metric_name: "$0"

If a metric my.statsd_counter is received, the metric name will still be mapped to my_statsd_counter (Prometheus compatible name). But the metric will also have the label statsd_metric_name with the value my.statsd_counter (unchanged value).

Note: If you use a match like the example (i.e. .+), be aware that it is a "catch-all" block, so it should come at the very end of the mapping list.

The same is not achievable with glob matching; for more details, see this issue.

Naming, labels, and help

Please note that metrics with the same name must also have the same set of label names.

If the default metric help text is insufficient for your needs you may use the YAML configuration to specify a custom help text for each mapping:

mappings:
- match: "http.request.*"
  help: "Total number of http requests"
  name: "http_requests_total"
  labels:
    code: "$1"

Honor labels

By default, labels specified in the mapping configuration take precedence over tags in the statsd event.

To set the label value to the original tag value, if present, specify honor_labels: true in the mapping configuration. In this case, the label specified in the mapping acts as a default.
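For example, a mapping sketch where the region label defaults to "unknown" unless the statsd event carries a region tag (metric and label names are illustrative):

mappings:
- match: "http.request.*"
  name: "http_requests_total"
  honor_labels: true
  labels:
    code: "$1"
    region: "unknown"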

StatsD timers and distributions

By default, statsd timers and distributions (collectively "observers") are represented as a Prometheus summary with quantiles. You may optionally configure the quantiles and acceptable error, as well as adjusting how the summary metric is aggregated:

mappings:
- match: "test.timing.*.*.*"
  observer_type: summary
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
  summary_options:
    quantiles:
      - quantile: 0.99
        error: 0.001
      - quantile: 0.95
        error: 0.01
      - quantile: 0.9
        error: 0.05
      - quantile: 0.5
        error: 0.005
    max_age: 30s
    age_buckets: 3
    buf_cap: 1000

The default quantiles are 0.99, 0.9, and 0.5.

The default summary age is 10 minutes, the default number of buckets is 5, and the default buffer size is 500. See also the client_golang docs. The max_age option corresponds to SummaryOptions.MaxAge, age_buckets to SummaryOptions.AgeBuckets, and buf_cap to SummaryOptions.BufCap.

In the configuration, one may also set the observer type to "histogram". For example, to set the observer type for a single timer metric:

mappings:
- match: "test.timing.*.*.*"
  observer_type: histogram
  histogram_options:
    buckets: [ 0.01, 0.025, 0.05, 0.1 ]
    native_histogram_bucket_factor: 1.1
    native_histogram_max_buckets: 256
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"

If not set, the default Prometheus client values are used for the histogram buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]. +Inf is added automatically. If your Prometheus server is configured to scrape native histograms (v2.40.0+), you can set native_histogram_bucket_factor to configure the precision of the buckets in the sparse histogram; see the original client_golang docs for details. The maximum number of buckets can be capped with native_histogram_max_buckets, which keeps the histograms from growing too large in memory; this is also described in the client_golang docs.

observer_type is only used when the statsd metric type is a timer, histogram, or distribution. buckets is only used when the statsd metric type is one of these, and the observer_type is set to histogram.

Timers will be accepted with the ms statsd type. Statsd timer data is transmitted in milliseconds, while Prometheus expects the unit to be seconds. The exporter converts all timer observations to seconds.

Histogram and distribution events (h and d metric type) are not subject to unit conversion.
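For example (metric names and values are illustrative):

test.timing:250|ms    observed as 0.25 (converted to seconds)
test.latency:250|h    observed as 250 (no conversion)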

DogStatsD Client Behavior

timed() decorator

The DogStatsD client's timed decorator emits the metric in seconds but uses the ms type. Set use_ms=True to send the correct units.

Global defaults

One may also set defaults for the observer type, histogram options, summary options, and match type. These will be used by all mappings that do not define them.

An option that can only be configured in defaults is glob_disable_ordering, which defaults to false. When set to true, the glob match type does not honor the order in which rules appear in the mapping file and always treats * as lower priority than a concrete string.

Setting buckets or quantiles in the defaults is deprecated in favor of histogram_options and summary_options, which will override the deprecated values.

If summary_options is present in a mapping config, it will only override the fields set in the mapping. Unset fields in the mapping will take the values from the defaults.

defaults:
  observer_type: histogram
  histogram_options:
    buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5 ]
    native_histogram_bucket_factor: 1.1
    native_histogram_max_buckets: 256
  summary_options:
    quantiles:
      - quantile: 0.99
        error: 0.001
      - quantile: 0.95
        error: 0.01
      - quantile: 0.9
        error: 0.05
      - quantile: 0.5
        error: 0.005
    max_age: 5m
    age_buckets: 2
    buf_cap: 1000
  match_type: glob
  glob_disable_ordering: false
  ttl: 0 # metrics do not expire
mappings:
# This will be a histogram using the buckets set in `defaults`.
- match: "test.timing.*.*.*"
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
# This will be a summary using the summary_options set in `defaults`
- match: "other.distribution.*.*.*"
  observer_type: summary
  name: "other_distribution"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server_other"

drop action

You may also drop metrics by specifying a "drop" action on a match. For example:

mappings:
# This metric would match as normal.
- match: "test.timing.*.*.*"
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
# Any metric not matched will be dropped because "." matches all metrics.
- match: "."
  match_type: regex
  action: drop
  name: "dropped"

You can drop any metric using the normal match syntax. The default action is "map" which does the normal metrics mapping.

Explicit metric type mapping

StatsD allows emitting different metric types under the same metric name, but the Prometheus client library can't merge those. For this use case, the mapping definition allows you to specify which metric type to match:

mappings:
- match: "test.foo.*"
  name: "test_foo"
  match_metric_type: counter
  labels:
    provider: "$1"

Possible values for match_metric_type are gauge, counter and observer.

Mapping cache size and cache replacement policy

A cache is used to improve the performance of the metric mapping and can greatly improve throughput. By default it stores up to 1000 unique statsd metric name -> Prometheus metric mappings. This maximum can be adjusted using the --statsd.cache-size flag.

If the maximum is reached, entries are by default rotated using the least recently used replacement policy. This strategy is optimal when memory is constrained as only the most recent entries are retained.

Alternatively, you can choose a random-replacement cache strategy. This is less optimal if the cache is smaller than the cacheable set, but requires less locking. Use this for very high throughput, but make sure to allow for a cache that holds all metrics.

The optimal cache size is determined by the cardinality of the incoming metrics.
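A minimal invocation sketch; the cache size value shown is only an example, to be chosen based on the cardinality of your metrics:

statsd_exporter --statsd.mapping-config=statsd_mapping.yml \
        --statsd.cache-size=10000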

Time series expiration

The ttl parameter can be used to define the expiration time for stale metrics. The value is a time duration with valid time units: "ns", "us" (or "µs"), "ms", "s", "m", "h". For example, ttl: 1m20s. A value of 0 indicates that metrics do not expire.

TTL configuration is stored for each mapped metric name/labels combination whenever new samples are received. This means that you cannot immediately expire a metric only by changing the mapping configuration. At least one sample must be received for updated mappings to take effect.
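For example, a sketch that sets an expiration per mapping (ttl can also be set globally in the defaults block, as shown above); the mapping itself is illustrative:

mappings:
- match: "test.timing.*.*.*"
  name: "my_timer"
  ttl: 1m20s
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"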

Unit conversions

The scale parameter can be used to define unit conversions for metric values. The value is a floating point number to scale metric values by. This can be useful for converting non-base units (e.g. milliseconds, kilobytes) to base units (e.g. seconds, bytes) as recommended in prometheus best practices.

mappings:
- match: foo.latency_ms
  name: foo_latency_seconds
  scale: 0.001
- match: bar.processed_kb
  name: bar_processed_bytes
  scale: 1024
- match: baz.latency_us
  name: baz_latency_seconds
  scale: 1e-6

Event flushing configuration

Internally, statsd_exporter runs a goroutine for each network listener (UDP, TCP, and Unix socket). Each listener receives and parses incoming metric lines into events. For performance reasons, these events are queued internally and flushed to the main exporter goroutine periodically in batches. The size of this queue and the flush criteria can be tuned with the --statsd.event-queue-size, --statsd.event-flush-threshold, and --statsd.event-flush-interval flags. However, the defaults should perform well even for very high traffic environments.
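For illustration, an invocation sketch overriding the flush tuning flags named above (the values are arbitrary examples, not recommendations):

statsd_exporter --statsd.event-queue-size=20000 \
        --statsd.event-flush-threshold=2000 \
        --statsd.event-flush-interval=200ms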

Using Docker

You can deploy this exporter using the prom/statsd-exporter Docker image.

For example:

docker pull prom/statsd-exporter

docker run -d -p 9102:9102 -p 9125:9125 -p 9125:9125/udp \
        -v $PWD/statsd_mapping.yml:/tmp/statsd_mapping.yml \
        prom/statsd-exporter --statsd.mapping-config=/tmp/statsd_mapping.yml

Library packages

Parts of the implementation of this exporter are available as separate packages. See the documentation for details.

For the time being, there are no stability guarantees for library interfaces. We will try to call out any significant changes in the changelog. Semantic versioning of the exporter is based on the impact on users of the exporter, not users of the library.

We encourage re-use of these packages and welcome issues related to their usability as a library.

statsd_exporter's People

Contributors

agneum, amitsaha, bakins, beorn7, chai-nadig, claytono, davidsonff, dependabot[bot], diafour, discordianfish, erickpintor, fffonion, grobie, ivanizag, jacksontj, juliusv, jwfang, kullanici0606, matthiasr, pedro-stanaka, prombot, raags, sagikazarmark, sdurrheimer, shmsr, simonpasquier, spencermalone, superq, twooster, westphahl


statsd_exporter's Issues

Handling of conflicting metric values

We had some developers on a new project starting to integrate statsd support in their application. statsd_exporter 0.3.0 is crashing constantly with this error message:

FATA[0000] A change of configuration created inconsistent metrics for "query_timer". You have to restart the statsd_exporter, and you should consider the effects on your monitoring setup. Error: a previously registered descriptor with the same fully-qualified name as Desc{fqName: "query_timer", help: "Metric autogenerated by statsd_exporter.", constLabels: {after_date="2017-01-19 00:00:00-05:00",application="…",component="…",environment="testing",server="…",sub_component="…"}, variableLabels: []} has different label names or a different help string  source=exporter.go:137

If I'm reading that correctly, this is due to them having two versions of the app sending inconsistent metric values. While an error in this situation seems reasonable, it actually causes the statsd_exporter process to crash.

Rename to statsd_exporter

To be consistent with the rest of our exporters, this repository should be renamed to the statsd_exporter as it takes in data from a non-prometheus system and exports it in a prometheus format.

By contrast a "bridge" would take data in some prometheus format, and output it in a non-prometheus format.

Clarify comment about "native solution" long term in Readme

Readme says:

We recommend this only as an intermediate solution and recommend switching to native Prometheus instrumentation in the long term.

I would like this somewhat clarified. For forking web servers such as Passenger, Unicorn and Puma in the Ruby ecosystem, requests are round robined between process forks. This means you have no simple way of sharing state between forks and thus have a hard time collating data for an exporter.

In this kind of setup a solution like statsd_exporter seems reasonable, as the alternatives are all similarly messy, be it sharing a pipe between all forks, mmap-ing a file, and so on.

I think the Readme should clarify that this is a completely legitimate solution for specific use cases such as aggregation of data from forked processes that all share an HTTP listener.

Missing Docker Image

This is probably related to #13, but the docs currently reference a Docker image at prom/statsd-exporter. This project exists on Docker Hub but there are no images uploaded. Could we get an image up there?

Log formatting with docker container v0.5.0

In the README it is noted that the -log.format command line parameter is supported to log in the JSON format.

When I try to run with -log.format=logger:stdout?json=true I get the error flag provided but not defined: -log.format

When checking the -help, -log.format is not listed at all...

docker run -i --rm prom/statsd-exporter:v0.5.0 -help                                                                                                                                            
Unable to find image 'prom/statsd-exporter:v0.5.0' locally
v0.5.0: Pulling from prom/statsd-exporter
aab39f0bc16d: Pull complete 
a3ed95caeb02: Pull complete 
2cd9e239cea6: Pull complete 
628ef644fd9a: Pull complete 
Digest: sha256:d08dd0db8eaaf716089d6914ed0236a794d140f4a0fe1fd165cda3e673d1ed4c
Status: Downloaded newer image for prom/statsd-exporter:v0.5.0
Usage of /bin/statsd_exporter:
  -statsd.listen-address string
    	The UDP address on which to receive statsd metric lines. DEPRECATED, use statsd.listen-udp instead.
  -statsd.listen-tcp string
    	The TCP address on which to receive statsd metric lines. "" disables it. (default ":9125")
  -statsd.listen-udp string
    	The UDP address on which to receive statsd metric lines. "" disables it. (default ":9125")
  -statsd.mapping-config string
    	Metric mapping configuration file name.
  -statsd.read-buffer int
    	Size (in bytes) of the operating system's transmit read buffer associated with the UDP connection. Please make sure the kernel parameters net.core.rmem_max is set to a value greater than the value specified.
  -version
    	Print version information.
  -web.listen-address string
    	The address on which to expose the web interface and generated Prometheus metrics. (default ":9102")
  -web.telemetry-path string
    	Path under which to expose metrics. (default "/metrics")

Add Support for Prefix

I'd like to prefix my metrics to add a namespace. For example, I send a metric to statsd as "login". It'd be nice to have prometheus read that as appname_login. If I have statsd prefix the metric myself as appname_ then the mappings break. Without the prefix, the following works:

login.*
name="login"
type="$1"

After the prefix

appname_login.*
name="login"
type="$1"

Doesn't seem to match. And I see appname_login_attempt 1.

Upgrade client_golang to reduce memory usage with many metrics

Hi,
I'm seeing a ratio of 10k metrics per 100M of RAM, for production metrics. Interested to hear first if that's normal, and then, what's a good strategy for handling 1M metrics (this also affects the metrics endpoint polled by prom, obviously).

In my tests for 1M metrics, I've gotten 28GB of RAM and growing, and all cores max out (had to kill the process).

Metric buckets are not clearing

Once statsd_bridge receives a metric, it seems to hold a bucket for it (until the process dies).
Sometimes a metric is only sent once in a while and does not require the persistence. Is it by design that it will never clean/remove buckets with NaN values, or with NaN values for a long period?

Bridge Crashes When Counter Decreases

Basically I'm forwarding nsq stats to statsd, with statsd repeating to the bridge. When NSQ restarts, its stats restart; likewise, when an NSQ reader disconnects and reconnects, the stats reset, and therefore the counter resets.

I was told in IRC that Prometheus itself understands concepts like this.

Is there a way we can make the bridge not crash in this scenario?

2015/04/10 23:17:12 Starting StatsD -> Prometheus Bridge...
2015/04/10 23:17:12 Accepting StatsD Traffic on :9125
2015/04/10 23:17:12 Accepting Prometheus Requests on :9102
panic: counter cannot decrease in value

goroutine 16 [running]:
runtime.panic(0x6fef00, 0xc20870fb10)
        /usr/src/go/src/pkg/runtime/panic.c:279 +0xf5
github.com/prometheus/client_golang/prometheus.(*counter).Add(0xc208196200, 0xc0ef876000000000)
        /go/src/github.com/prometheus/client_golang/prometheus/counter.go:69 +0xf9
main.(*Bridge).Listen(0xc20800f500, 0xc2080a2000)
        /go/src/app/bridge.go:217 +0x617
main.main()
        /go/src/app/main.go:127 +0x5e3

goroutine 19 [finalizer wait, 8 minutes]:
runtime.park(0x41a430, 0x952a78, 0x950cc9)
        /usr/src/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x952a78, 0x950cc9)
        /usr/src/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
        /usr/src/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
        /usr/src/go/src/pkg/runtime/proc.c:1445

goroutine 26 [IO wait, 5 minutes]:
net.runtime_pollWait(0x7f1489fffb20, 0x72, 0x0)
        /usr/src/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc2080ba140, 0x72, 0x0, 0x0)
        /usr/src/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc2080ba140, 0x0, 0x0)
        /usr/src/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc2080ba0e0, 0x823cd8, 0x0, 0x7f1489ffe520, 0xb)
        /usr/src/go/src/pkg/net/fd_unix.go:419 +0x343
net.(*TCPListener).AcceptTCP(0xc20803a0b8, 0x4dae33, 0x0, 0x0)
        /usr/src/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net/http.tcpKeepAliveListener.Accept(0xc20803a0b8, 0x0, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/net/http/server.go:1947 +0x4b
net/http.(*Server).Serve(0xc2080044e0, 0x7f1489ffffb0, 0xc20803a0b8, 0x0, 0x0)
        /usr/src/go/src/pkg/net/http/server.go:1698 +0x91
net/http.(*Server).ListenAndServe(0xc2080044e0, 0x0, 0x0)
        /usr/src/go/src/pkg/net/http/server.go:1688 +0x14d
net/http.ListenAndServe(0x776410, 0x5, 0x0, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/net/http/server.go:1778 +0x79
main.serveHTTP()
        /go/src/app/main.go:36 +0x82
created by main.main
        /go/src/app/main.go:105 +0x2b8

goroutine 27 [IO wait]:
net.runtime_pollWait(0x7f1489fffbd0, 0x72, 0x0)
        /usr/src/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20802bf00, 0x72, 0x0, 0x0)
        /usr/src/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20802bf00, 0x0, 0x0)
        /usr/src/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).readFrom(0xc20802bea0, 0x7f1489e5fd38, 0x200, 0x200, 0x0, 0x0, 0x0, 0x7f1489ffe520, 0xb)
        /usr/src/go/src/pkg/net/fd_unix.go:269 +0x3db
net.(*UDPConn).ReadFromUDP(0xc20803a0a8, 0x7f1489e5fd38, 0x200, 0x200, 0xc2080a2000, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/net/udpsock_posix.go:67 +0x129
main.(*StatsDListener).Listen(0xc20803a0b0, 0xc2080a2000)
        /go/src/app/bridge.go:288 +0x8b
created by main.main
        /go/src/app/main.go:116 +0x422

goroutine 28 [select, 8 minutes]:
main.watchConfig(0x7fff43d6af43, 0xd, 0xc20800f200)
        /go/src/app/main.go:77 +0x766
created by main.main
        /go/src/app/main.go:124 +0x5bb

goroutine 30 [syscall, 8 minutes]:
syscall.Syscall(0x0, 0x5, 0xc208125e98, 0x10000, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/syscall/asm_linux_amd64.s:21 +0x5
syscall.read(0x5, 0xc208125e98, 0x10000, 0x10000, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/syscall/zsyscall_linux_amd64.go:838 +0x75
syscall.Read(0x5, 0xc208125e98, 0x10000, 0x10000, 0x0, 0x0, 0x0)
        /usr/src/go/src/pkg/syscall/syscall_unix.go:136 +0x5c
github.com/howeyc/fsnotify.(*Watcher).readEvents(0xc2080041e0)
        /go/src/github.com/howeyc/fsnotify/fsnotify_linux.go:219 +0x133
created by github.com/howeyc/fsnotify.NewWatcher
        /go/src/github.com/howeyc/fsnotify/fsnotify_linux.go:126 +0x288

goroutine 31 [chan receive, 8 minutes]:
github.com/howeyc/fsnotify.(*Watcher).purgeEvents(0xc2080041e0)
        /go/src/github.com/howeyc/fsnotify/fsnotify.go:21 +0x51
created by github.com/howeyc/fsnotify.NewWatcher
        /go/src/github.com/howeyc/fsnotify/fsnotify_linux.go:127 +0x2a0

metric name should be split from labels in yaml mapping format

#66 was recently merged and I'm a huge fan of the new yaml mappings, however I think it is probably a bad idea to have the name label have magic behavior in setting the metric name in the exporter. I /think/ it probably makes more sense to have a metric_name parameter one level higher in the mapper config. This should disambiguate the result of the mapping AND allow for, if desired, a mapping which sets a label called name.

Proposed change...
this:

mappings:
- match: test.case.*
  labels:
    name: "test_case_total"
    blargle: "$1"

Would become this:

mappings:
- match: test.case.*
  metric_name: "test_case_total"
  labels:
    blargle: "$1"

Allow overriding metrics type

Sometimes (#122) a data source sends metrics that are not quite correct – for example, sending "c" statsd events but actually sending an all-time count every time. It would help to correct for this if the mapping allowed overriding the automatically determined metric type. This would need another attribute in the mapping configuration, and clear documentation of the implications.

Counter metrics are being added instead of replaced

I have a statsd producer that's emitting a constant counter type metric

i.e:

myhost.myexec.http.status.404.count:2329|c
myhost.myexec.http.status.200.count:30|c
myhost.myexec.http.status.304.count:10|c

Because the metric is a counter in the origin system, if I don't get any requests during the scraping period, it'll send again the same value the next time.

I've added the following mapping in my statsd_exporter to use Prometheus dimensions like this:

mappings:
- match: ^([^.]*)\.([^.]*)--http.status.([^.]*).count
  match_type: "regex"
  name: "http_status_count"
  labels:
    code: "$3"

So the ending metric is something like:

http_status_count{code="200"} 1557
http_status_count{code="304"} 18
http_status_count{code="404"} 6987

What's happening is that statsd_exporter is constantly adding the values that the origin is pushing even though they don't change. So what I end up seeing in the Prometheus metrics is a constant increase even though the metrics don't change.

Any ideas?

Switch to kingpin, enable log flags

As reported in #110, we don't expose the logging flags from github.com/prometheus/common/log. That has switched to kingpin, like the other Prometheus projects. We should switch over too, so that we can use log.AddFlags.

"missing terminating newline" error when creates config in k8s

Hello!
There is situation when statsd_exporter won't start if there is no newline in config file.
We are using kubernetes 1.6 and unfortunately it creates config files from configmaps
with no newline at the end of file... Is it possible to eliminate this restriction?

Malformed statsd lines should be reported at lower log level.

ERRO[0785] Error building event on line docker-lb2.fabio--garbage.tacostand_com./.12_117_14_4_21506.mean:79292673.35|gf: Bad stat type gf source=exporter.go:51

Is an example of a badly formed statsd metric for the non-existent type gf which raises an error level log message from statsd_exporter. This doesn't smell like an error and should probably be downgraded to info level.

https://dave.cheney.net/2015/11/05/lets-talk-about-logging

Documentation unclear on whether statsd-bridge requires statsd

As I understand statsd's repeater mode, it forwards the unprocessed traffic it receives from statsd clients. This makes me think that instead of continuing to run statsd and having it repeat traffic to statsd-bridge, statsd-bridge can replace statsd and receive traffic directly from statsd client libraries. If this is correct, it would be nice to update the Overview section of the README to mention this style of usage.

Relay received statsd traffic to different server(s) for "inline" usage

Hi,
the recommended usage of statsd_exporter from README.md is with a statsd server relaying metrics to it.
Our current statsd deployment involves a central statsd server receiving traffic for "global aggregation" purposes, though we'd like to gradually move to Prometheus. During the transition we'd co-locate statsd_exporter on the same host running the service and scrape it from Prometheus while still keeping the statsd traffic to the central server.

Would you be interested in a PR to add relaying capabilities to statsd_exporter itself? I think this would simplify deployment in cases where the exporter acts "inline" with statsd traffic, without an additional statsd repeater.

Thanks!

Published DockerHub tag has changed

The default tag for Docker images is latest but a new tag went up under master on DockerHub. This means the pull command fails:

$ docker pull prom/statsd-exporter
Using default tag: latest
Pulling repository docker.io/prom/statsd-exporter
Tag latest not found in repository docker.io/prom/statsd-exporter

One must now run docker pull prom/statsd-exporter:master instead.

It would be lovely to have it published under the latest tag again as it would otherwise mean a lot of config file updates. Thanks!

New histogram feature with conversion error

Either my day is far too long today or PR #66 has a little mathematical error which I simply have not seen in my test environment because my setup was too small and fast.

If I push some statsd timers to statsd and let them be exported with statsd_exporter, all metrics are sent to the +Inf bucket. statsd uses milliseconds by default, but the histogram buckets are set in seconds.

This came to my attention when I deployed the current master branch of statsd_exporter to one of our production hosts. To reproduce it, simply send some "realistic" metrics to statsd, like:

echo "test.timing:100|ms|@0.1" | nc -u -w0 127.0.0.1 8125
echo "test.timing:320|ms|@0.1" | nc -u -w0 127.0.0.1 8125
echo "test.timing:500|ms|@0.1" | nc -u -w0 127.0.0.1 8125
echo "test.timing:900|ms|@0.1" | nc -u -w0 127.0.0.1 8125

It will export to:

test_timing_bucket{job="debug.goethe.local",le="0.005"} 0
test_timing_bucket{job="debug.goethe.local",le="0.01"} 0
test_timing_bucket{job="debug.goethe.local",le="0.025"} 0
test_timing_bucket{job="debug.goethe.local",le="0.05"} 0
test_timing_bucket{job="debug.goethe.local",le="0.1"} 0
test_timing_bucket{job="debug.goethe.local",le="0.25"} 0
test_timing_bucket{job="debug.goethe.local",le="0.5"} 0
test_timing_bucket{job="debug.goethe.local",le="1"} 0
test_timing_bucket{job="debug.goethe.local",le="2.5"} 0
test_timing_bucket{job="debug.goethe.local",le="5"} 0
test_timing_bucket{job="debug.goethe.local",le="10"} 0
test_timing_bucket{job="debug.goethe.local",le="+Inf"} 4

Can someone confirm this or am I doing something wrong?

Issues with exposed metrics

statsd_exporter_packets_total does not count packets. It counts events, and the dimensions are not used consistently. dogstatsd metrics get double counted as both legal and dogstatsd.

networkStats.WithLabelValues("legal").Inc() should be moved out of the per-line loop.
I can PR this, but the other counters need some thought too, the various forms of invalid should probably be on events (lines), rather than packets.

initFromString should allow comments in config strings

Problem
It appears that initFromFile only reads the mapping config file and then passes it to initFromString. That method does not appear to have any way to handle comments in the string (or, in this specific case, the mapping file).

Possible Solution
Support golang style (//) comments in the initFromString and ignore them in the case statement in the if statement

Reading long lines from TCP closes the connection

When reading from TCP, we abort and close the connection if any line does not fit into the bufio.Reader buffer. This is really not necessary, and the standard library has higher-level methods to reliably read lines from a stream.

Any solution should make sure to process the last line in the stream, even if it does not end in a newline.

Support raw regex for mapping in addition to globbing

While trying to use this exporter with some particularly heinous 3rd party software I quickly ran into the limitation that I would like to be able to define mappings which are more precisely defined than what is allowed in the current globbing match syntax. Looking through the code, the globbing is actually handled internally by being translated to regex.

I'd propose that there should be a configuration option available utilizing the newer yaml mapper configuration that allows for selection between match_type with a value of either glob or regex.

I've been testing the changes necessary to implement this and will have a PR shortly.

statsd mapping not taking effect

I get the following metrics without mapping

# HELP consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal Metric autogenerated by statsd_exporter.
# TYPE consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal summary
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.5"} 1.453363
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.9"} 1.556893
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.99"} 1.556893
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal_sum 4.627288
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal_count 3

I want the metric name like so

consul_consul_dns_domain_query

I have a statsd_mmaping.conf like so

consul.consul.dns.domain.query.*
name="consul_dns_domain_query"

But still the metrics show


consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.5"} 1.453363
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.9"} 1.556893
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal{quantile="0.99"} 1.556893
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal_sum 4.627288
consul_consul_dns_domain_query_ip_10_113_143_167_ec2_internal_count 3

Allow for dropping unmapped metrics

The default behavior of passing unmapped metrics through can result in really polluted metric namespaces when using software that generates a large number of dynamic metrics OR a large number of metrics that are irrelevent to the user's interest. I'd like to be able to switch the behavior in the config to blackholing unmapped metrics.

followup question about statsd_exporter vs. pushgateway

Hi @matthiasr,
I still have a follow-up question. I plan to use short-lived batch jobs, long-lived batch jobs (they will be converted to streaming jobs over time) and Spark streaming jobs. Can't I just push all metrics to the pushgateway rather than using statsd_exporter at all, and have Prometheus scrape the metrics from there? What speaks against it?
For example: imagine statsd_exporter restarts; all metrics get lost?
Thanks,
Arnold

Feature: Completely Dynamic Mappings

Context

Our statsd server collects a large number of time series for many different hosts. Most of these metrics are in the form of:

stats.<namespace>.<host-id>.<metric_namespace>.<metric_name>
or
stats.<namespace>.<host-id>.<metric_namespace>.<metric_namespace_2>.<metric_namespace...>.<metric_name>

These metrics when collected should simply be:
<escaped_metric_name>{host_id=<host_id>,namespace=<namespace>_<metric_namespace>}
and
<escaped_metric_name>{host_id=<host_id>,namespace=<namespace>_<metric_namespace>_<metric_namespace2>_<metric_namespace...>}

This is hard because our current mappings won't allow for dynamic names and also don't allow for full regex on metrics.

Solution

I suggest we write the mapping config with full golang regex. This allows full control over how the metrics should be mapped.

Example Config

# This creates the most basic mapping. It escapes the metric and then exports it 
(.*)
name=$1

# This creates the more advanced mapping needed above. All the groups would be escaped automatically. 
stats.([^.]+)\.([^.]+)\.(.+)\.([^.]+)$
name=$4
namespace=$1_$3
host=$2

Hopefully we can kick off the discussion

client_golang not updated with latest files

client_golang at https://github.com/prometheus/statsd_exporter/tree/master/vendor/github.com/prometheus/client_golang is not updated with the latest files from https://github.com/prometheus/client_golang
This causes discrepancies in the metrics generated by statsd and prometheus.
The pushgateway floods its stdout with messages like below
time="2017-08-30T08:05:00Z" level=warning msg="Metric families 'name:"go_memstats_sys_bytes" help:"Number of bytes obtained by system. Sum of all system allocations." type:GAUGE metric:<label:<name:"instance" value:"core-services-ad-level-canary-1654703993-21nr1" > label:<name:"job" value:"target_9102" > gauge:<value:1.9605752e+07 > timestamp_ms:1504080298125 > ' and 'name:"go_memstats_sys_bytes" help:"Number of bytes obtained from system." type:GAUGE metric:<label:<name:"instance" value:"core-services-host-level-ds-blfqn" > label:<name:"job" value:"target_9126" > gauge:<value:1.44943352e+08 > timestamp_ms:1504080291333 > metric:<label:<name:"instance" value:"core-services-host-level-ds-gxkg4" > label:<name:"job" value:"target_9126" > gauge:<value:1.83412984e+08 > timestamp_ms:1504080296389 > ' are inconsistent, help and type of the latter will have priority. This is bad. Fix your pushed metrics!" source="diskmetricstore.go:114"

Godep manifest and release

Please add a godep manifest and make a new release.
If I check out version 0.1.0 and try to run godep get, I get this result:

github.com/howeyc/fsnotify (download)
github.com/prometheus/client_golang (download)
package github.com/prometheus/client_golang/model: cannot find package "github.com/prometheus/client_golang/model" in any of:
    /usr/local/go/src/github.com/prometheus/client_golang/model (from $GOROOT)
    /go/src/github.com/prometheus/client_golang/model (from $GOPATH)
github.com/beorn7/perks (download)
github.com/golang/protobuf (download)
github.com/prometheus/client_model (download)
github.com/prometheus/common (download)
github.com/matttproud/golang_protobuf_extensions (download)
github.com/prometheus/procfs (download)
godep: exit status 1

Allow for setting "#help" text in mapping config

The current behavior of providing a help text like
# HELP test_case_total Metric autogenerated by statsd_exporter.

falls pretty far short of the intent of informing the consumer of the metric what the metric represents in easy language. This should be a very small modification that would improve usability.

Buffer for reading udp packets is too small.

512 bytes are OK for the wild Internet. On a local network we can send UDP packets of 1432 bytes or larger; this is especially important if we use a buffered client (github.com/cactus/go-statsd-client/statsd/client_buffered.go). So I suggest increasing the buffer size to 2-8k.
The MTU also matters to the client: a UDP packet can be split or dropped by an ISP because of its size, so the server shouldn't try to evaluate the MTU.

func (l *StatsDListener) Listen(e chan<- Events) {
	// TODO: evaluate proper size according to MTU
	var buf [512]byte
	for {
		n, _, err := l.conn.ReadFromUDP(buf[0:])
		if err != nil {
			log.Fatal(err)
		}
		l.handlePacket(buf[0:n], e)
	}
}

should support sampling for timer metrics

I'm sending statsd metrics from Openstack Swift to the statsd_exporter and seeing error messages like:

time="2017-01-06T10:18:08Z" level=error msg="Illegal sampling factor for non-counter metric on line swift.object-server.REPLICATE.timing:0.654935836792|ms|@0.1" source="exporter.go:390"

The statsd_exporter only allows sampling factors for counter metrics, but it should also do so for timer metrics. See for example https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing for upstream documentation that references sampling factors for timer metrics.

Metric expiry/reset

statsd will automatically reset counters when publishing to graphite, and I'm looking for a similar feature in the exporter too. Did I miss the setting for this, or isn't this available?

Currently we use counters which get reset on every publish, so we can use those values to detect inactivity or outages (when the metric is no longer updated). Technically it's a gauge with automatic expiry when not updated for a while.

Is there an option to expire or reset after a certain amount of time? Otherwise we'll need to look into the full prometheus instrumentation migration which was actually planned for later this year.

Documentation and real behaviour of metric name mapping not match

README contains:

Metrics that don't match any mapping in the configuration file are translated into Prometheus metrics without any labels and with certain characters escaped (_ -> __; - -> __; . -> _)

But in reality metrics with names metric_name1, metric-name2, metric.name3 become metric_name1, metric_name2, metric_name3 respectively.

there is the code to reproduce this doc vs reality mismatch https://gist.github.com/nordicdyno/061a50dc29336dc6a60d4d0470edd151

Time for a release?

Since 0.4, the config file format has changed to YAML, along with several other features. I'm currently running a version close to master in production to ensure I can use histograms.

Is it time to do a 0.5 release? What other things would we like to see merged before then? I'm happy to do the mechanics of a release if we decide it's time.

How to improve availability?

How can I scale stastd_exporter in order to improve availability?

I was considering to have a load balancer pointing to two stastd_exporter instances. Then I would query for the sum of every instance, given a certain metric. This approach works fine with counters, but fails with gauges.

Would you guys have a suggestion?

Gauges do not work according to Prometheus/statsd specs

Gauges should provide functionality to increase/decrease the metric by the specified value, not just always set the metric to the value.

For instance;

myGauge:1|g should set this gauge to 1
myGauge:+1|g should increment the gauge by 1
myGauge:-1|g should decrement the gauge by 1

Mapped metric name cannot be dynamic

When I tried taking parts of the parsed output and using them for the metric name, for example: "name": "$2_$3", it seems that statsd_bridge fails to start if I configure it that way, and complains about it.
Using a static name it works.

some common metrics names already in use by exporter

I'm not sure that I have a good proposal to solve this, but I /just/ ran into the issue when mapping some statsd metrics that I cannot define a metric http_requests_total because it is already defined by default.

I suppose if an existing metric is redefined by the mapping that the user defined mapping should win out. I'm less sure what to do with the internal telemetry/instrumentation which had originally defined the metric.

feature request: persistence

This exporter seems to be losing metrics on restarts. Can we get persistence for them, as is done in the push gateway?

Fix use of client_golang to allow inconsistent labels on metrics

From time to time, it looks like the metrics endpoint dies and starts returning 500 errors, and the only way to restore it is to restart the pod it's on.
Logs are empty and I didn't see any options to switch on more verbose logging.
What can I do in order to provide a more detailed log and help debug it?

Usage of histogram for timer metrics

Currently, the statsd_bridge uses summary as the mapping for statsd timers and I am wondering what you think about allowing the bridge to decide between using summary or histogram for statsd timers?

Some backstory:
As our clients are already instrumented with statsd, we're considering using the statsd_bridge as an initial step to migrate our webservices metrics into prometheus and validate our setup.

Long term, we plan to use the python-client (and contribute to it) but in the very short term I am somewhat blocked on how to deal with pre-fork servers like gunicorn in a clean way. The statsd_bridge would provide us with an IPC mechanism to send metrics from multiple instance/processes and expose it to prometheus. With the histogram support, that would allow us to run statsd_bridge per instance and expose a per instance metrics endpoint (a very desirable feature).

If that's a direction that is acceptable, a few things will need to be designed/implemented:

  • Are "summary vs histogram" a runtime configuration or something that could be set on a per mapping basis?
  • If the choice can be made on per mapping basis, should that be extended to handle bucket sizes or use a default set of buckets?

To get more familiar with the code, I made a small test branch to validate the histogram implementation:
https://github.com/prometheus/statsd_bridge/compare/master...marcusmartins:histogram_support?expand=1
