autometrics-dev / autometrics-py

Easily add metrics to your code that actually help you spot and debug issues in production. Built on Prometheus and OpenTelemetry.

Home Page: https://autometrics.dev

License: Apache License 2.0

Python 99.10% Starlark 0.44% Dockerfile 0.46%
metrics monitoring observability opentelemetry prometheus python telemetry

autometrics-py's Introduction


A Python port of the Rust autometrics-rs library

Metrics are a powerful and cost-efficient tool for understanding the health and performance of your code in production. But it's hard to decide what metrics to track and even harder to write queries to understand the data.

Autometrics provides a decorator that makes it trivial to instrument any function with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.

See Why Autometrics? for more details on the ideas behind autometrics.

Features

  • ✨ @autometrics decorator instruments any function or class method to track the most useful metrics
  • 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • 🔗 Create links to live Prometheus charts directly into each function's docstring
  • 🔍 Identify commits that introduced errors or increased latency
  • 🚨 Define alerts using SLO best practices directly in your source code
  • 📊 Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
  • ⚙️ Configurable metric collection library (opentelemetry or prometheus)
  • 📍 Attach exemplars to connect metrics with traces
  • ⚡ Minimal runtime overhead

Quickstart

  1. Add autometrics to your project's dependencies:
pip install autometrics
  2. Instrument your functions with the @autometrics decorator:
from autometrics import autometrics

@autometrics
def my_function():
  # ...
  3. Configure autometrics by calling the init function:
from autometrics import init

init(tracker="prometheus", service_name="my-service")
  4. Export the metrics for Prometheus:
# This example uses FastAPI, but you can use any web framework
from fastapi import FastAPI, Response
from prometheus_client import generate_latest

app = FastAPI()

# Set up a metrics endpoint for Prometheus to scrape
#   `generate_latest` returns metrics data in the Prometheus text format
@app.get("/metrics")
def metrics():
    return Response(generate_latest())
  5. Run Prometheus locally with the Autometrics CLI, or configure it manually to scrape your metrics endpoint:
# Replace `8080` with the port that your app runs on
am start :8080
  6. (Optional) If you have Grafana, import the Autometrics dashboards for an overview and detailed view of all the function metrics you've collected.
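
Putting these steps together, here is a minimal end-to-end sketch (it assumes FastAPI and an ASGI server such as uvicorn; the function and service names are placeholders):

from autometrics import autometrics, init
from fastapi import FastAPI, Response
from prometheus_client import generate_latest

# Configure autometrics before any requests are handled
init(tracker="prometheus", service_name="my-service")

app = FastAPI()

@autometrics
def my_function():
    return "hello"

@app.get("/")
def root():
    return {"message": my_function()}

# Prometheus (or `am start :8080`) scrapes this endpoint
@app.get("/metrics")
def metrics():
    return Response(generate_latest())

Run it with, for example, `uvicorn main:app --port 8080`, then point Prometheus (or the Autometrics CLI) at it.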

Using autometrics-py

  • You can import the library in your code and use the decorator for any function:
from autometrics import autometrics

@autometrics
def say_hello():
  return "hello"
  • To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing the VSCode extension.

    Note: We cannot support tooltips without a VSCode extension due to the behavior of the static analyzer used in VSCode.

  • You can also track the number of concurrent calls to a function by using the track_concurrency argument: @autometrics(track_concurrency=True).

    Note: Concurrency tracking is only supported when you set the environment variable AUTOMETRICS_TRACKER=prometheus (see the sketch after this list).

  • To access the PromQL queries for your decorated functions, run help(yourfunction) or print(yourfunction.__doc__).

    For these queries to work, include a .env file in your project with your Prometheus endpoint (PROMETHEUS_URL=<your endpoint>). If this is not defined, the default endpoint will be http://localhost:9090/.
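
As a small sketch combining the options above (create_user is a made-up function name):

from autometrics import autometrics

@autometrics(track_concurrency=True)
def create_user():
    ...

# Print the generated PromQL queries from the docstring
# (concurrency tracking assumes AUTOMETRICS_TRACKER=prometheus)
print(create_user.__doc__)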

Dashboards

Autometrics provides Grafana dashboards that will work for any project instrumented with the library.

Alerts / SLOs

Autometrics makes it easy to add intelligent alerting to your code, in order to catch increases in the error rate or latency across multiple functions.

from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for a high success rate
# Here, we want our API to have a success rate of 99.9%
API_SLO_HIGH_SUCCESS = Objective(
    "My API SLO for High Success Rate (99.9%)",
    success_rate=ObjectivePercentile.P99_9,
)

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def api_handler():
  # ...

The library uses the concept of Service-Level Objectives (SLOs) to define the acceptable error rate and latency for groups of functions. Alerts will fire depending on the SLOs you set.

Not sure what SLOs are? Check out our docs for an introduction.

In order to receive alerts, you need to add a special set of rules to your Prometheus setup. These are configured automatically when you use the Autometrics CLI to run Prometheus.

Already running Prometheus yourself? Read about how to load the autometrics alerting rules into Prometheus here.

Once the alerting rules are in Prometheus, you're ready to go.

To use autometrics SLOs and alerts, create one or more Objectives based on the functions' success rate and/or latency, as shown above.

The Objective can be passed as an argument to the autometrics decorator, which will include the given function in that objective.

The example above used a success rate objective, i.e., one that alerts when the error rate starts to increase.

You can also create an objective for the latency of your functions like so:

from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for low latency
#   - Functions with this objective should have a 99th percentile latency of less than 250ms
API_SLO_LOW_LATENCY = Objective(
    "My API SLO for Low Latency (99th percentile < 250ms)",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO_LOW_LATENCY)
def api_handler():
  # ...
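
It is also possible to combine both targets in a single objective. Here is a sketch, assuming the constructor accepts success_rate and latency together (the docs above describe objectives based on "success rate and/or latency"):

from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Hypothetical combined objective: 99.9% success rate and a p99 latency under 250ms
API_SLO = Objective(
    "My API SLO (99.9% success, p99 < 250ms)",
    success_rate=ObjectivePercentile.P99_9,
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO)
def api_handler():
    ...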

The caller Label

Autometrics keeps track of instrumented functions that call each other. So, if you have a function get_users that calls another function db.query, then the metrics for the latter will include the label caller="get_users".

This allows you to drill down into the metrics for functions that are called by your instrumented functions, provided both of those functions are decorated with @autometrics.

In the example above, this means that you could investigate the latency of the database queries that get_users makes, which is rather useful.
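
As a small illustration of the relationship described above (the function names are placeholders):

from autometrics import autometrics

@autometrics
def db_query(sql: str):
    ...  # run the query

@autometrics
def get_users():
    # Metrics recorded for db_query while it runs here carry caller="get_users"
    return db_query("SELECT * FROM users")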

Settings and Configuration

Autometrics makes use of a number of environment variables to configure its behavior. All of them are also configurable with keyword arguments to the init function.

  • tracker - Configure the package that autometrics will use to produce metrics. Default is opentelemetry, but you can also use prometheus. Look in pyproject.toml for the corresponding versions of packages that will be used.
  • histogram_buckets - Configure the buckets used for latency histograms. Default is [0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0].
  • enable_exemplars - Enable exemplar collection. Default is False.
  • service_name - Configure the service name.
  • version, commit, branch, repository_url, repository_provider - Used to configure build_info.

Below is an example of initializing autometrics with build information, as well as the prometheus tracker. (Note that you can also accomplish the same configuration with environment variables; see the sketch after this example.)

from autometrics import autometrics, init
from git_utils import get_git_commit, get_git_branch

VERSION = "0.0.1"

init(
  tracker="prometheus",
  version=VERSION,
  commit=get_git_commit(),
  branch=get_git_branch()
)
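
The same configuration expressed with environment variables, shown in-process purely for illustration (normally you would set these in your shell or deployment manifest before autometrics is imported and initialized; the commit hash is a placeholder):

import os

os.environ["AUTOMETRICS_TRACKER"] = "prometheus"
os.environ["AUTOMETRICS_VERSION"] = "0.0.1"
os.environ["AUTOMETRICS_COMMIT"] = "a1b2c3d"
os.environ["AUTOMETRICS_BRANCH"] = "main"

from autometrics import init

init()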

Identifying commits that introduced problems

Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latencies.

NOTE - As of writing, build_info will not work correctly when using the default setting of AUTOMETRICS_TRACKER=opentelemetry. If you wish to use build_info, you must use the prometheus tracker instead (AUTOMETRICS_TRACKER=prometheus).

The issue will be fixed once the following PR is merged and released on the opentelemetry-python project: open-telemetry/opentelemetry-python#3306

autometrics-py will track support for build_info using the OpenTelemetry tracker via this issue

The library uses a separate metric (build_info) to track the version and git metadata of your code - repository url, provider name, commit and branch.

It then writes queries that group metrics by this metadata, so you can spot correlations between code changes and potential issues.

Configure these labels by setting the following environment variables:

Label                  Run-Time Environment Variables       Default value
version                AUTOMETRICS_VERSION                  ""
commit                 AUTOMETRICS_COMMIT or COMMIT_SHA     ""
branch                 AUTOMETRICS_BRANCH or BRANCH_NAME    ""
repository_url         AUTOMETRICS_REPOSITORY_URL           ""*
repository_provider    AUTOMETRICS_REPOSITORY_PROVIDER      ""*

* Autometrics will attempt to automagically infer these values from the git config inside your working directory. To disable this behavior, explicitly set the corresponding setting or environment variable to "".

This follows the method outlined in Exposing the software version to Prometheus.

Service name

All metrics produced by Autometrics have a label called service.name (or service_name when exported to Prometheus) attached, in order to identify the logical service they are part of.

You may want to override the default service name, for example if you are running multiple instances of the same code base as separate services, and you want to differentiate between the metrics produced by each one.

The service name is loaded from the following environment variables, in this order:

  1. AUTOMETRICS_SERVICE_NAME (at runtime)
  2. OTEL_SERVICE_NAME (at runtime)
  3. First part of __package__ (at runtime)

Exemplars

NOTE - As of writing, exemplars aren't supported by the default tracker (AUTOMETRICS_TRACKER=opentelemetry). You can track the progress of this feature here: #41

Exemplars are a way to associate a metric sample to a trace by attaching trace_id and span_id to it. You can then use this information to jump from a metric to a trace in your tracing system (for example Jaeger). If you have an OpenTelemetry tracer configured, autometrics will automatically pick up the current span from it.

To use exemplars, you need to first switch to a tracker that supports them by setting AUTOMETRICS_TRACKER=prometheus and enable exemplar collection by setting AUTOMETRICS_EXEMPLARS=true. You also need to enable exemplars in Prometheus by launching Prometheus with the --enable-feature=exemplar-storage flag.
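
A minimal sketch of enabling exemplars from code rather than environment variables (it assumes an OpenTelemetry tracer is already configured so there is an active span to take the trace_id and span_id from):

from autometrics import autometrics, init

# enable_exemplars mirrors AUTOMETRICS_EXEMPLARS=true,
# tracker="prometheus" mirrors AUTOMETRICS_TRACKER=prometheus
init(tracker="prometheus", enable_exemplars=True, service_name="my-service")

@autometrics
def handle_request():
    ...  # samples recorded inside an active span get an exemplar attached

Remember to start Prometheus with --enable-feature=exemplar-storage, otherwise the exemplars will not be stored.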

Exporting metrics

There are multiple ways to export metrics from your application, depending on your setup. You can see examples of how to do this in the examples/export_metrics directory.

If you want to export metrics to Prometheus, you have two options with both the opentelemetry and prometheus trackers:

  1. Create a route inside your app and respond with generate_latest()
# This example uses FastAPI, but you can use any web framework
from fastapi import FastAPI, Response
from prometheus_client import generate_latest

app = FastAPI()

# Set up a metrics endpoint for Prometheus to scrape
@app.get("/metrics")
def metrics():
    return Response(generate_latest())
  2. Specify prometheus as the exporter type, and a separate server will be started to expose metrics from your app:
exporter = {
    "type": "prometheus",
    "address": "localhost",
    "port": 9464
}
init(tracker="prometheus", service_name="my-service", exporter=exporter)

For the OpenTelemetry tracker, you have more options, including a custom metric reader. You can specify the exporter type to be otlp-proto-http or otlp-proto-grpc, and metrics will be exported to a remote OpenTelemetry collector via the specified protocol. You will need to install the respective extra dependency in order for this to work, which you can do when you install autometrics:

pip install autometrics[exporter-otlp-proto-http]
pip install autometrics[exporter-otlp-proto-grpc]

After installing it you can configure the exporter as follows:

exporter = {
    "type": "otlp-proto-grpc",
    "address": "http://localhost:4317",
    "insecure": True
}
init(tracker="opentelemetry", service_name="my-service", exporter=exporter)

To use a custom metric reader you can specify the exporter type to be otel-custom and provide a custom metric reader:

from opentelemetry.exporter.prometheus import PrometheusMetricReader

my_custom_metric_reader = PrometheusMetricReader("")
exporter = {
    "type": "otel-custom",
    "reader": my_custom_metric_reader
}
init(tracker="opentelemetry", service_name="my-service", exporter=exporter)

Development of the package

This package uses poetry as a package manager, with all dependencies separated into three groups:

  • root-level dependencies: required
  • dev: everything needed for development or in CI
  • examples: dependencies for everything in the examples/ directory

By default, poetry will only install the required dependencies. If you want to run the examples, install them with:

poetry install --with examples

Code in this repository is:

  • formatted using black
  • annotated with type definitions (which are checked by mypy)
  • tested using pytest

To run these tools locally, you need to install them first. You can do so with poetry:

poetry install --with dev --all-extras

After that, you can run the tools individually:

# Formatting using black
poetry run black .
# Lint using mypy
poetry run mypy .
# Run the tests using pytest
poetry run pytest
# Run a single test, and clear the cache
poetry run pytest --cache-clear -k test_tracker

autometrics-py's People

Contributors

actualwitch, brettimus, dependabot[bot], flenter, nlea, v-aparna


autometrics-py's Issues

Automatically serve `/metrics` on port 9464

From the Typescript docs:

By default the TypeScript library makes the metrics available on <your_host>:9464/metrics. Make sure your Prometheus is configured correctly to find it.

autometrics-py should do something similar!
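
In the meantime, something close to this is already possible with the documented exporter option, which starts a separate metrics server (a sketch; port 9464 is chosen here to match the TypeScript default):

from autometrics import init

init(
    tracker="prometheus",
    service_name="my-service",
    exporter={"type": "prometheus", "address": "localhost", "port": 9464},
)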

Ruby language support

Would it be hard to add support for Ruby? I see Python is supported - maybe it would be similar to that one? Maybe it could be contributed, if it's not too hard to add.

Why not export `Objective`s from the top-level `autometrics` module?

Right now, to import and use Objectives, I need to write

from autometrics import autometrics, init
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

Could we just change this so you can import everything from the top-level module?

E.g.,

from autometrics import autometrics, init, Objective, ObjectiveLatency, ObjectivePercentile

Follow Prometheus' metrics naming convention

The metric names don't follow the naming conventions of Prometheus

COUNTER_NAME = "function.calls"
HISTOGRAM_NAME = "function.calls.duration"
CONCURRENCY_NAME = "function.calls.concurrent"

COUNTER_NAME_PROMETHEUS = COUNTER_NAME.replace(".", "_")
HISTOGRAM_NAME_PROMETHEUS = HISTOGRAM_NAME.replace(".", "_")
CONCURRENCY_NAME_PROMETHEUS = CONCURRENCY_NAME.replace(".", "_")
SERVICE_NAME_PROMETHEUS = SERVICE_NAME.replace(".", "_")

It'd be nice to have them suffixed with the units

Modify build_info query to also look for `build_info_total`

Because of #38, the build_info functionality does not work out-of-the-box

What's happening is that opentelemetry-python is converting the gauge to a counter, thus the metric name is build_info_total instead of build_info.

We could modify the queries to accommodate this issue, until it's resolved upstream:

* on (instance, job) group_left(version, commit) (
  last_over_time({
    __name__=~"build_info(_total)?"
  }[${interval}])
  or on (instance, job) up
)

Rename metrics

  • Rename counter to function.calls and ensure it is exported to Prometheus as function_calls_total
  • Ensure histogram is exported to Prometheus as function_calls_duration_seconds

Feature: add integrations for common frameworks like FastAPI

Taking inspiration from the way Strawberry handles integrations with common frameworks.

For instance, they have a package called strawberry-graphql[fastapi], which allows you to import a GraphQLRouter, which you can then use directly with FastAPI as follows (docs here):

import strawberry

from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter


@strawberry.type
class Query:
    @strawberry.field
    def hello(self) -> str:
        return "Hello World"


schema = strawberry.Schema(Query)

graphql_app = GraphQLRouter(schema)

app = FastAPI()
app.include_router(graphql_app, prefix="/graphql")

I could imagine something similar for autometrics-py, which allows you to do the following

from fastapi import FastAPI
# NOTE - you'll need to `pip install 'autometrics[fastapi]'` to use `MetricsRouter`
from autometrics.fastapi import MetricsRouter

metrics_app = MetricsRouter()

app = FastAPI()
app.include_router(metrics_app, prefix="/metrics")

Effectively, this MetricsRouter would be sugar for

from prometheus_client import generate_latest

# Set up a metrics endpoint for Prometheus to scrape
#   `generate_latest` returns metrics data in the Prometheus text format
@app.get("/metrics")
def metrics():
    return Response(generate_latest())

autometrics doesn't pick up fastapi routes/handlers

I'm working on a fastapi sample app and have the following simple code:

from typing import Union

from fastapi import FastAPI
from autometrics.autometrics import autometrics
from prometheus_client import start_http_server

app = FastAPI()

@autometrics
@app.get("/")
async def read_root():
    do_something()
    return {"Hello": "World"}

start_http_server(8080)

# looking up this function in prometheus works
@autometrics
def do_something():
    print("done")

In this case do_something does get picked up by Prometheus (running on 9090) but read_root does not. I don't know if it's the chaining/order of the decorators. Changing the order of the decorators on read_root actually breaks the handler.

Add open telemetry support for python

And switch to that implementation by default.

The current implementation will be the fallback implementation. This will make it more in line with the other flavors of autometrics.

Stdout output for OpenTelemetry Exporter

I use autometrics-py version 0.9 and would like to set up the OTel exporter. The only output I see in my console is a warning that seems to come from the underlying OTel exporter, saying:
Overriding of current MeterProvider is not allowed

I have tried out two different setups for calling init:

  1. running the exact example from the repo:
    https://github.com/autometrics-dev/autometrics-py/blob/main/examples/export_metrics/otlp-http.py
  2. In my FastAPI setup:
@app.on_event("startup")
def initOtelExporter():
    init(
        exporter={
            "type": "otlp-proto-http",
            "endpoint": "http://otel-collector:4318",
            "push_interval": 1000,
        },
        service_name="otlp-exporter",
    )

The printed output stays the same in both scenarios.

Improve top section of the README

Add some links out similar to what the rust flavor of the library does:


i.e. link to PyPI, Discord, and, hopefully in the future, a docs website.

allow newer prometheus client versions

Any reason to force the version of prometheus client to 0.16?

prometheus-client = "0.16.0"

This blocks us from trying out the module, since we already have 0.17 set up in our projects.

Ability to define 'ok' versus 'error' results

If an error response from an HTTP handler is, e.g., a 400, we might not want to count that as an "error" in our metrics.

We should support an additional param to the decorator that accepts a predicate function, returning True if a raised exception should actually be counted as an error.

Example

def treat_401_as_ok(exception: Exception) -> bool:
  # HTTPError here would be e.g. requests.HTTPError; a 401 is not counted as an error
  if isinstance(exception, HTTPError):
    return exception.response.status_code != 401
  return True

@autometrics(error_handler=treat_401_as_ok)
def my_tracked_function():
  ...

Rust: autometrics-dev/autometrics-rs#61

Investigate: Get qualified name for function so autometrics does not need to be the topmost decorator

In the README, we write:

Autometrics by default will try to store information on which function calls a decorated function. As such you may want to place the autometrics in the top/first decorator, as otherwise you may get inner or wrapper as the caller function.

It would be nice to implement our decorator so that it does not rely on being the topmost decorator above a function.

This should be possible. For reference, check out the utility function used by Sentry in their SDK's decorators

Installation issues with Python3.7

I am trying to follow the readme but consistently running into multiple issues.

Installation details:
$ pip3 install autometrics
Collecting autometrics
  Downloading autometrics-0.6-py3-none-any.whl (21 kB)
Collecting prometheus-client==0.16.0
  Downloading prometheus_client-0.16.0-py3-none-any.whl (122 kB)
Collecting opentelemetry-sdk<2.0.0,>=1.17.0
  Downloading opentelemetry_sdk-1.18.0-py3-none-any.whl (101 kB)
Collecting python-dotenv==1.0.0
  Downloading python_dotenv-1.0.0-py3-none-any.whl (19 kB)
Collecting opentelemetry-api<2.0.0,>=1.17.0
  Downloading opentelemetry_api-1.18.0-py3-none-any.whl (57 kB)
Collecting typing-extensions<5.0.0,>=4.5.0
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting opentelemetry-exporter-prometheus<2.0.0,>=1.12.0rc1
  Downloading opentelemetry_exporter_prometheus-1.12.0rc1-py3-none-any.whl (10 kB)
Collecting importlib-metadata~=6.0.0
  Downloading importlib_metadata-6.0.1-py3-none-any.whl (21 kB)
Requirement already satisfied: setuptools>=16.0 in /usr/local/lib/python3.10/site-packages (from opentelemetry-api<2.0.0,>=1.17.0->autometrics) (63.4.3)
Collecting deprecated>=1.2.6
  Downloading Deprecated-1.2.14-py2.py3-none-any.whl (9.6 kB)
Collecting opentelemetry-semantic-conventions==0.39b0
  Downloading opentelemetry_semantic_conventions-0.39b0-py3-none-any.whl (26 kB)
Collecting wrapt<2,>=1.10
  Downloading wrapt-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl (35 kB)
Collecting zipp>=0.5
  Downloading zipp-3.16.0-py3-none-any.whl (6.7 kB)
Installing collected packages: zipp, wrapt, typing-extensions, python-dotenv, prometheus-client, opentelemetry-semantic-conventions, importlib-metadata, deprecated, opentelemetry-api, opentelemetry-sdk, opentelemetry-exporter-prometheus, autometrics
Successfully installed autometrics-0.6 deprecated-1.2.14 importlib-metadata-6.0.1 opentelemetry-api-1.18.0 opentelemetry-exporter-prometheus-1.12.0rc1 opentelemetry-sdk-1.18.0 opentelemetry-semantic-conventions-0.39b0 prometheus-client-0.16.0 python-dotenv-1.0.0 typing-extensions-4.7.1 wrapt-1.15.0 zipp-3.16.0

$ python3
Python 3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'autometrics.objectives'

I have tried it on multiple computers but get the same issue. Is there something more I need to do when it comes to installation?

INVESTIGATE: Setting build info with `init` can cause duplicate build_info gauges that break autometrics queries

When you set, e.g., a version via init, I'm pretty sure we end up adding two gauges: one with a version label and one without. These will both be set within a short period of time.

The result is that our group_left query, which joins build info to function metrics, causes a 422 from Prometheus, with an error like:

Error executing query: 
found duplicate series for the match group {instance="localhost:8082", job="am_0"} on the right hand-side of the operation: 

[
  {__name__="build_info", instance="localhost:8082", job="am_0", service_name="autometrics", version="0.0.1"},
  {__name__="build_info", instance="localhost:8082", job="am_0", service_name="autometrics"}
];

many-to-many matching not allowed: matching labels must be unique on one side

The culprit is likely:

def default_tracker():
    """Setup the default tracker."""
    preferred_tracker = get_tracker_type()
    return init_tracker(preferred_tracker)


tracker: TrackMetrics = default_tracker()

We initialize a tracker out of the box. When a user calls init, they effectively "re-initialize" the tracker with new build information, which sets a new build info gauge.

Need to confirm though.

Create docs

We probably want to generate docs and host them on something like readthedocs.org

Add support for async functions

Right now the decorator assumes that when the function it is wrapping returns, the work is also finished. This is not the case with async functions.

async def async_function():
    """This is an async function."""
    await asyncio.sleep(0.1)
    return True

Calling this function returns almost immediately (it returns a coroutine), but awaiting the result takes ~0.1s longer, so the decorator records a misleadingly small duration.
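
Not the library's implementation, but a minimal sketch of how a decorator can handle this: detect coroutine functions and await them inside an async wrapper, so the observed duration covers the awaited work rather than just the call that creates the coroutine.

import inspect
import time
from functools import wraps

def timed(func):
    """Toy decorator that times both sync and async functions correctly."""
    if inspect.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return await func(*args, **kwargs)
            finally:
                print(f"{func.__name__}: {time.perf_counter() - start:.3f}s")
        return async_wrapper

    @wraps(func)
    def sync_wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            print(f"{func.__name__}: {time.perf_counter() - start:.3f}s")
    return sync_wrapper

# timed(async_function) would now report ~0.1s instead of near zero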

OpenTelemetry Support

Refactor the library so it uses OTel for the instrumentation and then the Prometheus exporter to make it available for Prometheus.
