hynek / prometheus-async
Async helpers for prometheus_client.
Home Page: https://prometheus-async.readthedocs.io/
License: Apache License 2.0
Consul's service registration API supports a number of parameters beyond the ones implemented in the ConsulAgent class. I'm not familiar with all of them, so I'm not sure which would make sense for prometheus_async, but the one I would like to use is Meta, to associate some extra key-value pairs with the service.
Perhaps the simplest would be to have ConsulAgent take an optional argument called something like json_extra that is added to the JSON sent to Consul — which would give future-proofing for arbitrary attributes?
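To make the proposal concrete, here is a sketch of how such a hook could work. The parameter name json_extra and the helper function are assumptions for illustration, not existing prometheus_async API:

```python
# Hypothetical sketch of the proposed "json_extra" hook: extra keys are
# merged into the registration payload before it is sent to Consul's
# agent API, so attributes the wrapper doesn't model (e.g. Meta) still
# get through.
def build_registration_payload(name, address, port, json_extra=None):
    payload = {"Name": name, "Address": address, "Port": port}
    if json_extra:
        # Caller-supplied keys win, future-proofing the wrapper for
        # arbitrary Consul registration attributes.
        payload.update(json_extra)
    return payload

payload = build_registration_payload(
    "app-metrics", "127.0.0.1", 8000,
    json_extra={"Meta": {"team": "platform"}},
)
```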
Hello,
I'm currently using aioprometheus in a FastAPI/Uvicorn app, but the metric exposure is blocking all the routes:
claws/aioprometheus#98
Looking at the official client, it seems that they added async ASGI metric exposure, but I don't know whether their metrics updates will then be blocking; I haven't tried it yet.
See prometheus/client_python#512
So I would like to know whether prometheus-async is non-blocking for metrics updates and metrics exposure, i.e. that these two operations won't block execution of other HTTP requests.
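For context, the distinction the question hinges on can be shown with plain asyncio, with no Prometheus involved: a blocking render called directly would stall every other task, while offloading it to a thread keeps them running. A minimal sketch (the blocking call is simulated with time.sleep):

```python
import asyncio
import time

def blocking_render():
    time.sleep(0.2)  # stand-in for a blocking metrics render
    return "metrics"

async def main():
    ticks = 0

    async def ticker():
        # Represents other HTTP requests being served concurrently.
        nonlocal ticks
        for _ in range(10):
            await asyncio.sleep(0.02)
            ticks += 1

    t = asyncio.create_task(ticker())
    # Offloading keeps the event loop free; calling blocking_render()
    # directly here would freeze the ticker until it returned.
    body = await asyncio.to_thread(blocking_render)
    await t
    return body, ticks

body, ticks = asyncio.run(main())
```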
The upstream Prometheus client chooses the correct version of generate_latest to use based on the Accept header. However, it seems that this library only uses the default (non-OpenMetrics) generate_latest function.
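The negotiation in question boils down to inspecting the Accept header. A simplified sketch of what the upstream client's chooser does (the helper itself is illustrative, not the upstream function; the content-type strings are the standard exposition formats):

```python
# Simplified Accept-header negotiation: return the OpenMetrics content
# type only when the client explicitly asked for it, otherwise fall
# back to the classic text format.
OPENMETRICS = "application/openmetrics-text; version=1.0.0; charset=utf-8"
PLAIN = "text/plain; version=0.0.4; charset=utf-8"

def choose_content_type(accept_header):
    if accept_header and "application/openmetrics-text" in accept_header:
        return OPENMETRICS
    return PLAIN
```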
I have an async app and I want to expose the metrics with the asynchronous server; for security purposes I want to use SSL/TLS.
import asyncio
import ssl

from prometheus_async.aio.web import start_http_server

ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
metric_server = asyncio.create_task(start_http_server(addr='localhost', port=3011, ssl_ctx=ssl_context))
When testing the endpoint, I got the following:
$ curl -vk https://localhost:3011/metrics
* Trying 127.0.0.1:3011...
* Connected to localhost (127.0.0.1) port 3011 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:3011
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:3011
What am I doing wrong?
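A likely cause (an assumption, based on the handshake aborting immediately): ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) builds a server-side context, but no certificate is ever loaded into it, so the server cannot complete the TLS handshake and curl reports SSL_ERROR_SYSCALL. A sketch of the missing step, with placeholder file names:

```python
import ssl

# Purpose.CLIENT_AUTH means "I am the server, (optionally)
# authenticating clients" — a server-side context.
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)

def load_server_identity(ctx, certfile, keyfile):
    # The step the original snippet is missing (file names are
    # placeholders): without a certificate/key pair the server has no
    # identity to present, and the handshake aborts.
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```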
Hello, apologies if I missed it, but it doesn't seem like there is support yet for the Counter metric type?
The upstream client supports adding custom collectors. Adding support for async collectors would make it possible to easily gather metrics over HTTP, for example.
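One way such support could look, as a hedged sketch (both classes and the sample format are hypothetical, not existing API): an async collector gathers its samples, and a small sync wrapper bridges it into the upstream client's synchronous collect() protocol.

```python
import asyncio

class AsyncCollector:
    # Hypothetical async collector: in real code this might fetch
    # samples over HTTP before yielding them.
    async def collect(self):
        await asyncio.sleep(0)  # stand-in for async I/O
        return [("my_metric", 42.0)]

class SyncBridge:
    # Adapts the async collector to the sync collect() protocol.
    # A real integration would need a strategy for an already-running
    # event loop; this sketch simply drives the coroutine itself.
    def __init__(self, async_collector):
        self._ac = async_collector

    def collect(self):
        return asyncio.run(self._ac.collect())

samples = SyncBridge(AsyncCollector()).collect()
```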
tox installs the package into the virtualenv it creates, then runs the tests inside that virtualenv. However, the source option in .coveragerc points at the source tree, not at the installation inside the tox virtualenv. As a result, coverage should not collect any data, and you should get an empty coverage report. But you don't.
Apparently, the tests run at least partly against the source tree rather than inside the tox virtualenv.
This can be confirmed when modifying the test command to show some debugging information (thanks to @nedbat for suggesting this):
commands =
coverage run --debug=sys,config,trace -a {envbindir}/py.test -s tests
Sample output:
tests/test_tx.py Tracing '/tmp/prometheus_async/prometheus_async/aio/__init__.py'
Right now, I have no idea what causes this. I will try to investigate further. However, you probably want to fix this, since it may cause unintended effects down the road. It may be worthwhile to investigate the changedir option of tox (thanks to @brechtm for the suggestion).
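One common remedy for this class of problem (the mapping below assumes the standard tox layout and is untested against this repo): a [paths] section in .coveragerc that tells coverage the installed copy and the source tree are the same code, so traced files are attributed consistently no matter which copy actually ran.

```ini
[run]
source = prometheus_async

[paths]
source =
    prometheus_async
    .tox/*/lib/python*/site-packages/prometheus_async
```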
The official Python Prometheus client outlines a problem caused by using multiple Gunicorn workers and provides a solution:
https://github.com/prometheus/client_python/#multiprocess-mode-gunicorn
I am working on an asynchronous web app using FastAPI with 4 Gunicorn workers, so I cannot use the synchronous Prometheus client. However, this library makes no mention of the multiprocessing issue and, after running some tests, it appears to have no built-in solution for it.
My question is: is there any plan to add support for multiprocessing? If not, what solution would you recommend? I would prefer to continue using the pull model.
Thank You.
Hello,
Currently it is only possible to use Histogram and Summary with the @time decorator.
These metrics produce aggregated values, while I'd like to have a plain graph, i.e. to use Gauge as the metric with the @time decorator.
To achieve that, I'd like to add a fallback from the metric.observe method to metric.set.
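The proposed fallback can be sketched in a few lines. This is an illustrative decorator, not prometheus_async's actual @time implementation, and FakeGauge is a stand-in for a real prometheus_client.Gauge:

```python
import time
from functools import wraps

def timed(metric):
    # The fallback being proposed: prefer observe() (Histogram,
    # Summary), fall back to set() for metrics like Gauge.
    record = getattr(metric, "observe", None) or metric.set

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                record(time.perf_counter() - start)
        return wrapper

    return decorator

class FakeGauge:
    # Stand-in for prometheus_client.Gauge: has set() but no observe().
    def __init__(self):
        self.value = None

    def set(self, v):
        self.value = v

g = FakeGauge()

@timed(g)
def work():
    return "done"

result = work()
```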