weaveworks / promjs
License: Other
Is it dead?
I wanted to ask if there is any way to make this library run on React Native. I can build the app successfully, but I get this error when the code runs:
error: Error: While trying to resolve module promjs from file D:\Repos\field-rep-app\mobile\node_modules\@cabify\prom-react\dist\index.js, the package D:\Repos\field-rep-app\mobile\node_modules\promjs\package.json was successfully found. However, this package itself specifies a main module field that could not be resolved (D:\Repos\field-rep-app\mobile\node_modules\promjs\lib\index.js). Indeed, none of these files exist:
I am using "@cabify/prom-react": "^0.3.0", which depends on promjs "^0.4.1".
react: 17.0.2 => 17.0.2
react-native: 0.68.2 => 0.68.2
As noted at weaveworks/prom-aggregation-gateway#31, the path /api/ui/metrics is something Weaveworks picked; it does not match the /metrics/ path that someone using the Prometheus push gateway would expect. We should at least document the mismatch, and perhaps make the path configurable.
When pushing metrics on an interval, after doing a registry.reset(), it would be nice to skip the push entirely when no new metrics have been recorded, i.e. when all metrics are 0. As far as I can tell there isn't an easy way to check this — is there?
This is somewhat related to #40 in looking for convenient ways to make pushing metrics as efficient as possible by not pushing things we don't need to.
registry.reset() resets all existing metrics to 0, but registry.metrics() still returns them even when they are 0, making the payload quite large, with many useless zero values in it (those metrics were sent in previous calls and reset afterwards). Example:
# HELP fe_hs_app_bundle_load_seconds Time to load an HS app bundle
# TYPE fe_hs_app_bundle_load_seconds histogram
fe_hs_app_bundle_load_seconds_count 0
fe_hs_app_bundle_load_seconds_sum 0
fe_hs_app_bundle_load_seconds_bucket{le="1"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2"} 0
fe_hs_app_bundle_load_seconds_bucket{le="3"} 0
fe_hs_app_bundle_load_seconds_bucket{le="5"} 0
fe_hs_app_bundle_load_seconds_bucket{le="7"} 0
fe_hs_app_bundle_load_seconds_bucket{le="10"} 0
fe_hs_app_bundle_load_seconds_bucket{le="+Inf"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.25"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.5"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1.5"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2.5"} 0
fe_hs_app_bundle_load_seconds_count{hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_sum{hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="3",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="5",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="7",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="10",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="+Inf",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.25",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.5",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1.5",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2.5",hsApp="hs-app-composer",hasError="false"} 0
fe_hs_app_bundle_load_seconds_count{hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_sum{hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="3",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="5",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="7",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="10",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="+Inf",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.25",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.5",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1.5",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2.5",hsApp="hs-app-directory",hasError="false"} 0
fe_hs_app_bundle_load_seconds_count{hsApp="hs-app-inbox",hasError="false"} 1
fe_hs_app_bundle_load_seconds_sum{hsApp="hs-app-inbox",hasError="false"} 29.47082499996759
fe_hs_app_bundle_load_seconds_bucket{le="1",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="3",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="7",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="10",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="+Inf",hsApp="hs-app-inbox",hasError="false"} 1
fe_hs_app_bundle_load_seconds_bucket{le="0.25",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1.5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2.5",hsApp="hs-app-inbox",hasError="false"} 0
Would it be possible for registry.metrics() not to return empty metrics (or at least to accept a parameter that lets you skip them)? Using registry.clear() is not feasible in my situation, for the reason mentioned here.
Ideally, this is the payload I'd like to send instead of the above:
# HELP fe_hs_app_bundle_load_seconds Time to load an HS app bundle
# TYPE fe_hs_app_bundle_load_seconds histogram
fe_hs_app_bundle_load_seconds_count{hsApp="hs-app-inbox",hasError="false"} 1
fe_hs_app_bundle_load_seconds_sum{hsApp="hs-app-inbox",hasError="false"} 29.47082499996759
fe_hs_app_bundle_load_seconds_bucket{le="1",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="3",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="7",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="10",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="+Inf",hsApp="hs-app-inbox",hasError="false"} 1
fe_hs_app_bundle_load_seconds_bucket{le="0.25",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="0.5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="1.5",hsApp="hs-app-inbox",hasError="false"} 0
fe_hs_app_bundle_load_seconds_bucket{le="2.5",hsApp="hs-app-inbox",hasError="false"} 0
I could do this processing on my side, but it would be easier not to add those zero metrics in the first place, and maybe it could become a feature if more people find it useful.
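Until something like this exists in the library, one client-side workaround for the "skip pushing when everything is 0" case is to scan the exposition text that registry.metrics() returns before pushing. This is a minimal sketch, not a promjs API; the function name is hypothetical:

```typescript
// Return true if any sample line in a Prometheus text-format exposition
// carries a non-zero value. Comment lines (# HELP / # TYPE) and blank
// lines are ignored; the value is the last space-separated token.
function hasNonZeroSamples(exposition: string): boolean {
  return exposition
    .split("\n")
    .filter((line) => line.trim() !== "" && !line.startsWith("#"))
    .some((line) => {
      const value = Number(line.slice(line.lastIndexOf(" ") + 1));
      return Number.isFinite(value) && value !== 0;
    });
}
```

Usage would be `if (hasNonZeroSamples(registry.metrics())) push(...)`. Note this skips whole pushes only; it does not filter individual zero series out of a mixed payload.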
The semi-official Node.js client for Prometheus (as in, the one linked in Prometheus's docs) is prom-client. If we adopted this library at work, we'd still use prom-client for server-side metrics, and using two different APIs in the same language would be somewhat awkward. Would you be opposed to changing your API to more closely resemble prom-client's?
From a first look, the difference is mostly in how labels are handled and how metrics are constructed.
If you're interested, I could probably provide a PR.
Hi,
Thanks for creating and open-sourcing this awesome lib.
I was wondering if you have a recommended way (or an interface to implement) for communicating with prom-aggregation-gateway. I think this could be useful for this project.
Thanks again.
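Absent a built-in integration, communicating with the gateway can come down to POSTing the exposition text. Here is a minimal sketch under that assumption; the function name and gateway URL are illustrative, and the fetch implementation is injected so the sketch doesn't depend on any particular runtime:

```typescript
// Minimal shape of a fetch-like function; in a browser or Node 18+
// the global fetch satisfies this.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ ok: boolean }>;

// POST the registry's exposition text (e.g. registry.metrics()) to an
// aggregation-gateway endpoint. Returns whether the push succeeded.
async function pushMetrics(
  exposition: string,
  gatewayUrl: string,
  fetchImpl: FetchLike,
): Promise<boolean> {
  const res = await fetchImpl(gatewayUrl, {
    method: "POST",
    headers: { "Content-Type": "text/plain" },
    body: exposition,
  });
  return res.ok;
}
```

A formal interface in promjs could look much like this signature, which is what makes it easy to mock in tests.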
It would be nice to provide some metrics out of the box for folks unfamiliar with Prometheus.
For Node.js: a basic middleware to record latencies and status codes of server responses, plus exposing a /metrics path.
For the browser: wrap the XMLHttpRequest prototype to instrument client requests.
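The Node.js half of this could be sketched roughly as an Express-style middleware. The recorder interface below is a stand-in, not promjs's actual histogram API, and all names are illustrative:

```typescript
// Anything that can record an observation with labels; in practice this
// would wrap a promjs histogram, but the exact signature is assumed here.
interface LatencyRecorder {
  observe(seconds: number, labels: Record<string, string>): void;
}

// Express-style (req, res, next) middleware that records response latency
// and status code once the response finishes.
function latencyMiddleware(recorder: LatencyRecorder) {
  return (
    req: { method: string },
    res: { statusCode: number; on(ev: string, cb: () => void): void },
    next: () => void,
  ) => {
    const start = process.hrtime.bigint();
    res.on("finish", () => {
      const seconds = Number(process.hrtime.bigint() - start) / 1e9;
      recorder.observe(seconds, {
        method: req.method,
        status: String(res.statusCode),
      });
    });
    next();
  };
}
```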
Hurdles:
Hi there,
Not a critical issue, but something that would be nice to have. Right now, whenever the user refreshes the page, the registry state is lost forever. The way I'm solving this now is quite brittle.
I store the JSON-stringified registry in localStorage:
export function storePromRegistry() {
localStorage.setItem("promjs_registry_json", JSON.stringify(promRegistry));
}
And then, whenever the registry is created, I check localStorage to see if there's any saved state there, loading it if so:
// Restore state from localStorage; `each` is lodash's iterator and
// promRegistry is the app's promjs registry instance.
import { each } from "lodash";

const promRegistryStored = localStorage.getItem("promjs_registry_json");
if (promRegistryStored) {
  const promRegistryDataJSON = JSON.parse(promRegistryStored);
  each(promRegistryDataJSON.data, (collectorGroup) => {
    each(collectorGroup, (collectorData, collectorName) => {
      const collector = promRegistry.get(collectorData.type, collectorName);
      if (collector && collectorData.instance.data[0]) {
        collector.set(collectorData.instance.data[0].value);
      }
    });
  });
}
Yep, it's an edge case, and losing some metrics on refresh is not the end of the world, but it would be nice to have support from the lib for persisting state if needed.
No Summaries yet! Someone should add that...
Hey! Great work with this library; I'm currently using it to start instrumenting my org's app.
I'm curious to hear what the best approach is for listening to changes to counters, histograms, etc created by a registry so you know to send an update to your Prometheus Gateway.
I was expecting that maybe the registry would expose some sort of onChange listener you could use to trigger new requests to send the latest metrics to your backend.
For now I'm just checking the registry.metrics() output for changes on an interval and using that to send the latest metrics to the backend. Curious to know if anyone has a recommended way to do this?
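That polling workaround can at least be made cheap: diff the serialized registry on each tick and push only when it changed. A minimal sketch, assuming getMetrics wraps registry.metrics() and push sends to your gateway (both names are hypothetical):

```typescript
// Returns a tick function suitable for setInterval. It serializes the
// registry, compares against the last pushed snapshot, and pushes only
// when the output actually changed.
function createPoller(
  getMetrics: () => string,
  push: (body: string) => void,
): () => void {
  let last = "";
  return () => {
    const current = getMetrics();
    if (current !== last) {
      last = current;
      push(current);
    }
  };
}
```

Usage: `setInterval(createPoller(() => registry.metrics(), sendToGateway), 15000);`. A real onChange hook in the registry would still be nicer, since this can miss increment-then-reset sequences that happen between ticks.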
Some of the pinned dependencies in this package cause Snyk warnings; could you please cut a new release using updated deps?
You should be able to .set any value on a gauge, in addition to incrementing and decrementing.
I am using this library in an Angular application, and I can see the metrics in the browser by logging them in each component.
I am not sure what to implement in Angular so that Prometheus can scrape these metrics from a "/metrics" URL endpoint.
Any ideas? Please share some sample code if anyone has done this already.
Hey, could you please update the npm package to the latest commit? We're using TypeScript, and the type definitions currently published on npm are broken.
Many thanks in advance!
Having trouble using the package with the simple import as documented:
import prom from 'promjs';
Working around it by doing:
import prom from 'promjs/index';
I think this is due to a mismatch between the file layout in the git repo and the published package.
package.json references lib/index.js
Line 5 in dc88ea5
In the package on npm, however, the actual index.js is in the root of the package, not in lib.
I suspect this is due to the cd lib in your push script
Line 14 in dc88ea5
uglifyjs-webpack-plugin has been deprecated; it is suggested to move to https://github.com/webpack-contrib/terser-webpack-plugin instead.
When installed, the package has an internal node_modules directory.
Reproduce: install promjs with yarn add promjs (or npm), then run ls -l node_modules/promjs/node_modules/.
There shouldn't be a node_modules dir there. This is causing lodash to be included in the bundle twice.
I'm using promjs with some code on Cloudflare Workers, and I noticed that lodash contributes ~71kb of bloat (promjs without lodash is otherwise 9kb). For example, I think lodash.filter, lodash.each, lodash.reduce, lodash.find, and lodash.map are completely redundant, since ES arrays offer those out of the box. lodash.sum can be expressed in terms of reduce. lodash.isEqual could be replaceable with a custom version, since it's just being used to compare two arrays of strings, I think (and even a nested one wouldn't be hard).
I'm sure other pieces might be harder to tease out, but it would be worthwhile given how heavy a dependency lodash is.
FWIW, I'm using esbuild. I've tried https://www.npmjs.com/package/@optimize-lodash/esbuild-plugin and it makes no difference to the minified size.
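The custom isEqual mentioned above could be a handful of lines, assuming (as described) it only ever compares possibly-nested arrays of strings. A sketch:

```typescript
// Deep-equality for (possibly nested) arrays of strings; falls back to
// strict equality for the string leaves. Not a general isEqual replacement.
function arraysEqual(a: unknown, b: unknown): boolean {
  if (Array.isArray(a) && Array.isArray(b)) {
    return a.length === b.length && a.every((v, i) => arraysEqual(v, b[i]));
  }
  return a === b;
}
```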
We'd like to retire our CircleCI subscription this year.
First: thanks for building this! And for using TypeScript, which makes my life easier.
I see in the docs that clear() is intended to "Reset all existing metrics to 0." But looking at the code, it simply sets registry.data to empty counter/gauge/histogram maps, meaning all of the .create calls would need to happen again. As far as I can see, attempting to increment an existing metric fails silently after clear(). That doesn't seem to match the described behavior:
const registry = promjs();
const myCounter = registry.create('counter', 'my_counter', 'This is a sample counter');
myCounter.inc({ foo: 'bar' });
registry.metrics();
// # HELP my_counter This is a sample counter
// # TYPE my_counter counter
// {foo="bar"} 1
registry.clear();
myCounter.inc({ foo: 'baz' });
registry.metrics();
// ''
registry.data;
// {"counter":{},"gauge":{},"histogram":{}}
Is the intent that metrics be re-initialized after .clear()? Is there a way to actually "reset metrics to 0" (rather than removing them), or a willingness to support that?
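One userland workaround for the behavior above is to remember every create() call and replay them after clear(), so the metrics exist again at 0. This is only a sketch: the MiniRegistry shape is a stand-in for the real promjs types, and note that references held to the original collector objects (like myCounter above) would still be stale afterwards; you'd need to re-fetch collectors from the registry.

```typescript
// Minimal stand-in for the registry surface this sketch needs.
interface MiniRegistry {
  create(type: string, name: string, help: string): unknown;
  clear(): void;
}

// Wrap a registry so create() calls are recorded and can be replayed
// after clear(), yielding fresh zero-valued metrics.
function withReplay(registry: MiniRegistry): MiniRegistry & { resetToZero(): void } {
  const created: Array<[string, string, string]> = [];
  const origCreate = registry.create.bind(registry);
  registry.create = (type, name, help) => {
    created.push([type, name, help]);
    return origCreate(type, name, help);
  };
  return Object.assign(registry, {
    resetToZero(): void {
      registry.clear();
      for (const [type, name, help] of created) origCreate(type, name, help);
    },
  });
}
```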