dundee / disk_usage_exporter
Disk Usage Prometheus Exporter
License: MIT License
I deployed disk_usage_exporter in a k8s cluster. I want to collect the disk usage of the directory /var/lib/docker.
My DaemonSet is:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-usage-exporter
spec:
  selector:
    matchLabels:
      app: disk-usage-exporter
  template:
    metadata:
      labels:
        app: disk-usage-exporter
        name: disk-usage-exporter
      name: disk-usage-exporter
      annotations:
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: disk-usage-exporter
        args:
        - -p=/host/var
        - -l=0
        image: disk-usage-exporter:v0.2.0
        ports:
        - containerPort: 9995
          name: scrape
        volumeMounts:
        - mountPath: /host/var/lib/docker
          mountPropagation: HostToContainer
          name: docker
          readOnly: true
        - mountPath: /host/root
          name: root
          mountPropagation: HostToContainer
          readOnly: true
        - mountPath: /host/proc
          name: proc
      serviceAccountName: default
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /var/lib/docker
          type: ""
        name: docker
      - hostPath:
          path: /
          type: ""
        name: root
I think the pods use too much memory.
Command: kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)
disk-usage-exporter-h9d97   363m         9248Mi
disk-usage-exporter-jfh2p   2457m        2106Mi
disk-usage-exporter-wvx9c   2886m        1659Mi
When I curl podIP:9995/metrics, it takes about 15 seconds to get the metrics:
[root@master]# curl 172.31.139.237:9995/metrics | grep var
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6470 0 6470 0 0 417 0 --:--:-- 0:00:15 --:--:-- 1449
node_disk_usage_bytes{path="/host/var"} 6.3816163328e+10
And the pod disk-usage-exporter-h9d97, whose memory usage shows 9248Mi, doesn't work at all; its log is:
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067910992, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067911312, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067911728, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067912048, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067912368, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067911632, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067914976, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067916296, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067915608, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067915928, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067918040, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067909112, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067915592, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067919008, free: 318873600"
time="2022-04-22T03:25:10Z" level=info msg="setting GC percent to 3, alloc: 8067919672, free: 318873600
Can anyone give some help?
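One hedged observation from the manifest above: -p=/host/var makes the exporter walk the entire host /var tree, including the Docker overlay directories with their very large file counts, even though only /var/lib/docker is mounted and actually wanted. A minimal sketch of a narrower container spec, assuming the goal is only /var/lib/docker; the resource values are illustrative, not tuned recommendations:

      containers:
      - name: disk-usage-exporter
        args:
        - -p=/host/var/lib/docker   # scan only the tree the question asks about
        - -l=1                      # report one level of subdirectories (the manifest above used -l=0)
        image: disk-usage-exporter:v0.2.0
        resources:
          requests:
            memory: 256Mi           # illustrative values: a limit turns unbounded growth
          limits:
            memory: 512Mi           # into a visible OOM restart instead of node memory pressure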
Running the exporter with and without a trailing slash on the path yields different metric labels; with the trailing slash, the directory name is duplicated:

disk_usage_exporter -p=/data
  node_disk_usage_bytes{path="/data/xxxx"} 8192
disk_usage_exporter -p=/data/
  node_disk_usage_bytes{path="/data/data/xxxx"} 8192
Hi,
we have some directories that contain symlinks, but the exporter sees these as empty folders and does not calculate their size. Is there a way to treat the links as real folders?
Thanks,
Adam
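A hedged illustration of why directory walkers usually skip symlinks: the standard library reports the link's own metadata unless you explicitly resolve it, and following links risks cycles and double counting. This is a generic sketch with a hypothetical path, not the exporter's code:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Lstat describes the link itself (its size is just the length of
	// the target path); Stat follows it and describes the target.
	li, _ := os.Lstat("/data/linked-dir") // hypothetical symlink
	si, err := os.Stat("/data/linked-dir")
	if err != nil {
		fmt.Println("broken link:", err)
		return
	}
	fmt.Printf("link itself: %d bytes, dir=%v\n", li.Size(), li.IsDir()) // dir=false
	fmt.Printf("target:      %d bytes, dir=%v\n", si.Size(), si.IsDir()) // dir=true
}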
I've compiled your exporter on Windows with go build, and it compiled without any error.
It's also possible to execute the resulting binary disk_usage_exporter.exe.
E:\disk_usage_exporter-0.4.0>disk_usage_exporter.exe -m file -p z:\
time="2023-09-13T10:48:10+02:00" level=info msg="Disk Usage Prometheus Exporter <<< filled in by build >>>\tbuild date: <<< filled in by build >>>\tsha1: <<< filled in by build >>>\tGo: go1.21.1\tGOOS: windows\tGOARCH: amd64"
time="2023-09-13T10:48:18+02:00" level=info msg="Analysis done"
time="2023-09-13T10:48:19+02:00" level=info msg="Stored stats in file ./disk-usage-exporter.prom"
time="2023-09-13T10:48:19+02:00" level=info msg="Done - exiting."
However, the directory and file sizes in the output do not match the real sizes in the file system.
# TYPE node_disk_usage_level_1_bytes gauge
node_disk_usage_level_1_bytes{path="e:\\Install"} 12288
node_disk_usage_level_1_bytes{path="e:\\temp"} 1.3725696e+07
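One possible explanation, offered as an assumption rather than a diagnosis: on Windows, Explorer's "Size on disk" includes allocation-unit rounding (and compressed or sparse files diverge further), while a Go walker sums the apparent file sizes from FileInfo.Size(). A short sketch to compute the apparent-size sum for comparison with the exporter's numbers:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	var total int64
	root := `e:\temp` // path from the report above
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // skip unreadable entries
		}
		if d.IsDir() {
			return nil
		}
		if info, err := d.Info(); err == nil {
			total += info.Size() // apparent size, not size on disk
		}
		return nil
	})
	fmt.Printf("%s: %d bytes (apparent)\n", root, total)
}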
Hello there, I am new to Prometheus and I just took a look at the Prometheus guide on basic auth.
I tried the same in the disk usage exporter config, more or less like this:
basic_auth_users:
  admin: <password-hash>
analyzed-path: /var/www/media
dir-level: 1
but I can still access the endpoint without a username and password.
I already restarted the systemd service, but there's no change.
So I want to make sure: does the disk usage exporter support basic auth?
If it does, how can we implement basic auth with the disk usage exporter?
Thank you!
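If the exporter does not wire in the Prometheus exporter-toolkit (where basic_auth_users comes from), that key in its own config file would simply be ignored; that is an assumption, not a confirmed reading of the code. A common workaround is to terminate basic auth in a reverse proxy in front of port 9995; a minimal nginx sketch with illustrative ports and file paths:

server {
    listen 9996;                                   # authenticated front for the exporter
    location /metrics {
        auth_basic           "metrics";
        auth_basic_user_file /etc/nginx/htpasswd;  # e.g. created with: htpasswd -c /etc/nginx/htpasswd admin
        proxy_pass           http://127.0.0.1:9995/metrics;
    }
}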
Hello,
I am running 3 pods with disk_usage_exporter as a sidecar. Only one of the 3 pods runs without any issue and scrapes metrics, even with fewer resources. The other two pods behave strangely: they don't scrape metrics, and if I try to scrape the metrics locally, the pods get restarted with an OOM issue. I have allocated enough resources, but the problem persists.
[root@local-path-provisioner-monitoring-wwp9n /]# curl http://localhost:9995/metrics
curl: (52) Empty reply from server
And the other one responds immediately:
[root@local-path-provisioner-monitoring-plq4d /]# curl http://localhost:9995/metrics
go_gc_duration_seconds{quantile="0"} 6.5438e-05
go_gc_duration_seconds{quantile="0.25"} 8.8972e-05
The other two fail the same way. It's definitely not a scrape timeout, since it fails even when curled locally.
Could you please help me fix this issue?
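A hedged first step before tuning anything: confirm from the container status that the kernel actually OOM-killed the sidecar, and at which limit. Standard kubectl, with the pod name taken from the report above:

kubectl describe pod local-path-provisioner-monitoring-wwp9n | grep -A 5 'Last State'
# expect something like: Last State: Terminated / Reason: OOMKilled / Exit Code: 137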
Package name: disk_usage_exporter_linux_amd64.tgz
[root@localhost opt]# tar -zxvf disk_usage_exporter_linux_amd64.tgz -C diskusage/
._disk_usage_exporter_linux_amd64
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.com.apple.provenance'
disk_usage_exporter_linux_amd64
[root@localhost diskusage]# file disk_usage_exporter_linux_amd64
disk_usage_exporter_linux_amd64: Mach-O 64-bit executable
How can I run it on CentOS? Please advise.
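For context: the file output above shows the binary inside the linux_amd64 archive is actually a macOS (Mach-O) executable, which Linux cannot execute, so no CentOS setting will make it run. A hedged workaround is to cross-compile from source with standard Go tooling; the -p/-l flag values below are illustrative:

# on any machine with Go installed
git clone https://github.com/dundee/disk_usage_exporter
cd disk_usage_exporter
GOOS=linux GOARCH=amd64 go build -o disk_usage_exporter .
# copy the resulting ELF binary to the CentOS host, then e.g.:
./disk_usage_exporter -p=/opt -l=2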
Hi @dundee!
I'd like to suggest an improvement: monitoring several directories, each with its own configurable "dir-level".
The goal of the suggestion is to evaluate several target directories while consuming as few system resources as possible.
Regards,
Bruno
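To make the request concrete, a purely hypothetical config sketch; the analyzed-paths key and its shape are invented for illustration and do not exist in the exporter:

analyzed-paths:            # hypothetical key, not a real option
  - path: /var/lib/mysql
    dir-level: 1           # deep detail where it matters
  - path: /var/log
    dir-level: 0           # a single total where it doesn't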
Hello.
I've run into the following issue and hope you'll be able to give some advice about it.
I use disk_usage_exporter to monitor the disk usage of databases inside a MySQL instance. Recently I noticed that the exporter shows stale data.
disk_usage_exporter runs as an additional container of the MySQL pod:
  disk-exporter:
    Image:          ghcr.io/dundee/disk_usage_exporter/disk_usage_exporter-c4084307c537335c2ddb6f4b9b527422:latest
    Image ID:       docker-pullable://ghcr.io/dundee/disk_usage_exporter/disk_usage_exporter-c4084307c537335c2ddb6f4b9b527422@sha256:320b9e61b93ce4d339de9b9462fbcad862710f8a6884bd62ba631a9055eecb27
    Port:           9995/TCP
    Host Port:      0/TCP
    Args:
      -l=1
      -p=/var/lib/mysql
    State:          Running
      Started:      Sat, 31 Dec 2022 13:30:50 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  384Mi
    Requests:
      cpu:     100m
      memory:  256Mi
    Environment:    <none>
    Mounts:
      /var/lib/mysql from data (rw)
There are folders that are not present in the pod itself, as they were removed a long time ago, for example:
sh-4.2$ ls -lah /var/lib/mysql/old_database
ls: cannot access /var/lib/mysql/old_database: No such file or directory
sh-4.2$ ls -lah /var/lib/mysql/xtrabackup_logfile
ls: cannot access /var/lib/mysql/xtrabackup_logfile: No such file or directory
However, I can still see metrics for those folders in the browser at http://127.0.0.1:8080/metrics:
node_disk_usage_bytes{path="/var/lib/mysql/old_database"} 1.3676023808e+10
node_disk_usage_bytes{path="/var/lib/mysql/xtrabackup_logfile"} 7.314219008e+09
And in general I get many more metrics from the endpoint than there are existing folders/files: disk_usage_exporter returns not only the current folders/files, but also every folder/file that has ever existed at the monitored path.
My expectation was that disk_usage_exporter would provide data only for existing folders/files.
Is the output including everything that ever existed expected? If there is a cache within disk_usage_exporter that holds old data, can it be cleared?
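A hedged guess at the mechanism, not a confirmed diagnosis: in prometheus/client_golang, a GaugeVec keeps every label combination that has ever been set, so a collector that only writes new values never drops series for deleted directories. The standard pattern is to Reset() the vector before each rescan; a generic sketch:

package main

import "github.com/prometheus/client_golang/prometheus"

var diskUsage = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{Name: "node_disk_usage_bytes", Help: "Disk usage of the directory"},
	[]string{"path"},
)

// rescan stands in for the exporter's periodic directory walk.
func rescan(sizes map[string]float64) {
	// Reset drops every previously-seen label set, so series for
	// deleted directories disappear from /metrics after the next scan.
	diskUsage.Reset()
	for path, bytes := range sizes {
		diskUsage.WithLabelValues(path).Set(bytes)
	}
}

func main() {
	prometheus.MustRegister(diskUsage)
	rescan(map[string]float64{"/var/lib/mysql/db1": 8192})
}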
Do you have an example Grafana dashboard file, to help understand what we need to do to display the metrics?
I've seen the link you shared, but I'm not sure what I need to do to create a dashboard for the metrics available from disk_usage_exporter.
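As a hedged starting point, two PromQL queries over the metric shown elsewhere on this page can back a Grafana table or time-series panel; the path label value is illustrative:

topk(10, node_disk_usage_bytes)                          # ten largest tracked directories
delta(node_disk_usage_bytes{path="/var/lib/mysql"}[1d])  # one directory's growth over a day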
Hello,
Is it possible to package this exporter as a Docker image? I have Grafana, Prometheus, and various other exporters running as Docker containers, and I believe many others are doing the same.
As I'm still learning Linux, I can't even begin to understand how to install this standalone.
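A minimal two-stage Dockerfile sketch for building an image from source; the base-image tags and the CMD flag values are illustrative assumptions (note that an issue earlier on this page references a ghcr.io image of the exporter, so a prebuilt image may already exist):

FROM golang:1.21 AS build
RUN git clone https://github.com/dundee/disk_usage_exporter /src
WORKDIR /src
RUN CGO_ENABLED=0 go build -o /disk_usage_exporter .

FROM alpine:3.19
COPY --from=build /disk_usage_exporter /usr/local/bin/disk_usage_exporter
EXPOSE 9995
ENTRYPOINT ["disk_usage_exporter"]
CMD ["-p=/host", "-l=2"]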