
Comments (9)

RazCrimson avatar RazCrimson commented on September 28, 2024

@siddheshtv Feel free to take a stab at this.

Though Glances still lacks typing at many layers, so defining the tables for all the plugins might be a bit hard.


nicolargo avatar nicolargo commented on September 28, 2024

Great proposal.

I will have a look at TimescaleDB.


nicolargo avatar nicolargo commented on September 28, 2024

@oliv3r most of the time, Glances exports data to NoSQL databases (InfluxDB, ELK, MongoDB...). Before inserting data into TimescaleDB/Postgres, Glances needs to create a database and relational tables. I am a little bit confused about this data model.

I think that the best way is to create one table per plugin. The documentation also talks about hypertables; perhaps they are better suited to the Glances data?

Can you give me a simple example of a CREATE TABLE command for the following plugins?

  • cpu (a key/value dictionary): {'total': 0.0, 'user': 2.0, 'nice': 0.0, 'system': 0.0, 'idle': 32.0, 'iowait': 0.0, 'irq': 0.0, 'steal': 0.0, 'guest': 0.0, 'ctx_switches': 0, 'interrupts': 0, 'soft_interrupts': 0, 'syscalls': 0, 'cpucore': 16, 'time_since_update': 2.406388282775879, 'ctx_switches_gauge': 91081182, 'interrupts_gauge': 64894442, 'soft_interrupts_gauge': 23251131, 'syscalls_gauge': 0}
  • diskio (a list of key/value dictionaries): [{'read_count': 0, 'write_count': 0, 'read_bytes': 0, 'write_bytes': 0, 'key': 'disk_name', 'disk_name': 'dm-0', 'time_since_update': 2.3856899738311768, 'read_count_gauge': 146671, 'write_count_gauge': 303436, 'read_bytes_gauge': 4313531392, 'write_bytes_gauge': 5372370944}, {'read_count': 0, 'write_count': 0, 'read_bytes': 0, 'write_bytes': 0, 'key': 'disk_name', 'disk_name': 'dm-1', 'time_since_update': 2.3856899738311768, 'read_count_gauge': 146609, 'write_count_gauge': 301750, 'read_bytes_gauge': 4312515584, 'write_bytes_gauge': 5372370944}]


oliv3r avatar oliv3r commented on September 28, 2024

I think that the best way is to create one table per plugin.

I have no idea what's best here; I always thought one table per plugin too. But I noticed that TimescaleDB's own 'stock-exchange' example actually uses a single table, though that data is somewhat correlated: there are different stock symbols for different companies, but the shape of the data is the same. Managing things is also different, because with one table per symbol you would have to create/drop a table whenever a symbol is added or removed. IMO separate tables would still make logical sense there.

But then I know that home-assistant also puts all of its sensor data into a single table, which still puzzles me, because there the data is not correlated at all; I would expect each sensor to have its own table. And that extends to here as well: each plugin/sensor should have its own table, and there surely must be a performance benefit. Asking AI, it also says performance should be better with multiple tables, with the downside that they are more work to manage; the upside of a single table is that related queries on a single timestamp might be faster, if you store each plugin in its own column. I still can't figure out why the single-table option would be better.
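
For comparison, a home-assistant-style single shared table would look roughly like this (just a sketch; the table and column names below are made up, nothing like this exists in Glances):

CREATE TABLE "stats" (
  "time"   TIMESTAMP WITH TIME ZONE NOT NULL,
  "plugin" TEXT NOT NULL,  -- e.g. 'cpu' or 'diskio'
  "item"   TEXT,           -- e.g. 'dm-0' for diskio; NULL for system-wide stats
  "metric" TEXT NOT NULL,  -- e.g. 'read_bytes_gauge'
  "value"  DOUBLE PRECISION
);

Every sample becomes one row per metric, so nothing has to be created or dropped when a new disk or plugin appears, but reading one plugin's state back means filtering and pivoting rows instead of selecting typed columns.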

Regardless, while you can store json directly in postgres, that's probably not what you have in mind ;)

CPU

CREATE TABLE "cpu" (
  "time" TIMESTAMP WITH TIME ZONE NOT NULL,
  "total" NUMERIC(8, 2),
  "user" NUMERIC(8, 2),
  "nice" NUMERIC(8, 2),
  "idle" NUMERIC(8, 2),
  "iowait" NUMERIC(8, 2),
  "irq" NUMERIC(8, 2),
  "steal" NUMERIC(8, 2),
  "guest" NUMERIC(8, 2),
  "ctx_switches" INTEGER,
  "interrupts" INTEGER,
  "soft_interrupts" INTEGER,
  "syscalls" INTEGER,
  "cpucore" INTEGER,
  "time_since_update" DOUBLE PRECISION NOT NULL,
  "ctx_switches_guage" INTEGER,
  "interrupts_guage" INTEGER,
  "soft_interrupts_guage" INTEGER,
  "syscalls_guage" INTEGER
);

This creates a regular postgres table.
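
For illustration, inserting the cpu sample from the dictionary above would look like this (the values are copied from the example; NOW() stands in for whatever timestamp the exporter actually uses):

INSERT INTO "cpu" ("time", "total", "user", "nice", "system", "idle",
                   "iowait", "irq", "steal", "guest",
                   "ctx_switches", "interrupts", "soft_interrupts", "syscalls",
                   "cpucore", "time_since_update",
                   "ctx_switches_gauge", "interrupts_gauge",
                   "soft_interrupts_gauge", "syscalls_gauge")
VALUES (NOW(), 0.0, 2.0, 0.0, 0.0, 32.0, 0.0, 0.0, 0.0, 0.0,
        0, 0, 0, 0, 16, 2.406388282775879,
        91081182, 64894442, 23251131, 0);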

For the low integer value types, SMALLINT might be more appropriate, but the docs state to use it 'only if space is at a premium', as it can be slightly slower (internally everything most probably still works with 32-bit ints anyway).

Some ints might need BIGINT, but I'm not sure about the ranges from your example ;)

I think for timescale to work effectively, you'd have to use actual timestamps instead of 'time_since_update', but I'm not a timescale expert, which is why I added 'time' as well. Also, INTERVAL might be a better type for 'time_since_update', but I'm not familiar enough with it.

Then, if the timescale extension is available,

SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';

we can convert it (and enable compression, which adds a significant performance and storage benefit):

SELECT create_hypertable('cpu', by_range('time'));  -- convert to a hypertable, partitioned by time
CREATE INDEX idx_cpu_core_time ON "cpu" ("cpucore", "time" DESC);
ALTER TABLE "cpu" SET (
    timescaledb.compress,
    timescaledb.compress_segmentby='cpucore',
    timescaledb.compress_orderby='time DESC'
);
SELECT add_compression_policy('cpu', INTERVAL '8 days');  -- compress chunks older than 8 days

If the ALTER fails, the wrong license/container image was chosen (compression is not part of the Apache-licensed build), so the exporter could just warn that it is continuing without compression.
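
A rough sketch of how the export code could probe for the extension and degrade gracefully, assuming a psycopg2 connection (the function name and message wording are made up, not existing Glances code):

import psycopg2  # assumed driver; Glances could use any PostgreSQL connector


def init_hypertable(conn, table):
    """Convert `table` to a hypertable; warn and continue if compression is unavailable."""
    with conn.cursor() as cur:
        cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'timescaledb'")
        if cur.fetchone() is None:
            print(f"timescaledb not installed; keeping '{table}' as a plain table")
            return
        # by_range() needs a recent TimescaleDB; older versions take the column name directly
        cur.execute(f"SELECT create_hypertable('{table}', by_range('time'))")
    conn.commit()
    try:
        with conn.cursor() as cur:
            cur.execute(f'ALTER TABLE "{table}" SET (timescaledb.compress)')
            cur.execute(f"SELECT add_compression_policy('{table}', INTERVAL '8 days')")
        conn.commit()
    except psycopg2.Error:
        # ALTER fails on builds without the compression feature (license/image choice)
        conn.rollback()
        print(f"warning: compression unavailable, continuing without it for '{table}'")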

Which segmentby column/index fits best here (or whether any fits at all) still needs to be figured out: 'cpucore' only works as a segment column if it makes sense to have an index on it. If there is none, it might not be worthwhile to compress or to have an index. For now I assumed the stats are unique per CPU core, but I know this is not quite true either (load is system-wide).

From what I understood from this example here: https://docs.timescale.com/use-timescale/latest/compression/about-compression/#segment-by-columns it could be that in some cases, you want to keep different data in similar tables.

For diskio, we'd end up with:

CREATE TABLE "diskio" (
  "time" TIMESTAMP WITH TIME ZONE NOT NULL,
  "read_count" BIGINT,
  "write_count" BIGINT,
  "read_bytes" BIGINT,
  "write_bytes" BIGINT,
  "disk_name" TEXT,
  "time_since_update" DOUBLE PRECISION NOT NULL,
  "read_count_guage" INTEGER,
  "write_count_guage" INTEGER,
  "read_bytes_guage" INTEGER,
  "write_bytes_guage" INTEGER
);
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
SELECT create_hypertable('diskio', by_range('time'));  -- convert to a hypertable, partitioned by time
CREATE INDEX idx_disk_name_time ON "diskio" ("disk_name", "time" DESC);
ALTER TABLE "diskio" SET (
    timescaledb.compress,
    timescaledb.compress_segmentby='disk_name',
    timescaledb.compress_orderby='time DESC'
);
SELECT add_compression_policy('diskio', INTERVAL '8 days');  -- compress chunks older than 8 days

One other timescale-specific feature would then be to create automated 'aggregated views' (continuous aggregates), where postgres/timescaledb automatically computes averages etc. over longer periods of time and stores them efficiently for quick access, e.g. for the zoomed-out view: https://blog.timescale.com/blog/achieving-the-best-of-both-worlds-ensuring-up-to-date-results-with-real-time-aggregation/
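
For the cpu table, such a continuous aggregate could look roughly like this (a sketch; the view name, hourly bucket, and refresh offsets are arbitrary choices):

CREATE MATERIALIZED VIEW "cpu_hourly"
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 hour', "time") AS bucket,
       avg("total") AS avg_total,
       max("total") AS max_total
FROM "cpu"
GROUP BY bucket;

SELECT add_continuous_aggregate_policy('cpu_hourly',
    start_offset => INTERVAL '3 hours',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');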


RazCrimson avatar RazCrimson commented on September 28, 2024

If we do plan on creating the tables from the Glances end and go with one table per plugin, using something like SQLAlchemy to abstract away the DB connector implementation would probably be better.

Ref: https://www.sqlalchemy.org/
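
As a minimal sketch of that idea, the cpu table above could be declared like this (illustrative only; none of this is existing Glances code, and the DSN is a placeholder):

from sqlalchemy import Column, DateTime, Float, Integer, Numeric, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Cpu(Base):
    """One row per CPU sample; mirrors the hand-written DDL above."""
    __tablename__ = "cpu"

    time = Column(DateTime(timezone=True), primary_key=True)
    total = Column(Numeric(8, 2))
    user = Column(Numeric(8, 2))
    system = Column(Numeric(8, 2))
    cpucore = Column(Integer)
    time_since_update = Column(Float, nullable=False)
    # ... remaining fields as in the CREATE TABLE above


# create_all emits the CREATE TABLE for whichever backend the engine points at
engine = create_engine("postgresql+psycopg2://user:pass@localhost/glances")  # placeholder DSN
Base.metadata.create_all(engine)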


nicolargo avatar nicolargo commented on September 28, 2024

@oliv3r thanks for the implementation proposal.

For the moment, export plugins do not have any feature to create tables with the needed information (variables are not typed in a standard way). For example, the CPU plugin fields description looks like this: https://github.com/nicolargo/glances/blob/v4.0.8/glances/plugins/cpu/__init__.py#L24

...
    'system': {
        'description': 'Percent time spent in kernel space. System CPU time is the \
time spent running code in the Operating System kernel.',
        'unit': 'percent',
    },
... 

It will increase code complexity/maintenance only for the PostgreSQL export.
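
For illustration, one way to derive the DDL from those descriptions might look like this (hypothetical; the unit-to-type mapping and the helper name are invented, not existing Glances code):

# Hypothetical mapping from the 'unit' hints in fields_description to SQL types.
UNIT_TO_SQL = {
    'percent': 'NUMERIC(8, 2)',
    'bytes': 'BIGINT',
    'number': 'BIGINT',
    'seconds': 'DOUBLE PRECISION',
}


def build_create_table(plugin_name, fields_description):
    """Generate a CREATE TABLE statement from a plugin's fields_description dict."""
    columns = ['"time" TIMESTAMP WITH TIME ZONE NOT NULL']
    for field, meta in fields_description.items():
        sql_type = UNIT_TO_SQL.get(meta.get('unit'), 'DOUBLE PRECISION')
        columns.append(f'"{field}" {sql_type}')
    return f'CREATE TABLE "{plugin_name}" (\n  ' + ',\n  '.join(columns) + '\n);'

build_create_table('cpu', fields_description) would then emit something close to the hand-written DDL above, at the cost of maintaining the mapping.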

Another approach would be to init the tables outside of Glances, but as a consequence we would have to maintain the script/documentation used to create the tables in sync with the Glances data model.


github-actions avatar github-actions commented on September 28, 2024

This issue is available for anyone to work on. Make sure to reference this issue in your pull request. ✨ Thank you for your contribution! ✨


siddheshtv avatar siddheshtv commented on September 28, 2024

Hi there, I am inclined towards contributing to this feature if it's still available to work on.
I know about SQLAlchemy and Postgres, and performance-wise I think multiple tables might be the way to go.


siddheshtv avatar siddheshtv commented on September 28, 2024

Alright, I'll give it a go.

