
centreon-stream-connector-scripts's Introduction

Centreon - IT and Application monitoring software


Introduction

Centreon is one of the most flexible and powerful monitoring solutions on the market; it is absolutely free and open source.

Getting Started

Centreon software can be set up in several ways; refer to the official documentation for the available installation methods.

Supported versions

In accordance with the Products Lifecycle Policy, only the following versions are supported:

  • Centreon 21.10.x, released on November 2, 2021, full support
  • Centreon 21.04.x, released on April 21, 2021, security and blocking issue support only
  • Centreon 20.10.x, released on October 21, 2020, security support only

If your version is not one of the three versions listed above, we recommend that you upgrade your platform immediately.

Authors

See also the list of our contributors

Security Acknowledgement page

We want to thank all reporters and pentesters who help us improve our product each day.

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, how to report bugs and request features, and the process for submitting pull requests.

License

This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details

centreon-stream-connector-scripts's People

Contributors

bouda1, cgagnaire, hamzabessa, interstar001, kduret, leoncx, lucie-dubrunfaut, matoy, moujimouja, nohzoh, omercier, paulpremont, pkippes, pkriko, ponchoh, ppremont-capensis, psamecentreon, quanghungb, s-duret, sc979, sdepassio, sims24, someaveragedev, tanguyvda, tuntoja, urbnw, xenofree


centreon-stream-connector-scripts's Issues

(stream) macro conversion may fail with strings containing %

A service with the following output:
OK: Interface 'GigabitEthernet2/0/45' Status : up (admin: up), Traffic In : 0.00b/s (0.00%), Traffic Out : 13.06Kb/s (0.01%)

can lead to the error below when macro conversion occurs:

error: lua: error running function `write' /usr/share/lua/5.3/centreon-stream-connectors-lib/sc_macros.lua:222: invalid use of '%' in replacement string
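The error comes from Lua's string.gsub: '%' is an escape character in the replacement string, so raw plugin output containing '%' must be escaped before it is substituted. A minimal sketch of that fix (escape_replacement and the {output} macro are illustrative, not the library's actual code):

local function escape_replacement(s)
    -- in a gsub replacement string, each literal '%' must be doubled
    return (s:gsub("%%", "%%%%"))
end

local output = "Traffic Out : 13.06Kb/s (0.01%)"
local result = string.gsub("converted: {output}", "{output}", escape_replacement(output))
print(result)  --> converted: Traffic Out : 13.06Kb/s (0.01%)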

Wrong pattern in influxdb-neb stream connector

Hello, in this function:

local function parse_perfdata(perfdata)
    local retval = {}
    -- split on whitespace: this breaks when a quoted label contains spaces
    for i in string.gmatch(perfdata, "%S+") do
        local it = string.gmatch(i, "[^=]+")
        local field = it()
        local value = it()
        if field and value then
            -- keep only the first numeric part of the value
            for v in string.gmatch(value, "[0-9.]+") do
                retval[field] = v
                break
            end
        end
    end
    return retval
end

the splitting pattern does not work when a performance data label contains spaces. Instead, you have to use the following pattern to make it work:

'?[^']+'?=[^%s]+%s?

Example with both patterns:

x="'SLOT #1: TEMP #1'=45 'SLOT #2: TEMP#4'=55 SLOT#3=45;;;;45 "
for s in string.gmatch(x, "'%S+") do print(s) end

will return:

'SLOT
'=45
'SLOT
'=55

and with the fixed pattern:

x="'SLOT #1: TEMP #1'=45 'SLOT #2: TEMP#4'=55 SLOT#3=45;;;;45 "
for s in string.gmatch(x, "'?[^']+'?=[^%s]+%s?") do print(s) end

will return:

'SLOT #1: TEMP #1'=45
'SLOT #2: TEMP#4'=55
 SLOT#3=45;;;;45

NB: this is not compatible with doubled single quotes in a label name (see the plugin development guidelines at https://nagios-plugins.org/doc/guidelines.html#AEN200), but I have never seen that use case in practice.
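For reference, a minimal sketch of a parse_perfdata rewritten around the proposed pattern (illustrative only, not the connector's actual code):

local function parse_perfdata(perfdata)
    local retval = {}
    -- match whole label=value pairs; the label may be quoted and contain spaces
    for pair in string.gmatch(perfdata, "'?[^']+'?=[^%s]+%s?") do
        local label, value = string.match(pair, "^%s*'?([^'=]+)'?=(%S+)")
        if label and value then
            -- keep only the leading numeric part (drops ;warn;crit;min;max)
            local num = string.match(value, "[0-9.]+")
            if num then
                retval[label] = num
            end
        end
    end
    return retval
end

for k, v in pairs(parse_perfdata("'SLOT #1: TEMP #1'=45 SLOT#3=45;;;;45")) do
    print(k, v)  --> SLOT #1: TEMP #1  45, then SLOT#3  45
end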

(stream) new features for the Splunk Stream Connector

Hello,

I would like to know whether it would be possible to improve the stream connector for Splunk-Event. In addition to the fields already sent, it would be useful to be able to add the following to each event:
For services:

  • Servicegroup
  • Severity Level
  • Notes
  • URL
  • Action URL
  • ID

For hosts:

  • Hostgroups
  • Severity Level
  • Notes
  • URL
  • Action URL
  • ID

Moreover, I noticed that when a service's output is very large (more than 50,000 characters, for example), Splunk sometimes has trouble ingesting the content properly. It would be useful to have an option to limit the number of characters (to a configurable value) sent to Splunk. I tried to modify the script myself, but I am not a developer and was not able to implement this.

Best regards,

Deghinsea
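Regarding the character limit requested above, a minimal sketch of what such an option could look like (max_output_length is a hypothetical parameter, not an existing connector option):

local max_output_length = 50000  -- hypothetical configuration parameter

local function truncate_output(output)
    if type(output) == "string" and #output > max_output_length then
        return string.sub(output, 1, max_output_length)
    end
    return output
end

-- would be applied while the event is formatted, before it is queued for Splunk:
-- event.output = truncate_output(event.output)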

broker.parse_perfdata function and perfdata not well formatted

Hello,

There is an issue with the broker.parse_perfdata function when perfdata is not well formatted:
[1550162271] error: lua: error running function `write' /usr/share/centreon-broker/lua/export-warp10.lua:76: storage: error while parsing perfdata << 'events'=;;;0; >>: storage: invalid perfdata format: no numeric value after equal sign

My workaround to avoid broker retention is the following in the Lua script:

-- note: the function and its argument must be passed to pcall separately;
-- calling pcall(broker.parse_perfdata(d.perfdata)) would run the unprotected
-- call before pcall gets control
local status, pd = pcall(broker.parse_perfdata, d.perfdata)
if not status then
    broker_log:error(0, "host:service: '" .. d.host_id .. ":" .. d.service_id .. "'")
    return true
end

But maybe it would be better to handle this case directly in the broker.parse_perfdata function.

Best Regards

(stream) Possible memory leak with new queue system

When an event is not sent, the queue is not flushed. The next event is then put in the queue, and we still can't send it because sending keeps failing. This goes on and on: the queue table gets bigger and bigger, which amounts to a memory leak.

There are two solutions:

  • completely drop the event (no broker retention, nothing in the queue); see the sketch after this list
  • remove the event from the queue but keep it in broker retention, meaning that we endlessly loop over it until the issue is fixed (thus creating broker retention while we are looping on this event)
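A minimal sketch of the first option, assuming the queue is a plain Lua table (MAX_QUEUE_SIZE is a hypothetical parameter):

local MAX_QUEUE_SIZE = 1000  -- hypothetical configuration parameter

local function add_to_queue(queue, event)
    if #queue >= MAX_QUEUE_SIZE then
        -- drop the oldest event so the queue cannot grow without bound
        table.remove(queue, 1)
    end
    table.insert(queue, event)
end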

Prometheus lua could not be compiled

I installed Lua-cURL and prometheus-gateway-apiv1.lua in /usr/share/centreon-broker/lua/.
But when I enable the stream connector output, I get this log entry:

error: lua: '/usr/share/centreon-broker/lua/prometheus-gateway-apiv1.lua' could not be compiled

Do you know how I can solve this?
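One way to narrow this down is to load the script outside Broker; a minimal sketch in plain Lua:

local path = "/usr/share/centreon-broker/lua/prometheus-gateway-apiv1.lua"
local chunk, err = loadfile(path)  -- compiles the file without executing it
if not chunk then
    print("syntax error: " .. err)
else
    -- executing the chunk also surfaces missing modules such as Lua-cURL
    local ok, runerr = pcall(chunk)
    print(ok and "script loads fine" or ("load error: " .. tostring(runerr)))
end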

Opsgenie Stream Connector does not close alerts

Description

When an issue is resolved, a monitoring system (including Centreon) should not send a new alert to Opsgenie; it should close the existing one.

Steps to Reproduce

This is reproducible by simply shutting down a host (Centreon sends a Critical notification) and then powering it up (Centreon sends an OK notification).

Describe the received result

The received result is a new alert in Opsgenie.

Describe the expected result

The expected result is to close the existing alert in Opsgenie.

Additional relevant information

Opsgenie has a feature called alert de-duplication. This feature is used for two things:

  • to increment existing alerts, thus reducing incoming noise to Opsgenie
  • to close an alert if the issue is shown as resolved in Centreon

Opsgenie does that by using the Alias field. The best way to do it (for example, for a service) would be to change this line:

alias = string.sub(event.cache.host.name .. "_" .. event.cache.service.description .. "_" .. state, 1, 512)

to this line:

alias = string.sub(event.cache.host.name .. "_" .. event.cache.service.description, 1, 512)

... so that the alias is the same for the critical event and the OK event.

And then, for closing the alert, send a POST request to the https://api.opsgenie.com/v2/alerts/:identifier:/close URL instead of https://api.opsgenie.com/v2/alerts, as per the Close Alert API documentation: https://docs.opsgenie.com/docs/alert-api#close-alert
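A minimal sketch of the proposed behaviour (send_post is a hypothetical HTTP helper, and the state handling is illustrative):

local function create_or_close_alert(event, state)
    local alias = string.sub(event.cache.host.name .. "_"
        .. event.cache.service.description, 1, 512)
    if state == "OK" then
        -- close the existing alert by its alias; identifierType=alias tells
        -- Opsgenie to resolve by alias (the alias should be URL-encoded in a
        -- real implementation)
        send_post("https://api.opsgenie.com/v2/alerts/" .. alias
            .. "/close?identifierType=alias", {})
    else
        -- create, or deduplicate into, the alert carrying the same alias
        send_post("https://api.opsgenie.com/v2/alerts",
            { alias = alias, message = event.output })
    end
end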

(stream/feature) deadman endpoint

A community user pushed us this feature request:

In order to be alerted when a stream connector stops unexpectedly, a solution would be to offer an option to configure a deadman API endpoint. It would act as a regular heartbeat and allow notifications to be sent when something goes wrong.

Some vendors offer such an endpoint as a built-in feature, but there are also dedicated services like Dead Man's Snitch.

IMO, it would be nice to have it at the lib level, so the retained solution could be used by any stream connector, no matter the software vendor.
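A minimal sketch of what a lib-level heartbeat could look like (deadman_url and http_post are hypothetical):

local deadman_url = nil        -- would come from the connector configuration
local heartbeat_interval = 60  -- seconds
local last_heartbeat = 0

local function heartbeat()
    if deadman_url and os.time() - last_heartbeat >= heartbeat_interval then
        http_post(deadman_url, "")  -- a missed ping triggers the deadman alert
        last_heartbeat = os.time()
    end
end

-- heartbeat() would be called from the connector's periodic flush entry point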

DEDUP not Working for HOST

Hello,

I have a strange issue when activating dedup in a stream connector on Centreon 22.10 and 21.04: it works perfectly when dedup is activated for services, but not for hosts.

Nothing is sent.

I noticed this behavior when polling (screenshot omitted):
The first check gives 1/2 (S).
The second gives 2/2 (H).
The third gives 1/2 (H).

Logs: (screenshot omitted)

When I bring my host back to green (by changing its IP), it should count as a state change!
Logs: (screenshot omitted)

And when I disable dedup for hosts, I do see the stream connector sending host events.

Do not hesitate to contact me or ask questions.

(stream) pagerduty-events-apiv2.lua broken near format_event()

A host critical state triggered with check_dummy doesn't generate events.

We dug into this a little bit and found out that we need to disable host_status_dedup.

The first two soft states are processed correctly:

Thu Jun 16 00:54:21 2022: INFO: [sc_event:is_host_status_event_duplicated]: host status is not enabled option enable_host_status_dedup is set to: 0
Thu Jun 16 00:54:21 2022: WARNING: [sc_event:is_valid_event_state_type]: event is not in an valid state type. Event state type must be above or equal to 1. Current state type: 0
Thu Jun 16 00:54:21 2022: WARNING: [sc_event:is_valid_host_status_event]: host_id: 3524 is not in a validated downtime, ack or hard/soft state

but when the check enters a hard state, the Lua script enters the "starting format event" function but never comes out:

Thu Jun 16 00:56:21 2022: INFO: [sc_flush:get_queues_size]: size of queue for category neb and element: host_status is: 0
Thu Jun 16 00:56:21 2022: INFO: [sc_flush:get_queues_size]: size of queue for category neb and element: service_status is: 0
Thu Jun 16 00:56:21 2022: INFO: [sc_event:is_host_status_event_duplicated]: host status is not enabled option enable_host_status_dedup is set to: 0
Thu Jun 16 00:56:21 2022: INFO: [EventQueue:format_event]: starting format event
Thu Jun 16 00:56:21 2022: INFO: [sc_flush:get_queues_size]: size of queue for category neb and element: host_status is: 0
Thu Jun 16 00:56:21 2022: INFO: [sc_flush:get_queues_size]: size of queue for category neb and element: service_status is: 0

With some logging added to every part of the script, we found out that the script doesn't get past this line:
self.format_event[category][element]()
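A minimal debugging sketch: wrapping the dispatch call in pcall would log the error that currently makes the event disappear (the sc_logger usage is inferred from the log format above):

local ok, err = pcall(self.format_event[category][element])
if not ok then
    self.sc_logger:error("[EventQueue:format_event]: formatting failed: " .. tostring(err))
end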

splunk metrics error

When using the splunk-metrics-apiv2.lua stream connector, I get the following error:

[2022-06-10T15:29:33.188+02:00] [lua] [error] lua: error running function `write' /usr/share/centreon-broker/lua/splunk-metrics-apiv2.lua:79: attempt to call a nil value (method 'format_metrics_host')

Line 79 is:

self.format_event = {
    [categories.neb.id] = {
      [elements.host_status.id] = function () return self:format_metrics_host() end,
      [elements.service_status.id] = function () return self:format_metrics_service() end
    }
  }
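Note that the closures defer the method lookup to call time: building this table succeeds even if format_metrics_host is missing on self, and the nil value only surfaces when the closure runs. A minimal guard to fail early instead (illustrative only):

assert(self.format_metrics_host, "format_metrics_host is not defined on self")
assert(self.format_metrics_service, "format_metrics_service is not defined on self")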

(stream) PagerDuty - event.output is too big for summary field

Sometimes, the PagerDuty API may reject data because the "summary" field in the JSON is bigger than 1024 characters.

We must:

  • add the output in the "custom_details" data structure
  • make the summary a real summary (Host / Service : Severity)

In the meantime, a workaround is to implement custom data formatting.
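A minimal sketch of the two points above (the event fields are illustrative; 1024 is the Events API v2 limit on the summary field):

local function build_payload(event)
    local summary = event.cache.host.name .. " / "
        .. event.cache.service.description .. " : " .. event.state
    return {
        payload = {
            summary = string.sub(summary, 1, 1024),  -- hard API limit
            custom_details = {
                output = event.output  -- full output moved out of the summary
            }
        }
    }
end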

Broker error, could not be compiled

Hello! I saw that you updated this repo 4 days ago. I have been trying all morning to use it on my Centreon platform, but without success (I only tried the neb one). I actually wrote a Lua script last week and it works, although it doesn't use any buffer; it just sends the data as soon as it is available.

I had several problems with Lua (I am using 5.3 and this script seems to have been tested with 5.1) and LuaRocks, as the platform I'm working on is CentOS. First of all, I'm trying this solution because the official InfluxDB connector didn't work (an error with the endpoint, but I guess that was not our fault).

The thing is that I get this in the broker log:

[1530537343] error: lua: '_PATH_/influxdb-neb.lua' could not be compiled

This doesn't happen when I try my version, without imports or buffers. Could it be a problem with those modules? I can't really see the error. I've set the logging options to very detailed and enabled all of them, but nothing more than "could not be compiled" appears. I have also tried to compile the script manually and everything is fine.

Thank you!

Retrieve service categories

Hello,
Is it possible to retrieve service categories from the broker the same way we fetch hostgroups?

Thank you in advance.

influxdb-neb stream connector: missing metrics

Centreon 10.10.15
OS: CentOS 7.8

We have configured the influxdb-neb stream connector. In InfluxDB we can see metrics for the hosts, but a lot of metrics are missing, such as interface and disk metrics. At the moment we can see these kinds of metrics: (screenshot omitted)

But a lot of other metrics are still missing. Any idea?

We see errors from the stream connector like:

Fri 28 Aug 2020 03:16:31 PM CEST: ERROR: EventQueue:flush: HTTP POST request FAILED: return code is 400
Fri 28 Aug 2020 03:16:31 PM CEST: ERROR: EventQueue:flush: HTTP POST request FAILED: message line 1 is "{"error":"partial write: invalid field name: input field \"time\" on measurement \"host-latency\" is invalid dropped=204"}
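The 400 error suggests a perfdata metric literally named "time", which InfluxDB reserves for the timestamp. A minimal sketch of a rename-before-write workaround (illustrative only, not the connector's actual code):

local function sanitize_field_name(name)
    if name == "time" then
        return "time_metric"  -- any non-reserved field name would do
    end
    return name
end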
