
icingabeat's Introduction

Build Status

Icingabeat

The Beats are lightweight data shippers, written in Go, that you install on your servers to capture all sorts of operational data (think of logs, metrics, or network packet data). The Beats send the operational data to Elasticsearch, either directly or via Logstash, so it can be visualized with Kibana.

Icingabeat is an Elastic Beat that fetches data from the Icinga 2 API and sends it either directly to Elasticsearch or Logstash.

(Screenshot: Icingabeat check result dashboard in Kibana)

Documentation

Please read the documentation at icinga.com/docs/icingabeat/latest for more information.

Development

Building and running manually

Requirements

Clone

To clone Icingabeat from the git repository, run the following commands:

mkdir -p ${GOPATH}/src/github.com/icinga
cd ${GOPATH}/src/github.com/icinga
git clone https://github.com/icinga/icingabeat

For further development check out the beat developer guide.

Build

Ensure that this folder is at the following location: ${GOPATH}/src/github.com/icinga

To build the binary for Icingabeat run the command below. This will generate a binary in the same directory with the name icingabeat.

mage build

Run

To run Icingabeat with debugging output enabled, run:

./icingabeat -c icingabeat.yml -e -d "*"

Packaging

The Beats framework provides tools to cross-compile and package your beat for different platforms. This requires Docker and vendoring as described above. To build packages of your beat, run the following command:

export PLATFORMS="linux/amd64 linux/386"
mage package

This will fetch and create all images required for the build process. The whole process can take several minutes to finish.

icingabeat's People

Contributors

bobapple, lx183


icingabeat's Issues

Icingabeat v6.3.3 does not send the hostname for the check, but sends its own hostname instead

After installing icingabeat (6.3.3) on a completely new machine, with minimal changes to icingabeat.yml, outputting to a completely new Elasticsearch index (5.5), and accessing an Icinga 2 instance (2.9.1), icingabeat logs check results with the name of the host running icingabeat as host.name:
(Screenshot: Kibana showing host.name set to the icingabeat host)

My icingabeat.yml (with comments, hostnames and credentials redacted):

icingabeat:
  host: "XXXX"
  port: 5665
  user: "XXXX"
  password: "****"
  ssl.verify: false
  eventstream.types:
    - CheckResult
    - StateChange
    - Notification
    - AcknowledgementSet
    - AcknowledgementCleared
    - CommentAdded
    - CommentRemoved
    - DowntimeAdded
    - DowntimeRemoved
    - DowntimeStarted
    - DowntimeTriggered

  eventstream.filter: ""
  eventstream.retry_interval: 10s
  statuspoller.interval: 30s
setup.dashboards.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["XXXX:9200"]
  protocol: "https"
  username: "XXXX"
  password: "*****"

Representative check result (in JSON):

{
  "_index": "icingabeat-6.3.3-2018.09.12",
  "_type": "doc",
  "_id": "AWXO07xNOFh-eQsJ_2M4",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-09-12T17:27:45.615Z",
    "check_result": {
      "active": true,
      "type": "CheckResult",
      "execution_end": "2018-09-12T17:25:24.348Z",
      "check_source": "icinga2-sat...",
      "state": 0,
      "vars_before": {
        "reachable": true,
        "state": 0,
        "state_type": 1,
        "attempt": 1
      },
      "exit_status": 0,
      "output": "OK",
      "schedule_end": "2018-09-12T17:25:24.348Z",
      "ttl": 0,
      "schedule_start": "2018-09-12T17:25:23.970Z",
      "execution_start": "2018-09-12T17:25:23.970Z",
      "vars_after": {
        "reachable": true,
        "state": 0,
        "state_type": 1,
        "attempt": 1
      },
      "command": [
        "/usr/lib/nagios/custom_plugins/check_multi",
       "..."
      ]
    },
    "beat": {
      "version": "6.3.3",
      "name": "icingabeat-stg-01",
      "hostname": "icingabeat-stg-01"
    },
    "host": {
      "name": "icingabeat-stg-01"
    },
    "service": "$SERVICENAME",
    "timestamp": "2018-09-12T17:25:24.352Z",
    "type": "icingabeat.event.checkresult"
  },
  "fields": {
    "check_result.execution_end": [
      1536773124348
    ],
    "check_result.schedule_end": [
      1536773124348
    ],
    "@timestamp": [
      1536773265615
    ],
    "check_result.execution_start": [
      1536773123970
    ],
    "check_result.schedule_start": [
      1536773123970
    ],
    "timestamp": [
      1536773124352
    ]
  },
  "sort": [
    1536773265615
  ]
}

Expected Behavior

Somewhere in the document, the hostname the check result (notification, ...) belongs to should be included, analogous to the service field.

Current Behavior

All hostnames are set to the hostname of the machine running icingabeat.

Steps to Reproduce (for bugs)

  1. Install icingabeat as documented
  2. Configure icingabeat with access to Icinga and Elasticsearch
  3. Let icingabeat create the Elasticsearch template (icingabeat setup)
  4. Restart icingabeat
  5. Create index patterns in Kibana (and ignore the error while importing the Kibana dashboards)
  6. Check the resulting documents in Kibana

Context

We cannot use icingabeat 6.3.3, as there is no way to know which of our hosts the messages relate to.

Your Environment

  • Beat version (icingabeat -version): icingabeat version 6.3.3 (amd64), libbeat 6.3.3
  • Icinga 2 version (icinga2 --version): r2.9.1-1
  • Elasticsearch version (curl -XGET 'localhost:9200'): 5.5.0
  • Logstash version, if used (bin/logstash -V): 5.6.7 (not configured to be used)
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I): 5.5.0
  • Operating System and version: Debian GNU/Linux 9.5 (stretch)

Running two icingabeats

Expected Behavior

Two icingabeats send data to two Logstash listeners, but each event is inserted into Elasticsearch only once.

Current Behavior

Without further configuration the data is written twice. With the Logstash fingerprint filter (https://www.elastic.co/guide/en/logstash/current/plugins-filters-fingerprint.html) it should only be written once (the fingerprint already exists if icingabeat #1 was faster).

The problem with the fingerprint is that icingabeat has a lot of different fields, so I can't just set a fingerprint on "message".

Possible Solution

Write an if/else for every icingabeat field (if not empty) and then set a fingerprint.
I'm not sure this is the best way; there might be an easier solution directly from Icinga?
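As a rough sketch of the fingerprint approach on the Logstash side (the source fields, hash method, and output settings here are examples, not a tested configuration):

filter {
  fingerprint {
    # hash several icingabeat fields instead of only "message"
    source => ["type", "host", "service", "timestamp"]
    concatenate_sources => true
    method => "SHA1"
    target => "[@metadata][fingerprint]"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # identical events from both icingabeats resolve to the same document ID
    document_id => "%{[@metadata][fingerprint]}"
  }
}

With the fingerprint used as the document ID, the copy arriving from the second icingabeat overwrites the first instead of creating a duplicate.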

Steps to Reproduce (for bugs)

  1. Two Icingas with two running icingabeat services
  2. Both send data to Logstash
  3. Logstash writes the data, but it is written twice (no check whether it was already written by icingabeat #1 or #2)

Context

At the moment we run two icingabeats on two Icingas, but one is not running. If icingabeat on #1 is not working we will start #2; a clean icingabeat cluster setup would be nice.

Icingabeat installation on all nodes?

Hi,
this is just a question:
In an Icinga 2 cluster with 1 master and X zones with 2 satellite nodes each, is it useful to install icingabeat on all nodes or just on the master?

Don't use the reserved `host` field anymore

In the 6.4 versions, Elasticsearch introduced a reserved nested field host.

When Icingabeat connects directly to Elasticsearch, it wants to write into the host field like it did before (and like almost every user does :-( ). But Elasticsearch then refuses to write the new data into the index.

Please change icingabeat so it won't write a "host" field anymore.

/ref/NC/558885

Icingabeat template and kibana dashboard not compatible with ELK 7.0

I installed a completely new ELK stack in our test environment and tried to use icingabeat from our productive Icinga 2 system.
Unfortunately, the Elasticsearch template and the Kibana dashboards are not compatible with the new version.
When I start icingabeat in debugging mode:

2019-04-18T17:34:41.781+0200    INFO    elasticsearch/client.go:713     Connected to Elasticsearch version 7.0.0
2019-04-18T17:34:41.781+0200    DEBUG   [elasticsearch] elasticsearch/client.go:731     HEAD http://IP:9200/_template/icingabeat-6.5.4  <nil>
2019-04-18T17:34:41.783+0200    INFO    template/load.go:82     Loading template for Elasticsearch version: 7.0.0
2019-04-18T17:34:41.783+0200    DEBUG   [template]      template/load.go:116    Load default fields.yml
2019-04-18T17:34:41.817+0200    DEBUG   [template]      template/load.go:139    Try loading template with name: icingabeat-6.5.4
2019-04-18T17:34:41.820+0200    DEBUG   [elasticsearch] elasticsearch/client.go:731     PUT http://IP:9200/_template/icingabeat-6.5.4  map[index_patterns:[icingabeat-6.5.4-*] mappings:{"doc":{"_meta":{"version":"6.5.4"},"date_detection":false,"dynamic_templates":[{"fields":{"mapping":{"type":"keyword"},"match_mapping_type":"string","path_match":"fields.*"}},{"docker.container.labels":{"mapping":{"type":"keyword"},"match_mapping_type":"string","path_match":"docker.container.labels.*"}},{"strings_as_keyword":{"mapping":{"ignore_above":1024,"type":"keyword"},"match_mapping_type":"string"}}],"properties":{"@timestamp":{"type":"date"},"beat":{"properties":{"hostname":{"ignore_above":1024,"type":"keyword"},"name":{"ignore_above":1024,"type":"keyword"},"timezone":{"ignore_above":1024,"type":"keyword"},"version":{"ignore_above":1024,"type":"keyword"}}},"docker":{"properties":{"container":{"properties":{"id":{"ignore_above":1024,"type":"keyword"},"image":{"ignore_above":1024,"type":"keyword"},"labels":{"type":"object"},"name":{"ignore_above":1024,"type":"keyword"}}}}},"error":{"properties":{"code":{"type":"long"},"message":{"norms":false,"type":"text"},"type":{"ignore_above":1024,"type":"keyword"}}},"fields":{"type":"object"},"host":{"properties":{"architecture":{"ignore_above":1024,"type":"keyword"},"id":{"ignore_above":1024,"type":"keyword"},"ip":{"type":"ip"},"mac":{"ignore_above":1024,"type":"keyword"},"name":{"ignore_above":1024,"type":"keyword"},"os":{"properties":{"family":{"ignore_above":1024,"type":"keyword"},"platform":{"ignore_above":1024,"type":"keyword"},"version":{"ignore_above":1024,"type":"keyword"}}}}},"icinga":{"properties":{"acknowledgement_type":{"type":"long"},"author":{"ignore_above":1024,"type":"keyword"},"check_result":{"properties":{"active":{"type":"boolean"},"check_source":{"ignore_above":1024,"type":"keyword"},"command":{"norms":false,"type":"text"},"execution_end":{"type":"date"},"execution_start":{"type":"date"},"exit_status":{"type":"long"},"output":{"norms":false,"type":"text"},"performance_data":{"norms":false,"type":"text"},"schedule_end":{"type":"date"},"schedule_start":{"type":"date"},"state":{"type":"long"},"ttl":{"type":"long"},"type":{"ignore_above":1024,"type":"keyword"},"vars_after":{"properties":{"attempt":{"type":"long"},"reachable":{"type":"boolean"},"state":{"type":"long"},"state_type":{"type":"long"}}},"vars_before":{"properties":{"attempt":{"type":"long"},"reachable":{"type":"boolean"},"state":{"type":"long"},"state_type":{"type":"long"}}}}},"comment":{"properties":{"__name":{"norms":false,"type":"text"},"author":{"ignore_above":1024,"type":"keyword"},"entry_time":{"type":"date"},"entry_type":{"type":"long"},"expire_time":{"type":"date"},"host_name":{"ignore_above":1024,"type":"keyword"},"legacy_id":{"type":"long"},"name":{"ignore_above":1024,"type":"keyword"},"package":{"ignore_above":1024,"type":"keyword"},"service_name":{"ignore_above":1024,"type":"keyword"},"templates":{"norms":false,"type":"text"},"text":{"norms":false,"type":"text"},"type":{"ignore_above":1024,"type":"keyword"},"version":{"ignore_above":1024,"type":"keyword"},"zone":{"ignore_above":1024,"type":"keyword"}}},"downtime":{"properties":{"__name":{"norms":false,"type":"text"},"author":{"ignore_above":1024,"type":"keyword"},"comment":{"norms":false,"type":"text"},"config_owner":{"norms":false,"type":"text"},"duration":{"type":"long"},"end_time":{"type":"date"},"entry_time":{"type":"date"},"fixed":{"type":"boolean"},"host_name":{"ignore_above":1024,"type":"keyword"},"legacy_id":{"type":"l
ong"},"name":{"ignore_above":1024,"type":"keyword"},"package":{"ignore_above":1024,"type":"keyword"},"scheduled_by":{"norms":false,"type":"text"},"service_name":{"ignore_above":1024,"type":"keyword"},"start_time":{"type":"date"},"templates":{"norms":false,"type":"text"},"trigger_time":{"type":"date"},"triggered_by":{"norms":false,"type":"text"},"triggers":{"norms":false,"type":"text"},"type":{"ignore_above":1024,"type":"keyword"},"version":{"ignore_above":1024,"type":"keyword"},"was_cancelled":{"type":"boolean"},"zone":{"ignore_above":1024,"type":"keyword"}}},"expiry":{"type":"date"},"host":{"ignore_above":1024,"type":"keyword"},"notification_type":{"ignore_above":1024,"type":"keyword"},"notify":{"ignore_above":1024,"type":"keyword"},"service":{"ignore_above":1024,"type":"keyword"},"state":{"type":"long"},"state_type":{"type":"long"},"status":{"properties":{"active_host_checks":{"type":"long"},"active_host_checks_15min":{"type":"long"},"active_host_checks_1min":{"type":"long"},"active_host_checks_5min":{"type":"long"},"active_service_checks":{"type":"long"},"active_service_checks_15min":{"type":"long"},"active_service_checks_1min":{"type":"long"},"active_service_checks_5min":{"type":"long"},"api":{"properties":{"identity":{"ignore_above":1024,"type":"keyword"},"num_conn_endpoints":{"type":"long"},"num_endpoints":{"type":"long"},"num_not_conn_endpoints":{"type":"long"}}},"avg_execution_time":{"type":"long"},"avg_latency":{"type":"long"},"checkercomponent":{"properties":{"checker":{"properties":{"idle":{"type":"long"},"pending":{"type":"long"}}}}},"filelogger":{"properties":{"main-log":{"type":"long"}}},"icingaapplication":{"properties":{"app":{"properties":{"enable_event_handlers":{"type":"boolean"},"enable_flapping":{"type":"boolean"},"enable_host_checks":{"type":"boolean"},"enable_notifications":{"type":"boolean"},"enable_perfdata":{"type":"boolean"},"enable_service_checks":{"type":"boolean"},"node_name":{"ignore_above":1024,"type":"keyword"},"pid":{"type":"long"},"program_start":{"type":"long"},"version":{"ignore_above":1024,"type":"keyword"}}}}},"idomysqlconnection":{"properties":{"ido-mysql":{"properties":{"connected":{"type":"boolean"},"instance_name":{"ignore_above":1024,"type":"keyword"},"query_queue_items":{"type":"long"},"version":{"ignore_above":1024,"type":"keyword"}}}}},"max_execution_time":{"type":"long"},"max_latency":{"type":"long"},"min_execution_time":{"type":"long"},"min_latency":{"type":"long"},"notificationcomponent":{"properties":{"notification":{"type":"long"}}},"num_hosts_acknowledged":{"type":"long"},"num_hosts_down":{"type":"long"},"num_hosts_flapping":{"type":"long"},"num_hosts_in_downtime":{"type":"long"},"num_hosts_pending":{"type":"long"},"num_hosts_unreachable":{"type":"long"},"num_hosts_up":{"type":"long"},"num_services_acknowledged":{"type":"long"},"num_services_critical":{"type":"long"},"num_services_flapping":{"type":"long"},"num_services_in_downtime":{"type":"long"},"num_services_ok":{"type":"long"},"num_services_pending":{"type":"long"},"num_services_unknown":{"type":"long"},"num_services_unreachable":{"type":"long"},"num_services_warning":{"type":"long"},"passive_host_checks":{"type":"long"},"passive_host_checks_15min":{"type":"long"},"passive_host_checks_1min":{"type":"long"},"passive_host_checks_5min":{"type":"long"},"passive_service_checks":{"type":"long"},"passive_service_checks_15min":{"type":"long"},"passive_service_checks_1min":{"type":"long"},"passive_service_checks_5min":{"type":"long"},"uptime":{"type":"long"}}},"text":{"norms":false,"type":"te
xt"},"timestamp":{"type":"date"},"type":{"ignore_above":1024,"type":"keyword"},"users":{"ignore_above":1024,"type":"keyword"}}},"kubernetes":{"properties":{"annotations":{"type":"object"},"container":{"properties":{"image":{"ignore_above":1024,"type":"keyword"},"name":{"ignore_above":1024,"type":"keyword"}}},"labels":{"type":"object"},"namespace":{"ignore_above":1024,"type":"keyword"},"node":{"properties":{"name":{"ignore_above":1024,"type":"keyword"}}},"pod":{"properties":{"name":{"ignore_above":1024,"type":"keyword"},"uid":{"ignore_above":1024,"type":"keyword"}}}}},"meta":{"properties":{"cloud":{"properties":{"availability_zone":{"ignore_above":1024,"type":"keyword"},"instance_id":{"ignore_above":1024,"type":"keyword"},"instance_name":{"ignore_above":1024,"type":"keyword"},"machine_type":{"ignore_above":1024,"type":"keyword"},"project_id":{"ignore_above":1024,"type":"keyword"},"provider":{"ignore_above":1024,"type":"keyword"},"region":{"ignore_above":1024,"type":"keyword"}}}}},"tags":{"ignore_above":1024,"type":"keyword"},"timestamp":{"type":"date"},"type":{"ignore_above":1024,"type":"keyword"}}}} order:1 settings:{"index":{"mapping":{"total_fields":{"limit":10000}},"number_of_routing_shards":30,"query":{"default_field":["type","icinga.type","icinga.host","icinga.service","icinga.author","icinga.notification_type","icinga.text","icinga.users","icinga.notify","icinga.check_result.check_source","icinga.check_result.command","icinga.check_result.output","icinga.check_result.performance_data","icinga.check_result.type","icinga.comment.__name","icinga.comment.author","icinga.comment.host_name","icinga.comment.name","icinga.comment.package","icinga.comment.service_name","icinga.comment.templates","icinga.comment.text","icinga.comment.type","icinga.comment.version","icinga.comment.zone","icinga.downtime.__name","icinga.downtime.author","icinga.downtime.comment","icinga.downtime.config_owner","icinga.downtime.host_name","icinga.downtime.name","icinga.downtime.package","icinga.downtime.scheduled_by","icinga.downtime.service_name","icinga.downtime.templates","icinga.downtime.triggered_by","icinga.downtime.triggers","icinga.downtime.type","icinga.downtime.version","icinga.downtime.zone","icinga.status.api.identity","icinga.status.icingaapplication.app.node_name","icinga.status.icingaapplication.app.version","icinga.status.idomysqlconnection.ido-mysql.instance_name","icinga.status.idomysqlconnection.ido-mysql.version","beat.name","beat.hostname","beat.timezone","beat.version","tags","error.message","error.type","meta.cloud.provider","meta.cloud.instance_id","meta.cloud.instance_name","meta.cloud.machine_type","meta.cloud.availability_zone","meta.cloud.project_id","meta.cloud.region","docker.container.id","docker.container.image","docker.container.name","host.name","host.id","host.architecture","host.os.platform","host.os.version","host.os.family","host.mac","kubernetes.pod.name","kubernetes.pod.uid","kubernetes.namespace","kubernetes.node.name","kubernetes.container.name","kubernetes.container.image","fields.*"]},"refresh_interval":"5s"}}]

So I think Elasticsearch changed something and icingabeat needs a new template.
The Kibana dashboards are not working either. I can post the debugging log if someone needs it; at the moment the ELK stack is down because of the Easter holiday.

I'm available for tests next week if needed.

Environment
icingabeat version 6.5.4 (amd64), libbeat 6.5.4
icinga2 (version: r2.10.4-1)
Elasticsearch:
{
  "name" : "HOST",
  "cluster_name" : "Cluster",
  "cluster_uuid" : "UUID",
  "version" : {
    "number" : "7.0.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "b7e28a7",
    "build_date" : "2019-04-05T22:55:32.697037Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.7.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Kibana 7.0
OS: Ubuntu 16.04 LTS on the Icinga 2 host, Ubuntu 18.04 LTS for the ELK stack

With Icinga 2 2.11+ the Icingabeat gets disconnected every 10 seconds

Expected Behavior

The Icingabeat's connection to the event API of Icinga 2 is stable

Current Behavior

With Icinga 2 2.11+ the Icingabeat gets disconnected every 10 seconds

Possible Solution

This is a reference issue for Icinga/icinga2#8213, so the problem likely has to be fixed on the Icinga side.

Steps to Reproduce (for bugs)

  1. Install and configure icinga2
  2. Install and configure icingabeat
  3. Watch the logs

Your Environment

  • Beat version (icingabeat -version): 7.5.2
  • Icinga 2 version (icinga2 --version): 2.12.1
  • Elasticsearch version (curl -XGET 'localhost:9200'): 7.9.2
  • Operating System and version: CentOS 7

Facing Problem while installing icingabeat

Hi

I am getting the error below when running "make":

go build
main.go:6:2: cannot find package "github.com/elastic/beats/libbeat/beat" in any of:
/usr/lib/go-1.6/src/github.com/elastic/beats/libbeat/beat (from $GOROOT)
($GOPATH not set)
main.go:8:2: cannot find package "github.com/icinga/icingabeat/beater" in any of:
/usr/lib/go-1.6/src/github.com/icinga/icingabeat/beater (from $GOROOT)
($GOPATH not set)
vendor/github.com/elastic/beats/libbeat/scripts/Makefile:74: recipe for target 'icingabeat' failed
make: *** [icingabeat] Error 1

and I have set the variables below in my .bashrc file:

export GOROOT="/usr/lib/go-1.6"
export GOPATH="/usr/lib/go-1.6"
export PATH="$GOROOT/bin:$PATH"

I have installed Go 1.6.

Please help.

Handle missing eventstream.retry_interval value better

Expected Behavior

If the eventstream.retry_interval configuration option is left out from the configuration I expect the program to either pick a reasonable default or abort with a useful error message.

Current Behavior

panic: non-positive interval for NewTicker

goroutine 47 [running]:
time.NewTicker(0x0, 0xc4200a2b00)
	/usr/local/go/src/time/tick.go:23 +0x1a1
github.com/icinga/icingabeat/beater.(*Eventstream).Run(0xc4204b4180, 0x0, 0x0)
	/go/src/github.com/icinga/icingabeat/beater/eventstream.go:127 +0x873
created by github.com/icinga/icingabeat/beater.(*Icingabeat).Run
	/go/src/github.com/icinga/icingabeat/beater/icingabeat.go:49 +0x32d

Possible Solution

Updating DefaultConfig in config/config.go to set a default value for Eventstream.RetryInterval should be enough. IIRC the Beats library has hooks for validation as well if we really want to force users to specify that option, but I don't think the retry interval is important enough to force users to make a choice.
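A minimal sketch of that idea (struct and field names are illustrative, not necessarily the actual icingabeat layout):

package config

import "time"

// Sketch only: DefaultConfig carries a non-zero RetryInterval, so a missing
// eventstream.retry_interval never reaches time.NewTicker as zero and panics.
type EventstreamConfig struct {
	Types         []string      `config:"types"`
	Filter        string        `config:"filter"`
	RetryInterval time.Duration `config:"retry_interval"`
}

type Config struct {
	Host        string            `config:"host"`
	Port        int               `config:"port"`
	Eventstream EventstreamConfig `config:"eventstream"`
}

var DefaultConfig = Config{
	Port: 5665,
	Eventstream: EventstreamConfig{
		RetryInterval: 10 * time.Second,
	},
}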

Steps to Reproduce (for bugs)

  1. Start with a working Icingabeat configuration.
  2. Comment out the eventstream.retry_interval line in icingabeat.yml.
  3. Panic.

Your Environment

  • Beat version (icingabeat -version): 6.5.4

Packaging metadata missing/wrong

I found the icingabeat package in the icinga-stable-release/7 repo. However its Summary, Description and URL seem incorrect.

Expected Behavior

Summary and Description should be accurate, i686 has: "Icingabeat ships Icinga 2 events and states to Elasticsearch or Logstash."
URL should be useful.

Current Behavior

Summary and Description for x86_64 seem to be a placeholder: "One sentence description of the Beat."
URL returns 404: https://www.elastic.co/products/beats/icingabeat

Possible Solution

Update x86_64 package's metadata

Steps to Reproduce (for bugs)

yum info icingabeat

Context

Package metadata is inaccurate, not useful

Your Environment

CentOS 7

  • Beat version (icingabeat -version):n/a
  • Icinga 2 version (icinga2 --version):n/a
  • Elasticsearch version (curl -XGET 'localhost:9200'):n/a
  • Logstash version, if used (bin/logstash -V):n/a
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I):n/a
  • Operating System and version:CentOS 7, baseurl=http://packages.icinga.com/epel/$releasever/release/

I realize this should be simple enough to fix (assuming the correct URL is known). I'm not sure exactly how to test or verify it, so I'm just entering this issue to raise awareness.

Icingabeat v6.1.1 is trying to send more than 1000 different fields

Expected Behavior

Icingabeat should write only a very small set of different fields to Elasticsearch.

Current Behavior

  • Some events from Icingabeat are sent as expected, giving a total of about 190 distinct fields.
  • About every 5 minutes, Elasticsearch refuses a bulk request that would add enough new fields to exceed 1000 distinct fields in the index.

Possible Solution

Review what Icingabeat is sending to Elasticsearch and restrict the overall count of distinct fields.

Steps to Reproduce (for bugs)

  1. Start Icingabeat 6.1.1 with the following configuration to send data to Elastic Stack 6.1.2
icingabeat:
  host: "icinga01.example.com"
  port: 5665
  user: "logstash"
  password: "supersecret"
  ssl.verify: false
  eventstream.types:
    - CheckResult
    - StateChange
  eventstream.filter: ""
  eventstream.retry_interval: 10s 
  statuspoller.interval: 60s 
setup.kibana:
output.logstash:
  hosts: ["logstash01:5044","logstash02:5044","logstash03:5044"]

Context

Yesterday's icingabeat index failed completely and no shard is available anymore. Today's index is working, but Elasticsearch's log is filling up with the following messages:

[2018-02-02T09:43:02,924][DEBUG][o.e.a.b.TransportShardBulkAction] [icingabeat-2018.02.02][4] failed to execute bulk item (index) BulkShardRequest [[icingabeat-2018.02.02][4]] containing [index {[icingabeat-2018.02.02][doc][IFCuVWEBKbzJIpsc7RVV], source[n/a, actual length: [44.1kb], max length: 2kb]}]

java.lang.IllegalArgumentException: Limit of total fields [1000] in index [icingabeat-2018.02.02] has been exceeded

Having too many distinct fields can lead to a "mapping explosion". This is why Elasticsearch doesn't allow more than 1000 distinct field names per index (no matter whether they are nested or not). It seems like about every 5 minutes Icingabeat tries to send a bulk request with a massive count of new fields, which gets refused. So the index stays at a healthy count of under 200 fields.

Your Environment

  • Beat version: 6.1.1
  • Icinga 2 version: 2.8.1
  • Elasticsearch version : 6.1.2
  • Logstash version, if used: 6.1.2
  • Kibana version, if used: 6.1.2
  • Operating System and version: RHEL 7

should have an option to verify SSL CA for Icinga API

Expected Behavior

IcingaBeat should allow me to verify Icinga2 API SSL certificate against a specific CA (or list thereof).

Current Behavior

I cannot specify a [list of] CA and IcingaBeat complains:

2017-08-03T14:48:41+02:00 ERR Error connecting to API: Get https://foo.bar.baz:5665/v1/status: x509: certificate signed by unknown authority

Possible Solution

Have a config parameter to specify a CA, just like ssl.certificate_authorities in the beat output.* sections.
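As a sketch of what such an option could look like in icingabeat.yml (the ssl.certificate_authorities key under the icingabeat section is hypothetical here and simply mirrors libbeat's output TLS settings):

icingabeat:
  host: "icinga.example.com"
  port: 5665
  user: "icingabeat"
  password: "secret"
  # hypothetical option, mirroring output.*.ssl.certificate_authorities
  ssl.verify: true
  ssl.certificate_authorities: ["/etc/icingabeat/ca.pem"]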

Steps to Reproduce (for bugs)

  1. install IcingaBeat and let it talk to Icinga2 API
  2. it doesn't

Ugly workaround

icingabeat:
  # Skip SSL verification
  skip_ssl_verify: true

Your Environment

  • Beat version: icingabeat version 5.3.2 (amd64), libbeat 5.3.2 (installed from the deb file released on GitHub)
  • Icinga 2 version: icinga2 - The Icinga 2 network monitoring daemon (version: r2.6.3-1) (installed from packages.icinga.com)
  • Operating System and version: Ubuntu 16.04 fully updated

Enable X-Pack Monitoring ?

Hi,
since Elastic opened X-Pack and monitoring is now enabled by default and usable with a basic license, how do I enable the monitoring metrics for Icingabeat like I do with

xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: [....]

in Filebeat?

Regards,
Marcus

P.S.: Damn good work guys!

Help with host downtime configuration

I'm trying to query in ES the total number of Icinga 2 hosts with downtimes set, but I am unable to, and I think it's because I am not seeing any data being brought in by icingabeat. Is this a bug or a configuration problem on my end? I'm not making any headway with the documentation, which seems light with regard to configuring /etc/icingabeat/icingabeat.yml.

My config is pretty barebones, but I'm including it below for reference:

grep -v '#' /etc/icingabeat/icingabeat.yml | sed -e '/^$/d'
icingabeat:
  host: "XXXXX"
  port: 5665
  user: "XXXXX"
  password: "XXXXX"
  skip_ssl_verify: true
  eventstream:
    types:
      - CheckResult
      - StateChange
      - Notification
      - AcknowledgementSet
      - AcknowledgementCleared
      - CommentAdded
      - CommentRemoved
      - DowntimeAdded
      - DowntimeRemoved
      - DowntimeStarted
      - DowntimeTriggered
    filter: ""
    retry_interval: 10s
  statuspoller:
    interval: 60s
output.elasticsearch:
  hosts: ["XXXXX:9200"]

Possible Stale Connection Icingabeat and Icinga2

Hello,
I want to report a possible issue with the icingabeat application.
My testing environment has the following setup: Icinga Master(with Icingabeats) and clients, no zones defined, just the standard thing and another VM with ELK.
The problem was that when i left the ELK VM down for the weekend, and turned it back up on monday i was receiving events from saturday with a huge delay between them beeing displayed on Kibana, left it down to test the scenario of a client losing his Link (this one being on a different GeoLocation from the ELK Server). Several hours passed and still no "LIVE EVENT STREAM" were being displayed, only when i restarted the icingabeat it became "live".

Best Regards,

DMMCA

OpenSearch support: Native Output Support & Icingabeat Dashboards for OpenSearch Dashboards

Hi,

I managed to get IcingaBeat v7.17.4 running via the Logstash OSS plugin, forwarding to OpenSearch following this guide: https://repost.aws/knowledge-center/opensearch-connect-filebeat-logstash .

a)

  • Native output:
    Compared to this, Elasticsearch (8.8.2) as an output worked out of the box without Logstash.
    -> The problem is that version 7.17.4 is still unaware of an "opensearch" output type:
    Exiting: error initializing publisher: output type opensearch undefined
    2023-08-08T13:04:49.985+0200 ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://10.3.0.171:9200)): Connection marked as failed because the onConnect callback failed: could not connect to a compatible version of Elasticsearch: 400 Bad Request: {"error":{"root_cause":[{"type":"invalid_index_name_exception","reason":"Invalid index name [_license], must not start with '_'.","index":"_license","index_uuid":"_na_"}],"type":"invalid_index_name_exception","reason":"Invalid index name [_license], must not start with '_'.","index":"_license","index_uuid":"_na_"},"status":400}

b)

  • Importing Icingabeat-Dashboards:
    -> I tried two things: first, importing into Kibana and then exporting into OpenSearch Dashboards (Kibana for OpenSearch is called OpenSearch Dashboards);
    second, importing directly into OpenSearch.
    In both cases it appears some of the IDs and strings are mapped only to Kibana, which causes the problems.

Errors:

Sorry, there was an error

The file could not be processed due to error: "Unprocessable Entity: Document "db2785a5-a257-44bd-a7dd-bcfb9d7f5878" has property "dashboard" which belongs to a more recent version of OpenSearch Dashboards [7.17.3]. The last known version is [7.9.3]"

Could not find reference "panel_614fcb21-5773-4e0f-8ebc-e606a261ee2b"

double-export

Possible solutions: Maybe a bump to a higher Filebeat version would fix problem a) . Problem b) could require shipping different dashboards for both Kibana & OpenSearch Dashboards.

Your Environment

  • Beat version (icingabeat -version): 7.17.4
  • Icinga 2 version (icinga2 --version): 2.13.7+780.g2e4af46d4
  • Elasticsearch version (curl -XGET 'localhost:9200'): 8.8.2
  • Logstash version, if used (bin/logstash -V): 8.8.2 with OSS-Plugin
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I): 8.8.2
  • OpenSearch version: 2.8
  • OpenSearch Dashboards version: 2.8
  • Operating System and version: RHEL 8.8 / Alma 8.8

Allow certificate login for API user

The yml file only shows options for connecting via password to the Icinga-API. Some users might want to use certificates instead.

/ref/NC/545319

Passwords are sent in clear text to logstash

Expected Behavior

Current Behavior

Output from Icinga with a password is sent unfiltered to Logstash and then shows up in Kibana.
[check_result][command]: /usr/lib/nagios/plugins/check_mysql, -H, sql.example.org, -p, zhi*!qwdf1234vi<, -u, MyMonitorUser

The field after "-p" should be filtered.
[check_result][command]: /usr/lib/nagios/plugins/check_mysql, -H, sql.example.org, -p, FILTERED_PASSWORD, -u, MyMonitorUser
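One way to do this on the Logstash side (a sketch only; the field path and the assumption that the password always follows a "-p" argument are taken from this example and may need adjusting) is a ruby filter that masks the argument after "-p":

filter {
  ruby {
    code => '
      cmd = event.get("[check_result][command]")
      if cmd.is_a?(Array)
        cmd.each_with_index do |arg, i|
          # mask the value that follows a "-p" flag
          cmd[i + 1] = "FILTERED_PASSWORD" if arg == "-p" && cmd[i + 1]
        end
        event.set("[check_result][command]", cmd)
      end
    '
  }
}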

Possible Solution

Steps to Reproduce (for bugs)

Context

Your Environment

  • Beat version (icingabeat -version):
    6.1.1
  • Icinga 2 version (icinga2 --version):
    2.8
  • Elasticsearch version (curl -XGET 'localhost:9200'):
    6.2.2
  • Logstash version, if used (bin/logstash -V):
    6.2.2
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I):
    6.2.2
  • Operating System and version:
    Ubuntu 16

Minor documentation optimization

unzip icingabeat-dashboards-1.0.0.zip

=>

unzip icingabeat-dashboards-1.0.0.zip -d /tmp

Because the next command expects the files in the directory /tmp

Issue 25/26 in Debian Package?

Hello friends,

I'm currently playing around with icingabeat and installed it following the instructions from the Icinga website with apt-get install icingabeat.

I get many errors like this:
[2019-02-08T18:23:37,453][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.02.08", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x1a8bc760>], :response=>{"index"=>{"_index"=>"logstash-2019.02.08", "_type"=>"doc", "_id"=>"mUwizmgBuAkVP_3V215q", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

Debug-output of icingabeats:
"@timestamp": "2019-02-08T17:11:40.851Z", "@metadata": { "beat": "icingabeat", "type": "doc", "version": "6.5.4" }, "host": { "os": { "platform": "debian", "version": "9 (stretch)", "family": "debian", "codename": "stretch" }, "id": "ae14dd8d192c433d96d6e2516d507291", "containerized": false, "name": "icinga", "architecture": "x86_64" }, "icinga": {and so on...

If I understood issues 25 and 26 correctly, the content of "host" is wrong and was moved to "icinga.*".
Is it possible that this fix didn't find its way into the Debian package?

Versions:
icingabeat is running on icingahost Version 6.5.4
ELK on a different Server 6.5.4

Thank you,
Viper539

And BTW: what is the function of the file /etc/icingabeats/fields.yml?
It would have been nice if the error could have been solved with it.

[edit]: changed URL to installation manual

Icingabeat does not send perf data of check results

Expected Behavior

Icingabeat sends the performance data of the check results in the field "icinga.check_result.performance_data", as described here: https://github.com/Icinga/icingabeat/blob/master/docs/fields.asciidoc#icingabeat-fields

Current Behavior

The field "icinga.check_result.performance_data" is not present in the created index.

The only available fields are:

  • icinga.check_result.active
  • icinga.check_result.check_source
  • icinga.check_result.execution_end
  • icinga.check_result.execution_start
  • icinga.check_result.exit_status
  • icinga.check_result.output
  • icinga.check_result.schedule_end
  • icinga.check_result.schedule_start
  • icinga.check_result.scheduling_source
  • icinga.check_result.state
  • icinga.check_result.ttl
  • icinga.check_result.type
  • icinga.check_result.vars_after.attempt
  • icinga.check_result.vars_after.reachable
  • icinga.check_result.vars_after.state
  • icinga.check_result.vars_after.state_type
  • icinga.check_result.vars_before.attempt
  • icinga.check_result.vars_before.reachable
  • icinga.check_result.vars_before.state
  • icinga.check_result.vars_before.state_type

Context

We are trying to get the performance data into elasticsearch to further process it from there.

Your Environment

  • Beat version (icingabeat -version): icingabeat version 7.17.4 (amd64), libbeat 7.17.4 [6c3bb8f built 2022-05-31 12:53:45 +0000 UTC]
  • Icinga 2 version (icinga2 --version): 2.13.2-1
  • Elasticsearch version (curl -XGET 'localhost:9200'):
    {
    "version" : {
    "number" : "7.10.2",
    "build_type" : "tar",
    "build_hash" : "744ca260b892d119be8164f48d92b8810bd7801c",
    "build_date" : "2022-11-15T04:42:29.671309257Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.1",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
    },
    "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
  • Logstash version, if used (bin/logstash -V): logstash 8.4.0
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I): opendistro-for-elasticsearch-kibana:1.13.2
  • Operating System and version: CentOS Linux release 7.9.2009 (Core)

icingabeat setup command flags not working

I am able to load dashboards using "icingabeat setup --dashboards -E data", but the machine learning flag is not creating any ML jobs in Kibana, and it is not even throwing an error.

Can someone tell me why this flag is not working?

Update Icingabeat to Elastic Stack 6.x (libbeat, Kibana dashboards)

Beats 6.x now use their own Kibana API to automatically setup dashboards.

https://www.elastic.co/guide/en/beats/filebeat/6.0/filebeat-configuration.html
https://www.elastic.co/guide/en/beats/filebeat/6.0/load-kibana-dashboards.html

Context

The old 5.x way using the import_dashboards script does not work anymore.

/usr/share/icingabeat/scripts/import_dashboards -dir /tmp/icingabeat-dashboards-1.1.1 -es http://127.0.0.1:9200

gives a 400 bad request. Seems you cannot use Icingabeat 5.3.2 with Elastic Stack 6.x.

The script removal in 6.x is mentioned here: elastic/beats#3740 (comment)

Possible Solution

  • Upgrade to libbeat 6.0
  • Include setup configuration details (should be coming from libbeat itself)
  • Regenerate Kibana dashboards
  • Create new release builds

Allow to set target field

Allow creating all fields from icingabeat under a nested field.

Currently all fields are created under the root, but some setups want to use nested fields as namespaces so there won't be any collisions between rulesets from different users.

You can use the target option of Logstash's kv filter.

/ref/NC/545319

Kibana Dashboards fail to load with icingabeat-7.4.2 and Kibana 7.4

Expected Behavior

New dashboards should be available in Kibana

Current Behavior

# icingabeat setup kibana
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Exiting: Failed to import dashboard: Failed to load directory /usr/share/icingabeat/kibana/7/dashboard:
error loading /usr/share/icingabeat/kibana/7/dashboard/Icingabeat-CheckResults.json: returned 422 to import file: . Response: {"statusCode":422,"error":"Unprocessable Entity","message":"Document "a32bdf10-e4be-11e7-b4d1-8383451ae5a4" has property "visualization" which belongs to a more recent version of Kibana (7.4.2)."}
error loading /usr/share/icingabeat/kibana/7/dashboard/Icingabeat-Notifications.json: returned 422 to import file: . Response: {"statusCode":422,"error":"Unprocessable Entity","message":"Document "af54ac40-e4cd-11e7-b4d1-8383451ae5a4" has property "visualization" which belongs to a more recent version of Kibana (7.4.2)."}
error loading /usr/share/icingabeat/kibana/7/dashboard/Icingabeat-Status.json: returned 422 to import file: . Response: {"statusCode":422,"error":"Unprocessable Entity","message":"Document "77052890-e4c0-11e7-b4d1-8383451ae5a4" has property "visualization" which belongs to a more recent version of Kibana (7.4.2)."}

# echo $?
1

Catch permission errors with statuspollers

Icingabeat crashes after installing it from the Icinga repo, configuring it, and running "systemctl start icingabeat".

sudo /usr/bin/icingabeat
1m0s[][]panic: interface conversion: interface {} is float64, not []interface {}

goroutine 26 [running]:
github.com/icinga/icingabeat/beater.BuildStatusEvents(0xc420018600, 0x2c, 0x600, 0x2c, 0x600, 0x0)
        /go/src/github.com/icinga/icingabeat/beater/statuspoller.go:46 +0xa0d
github.com/icinga/icingabeat/beater.(*Statuspoller).Run(0xc420106820, 0x0, 0x0)
        /go/src/github.com/icinga/icingabeat/beater/statuspoller.go:123 +0x4b7
created by github.com/icinga/icingabeat/beater.(*Icingabeat).Run
        /go/src/github.com/icinga/icingabeat/beater/icingabeat.go:54 +0x284
icingabeat:
  host: "watcher01.prd"
  port: 5665
  user: "icingabeat"
  password: "HIDDENSUPERPASSWORD"
  skip_ssl_verify: false
  eventstream:
    types:
      - CheckResult
      - StateChange
    filter: ""
    retry_interval: 10s
  statuspoller:
    interval: 60s
fields_under_root: true
fields:
  env: prd
  servicename: watcher01.prd
  subservice: icinga
output.logstash:
  enabled: true
  hosts: [ "log01.fqdn" ]
  worker: 4
  compression_level: 3
  loadbalance: true
  index: 'icingabeat'
logging.to_files: true
logging.files:
  path: /var/log/icingabeat
  name: icingabeat
  rotateeverybytes: 10485760 # = 10MB
  permissions: 0600

Expected Behavior

I expect icingabeat to stay running and collect my data.

Current Behavior

Currently it just dies after connecting to icinga2.
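The panic above comes from an unchecked type assertion in the status poller. As a rough sketch of the defensive pattern that would catch this (names are illustrative, not the actual icingabeat code):

package beater

import "fmt"

// Sketch only: use the comma-ok form of the type assertion so an unexpected
// status payload, e.g. when the API user lacks permissions, becomes a
// loggable error instead of a panic that kills the whole beat.
func guardStatusList(value interface{}) ([]interface{}, error) {
	list, ok := value.([]interface{})
	if !ok {
		return nil, fmt.Errorf("unexpected status payload of type %T (missing API permissions?)", value)
	}
	return list, nil
}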

Possible Solution

Steps to Reproduce (for bugs)

  1. Install icingabeat on Ubuntu 16 from the official repo.
  2. Configure an API-user as described in https://devhub.io/repos/Icinga-icingabeat
  3. Configure icingabeat.yml

Context

Your Environment

  • Beat version (icingabeat -version):
    admin@watcher01:/etc/icingabeat$ icingabeat --version
    icingabeat version 6.1.1 (amd64), libbeat 6.1.1
  • Icinga 2 version (icinga2 --version):
    admin@watcher01:/etc/icingabeat$ icinga2 --version
    icinga2 - The Icinga 2 network monitoring daemon (version: r2.8.2-1)

Copyright (c) 2012-2017 Icinga Development Team (https://www.icinga.com/)
License GPLv2+: GNU GPL version 2 or later http://gnu.org/licenses/gpl2.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Application information:
Installation root: /usr
Sysconf directory: /etc
Run directory: /run
Local state directory: /var
Package data directory: /usr/share/icinga2
State path: /var/lib/icinga2/icinga2.state
Modified attributes path: /var/lib/icinga2/modified-attributes.conf
Objects path: /var/cache/icinga2/icinga2.debug
Vars path: /var/cache/icinga2/icinga2.vars
PID path: /run/icinga2/icinga2.pid

System information:
Platform: Ubuntu
Platform version: 16.04.4 LTS (Xenial Xerus)
Kernel: Linux
Kernel version: 4.4.0-116-generic
Architecture: x86_64

Build information:
Compiler: GNU 5.3.1
Build host: 86927c12b6d8

  • Elasticsearch version (curl -XGET 'localhost:9200'):
    6.2.2
  • Logstash version, if used (bin/logstash -V):
    6.2.2
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I):
    6.2.2
  • Operating System and version:
    Ubuntu 16,
    Icinga-API-user have permissions = [ "events/CheckResult" ]
    Network interfacename: ens160

admin@watcher01:~# apparmor_status
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/mysqld
0 profiles are in complain mode.
1 processes have profiles defined.
1 processes are in enforce mode.
/usr/sbin/mysqld (1074)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Add missing fields for statuspoller

Hello,

after installing version 1.1.0 of the filebeat dashboard, the following fields within the index pattern are missing:

(id: status.icingaapplication.app.version)
(id: status.idomysqlconnection.ido-mysql.version)
(id: status.icingaapplication.app.node_name)

Refreshing the indexes manually resolves the issue.

Integration with Icinga2 with ELK

HI @bobapple,

Hope you are doing good.

I am not sure this is the right platform to ask for help. Please excuse me for that.

@bobapple I need your suggestion/assistance here.

I want to integrate the ELK with Icinga2 monitoring.

About my ELK setup: I have a docker-compose.yml file running on a host which has the ELK stack. I am using a Filebeat container running on another server to ship the logs to the ELK stack. We have used Docker containers for both the ELK stack and Filebeat.

Now I want to integrate ELK with Icinga 2. I have performed the following steps on the host where our ELK stack is running in Docker:

  1. I have enabled the "elasticsearch" module.
  2. When I am trying to configure Elasticsearch using Icingaweb2, I am not getting the configure option in Icingaweb2. Here is the doc I am referring to: https://github.com/Icinga/icingaweb2-module-elasticsearch/blob/master/doc/03-Configuration.md
  3. Is Icingabeat also required to integrate ELK with Icinga?
  4. What are the prerequisites for the integration with ELK?

Requesting you to kindly provide your inputs/suggestions on how to achieve the integration with ELK.

Thank you in advance!

Regards,
Sabil.

Export performance data as separate fields

According to the docs, performance data is exported in the field "check_result.performance_data" as text.

It would be much cooler to have each perfdata in its own field (check_result.perfdata.size.value), possibly with related fields for thresholds, unit, ...

This would basically allow getting rid of Graphite, InfluxDB, ... and just using icingabeat + Elastic + Grafana. I'm currently working on an Icinga Web 2 module for Grafana which could display the graphs from a dashboard using this data.

Connection close is broken for invalid credentials

Expected Behavior

icingabeat should correctly close the connection to the Icinga2 API.

Current Behavior

If the credentials for the Icinga2 API are incorrect, icingabeat doesn't close the connection. At some point the icingabeat log will be flooded with the following messages:

2017-09-25T14:37:23+02:00 ERR Error connecting to API: Get https://localhost:5665/v1/status: dial tcp 127.0.0.1:5665: socket: too many open files
2017-09-25T14:37:31+02:00 ERR Error connecting to API: Post https://localhost:5665/v1/events/?queue=icingabeat&types=CheckResult&types=StateChange&types=Notification&types=AcknowledgementSet&types=AcknowledgementCleared&types=CommentAdded&types=CommentRemoved&types=DowntimeAdded&types=DowntimeRemoved&types=DowntimeStarted&types=DowntimeTriggered: dial tcp 127.0.0.1:5665: socket: too many open files
2017-09-25T14:37:41+02:00 ERR Error connecting to API: Post https://localhost:5665/v1/events/?queue=icingabeat&types=CheckResult&types=StateChange&types=Notification&types=AcknowledgementSet&types=AcknowledgementCleared&types=CommentAdded&types=CommentRemoved&types=DowntimeAdded&types=DowntimeRemoved&types=DowntimeStarted&types=DowntimeTriggered: dial tcp 127.0.0.1:5665: socket: too many open files
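For context, a frequent cause of connections stuck in CLOSE_WAIT in Go HTTP clients is a response body that is never drained and closed on error paths, e.g. a 401 for bad credentials. A minimal sketch of the pattern, purely illustrative and not the actual icingabeat code:

package beater

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
)

// doAPIRequest drains and closes the response body on error status codes so
// the underlying connection is released instead of leaking a file descriptor.
func doAPIRequest(client *http.Client, req *http.Request) (*http.Response, error) {
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		io.Copy(ioutil.Discard, resp.Body) // drain before closing
		resp.Body.Close()
		return nil, fmt.Errorf("Icinga API request failed: %s", resp.Status)
	}
	return resp, nil
}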

Possible Solution

Steps to Reproduce (for bugs)

  1. Configure Icingabeat with incorrect credentials
  2. watch -n 1 "lsof -p $(pgrep icingabeat) | grep CLOSE_WAIT"

Context

Your Environment

  • Beat version (icingabeat -version): icingabeat version 5.3.2 (amd64), libbeat 5.3.2
  • Icinga 2 version (icinga2 --version): r2.7.1-1
  • Elasticsearch version (curl -XGET 'localhost:9200'): 5.6.0
  • Logstash version, if used (bin/logstash -V): 5.6.0-1
  • Kibana version, if used (curl -XGET http://localhost:5601/status -I): 5.6.0
  • Operating System and version: 16.04.2

Error dashboard asset: returned 200 to import file

Expected Behavior

The dashboards installed with setup.dashboards.enabled: true should reference the index created with Icingabeat.

Current Behavior

I am currently receiving the following error:

error loading /usr/share/icingabeat/kibana/7/dashboard/f3625980-e0c9-11ec-80a5-3f2d0da3a05a.json: error dashboard asset: returned 200 to import file: 4 error: error: missing_references

Context

I have written a role for the automatic installation and configuration of Icingabeat via Ansible. All individual parts are installed without any problems. Only this section could not be solved.

Your Environment
