elk-docker's People

Contributors

b-long, bluekvirus, cappadona, cron410, excalq, harbulot, jchannon, jimtonic, kenwdelong, mesaugat, mg64ve, mgazzin, mikhailzhukov, mzac, nikolay, npotier, ofir123, ovanekem, pli01, raphting, rdxmb, redsoxfantom, sergeygalkin, skestle, spujadas, steven5538, sunborn23, tedder, timc3, yann-j

elk-docker's Issues

Updating LS_HEAP_SIZE

Hi,

Thanks for the great project!

I have a problem where Logstash seems to crash after about 24 hours. Viewing logstash.err reveals this:

Error: Your application used more memory than the safety cap of 500M.

It may be caused by a combination of plugins I'm using, but I wanted to start off by enlarging the heap to 1GB.

I'm building my own Dockerfile that inherits from this image and updating the Logstash configuration, but I'm not sure how to update LS_HEAP_SIZE.

Replacing the logstash-init file in my own image seems like a bad idea (I'd miss out if you add functionality there or fix bugs).

Is changing this value from a derived image, or via an environment variable, supported? Just setting LS_HEAP_SIZE did not work.

Thanks.
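A minimal sketch of what a derived image could look like, assuming the image's logstash-init/start scripts pick LS_HEAP_SIZE up from the environment (if they only read /etc/default/logstash, the commented fallback applies):

FROM sebp/elk

ENV LS_HEAP_SIZE 1g

# Fallback if the start script only reads /etc/default/logstash
# (path is an assumption; verify it exists in the image):
# RUN echo 'LS_HEAP_SIZE="1g"' >> /etc/default/logstash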

Guidance on adding plugins and enabling logstash web interface

@spujadas
Thanks for putting this solution together.
I was trying to figure out the right place (within the folder structure of the container) to manually add a plugin to Elasticsearch.
A more correct approach would probably be to add it to the Dockerfile at build time ... would you be able to suggest how and where that might appropriately be added to the Dockerfile? Let's say the intention is to install this plugin:
bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/2.6.0

Also, on the subject of plugins, is there a recommended approach, as above, to installing the required Logstash plugins via the Dockerfile?

In the README, you mention that the Logstash web interface is disabled. How might I get this running? The Logstash set-up for the ELK stack looks comprehensive in terms of forwarders and Lumberjack ... is there a write-up you could point to that discusses how best these can be leveraged? Thanks. Colum
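As a hedged sketch (not the image's documented procedure), both kinds of plugins can be installed at build time from a derived Dockerfile; the Elasticsearch home /usr/share/elasticsearch and the Logstash home /opt/logstash are the paths used elsewhere on this page, and the Logstash plugin name is only an example:

FROM sebp/elk

# Elasticsearch plugin, using the command quoted above
WORKDIR /usr/share/elasticsearch
RUN bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/2.6.0

# Logstash plugin (name is illustrative)
WORKDIR /opt/logstash
RUN bin/plugin install logstash-input-kafka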

Missing templates and etc dir [invalid]

The docs say:

Before starting Filebeat for the first time, run this command (replace elk with the appropriate hostname) to load the default index template in Elasticsearch:

curl -XPUT 'http://elk:9200/_template/filebeat?pretty' -d@/etc/filebeat/filebeat.template.json

But:

$ docker run -it sebp/elk:es233_l232_k451 /bin/bash
root@7e48d8700a76:/# ls /etc/filebeat
ls: cannot access /etc/filebeat: No such file or directory

And as a result:

root@7e48d8700a76:/# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@/etc/filebeat/fi>
Warning: Couldn't read data from file "/etc/filebeat/filebeat.template.json", 
Warning: this makes an empty POST.
{
  "error" : {
    "root_cause" : [ {
      "type" : "parse_exception",
      "reason" : "Failed to derive xcontent"
    } ],
    "type" : "parse_exception",
    "reason" : "Failed to derive xcontent"
  },
  "status" : 400
}
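For context, a hedged note: the filebeat.template.json file ships with the Filebeat package on the machine running Filebeat rather than inside the ELK image, so the curl from the docs is meant to be run on the Filebeat host, along the lines of:

# run on the host where Filebeat is installed, not inside the ELK container
curl -XPUT 'http://elk:9200/_template/filebeat?pretty' \
     -d@/etc/filebeat/filebeat.template.json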

Extending docker-compose file to add volumes, point to logstash config file

@spujadas
I was wondering if you might be able to point me in the right direction with elk-docker as a base. Your docs are great, but I'm still struggling!

I extended sebp/elk with the following Dockerfile (see link below); 'analect/elk' in this case is a local build of your sebp/elk. I'm using this to add various plugins for ES and Logstash. I've commented out aspects related to the Logstash web interface, which was subsequently deprecated.

https://gist.github.com/Analect/3504e9928f6c51044e46

Then I tried to add a volume to the docker-compose file in order to be able to more easily link back to an ES database on the host machine. I know the dockerfile in sebp/elk includes 'VOLUME /var/lib/elasticsearch' towards the end, which I think defines the location of the database within the elk container. I thought I could then link this to the host with '/home/mccoole/Development/Tools/data/elastic/data:/var/lib/elasticsearch'. However, it seems to fail. Do I somehow need to perform a chmod on the /var/lib/elasticsearch folder on the container as part of the Dockerfile for this to work?

elk:
  image: analect/elk-extra:0.1
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5000:5000"
  volumes:
    - /home/mccoole/Development/Tools/data/elastic/data:/var/lib/elasticsearch

This is from the logs on ES, that fails to start.

root@6c7806c9b096:/var/log/elasticsearch# cat elasticsearch.log 
[2015-09-09 17:30:42,553][INFO ][node                     ] [Eric Williams] version[1.7.1], pid[41], build[b88f43f/2015-07-29T09:54:16Z]
[2015-09-09 17:30:42,554][INFO ][node                     ] [Eric Williams] initializing ...
[2015-09-09 17:30:42,993][INFO ][plugins                  ] [Eric Williams] loaded [mapper-attachments, lang-python, cloud-aws, lang-javascript], sites [head, HQ]
[2015-09-09 17:30:43,085][ERROR][bootstrap                ] Exception
org.elasticsearch.ElasticsearchIllegalStateException: Failed to created node environment
    at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:167)
    at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:77)
    at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:245)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.nio.file.AccessDeniedException: /var/lib/elasticsearch/elasticsearch/nodes/1
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:383)
    at java.nio.file.Files.createDirectory(Files.java:630)
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:734)
    at java.nio.file.Files.createDirectories(Files.java:720)
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:132)
    at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:165)
    ... 4 more
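The AccessDeniedException indicates that the in-container elasticsearch user cannot write to the bind-mounted host directory. A minimal sketch of one fix, assuming the UID/GID are looked up first (the 991 below is illustrative, not the image's actual value):

docker run --rm analect/elk-extra:0.1 id elasticsearch
# e.g. uid=991(elasticsearch) gid=991(elasticsearch) ...
sudo chown -R 991:991 /home/mccoole/Development/Tools/data/elastic/data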

As regards pointing at a Logstash config file when starting the elk-docker container: I can see you have various config files in the repo, but I don't see them referenced in the Dockerfiles or other bash scripts.

This is a config file (link below) that I had working with a Logstash running on the host machine (rather than within a container), interfacing with a Kafka message bus, but I'm not clear on how best to initiate it as part of the Logstash startup. I've included 'ADD ./logstash-kafka.conf /etc/logstash/conf.d/logstash-kafka.conf' as part of the extended elk-extra linked to above, but I'm somehow missing something. Also, you can see from the link below that I'm manually referencing a ZooKeeper container IP, zk_connect => "172.17.0.215:2181", which isn't ideal. Is there a better way of handling this within docker-compose that you know of?
https://gist.github.com/Analect/9d954d1326ce0241f3d6
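On the hard-coded zk_connect address, one hedged option is to run ZooKeeper as a service in the same docker-compose file and refer to it by service name rather than by container IP (the ZooKeeper image name is illustrative):

zookeeper:
  image: jplock/zookeeper   # any ZooKeeper image
elk:
  image: analect/elk-extra:0.1
  links:
    - zookeeper
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5000:5000"

# and in logstash-kafka.conf:
#   zk_connect => "zookeeper:2181"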

Thanks for any guidance you can offer.

logstash logs

Hi

I'm running this successfully so a big thank you.

I have added a mixin that uses the beats input and a file that adds a filter{}, and I'm using the date filter; however, it always fails for some reason. I was hoping Logstash would have some logs, but I opened a bash shell in my running container and looked in /var/log/logstash, and it's empty. Any idea how I can get some logs out?
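A hedged troubleshooting sketch: validating the pipeline and running Logstash in the foreground prints filter errors (for example from the date filter) straight to the console, using the paths this image uses elsewhere on this page:

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d --configtest
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log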

Problems in getting single node cluster working as per your docs

@spujadas
As ever, thanks for your great efforts with this repo.

Perhaps you could help.
Is it possible to configure a docker-compose for a single-node cluster ... rather than the manual approach in your docs?

I followed the manual approach, but I'm getting this master_not_discovered_exception.

root@eolas-eslib2:/home/Eolas/data/elastic2# curl http://localhost:9200/_cluster/health?pretty
{
  "error" : {
    "root_cause" : [ {
      "type" : "master_not_discovered_exception",
      "reason" : "waited for [30s]"
    } ],
    "type" : "master_not_discovered_exception",
    "reason" : "waited for [30s]"
  },
  "status" : 503
}

I ran these commands (which differ slightly from yours to use my own image):
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 -itd --name elk myprivaterepo/elk-extra:0.5

then, after creating the elasticsearch-slave.yml as per your docs and adjusting the location to my own:
docker run -it --rm=true -v /home/elk-extra/ES_2.X/elasticsearch-slave.yml:/etc/elasticsearch/elasticsearch.yml --link elk:elk-master myprivaterepo/elk-extra:0.5

I suppose I'm confused by the following in your docs:
sudo docker run -it --rm=true \
  -v /var/sandbox/elk-docker/elasticsearch-slave.yml:/etc/elasticsearch/elasticsearch.yml \
  --link elkdocker_elk_1:elk-master elkdocker_elk

The elkdocker_elk and elkdocker_elk_1 references look like naming conventions for containers generated from docker-compose .. rather than an image ... and the first container should be called just 'elk', as per your docs.

Hence, I tried the --link in my command as --link elk:elk-master

But obviously it's failing to find a master ...
Are there any obvious mistakes I've made?

Thanks.

[2016-02-03 17:39:53,971][INFO ][node                     ] [Moon Knight] version[2.1.1], pid[57], build[40e2c53/2015-12-15T13:05:55Z]
[2016-02-03 17:39:53,971][INFO ][node                     ] [Moon Knight] initializing ...
[2016-02-03 17:39:54,812][INFO ][plugins                  ] [Moon Knight] loaded [mapper-attachments, lang-python, cloud-aws, lang-javascript], sites [head, hq]
[2016-02-03 17:39:54,845][INFO ][env                      ] [Moon Knight] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/vda1)]], net usable_space [63.1gb], net total_space [78.6gb], spins? [possibly], types [ext4]
[2016-02-03 17:40:01,176][INFO ][node                     ] [Moon Knight] initialized
[2016-02-03 17:40:01,176][INFO ][node                     ] [Moon Knight] starting ...
[2016-02-03 17:40:01,278][INFO ][transport                ] [Moon Knight] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-02-03 17:40:01,289][INFO ][discovery                ] [Moon Knight] elasticsearch/0MgA4mtaSC--xXBjdWlRTg
[2016-02-03 17:40:31,289][WARN ][discovery                ] [Moon Knight] waited for 30s and no initial state was set by the discovery
[2016-02-03 17:40:31,300][INFO ][http                     ] [Moon Knight] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-02-03 17:40:31,300][INFO ][node                     ] [Moon Knight] started
[2016-02-03 17:41:37,463][INFO ][discovery.zen            ] [Moon Knight] failed to send join request to master [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}], reason [RemoteTransportException[[Silhouette][172.17.0.2:9300][internal:discovery/zen/join]]; nested: IllegalStateException[Node [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}] not master for join request]; ]
[2016-02-03 17:42:40,502][INFO ][discovery.zen            ] [Moon Knight] failed to send join request to master [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}], reason [RemoteTransportException[[Silhouette][172.17.0.2:9300][internal:discovery/zen/join]]; nested: IllegalStateException[Node [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}] not master for join request]; ]
[2016-02-03 17:43:43,537][INFO ][discovery.zen            ] [Moon Knight] failed to send join request to master [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}], reason [RemoteTransportException[[Silhouette][172.17.0.2:9300][internal:discovery/zen/join]]; nested: IllegalStateException[Node [{Silhouette}{m5WHheFyQzWrMyamaP5orw}{172.17.0.2}{172.17.0.2:9300}] not master for join request]; ]
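For reference, a minimal sketch of what a Compose-based master/slave pairing could look like, reusing the image and the slave configuration path from the commands above (service names are illustrative and this is not the image's documented method):

elk-master:
  image: myprivaterepo/elk-extra:0.5
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
elk-slave:
  image: myprivaterepo/elk-extra:0.5
  volumes:
    - /home/elk-extra/ES_2.X/elasticsearch-slave.yml:/etc/elasticsearch/elasticsearch.yml
  links:
    - elk-master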

filebeat broken pipe (no TLS)

Hi,

I have created a docker-compose configuration with 2 containers:

  1. an ELK stack based on your docker image
  2. a tomcat container, containing one web application

I have an issue trying to send log events from container 2 to 1 using Filebeat.

2016/03/21 10:21:12.463875 output.go:87: DBG  output worker: publish 145 events
2016/03/21 10:21:12.464030 client.go:136: DBG  Try to publish 145 events to logstash with window size 10
2016/03/21 10:21:12.481885 client.go:95: DBG  close connection
2016/03/21 10:21:12.482323 client.go:114: DBG  0 events out of 145 events sent to logstash. Continue sending ...
2016/03/21 10:21:12.482460 single.go:76: INFO Error publishing events (retrying): write tcp 172.18.0.3:36052->172.18.0.2:5044: write: broken pipe
2016/03/21 10:21:12.482553 single.go:152: INFO send fail

I can ping the ELK container from tomcat without issues.
This is the filebeat config on tomcat container:

output:
  logstash:
    enabled: true
    hosts: [ "elk:5044" ]
    timeout: 15
    index: filebeat

filebeat:
  prospectors:
    -
      paths:
        - /usr/local/tomcat/logs/*

and this is the docker-compose.yml:

version: '2'
services:

    elk:
        image: sebp/elk
        ports:
            - "5601:5601"
            - "9200:9200"
            - "5044:5044"
            - "5000:5000"

    tomcat-ad:
        extends:
            file: common.yml
            service: tomcat7-jre7-baseline
        ports:
            - "8080:8080"
        volumes:
            - /home/ps/artifacts/webapp-1.0.1.war:/usr/local/tomcat/webapps/webapp.war
            - /home/ps/queues:/root/queues
        links:
            - elk

Do you have any suggestions?
Thanks, best regards

Domenico
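One possible cause, for context: the image's 02-beats-input.conf (quoted later on this page) enables SSL on port 5044, so a plain-TCP Filebeat connection gets dropped, which matches the broken-pipe symptom. A sketch of one fix is to point Filebeat at the image's certificate (it has to be copied into the tomcat container first; the path below is where the image ships it):

output:
  logstash:
    enabled: true
    hosts: [ "elk:5044" ]
    timeout: 15
    index: filebeat
    tls:
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt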

Missing documentation to change logstash configuration

Hi,

Could you provide some pointers as to the recommended way to change the Logstash configuration (in my case, I need to add the json filter for incoming events)?

When I was looking into this, I read that it is against recommended practice to log into the container and make the changes right there. Does this mean I would have to fork your image and replace the configuration file that is provided by default?

What would be a good way to set this up if I am in the process of creating a config file and need to edit and reapply my changes several times before the config is finished?

Thanks.
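A commonly used pattern, sketched under the assumption that the image reads every file in /etc/logstash/conf.d (the directory used throughout this page): bind-mount a host directory of pipeline files over it, so each edit only needs a container restart instead of an image rebuild. Note that this replaces the default inputs and outputs, so they have to be copied into the mounted directory too; /path/to/my-conf.d is a placeholder:

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 \
  -v /path/to/my-conf.d:/etc/logstash/conf.d \
  -it --name elk sebp/elk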

help

sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 -it --name elk sebp/elk

fails with the following error:

WARN[18707] exit status 1
WARN[18708] failed to cleanup ipc mounts:
failed to umount /var/lib/docker/containers/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/shm: invalid argument
failed to umount /var/lib/docker/containers/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/mqueue: invalid argument
ERRO[18708] Error unmounting device aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20: UnmountDevice: device not-mounted id aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20
ERRO[18708] Handler for POST /v1.21/containers/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/start returned error: Cannot start container aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20: [8] System error: mkdir /var/lib/docker/devicemapper/mnt/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/rootfs/sys/fs/cgroup: no such file or directory
ERRO[18708] HTTP Error err=Cannot start container aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20: [8] System error: mkdir /var/lib/docker/devicemapper/mnt/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/rootfs/sys/fs/cgroup: no such file or directory statusCode=500
Error response from daemon: Cannot start container aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20: [8] System error: mkdir /var/lib/docker/devicemapper/mnt/aa944b3c49ff034014d8c002d5528526c62de8204ce8b86402319146e45f2f20/rootfs/sys/fs/cgroup: no such file or directory

Add beats output routing

With the beats input plugin, sending the data directly to Elasticsearch works as expected with the sample dashboards. If you configure Beats to send data to the Logstash endpoint instead, the dashboards do not work, because there is no output routing to the correct index names.

Adding:

output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

to 02-beats-input.conf makes it work, although the naming of that file suggests that an output shouldn't be added there.

Ref: https://www.elastic.co/guide/en/beats/libbeat/1.0.1/getting-started.html#logstash-installation
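A small sketch of the same output moved into its own file, so 02-beats-input.conf keeps its naming convention (the 30- prefix is illustrative, mirroring the 30-lumberjack-output.conf mentioned in a Dockerfile later on this page):

# /etc/logstash/conf.d/30-beats-output.conf
output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}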

Beats plugin needs updating to 2.0.5

Sebastian,

I know your image targets logstash-forwarder, but I thought I'd share this. I am using your image as a base to test a new config with filebeat. I assume more and more people will be moving to filebeat, so maybe this will help someone else. I ran into this problem:

https://discuss.elastic.co/t/issue-with-filebeat-logstash-beats-input-unhandled-exception/33934

which I think is related to using TLS. I was able to fix it by upgrading the beats plugin as suggested by tudor here:

https://discuss.elastic.co/t/issue-with-filebeat-logstash-beats-input-unhandled-exception/33934/2

As a workaround for now, in my Dockerfile, I just added:

RUN cd /opt/logstash ;\
./bin/plugin update logstash-input-beats

-- Bud

doubt in ELK configuration

Hi,

I am new to ELK and I installed it following this guide:

https://timothy-quinn.com/running-the-elk-stack-on-centos-7-and-using-beats/

When I start the agent in the client I get the following error.

[root@ardc01zabbix certs]# service filebeat start
Starting filebeat: 2016/08/30 19:31:15.970602 transport.go:125: ERR SSL client failed to connect with: EOF
[ OK ]
[root@ardc01zabbix certs]# telnet 10.32.8.20 5044
Trying 10.32.8.20...
Connected to 10.32.8.20.
Escape character is '^]'.

^CConnection closed by foreign host.
[root@ardc01zabbix certs]#

I checked that port 5044 can get through to the master ELK server. I need help resolving this; I have tried everything.

Regards
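A hedged diagnostic step: an EOF on connect usually points at a TLS handshake problem rather than basic connectivity (which the telnet test already confirms), so inspecting what the Beats input presents on port 5044 shows which certificate is actually in play:

openssl s_client -connect 10.32.8.20:5044 -showcerts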

Cannot receive data with Kafka

Hi, I'm using this container (great work, btw) to receive JSON data from Kafka. For that, I installed the kafka input plugin as mentioned in the docs, and the plugin is registered, as I can see with logstash-plugin list.

My config file looks like this:

input {
    kafka {
        topic_id => 'collectortopic'
        zk_connect => '172.17.0.4:2181'
        type => 'kafka-input'
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        codec => json
    }
    stdout {
        codec => json
    }
}

and is added with ADD ./kafka-input.conf /etc/logstash/conf.d/kafka-input.conf in the Dockerfile.

My Kafka setup is fine, because I can send and receive data with other applications. But something in my ELK setup seems to be wrong, because I cannot receive any data. There is neither any output from Logstash in the console nor any data in Kibana, and no logstash index is created, which should be the default behavior according to the plugin docs.

The zk_connect is correct too, because otherwise I get exceptions ...

Any ideas?

Thanks in advance!
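A sketch of a stripped-down test configuration, assuming the Kafka messages are JSON: the same connection parameters as above, with the codec moved onto the input and a rubydebug stdout so anything consumed shows up in the container console:

input {
    kafka {
        topic_id => 'collectortopic'
        zk_connect => '172.17.0.4:2181'
        type => 'kafka-input'
        codec => json
    }
}
output {
    stdout {
        codec => rubydebug
    }
}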

Logstash doesn't release logs properly for log rotation

It seems that when log rotation runs and truncates the file (because copytruncate is set), the file is still open with the pointer at the old end, causing the null-bytes problem.

This causes the filesystem to think that the truncated file is still the same size, thus rotation happens every time even though the true file size (ls -lsh) is small.
Other programs (like nginx) fix this by sending a signal to the process so it will release the file.
I noticed that Logstash is started with nice, and that's where stdout and stderr are redirected to the log locations. There should be a way to send a signal to nice or the process it starts to enable this. If I find it, I'll submit a PR.

Auto Reload considered dangerous?

I just wanted to make you aware of another issue that I had with the 2.3.x series. There might be a resource leak in Logstash. The issue is being tracked here: elastic/logstash#5235.

Once I removed auto-reload from my Logstash config, my instance was fine. It's still running with a load average of 0.1 after 2 days.

SSL connection error with es230_l230_k450

I followed the instructions at readthedocs, sending logfiles with filebeat. With es221_l222_k442 logs are sent and processed:

using

filebeat -c /etc/filebeat/filebeat.yml -e -d "*"

2016/04/07 11:28:30.730432 client.go:90: DBG connect
2016/04/07 11:28:30.832034 outputs.go:126: INFO Activated logstash as output plugin.
2016/04/07 11:28:30.832063 publish.go:232: DBG Create output worker

After changing the Docker image to es230_l230_k450 (latest), Filebeat cannot connect anymore:

2016/04/07 11:28:34.671825 client.go:90: DBG connect
2016/04/07 11:28:34.672001 transport.go:125: ERR SSL client failed to connect with: dial tcp 192.168.12.66:5044: getsockopt: connection refused
2016/04/07 11:28:34.672008 single.go:126: INFO Connecting error publishing events (retrying): dial tcp 192.168.12.66:5044: getsockopt: connection refused
2016/04/07 11:28:34.672011 single.go:152: INFO send fail
2016/04/07 11:28:34.672015 single.go:159: INFO backoff retry: 2s

elk container error:

==> /var/log/kibana/kibana4.log <==
{"type":"log","@timestamp":"2016-04-07T11:28:10+00:00","tags":["status","plugin:elasticsearch","info"],"pid":211,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2016-04-07T11:28:13+00:00","tags":["status","plugin:elasticsearch","info"],"pid":211,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-04-07T11:28:33.553000+0000", :message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<NoMethodError: undefined method multi_filter' for nil:NilClass>, "backtrace"=>["(eval):191:in cond_func_4'", "org/jruby/RubyArray.java:1613:ineach'", "(eval):188:in cond_func_4'", "(eval):130:infilter_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:271:in filter_batch'", "org/jruby/RubyArray.java:1613:ineach'", "org/jruby/RubyEnumerable.java:852:in inject'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:269:infilter_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:227:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:instart_workers'"], :level=>:error} {:timestamp=>"2016-04-07T11:28:33.654000+0000", :message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<NoMethodError: undefined method multi_filter' for nil:NilClass>, "backtrace"=>["(eval):191:in cond_func_4'", "org/jruby/RubyArray.java:1613:ineach'", "(eval):188:in cond_func_4'", "(eval):130:infilter_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:271:in filter_batch'", "org/jruby/RubyArray.java:1613:ineach'", "org/jruby/RubyEnumerable.java:852:in inject'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:269:infilter_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:227:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:instart_workers'"], :level=>:error}

Can't run ELK cluster on different nodes

Hello, I am unable to start an ELK cluster. On the first EC2 instance, I have this config:

node.master: 'true'
node.data: 'true'
network.bind_host: 172.31.3.111 #ec2 internal ip addres, first node
network.publish_host: 172.31.3.111
network.host: 172.31.3.111
transport.tcp.port: '9300'
http.port: '9200'
discovery.zen.ping.timeout: '3s'
discovery.zen.ping.unicast.hosts: [172.31.3.111, 172.31.3.112]

On the second host, I just changed the IP addresses:

node.master: 'true'
node.data: 'true'
network.bind_host: 172.31.3.112 #ec2 internal ip address, 2nd node
network.publish_host: 172.31.3.112
network.host: 172.31.3.112
transport.tcp.port: '9300'
http.port: '9200'
discovery.zen.ping.timeout: '3s'
discovery.zen.ping.unicast.hosts: [172.31.3.111, 172.31.3.112] #ec2 internal ip addresses

I also tried with the settings:

network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: [172.31.3.111, 172.31.3.112]

on both machines, but it still doesn't work; Elasticsearch doesn't start.

  • Starting Elasticsearch Server sysctl: setting key "vm.max_map_count": Read-only file system
    [ OK ]
    waiting for Elasticsearch to be up (1/30)
    waiting for Elasticsearch to be up (2/30)
    [... repeated for each attempt ...]
    waiting for Elasticsearch to be up (30/30)
    logstash started.
  • Starting Kibana4 [ OK ]
    ==> /var/log/elasticsearch/elasticsearch.log <==
    at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:

Do you have any suggestions?

Thanks in advance.
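A hedged observation: inside a container the EC2-internal address is not a local interface, so binding network.host to it fails. A sketch of a configuration that usually sidesteps this (not the image's documented method) is to bind to all interfaces, publish the host's address, and map the transport port when running the container:

network.host: 0.0.0.0
network.publish_host: 172.31.3.111        # this node's EC2-internal IP
transport.tcp.port: '9300'
http.port: '9200'
discovery.zen.ping.timeout: '3s'
discovery.zen.ping.unicast.hosts: ["172.31.3.111", "172.31.3.112"]

# with 9300 published alongside the usual ports:
# docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -it --name elk sebp/elk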

Too many open files

Hi,

Elasticsearch is failing to load the shard due to too many open files:

[2016-07-12 16:43:23,040][WARN ][indices.cluster          ] [She-Hulk] [[filebeat-2016.04.12][0]] marking and sending shard failed due to [failed recovery]
[filebeat-2016.04.12][[filebeat-2016.04.12][0]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileSystemException[/var/lib/elasticsearch/elasticsearch/nodes/0/indices/filebeat-2016.04
.12/0/index: Too many open files];
        at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
        at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
        at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: [filebeat-2016.04.12][[filebeat-2016.04.12][0]] EngineCreationFailureException[failed to create engine]; nested: FileSystemException[/var/lib/elasticsearch/elasticsearch/nodes/0/indices/filebeat-2016.04.12/0/index: Too many open files];
        at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:155)
        at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
        at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1510)
        at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1494)
        at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:969)
        at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:941)
        at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
        ... 5 more
Caused by: java.nio.file.FileSystemException: /var/lib/elasticsearch/elasticsearch/nodes/0/indices/filebeat-2016.04.12/0/index: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:426)
        at java.nio.file.Files.newDirectoryStream(Files.java:413)
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:190)
        at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:202)
        at org.elasticsearch.index.store.FsDirectoryService$1.listAll(FsDirectoryService.java:127)
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
        at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:811)
        at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:1088)
        at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:149)
        ... 11 more

And it causes Kibana to show Elasticsearch as red, with "Elasticsearch is still initializing the kibana index."

I tried multiple ways to increase the allowed open-files count, but nothing seems to work. Has anyone bumped into this and found a solution? It seems that deleting the .kibana index and starting over works, but since this is the second time it has happened, I'd like a more permanent solution.

Thanks!
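One knob worth trying, as a sketch: raising the container's file-descriptor limit with docker run's --ulimit flag (the value mirrors the 65536 that Elasticsearch's own startup warning, quoted in another issue on this page, asks for):

docker run --ulimit nofile=65536:65536 \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 \
  -it --name elk sebp/elk:es233_l232_k451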

port 9300 is not exposed

Hey,

Thanks for your contribution! I have been using your image for a while and it has worked well for me.

I now want to use my ELK instance as a data store for a Scala project, but the library uses TCP connections to Elasticsearch rather than HTTP, so port 9300 is needed.

I have created pull request #2; could you please merge it so I can continue to use your image? 😄
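As a stopgap sketch until the change is merged: -p can publish a port even if the image does not EXPOSE it, so the transport port can already be mapped at run time:

docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -p 5000:5000 \
  -it --name elk sebp/elk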

Log data doesn't persist after removing and rebuilding container

After inserting dummy log data into Logstash, I am able to see the logs in Kibana as well as at http://elk:9200/_search?pretty.

But when I run docker-compose kill followed by docker-compose rm and then rebuild using docker-compose up, the old dummy logs are not persisted in the VOLUME /var/lib/elasticsearch.

I have to insert new dummy logs for Kibana to re-create the logstash-* index pattern, and the old logs are not there.

Am I not supposed to run docker-compose rm? I thought volumes persisted unless you explicitly run docker-compose rm -v...
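A hedged sketch of one way to keep the data: the VOLUME instruction only creates an anonymous volume, and a re-created container gets a fresh one (docker-compose rm -v also deletes the old one outright), so the data appears to vanish. Declaring a named volume, or a host bind mount, keeps it across re-creations (the volume name is illustrative; Compose file format 2 syntax):

version: '2'
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
      - "5000:5000"
    volumes:
      - elk-data:/var/lib/elasticsearch
volumes:
  elk-data: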

Logstash tcp/udp inputs stop working after container restart

Hi! Awesome image, like it very much.
But I stumbled upon an issue: after I stop and start the container again, Logstash doesn't receive data on TCP or UDP inputs (I tried both).

my logstash-tcp.conf:

input {
    udp {
    port => 25000
        codec => json
    }
}

output {
  elasticsearch { host => localhost }
}

Dockerfile:

FROM sebp/elk

ENV ES_HOME /usr/share/elasticsearch
WORKDIR ${ES_HOME}

RUN bin/plugin -i royrusso/elasticsearch-HQ
RUN bin/plugin -i mobz/elasticsearch-head

RUN rm -f /etc/logstash/conf.d/01-lumberjack-input.conf
RUN rm -f /etc/logstash/conf.d/10-syslog.conf
RUN rm -f /etc/logstash/conf.d/11-nginx.conf
RUN rm -f /etc/logstash/conf.d/30-lumberjack-output.conf
ADD ./logstash-tcp.conf /etc/logstash/conf.d/logstash-tcp.conf

EXPOSE 25000/udp

Maybe I'm doing something wrong.
I send logs via a Logstash appender.

Connecting custom container to elk

Hi,

first of all, thanks for your outstanding work.
I am trying to connect my custom container to your ELK one to allow log collecting. In my custom container, all apps log to stdout, so I would like to simply pipe container logs into the ELK, without installing filebeat on the container itself.
I've tried with this docker-compose file:

version: '2'
services: 
    elk: 
        image: sebp/elk
        ports:
            - "5601:5601"
            - "9200:9200"
            - "5044:5044"
            - "5000:5000"
    tomcat-ad: 
        image: dr/my-tomcat
        ports: 
            - "8080:8080"
        links: 
            - elk
        logging: 
            driver: gelf
            options:  
                gelf-address: "udp://elk:5044"
                gelf-tag: "test-elk-tag"

but I get an error:

ps@ps-vm:~$ docker-compose up -d   
Creating ps_elk_1
Creating ps_tomcat-ad_1
ERROR: Failed to initialize logging driver: gelf: cannot connect to GELF endpoint: elk:5044 dial udp: lookup elk on 10.25.7.4:53: server misbehaving

Seems like the "elk" hostname is somehow not recognized, so the docker host tries to reach my network DNS server.
What do you think? Is this approach possible?
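A hedged reading of the error, plus a sketch of an alternative: gelf-address is resolved by the Docker daemon on the host, not inside the Compose network, so the elk service name is not visible to it; and 5044 is the image's Beats input, not a GELF one. Publishing a dedicated UDP port, pointing the driver at the host address, and adding a gelf input to Logstash is one possible arrangement (port 12000 and the file name are illustrative):

# logging section of the tomcat-ad service (only the address changes):
        logging:
            driver: gelf
            options:
                gelf-address: "udp://127.0.0.1:12000"
                gelf-tag: "test-elk-tag"

# extra published port on the elk service:
#             - "12000:12000/udp"

# /etc/logstash/conf.d/03-gelf-input.conf
input {
  gelf {
    port => 12000
  }
}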

Issue when installing Kibana plugin Sense

Hi,

When I try to install the Kibana plugin Sense with this Dockerfile

FROM sebp/elk

WORKDIR ${KIBANA_HOME}
RUN gosu kibana bin/kibana plugin -i elastic/sense

I have these errors:

Step 3 : RUN gosu kibana bin/kibana plugin -i elastic/sense
 ---> Running in 662d1a08d470
Installing sense
Attempting to transfer from https://download.elastic.co/elastic/sense/sense-latest.tar.gz
Error: Client request error: getaddrinfo ENOTFOUND download.elastic.co download.elastic.co:443
Plugin installation was unsuccessful due to error "Client request error: getaddrinfo ENOTFOUND         download.elastic.co download.elastic.co:443"
ERROR: Service 'elastic' failed to build: The command '/bin/sh -c gosu kibana bin/kibana plugin -i elastic/sense' returned a non-zero code: 70
Cedric-Vidal:elasticsearch mtournaud$ docker volume create elk-data
docker: "volume create" requires 0 arguments.

Does anybody have a solution?
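The ENOTFOUND points at DNS resolution failing inside the build container rather than at the plugin itself. A sketch of one workaround (the nameservers are illustrative; the daemon needs a restart afterwards, and on older daemons the equivalent is the --dns flag in the daemon's options):

# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}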

filebeats from remote not validating with cert

First I created a new set of certs with:

sudo openssl req -x509 -batch -nodes -subj "/CN=elk/" -days 3650 -newkey rsa:2048 -keyout private/logstash-beats.key -out certs/logstash-beats.crt

I took your image and made a Dockerfile with:

FROM sebp/elk
MAINTAINER Alan Blount
ADD certs /etc/pki/tls/certs

And it starts up fine...

From a remote host machine I'm getting the following error:

# /etc/init.d/filebeat restart
 * Restarting Sends log files to Logstash or directly to Elasticsearch. filebeat
2016/01/22 06:23:58.932905 transport.go:125: ERR SSL client failed to connect with: crypto/rsa: verification error
   ...done.

# grep cert /etc/filebeat/filebeat.yml
      certificate_authorities:
      - /etc/pki/tls/certs/logstash-beats.crt

# md5sum /etc/pki/tls/certs/logstash-beats.crt
e9ec6a39d449c680a08ed866ab8171ae  /etc/pki/tls/certs/logstash-beats.crt

and on the ELK machine, from inside the container (via docker exec ... bash)

# md5sum /etc/pki/tls/certs/logstash-beats.crt
e9ec6a39d449c680a08ed866ab8171ae  /etc/pki/tls/certs/logstash-beats.crt

Can you think of any reason I'd be getting this error?
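A hedged observation that may explain it: the openssl command above writes a new key into private/ and a new certificate into certs/, but the Dockerfile only ADDs the certs directory, so Logstash keeps serving the image's original private key together with the new certificate; a mismatched pair is consistent with a crypto/rsa verification error. A sketch with both halves added:

FROM sebp/elk
MAINTAINER Alan Blount
ADD certs /etc/pki/tls/certs
ADD private /etc/pki/tls/private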

Compatible with Java 8?

I downloaded and installed ELK-Docker according to the instructions at http://elk-docker.readthedocs.io/#installation. Went fine; things running via Docker on Windows (!).

I am trying to connect ElasticSearch to Microsoft SQL Server so as to evaluate Kibana as a possible alternative to Dashboard engines such as Tableau.

The JDBC Importer (https://github.com/jprante/elasticsearch-jdbc) was designed to connect ElasticSearch to SQL Server. However the JDBC Importer requires Java 8.

Is it the case that ELK-Docker does not support Java 8? I'm more of a Microsoft stack guy so it's quite possible I messed something up; but please confirm the version of Java needed for ELK-Docker.

I'm getting this error message when running the JDBC Importer :

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/xbib/tools/Runner : Unsupported major.minor version 52.0.

The JDBC Importer Issues link on Github says that error is due to Java 8 not being present.
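For reference: major.minor version 52.0 corresponds to Java 8, so the error means the importer's classes need Java 8 but were run with an older JVM. A quick check of what the running container actually provides:

docker exec -it elk java -version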

filebeat broken pipe

Hi,

This is probably not your issue so feel free to tell me to create an issue on the elastic filebeat repo.

I've had your docker image working and had filebeat working fine.

However, after stopping Filebeat and the Docker container and starting it all up again, Filebeat sees a file but then throws errors which I don't understand; I was wondering if you'd seen this before?

2016/01/11 16:50:45.526612 registrar.go:83: INFO Starting Registrar
2016/01/11 16:50:45.526678 filebeat.go:122: INFO Start sending events to output
2016/01/11 16:50:45.526749 spooler.go:77: INFO Starting spooler: spool_size: 1024; idle_timeout: 5s
2016/01/11 16:50:45.526801 log.go:62: INFO Harvester started for file: /persist/blah/2015-12-01.log
2016/01/11 16:50:48.028564 single.go:75: INFO Error publishing events (retrying): write tcp 127.0.0.1:42956->127.0.0.1:5044: write: broken pipe
2016/01/11 16:50:48.028744 single.go:143: INFO send fail
2016/01/11 16:50:48.028802 single.go:150: INFO backoff retry: 1s
2016/01/11 16:50:49.030793 single.go:75: INFO Error publishing events (retrying): write tcp 127.0.0.1:42968->127.0.0.1:5044: write: broken pipe
2016/01/11 16:50:49.030878 single.go:143: INFO send fail
2016/01/11 16:50:49.030953 single.go:150: INFO backoff retry: 2s
2016/01/11 16:50:51.033594 single.go:75: INFO Error publishing events (retrying): read tcp 127.0.0.1:42980->127.0.0.1:5044: read: connection reset by peer
2016/01/11 16:50:51.033879 single.go:143: INFO send fail

Certificate issue with filebeat agent running inside nginx-filebeat container

Hi,

I'm getting the following error when I run the nginx-filebeat Docker container:

2016/06/10 12:36:46.192860 transport.go:125: ERR SSL client failed to connect with: x509: cannot validate certificate for 192.168.99.100 because it doesn't contain any IP SANs

Is there something wrong with the cert file (elk-docker/nginx-filebeat/logstash-beats.crt)?

Any help would be much appreciated.

Thanks,

zzkhan
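For context, the bundled certificate appears to be generated with CN=elk and no IP subjectAltName (compare the openssl command quoted in the "filebeats from remote not validating with cert" issue above), so connecting by IP address fails Go's certificate verification. A sketch of regenerating the pair with the Docker Machine IP as a SAN; this is one of several possible OpenSSL invocations and an assumption rather than the project's documented procedure:

openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -keyout logstash-beats.key -out logstash-beats.crt \
  -subj "/CN=elk/" -extensions v3_req \
  -config <(printf "[req]\ndistinguished_name=dn\n[dn]\n[v3_req]\nsubjectAltName=IP:192.168.99.100\n")

The new key and certificate would then have to replace the ones in the ELK image, with the certificate copied to the Filebeat side as in that earlier issue.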

Connecting non-beats source with appropriate Index

Great stack, thanks

I've got other services in my setup now, for example one using this python log shipper (json to 5001/tcp) https://github.com/vklochan/python-logstash

The elk-docker output.conf file is trying to shoehorn everything into a beats index, by the looks of it:

index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" document_type => "%{[@metadata][type]}"

I've tried updating this with my newbie logstash experience using:

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => false
    if [@metadata][beat] =~ "filebeat-" {
      mutate { replace => { index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" } }
      mutate { replace => { document_type => "%{[@metadata][type]}" } }
    } else {
      mutate { replace => { index => "other" } }
      mutate { replace => { document_type => "stuff" } }
    }
  }
  stdout { codec => rubydebug }
}

But I keep getting:

{:timestamp=>"2016-03-28T11:25:30.259000+0000", :message=>"Error: Expected one of #, => at line 54, column 8 (byte 1197) after output {\n elasticsearch {\n hosts => ["localhost"]\n sniffing => true\n manage_template => false\n if "}

in the Logstash logs. Can you help me make this do what I'm trying to do, or suggest a better way if I'm approaching this wrong?

Thanks
Brad
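The "Expected one of #, =>" error comes from the conditional being placed inside the elasticsearch {} block; Logstash conditionals can only wrap whole plugin blocks, not individual settings. A sketch of the same intent with the condition moved outside (the "other" index name is illustrative):

output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => ["localhost"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost"]
      index => "other-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}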

logstash is not running on centos 7

Thanks for putting this solution together.
logstash version 1.5.2

centos7 (LXC)

LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/opt/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/data/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/opt/logstash/conf.d
LS_OPEN_FILES=16384
LS_NICE=19

LS_OPTS=""

I added the logstash-init script via the Dockerfile; after building the image and trying to start it, I get an error.
The conf files are in place; I extracted them from a zip archive.
By the way, I am using supervisor to monitor this script's process.

{
[root@log /]# /usr/local/bin/logstash start
touch: cannot touch '/data/log/logstash/logstash.log': No such file or directory
chown: cannot access '/data/log/logstash/logstash.log': No such file or directory
logstash started.
[root@log /]# /usr/local/bin/logstash: line 57: /data/log/logstash/logstash.stdout: No such file or directory
}
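The touch/chown failures say that /data/log/logstash (the LS_LOG_DIR above) does not exist in the image. A minimal sketch of a fix in the Dockerfile:

RUN mkdir -p /data/log/logstash \
 && chown -R logstash:logstash /data/log/logstash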

Root Disk is 80% full

@spujadas

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G   15G  4.0G  79% /

I logged into the running container, and df there shows /dev/vda1 (mounted at /etc/hosts) at 79% used.

/dev/vda1     20G   15G  4.0G  79% /etc/hosts


5997716 /
3244584 /var
2760688 /var/lib
2484128 /var/lib/docker
2448948 /var/lib/docker/volumes
2340880 /var/lib/docker/volumes/a41835047f86ef6f868e93315dfc2d1fe5760c75e390f754311d8a9e27e0be39
2340876 /var/lib/docker/volumes/a41835047f86ef6f868e93315dfc2d1fe5760c75e390f754311d8a9e27e0be39/_data
2328580 /var/lib/docker/volumes/a41835047f86ef6f868e93315dfc2d1fe5760c75e390f754311d8a9e27e0be39/_data/ibdata1
2304924 /usr
796992  /usr/lib

Any thoughts on how to clean up?
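A hedged reading of the du output: the largest consumer is an ibdata1 file inside a Docker volume (i.e. another container's data, likely InnoDB/MySQL), not the ELK container itself. If that volume really is unused, the standard CLI can inspect and remove it; double-check first, since removal destroys the data:

docker volume ls
docker volume inspect a41835047f86ef6f868e93315dfc2d1fe5760c75e390f754311d8a9e27e0be39
docker volume rm a41835047f86ef6f868e93315dfc2d1fe5760c75e390f754311d8a9e27e0be39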

Kibana log files

I've left our Docker container running doing nothing for a few days and found that Kibana had stopped. I'm not sure why, which is worrying. I see that Kibana v4 doesn't log errors to a file by default.

Is it worth telling it to in the Dockerfile, a bit like you've done for Logstash, so it writes to /var/log/kibana?

kibana -l /var/log/kibana

filebeat forward logs

I'm trying to forward old logs from Tomcat to Logstash with Filebeat.

I used the following tutorial to set up Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html
I tried it on Windows, where I guess it's not working for me, and I also tried it on Voyage Linux (Debian).
On Linux I used sudo /etc/init.d/filebeat start to start Filebeat. It gave me a blank page. But on the second try it said that Filebeat was already running. However, http://192.168.99.100:9200/_search?pretty only shows my example logs from this part of the docs: http://elk-docker.readthedocs.org/#creating-a-dummy-log-entry

My filebeat.yml:

  logstash:
    enabled: true
    hosts:
      - 192.168.99.100:5044

    #i don't know where to get the beats.crt from
    #tls:
    #  certificate_authorities:
    #    - /etc/pki/tls/certs/logstash-beats.crt
    timeout: 15

filebeat:
  prospectors:
    -
      paths:
        - "/log/*.log"
      document_type: tomcat-access

My Logstash container logs show nothing from the date when I tried it.
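On the commented-out tls section: the certificate it refers to ships inside the image, so one hedged option is to copy it out of the running ELK container (assuming the container is named elk) and point filebeat.yml at the copy:

docker cp elk:/etc/pki/tls/certs/logstash-beats.crt ./logstash-beats.crt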

Tag images

Hi,

We made a change to our Docker image, which is based on yours, so it pulled your latest with the ELK bits upgraded, and then Kibana just kept saying it was rebuilding the index. Not sure why, so we nuked our Docker volume and started again, as we didn't need the data.

However, going forward this might be an issue once we have stored data. Would it be possible for you to tag your images on Docker Hub so that if we do pull latest and things go wrong, we can fall back to an image tag we know worked? At the moment you just have latest, so there is no way to fall back.

Thanks 😄

Elasticsearch stops after a while

Hi,

I'm running sebp/elk:es233_l232_k451 on a micro EC2. I notice that after the container runs for a while (a few hours), the Elasticsearch service inside the container stops.

Here is /var/log/elasticsearch/elasticsearch.log

[2016-07-18 16:07:45,242][INFO ][node                     ] [Chthon] version[2.3.3], pid[34], build[218bdf1/2016-05-17T15:40:04Z]
[2016-07-18 16:07:45,242][INFO ][node                     ] [Chthon] initializing ...
[2016-07-18 16:07:46,005][INFO ][plugins                  ] [Chthon] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-07-18 16:07:46,046][INFO ][env                      ] [Chthon] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/disk/by-uuid/23a4139b-b0c3-4cb9-aa4c-620243691435)]], net usable_space [4.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[2016-07-18 16:07:46,046][INFO ][env                      ] [Chthon] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-07-18 16:07:46,046][WARN ][env                      ] [Chthon] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-18 16:07:48,545][INFO ][node                     ] [Chthon] initialized
[2016-07-18 16:07:48,545][INFO ][node                     ] [Chthon] starting ...
[2016-07-18 16:07:48,684][INFO ][transport                ] [Chthon] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2016-07-18 16:07:48,689][INFO ][discovery                ] [Chthon] elasticsearch/jdVkwiCJQTanDdNhCFYLew
[2016-07-18 16:07:51,738][INFO ][cluster.service          ] [Chthon] new_master {Chthon}{jdVkwiCJQTanDdNhCFYLew}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-18 16:07:51,757][INFO ][http                     ] [Chthon] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2016-07-18 16:07:51,757][INFO ][node                     ] [Chthon] started
[2016-07-18 16:07:51,852][INFO ][gateway                  ] [Chthon] recovered [5] indices into cluster_state
[2016-07-18 16:07:55,029][INFO ][cluster.routing.allocation] [Chthon] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2016.07.04][3], [logstash-2016.07.04][3]] ...]).
[2016-07-18 16:08:57,312][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] creating index, cause [auto(bulk api)], templates [filebeat], shards [5]/[1], mappings [_default_, access]
[2016-07-18 16:08:57,568][INFO ][cluster.routing.allocation] [Chthon] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2016.07.18][4], [filebeat-2016.07.18][4]] ...]).
[2016-07-18 16:08:57,796][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] update_mapping [access]
[2016-07-18 16:08:59,268][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] create_mapping [passenger]

But both the Kibana and Logstash services are still running. I have no idea why this happened. Please help.
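A hedged line of investigation: a micro instance has about 1 GB of RAM and the log above shows an Elasticsearch heap of 1015.6 MB, so the kernel OOM killer silently terminating the Elasticsearch process is a plausible culprit. Checking the host's kernel log, and capping the heap (ES_HEAP_SIZE is the variable Elasticsearch 2.x reads; whether this image's start script passes it through is an assumption worth verifying), are reasonable first steps:

dmesg | grep -i -E "killed process|out of memory"
docker run -e ES_HEAP_SIZE="512m" \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 \
  -it --name elk sebp/elk:es233_l232_k451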

logstash not starting with init.d script

With the Docker Hub version of the image, I'm having a problem getting Logstash to start using the init.d script. It seems to work fine when launched from the command line with:

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

Here's what's in the log files:

root@460d0f13460f:/# cat /var/log/logstash/logstash.stdout
(Prepend -J in front of these options when using 'jruby' command)

root@460d0f13460f:/# cat /var/log/logstash/logstash.err
WARNING: Default JAVA_OPTS will be overridden by the JAVA_OPTS defined in the environment. Environment JAVA_OPTS are -Djava.io.tmpdir=/opt/logstash
Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)
where options include:
    -d32          use a 32-bit data model if available
    -d64          use a 64-bit data model if available
    -server       to select the "server" VM
    -zero         to select the "zero" VM
    -jamvm        to select the "jamvm" VM
    -avian        to select the "avian" VM
    -dcevm        to select the "dcevm" VM
                  The default VM is server,
                  because you are running on a server-class machine.


    -cp <class search path of directories and zip/jar files>
    -classpath <class search path of directories and zip/jar files>
                  A : separated list of directories, JAR archives,
                  and ZIP archives to search for class files.
    -D<name>=<value>
                  set a system property
    -verbose:[class|gc|jni]
                  enable verbose output
    -version      print product version and exit
    -version:<value>
                  require the specified version to run
    -showversion  print product version and continue
    -jre-restrict-search | -no-jre-restrict-search
                  include/exclude user private JREs in the version search
    -? -help      print this help message
    -X            print help on non-standard options
    -ea[:<packagename>...|:<classname>]
    -enableassertions[:<packagename>...|:<classname>]
                  enable assertions with specified granularity
    -da[:<packagename>...|:<classname>]
    -disableassertions[:<packagename>...|:<classname>]
                  disable assertions with specified granularity
    -esa | -enablesystemassertions
                  enable system assertions
    -dsa | -disablesystemassertions
                  disable system assertions
    -agentlib:<libname>[=<options>]
                  load native agent library <libname>, e.g. -agentlib:hprof
                  see also, -agentlib:jdwp=help and -agentlib:hprof=help
    -agentpath:<pathname>[=<options>]
                  load native agent library by full pathname
    -javaagent:<jarpath>[=<options>]
                  load Java programming language agent, see java.lang.instrument
    -splash:<imagepath>
                  show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.

I pulled/ran the image with:

$ docker pull sebp/elk
Using default tag: latest
latest: Pulling from sebp/elk
Digest: sha256:388c55c8451719aabd73d6f5e2e3e9d480f707238d958bd5ddb54ce4997b18dc
Status: Image is up to date for sebp/elk:latest

$ docker run --rm -p 5601:5601 -p 9200:9200 -p 5000:5000 -it --name elk sebp/elk
 * Starting Elasticsearch Server                                       [OK]
sysctl: setting key "vm.max_map_count": Read-only file system
logstash started.

waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)

Weird behavior on ELK stack.

I have a question about Logstash in particular. We are working on manipulating the data, but when we started to test and make changes to the config files, we realized that the Logstash agent only parses log files with a recent date (tested with the touch command on the log file and by making a copy of the same file). Is this expected behavior? And in that scenario, is it possible to reconfigure the agent or change the parameters during the reading process so that older files are considered?
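If the inputs in question are file inputs, a hedged sketch of settings that make Logstash pick up older files: start_position and sincedb_path are long-standing file-input options, ignore_older exists in recent versions of the plugin (check the installed version), and the path and values are illustrative:

input {
  file {
    path => "/var/log/myapp/*.log"
    start_position => "beginning"
    ignore_older => 2592000       # seconds (~30 days), if the plugin version supports it
    sincedb_path => "/dev/null"   # forget read positions between runs (useful while testing)
  }
}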

cron won't start causing logrotate to never run

My system was brought down the other day by logstash creating a 20GB log file in /var/log/logstash/logstash.stdout in less than 24h.

After some debugging I realized the cron daemon isn't running, so logrotate never runs.
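Two quick checks from a shell inside the container to confirm and temporarily work around the diagnosis (Ubuntu-style service commands; the image appears to be Ubuntu-based, judging by the init scripts referenced on this page):

service cron status
service cron start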

Make service UIDs/GIDs fixed

FYI, upgrading from es232_l232_k450 to es234_l234_k453 I noticed that the UID of the elasticsearch user changed. I have mounted host filesystem directories for logs and data. After updating I did not get any relevant message (permission denied) in the logs or on the Docker console.

Finally, after tweaking the Elasticsearch init script to avoid daemon mode, I got the relevant error.

I think users should perhaps have fixed UIDs, to avoid this when the image changes?

source: https://hub.docker.com/r/sebp/elk/
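A sketch of one way to pin the IDs from a derived image until or unless they are fixed upstream; the numeric values are illustrative, and existing files have to be re-chowned to match:

FROM sebp/elk
RUN usermod -u 991 elasticsearch \
 && groupmod -g 991 elasticsearch \
 && chown -R elasticsearch:elasticsearch /var/lib/elasticsearch /var/log/elasticsearch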

Filebeat 1.1.0 multiline support

I'm using filebeat to ship multiline logs into logstash per your documentation, but was running into an issue when I realized that each line (like from a stack trace) was being considered as an individual log event.

I thought adding a multiline codec to the beats input filter would do the trick, but it didn't seem to have any effect at all:

#02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }

}

Thankfully this issue was recently being talked about here: https://github.com/elastic/filebeat/issues/301 and support announced here: https://www.elastic.co/blog/beats-1-1-0-and-winlogbeat-released.

So I just wanted to bring to your attention that it might be worth updating your docs to include the most recent version of filebeat and maybe a line about multiline support:
https://download.elastic.co/beats/filebeat/filebeat_1.1.0_amd64.deb
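For completeness, a sketch of the shipper-side equivalent in Filebeat 1.1+ using the multiline options introduced in that release (the path, and the plain regex standing in for the TIMESTAMP_ISO8601 grok pattern, are illustrative):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/*.log
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after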

Kibana gets stuck loading after version es231_l231_k450

Hi there,

First, thanks for all the effort in the project.
I'm trying to use a version after es231_l231_k450 because I would like to use gosu to install some logstash plugins.
The problem is that with any version after es231_l231_k450 (es233_l232_k451, for example), Kibana gets stuck on the "Kibana is loading. Give me a moment here" loading page.

I can also repro this using latest. es231_l231_k450 works fine; Kibana loads very quickly.

Any ideas?
Thanks!

java client API

Not sure whether this is related to your work.
Is the client installed?
Which port is it running on (I assume 9200)? I also tried port 9300.

I started the Docker instance on 192.168.99.100,
can connect to the client on port 9200 and Kibana on port 5601,
and created some logs as instructed in the docs.

Both the server and client versions are 2.3.1 (Maven):

 <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>2.3.1</version>
        </dependency>

Testing the connection from the Java API like so:

        Settings settings = Settings.settingsBuilder()
                .put("client.transport.sniff", true)
                .put("client.transport.ignore_cluster_name", true)
                .build();

        this.client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("192.168.99.100"), 9200));

I get an error on connection:

INFO: [Yandroth] modules [], plugins [], sites []
Apr 29, 2016 12:08:36 PM org.elasticsearch.client.transport.TransportClientNodesService$SniffNodesSampler$1$1 handleException
INFO: [Yandroth] failed to get local cluster state for {#transport#-1}{192.168.99.100}{192.168.99.100:9200}, disconnecting...
ReceiveTimeoutTransportException[[][192.168.99.100:9200][cluster:monitor/state] request_id [0] timed out after [5002ms]]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

and without the settings

        this.client = TransportClient.builder()
                .build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("192.168.99.100"), 9200));

slightly different connection error...

INFO: [Colonel America] modules [], plugins [], sites []
Apr 29, 2016 12:23:44 PM org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler doSample
INFO: [Colonel America] failed to get node info for {#transport#-1}{192.168.99.100}{192.168.99.100:9200}, disconnecting...
ReceiveTimeoutTransportException[[][192.168.99.100:9200][cluster:monitor/nodes/liveness] request_id [0] timed out after [5003ms]]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Any help appreciated, thanks.
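A hedged note that may explain both traces: the TransportClient speaks Elasticsearch's native transport protocol, which the cluster serves on port 9300, whereas 9200 is the HTTP port, so pointing the client at 9200 times out exactly as shown. Port 9300 also has to be published by the container (see the "port 9300 is not exposed" issue above). The only change to the snippet would be the port:

        // same builder as above, but targeting the transport port
        this.client = TransportClient.builder().build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("192.168.99.100"), 9300));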

Can't send filebeat to logstash

I've tried hooking up to my elk-docker container from 2 different hosts (a filebeat container on another server, and even just running filebeat on my laptop).

I'm seeing the exact same issue from both -- I'm not sure if this is an issue with my network settings (ports are fine though...) or with the ELK config.

Here's where things seem to go wrong in the filebeat client logs (nothing of interest in the elk-docker logs), and it keeps backing off and retrying:

2016/02/17 05:05:02.403280 client.go:136: DBG  Try to publish 1922 events to logstash with window size 10
2016/02/17 05:05:02.436776 client.go:95: DBG  close connection
2016/02/17 05:05:02.436862 client.go:114: DBG  0 events out of 1922 events sent to logstash. Continue sending ...
2016/02/17 05:05:02.436881 single.go:76: INFO Error publishing events (retrying): lumberjack protocol error
2016/02/17 05:05:02.436894 single.go:152: INFO send fail
2016/02/17 05:05:02.436906 single.go:159: INFO backoff retry: 1s

Been trying to get this working for hours, so any ideas are helpful! Sorry if this issue has nothing to do with your image.
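One hedged avenue: a "lumberjack protocol error" with zero events accepted can come from a version mismatch between Filebeat and the logstash-input-beats plugin, as reported in the "Beats plugin needs updating to 2.0.5" issue above; the workaround quoted there, applied in a derived Dockerfile, would be:

RUN cd /opt/logstash ;\
    ./bin/plugin update logstash-input-beats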

Help Installing Marvel

I have added these lines to the very bottom of my Dockerfile in my fork (kenwdelong/elk-docker):

ENV ES_HOME /usr/share/elasticsearch
WORKDIR ${ES_HOME}
RUN bin/plugin install license
RUN bin/plugin install marvel-agent
WORKDIR ${KIBANA_HOME}
RUN bin/kibana plugin --install elasticsearch/marvel/2.3.1

Now Kibana won't start. It won't start using "service kibana start" or "start.sh". No error messages are printed anywhere, even after adding -v to start_stop_daemon. Nothing at all is ever printed to kibana4.log. I chown'd the marvel directory to kibana:kibana, no effect.

However, if I run $KIBANA_HOME/bin/kibana, it all works fine.

I've done everything I could think of, and googled all over the place, and I'm out of ideas. I know this isn't strictly an issue with this image, but I was hoping someone watching this board would have an idea for me!
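One hedged debugging step: running Kibana the same way the image's scripts do (as the kibana user, via gosu, as in the Dockerfile snippets elsewhere on this page) sometimes surfaces the error that the init script swallows:

gosu kibana $KIBANA_HOME/bin/kibana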

using log-driver with the ssl-termination around logstash

I'm trying to receive logs from a container running on a remote server using the Docker log driver (syslog); however, Logstash does not seem to accept anything but SSL (correct?).

I haven't found a way to make the Docker log driver use SSL yet. Is there a simple way to disable the SSL requirement in Logstash? In the near future I will probably use a VPN or a tunnel of some sort anyway; I just need this to be open for now while testing and configuring.
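Rather than relaxing SSL on the Beats input, a sketch of a separate plain syslog input that the Docker syslog log driver could target while testing; the port and file name are illustrative, and the port also has to be published (e.g. -p 5514:5514 and -p 5514:5514/udp):

# /etc/logstash/conf.d/04-syslog-input.conf
input {
  syslog {
    port => 5514
  }
}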

Keys Generation

@spujadas How are you generating these keys? Are these done manually?

#certs/keys for Beats and Lumberjack input
RUN mkdir -p /etc/pki/tls/certs && mkdir /etc/pki/tls/private
ADD ./logstash-forwarder.crt /etc/pki/tls/certs/logstash-forwarder.crt
ADD ./logstash-forwarder.key /etc/pki/tls/private/logstash-forwarder.key
ADD ./logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt
ADD ./logstash-beats.key /etc/pki/tls/private/logstash-beats.key

Thanks,
Govind
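For reference, a self-signed pair like the ones ADDed above can be produced with a single OpenSSL command; this mirrors the invocation quoted in the "filebeats from remote not validating with cert" issue earlier on this page (the CN and lifetime are choices, not requirements):

openssl req -x509 -batch -nodes -subj "/CN=elk/" -days 3650 \
  -newkey rsa:2048 \
  -keyout logstash-beats.key -out logstash-beats.crt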
