sample-code's People

Contributors

aetter, alolita, carlmeadows, dandydeveloper, dcuenot, derekheldtwerle, elfisher, gaiksaya, hyandell, jon-ataws, miro-grapeup, mlapointe22, peterzhuamazon, rishabh6788, saadrana219, sendkb, seneccamiller, sruti1312, xom4ek


sample-code's Issues

Move masters to StatefulSet

I think there is a problem with the master configuration that would cause issues if all master nodes were lost simultaneously:

  • The data/client nodes would require restarting, as Zen doesn't appear to re-query the elasticsearch-discovery DNS record after initial startup, so they wouldn't be able to find the newly provisioned master pods (pires/kubernetes-elasticsearch-cluster#197)
  • Masters store state on disk; if this is lost, it can cause serious data loss across the cluster. Currently this state is written directly into the container filesystem, which is also bad practice (PVs should be used).

Both of these issues would be resolved by moving the masters to a StatefulSet, with a small PV attached to each and publishNotReadyAddresses: true on the discovery service.

If there is agreement I'm happy to implement.
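A sketch of what that could look like (the names, image tag, and storage size here are illustrative, not taken from this repo's manifests):

```yaml
# Headless discovery service: publishNotReadyAddresses lets data/client nodes
# resolve master pods even before they pass readiness checks.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  selector:
    app: es-master
  ports:
    - port: 9300
      name: transport
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master
spec:
  serviceName: elasticsearch-discovery
  replicas: 3
  selector:
    matchLabels:
      app: es-master
  template:
    metadata:
      labels:
        app: es-master
    spec:
      containers:
        - name: elasticsearch
          image: amazon/opendistro-for-elasticsearch:0.9.0
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:   # one small PV per master pod, so state survives pod loss
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```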

Email alerts not sending

Hi,

I am trying to configure Open Distro Alerting. The problem is that when I configure an Amazon SNS destination, the emails are not sent. Do I need a specific IAM role ARN and a specific SNS topic ARN?

Thanks

No RPM available for separated Kibana plugins

The RPM that Open Distro provides bundles Kibana together with the Open Distro plugins for Kibana, and the two cannot be separated. This means that we cannot provide our own Kibana distribution, and also that we cannot install the plugins individually.

This is something we can already do with the Open Distro ES plugins, because the global RPM references the Elasticsearch RPM and the Open Distro plugins separately.

Tracking upstream elasticsearch?

What's the plan to track upstream releases of elasticsearch? Will your version numbers match theirs with a patch number after it? Is the plan documented somewhere?

This is going to be really important, as Elastic releases versions with new features and breaking changes frequently.

Issues with opendistro-for-elasticsearch behind haproxy with logstash

I'm having a weird issue where logstash refuses to work with opendistro when going through haproxy. We have other elasticsearch clusters that aren't opendistro that work fine with this setup.

All servers involved: Ubuntu 16.04
Logstash: 1:6.5.4-1
Haproxy: 1.6.14-1ppa1~xenial
elasticsearch-oss: 6.7.1
opendistroforelasticsearch: 0.9.0-1
opendistroforelasticsearch-kibana: 0.9.0

I can curl the elasticsearch backend through haproxy from the logstash hosts with no problem, using http with the opendistro_security plugin removed from both elasticsearch and kibana.

I also have it working currently by pointing logstash directly at one of the nodes on port 9200. This works, but I really need to point at the haproxy server so I have load balancing and fault tolerance.
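For context, the "plaintext connection?" error usually means one hop speaks plain HTTP where TLS is expected. A sketch of one setup that avoids it, assuming haproxy passes TLS through in TCP mode and logstash talks HTTPS end-to-end (hostnames, addresses, credentials, and certificate paths are placeholders):

```
# haproxy.cfg — TCP passthrough, so the TLS session terminates at elasticsearch
frontend es_front
    bind *:9200
    mode tcp
    default_backend es_back

backend es_back
    mode tcp
    server es01 10.0.0.11:9200 check
    server es02 10.0.0.12:9200 check
```

```
# logstash output — note the https scheme and the CA used to verify node certs
output {
  elasticsearch {
    hosts    => ["https://haproxy.example.com:9200"]
    user     => "logstash"
    password => "logstash"
    ssl      => true
    cacert   => "/etc/logstash/root-ca.pem"
  }
}
```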

The error I see in the logstash logs, which causes it to never start listening on the filebeat input port:

May 20 13:02:08 logstash01 logstash[18414]: [2019-05-20T13:02:08,098][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#LogStash::OutputDelegator:0x61d086c7", :error=>"Unrecognized SSL message, plaintext connection?", :thread=>"#<Thread:0x703cd44f run>"}
May 20 13:02:10 logstash01 logstash[18414]: [2019-05-20T13:02:10,010][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:245:in block in healthcheck!'", "org/jruby/RubyHash.java:1343:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241:in healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:341:in update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:71:in start'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:302:in build_pool'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:103:in
May 20 13:02:10 logstash01 logstash[18414]: [2019-05-20T13:02:10,020][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

Let kibana use index timezone

Hi,

There is a configuration option in Kibana that lets you choose between a fixed timezone for all users or the timezone of each user's browser.
I have seen some conversations online where this is a limitation: if your data is produced across multiple timezones, there is no real way to graph it other than creating one index per timezone.
I would like to open the discussion and propose, as a possible enhancement, adding the ability to let Kibana use the timezone stored in a field of the index.

What do you think of this?

TLS Certificate and key permissions in Docker

When using a host-mounted volume for the certificate files, the files require mode 0644 to work with both the elasticsearch and kibana images. How can we bind-mount these files without making the key readable by everyone?
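One approach, rather than 0644, is to give the key to the container's user and keep it 0600. A sketch, assuming the image runs elasticsearch as uid/gid 1000 (verify with `docker run --rm <image> id` first — the actual uid may differ):

```yaml
# On the host, before starting (assuming container user is uid/gid 1000):
#   chown 1000:1000 certs/esnode-key.pem
#   chmod 0600 certs/esnode-key.pem
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:0.9.0
    volumes:
      # read-only bind mounts; only uid 1000 on the host can read the key
      - ./certs/esnode.pem:/usr/share/elasticsearch/config/esnode.pem:ro
      - ./certs/esnode-key.pem:/usr/share/elasticsearch/config/esnode-key.pem:ro
```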

Kibana to Elasticsearch authentication failure (opendistro debian packages).

Hi,
First of all, thank you for providing debian packages...
I have used them to install the Open Distro Elasticsearch and Kibana flavors, but I am failing to make them communicate (the operating system is Ubuntu 18.04).
In the elasticsearch security plugin config I had at first set up the basic_internal_auth_domain and saml_auth_domain authentication, then reverted to simply using the basic_internal_auth_domain, but with no better luck.
When starting the Kibana server, I end up with error messages like this one in the elasticsearch logs:

 Authentication finally failed for kibana from 127.0.0.1:39354

This makes me think that the Kibana server is trying to connect as user kibana instead of kibanaserver, as stated in /etc/kibana/kibana.yml.
If that's the case, the failure makes sense, as only kibanauser is set up in internal_users.yml.
I have confirmed that the configuration file is loaded as stating a faulty elasticsearch server there triggers errors.
I had added the Kibana configuration for SAML while trying to set up that authentication as explained in the documentation.
Removing it didn't seem to have any effect.

Making curl requests as kibanaserver to list the installed plugins succeeds...
Making curl requests with a wrong password returns no answer and logs a message indicating that authentication did fail for kibanaserver.

I am out of easy tricks...
Where can I set the user that should be used, or which password should be associated with a kibana user to be created?

Thanks and kind regards.
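Not a confirmed fix, but if Kibana really is connecting as kibana, the settings to check in /etc/kibana/kibana.yml would be these (values are illustrative; the password must match the hash stored for that user in internal_users.yml):

```yaml
# Credentials Kibana uses for its own backend requests to Elasticsearch
elasticsearch.url: https://localhost:9200
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.ssl.verificationMode: none  # demo certificates only; use a real CA in production
```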

Installing Kibana plugins

I'm trying to install different Kibana plugins on top of a running Open Distro Kibana docker container. They all fail with the same error; below is one of the attempts:

sh-4.2$ ./bin/kibana-plugin install https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.5.4/analyze-api-ui-plugin-6.5.4.zip
Attempting to transfer from https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.5.4/analyze-api-ui-plugin-6.5.4.zip
Transferring 1876039 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation was unsuccessful due to error "Command failed: /usr/share/kibana/node/bin/node /usr/share/kibana/src/cli --env.name=production --optimize.useBundleCache=false --server.autoListen=false --plugins.initialize=false
(node:282) [DEP0022] DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`

{"type":"log","@timestamp":"2019-04-03T06:01:07Z","tags":["info","optimize"],"pid":282,"message":"Optimizing and caching bundles for stateSessionStorageRedirect, status_page, timelion, kibana, analyze-api-ui-plugin, security-login, security-customerror, security-multitenancy, security-accountinfo, security-configuration and opendistro-alerting. This may take a few minutes"}

My docker-compose.yml :

version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:0.7.1
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.7.1
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://odfe-node1:9200
    networks:
      - odfe-net

volumes:
  odfe-data1:
  odfe-data2:

networks:
  odfe-net:

Please advise.
Thanks

Helm Chart

It would be great if there was a helm chart so you could quickly launch opendistro-for-elasticsearch on eks or other Kubernetes clusters.

How can the transport client access a secured Open Distro cluster?

I want to use the TransportClient to access an ES cluster that has the Open Distro plugin installed.
Here is my code:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Base64;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public static TransportClient initESClient() throws UnknownHostException {
        Settings settings = Settings.builder()
                .put("cluster.name", clusterName)
                .put("client.transport.sniff", false)
                .put("client.transport.ping_timeout", "10s")
                .put("client.transport.ignore_cluster_name", true)
                .build();
        client = new PreBuiltTransportClient(settings);
        // Attach basic-auth credentials as a header on the client's thread context
        client.threadPool().getThreadContext().putHeader("Authorization",
                "Basic " + Base64.getEncoder().encodeToString("admin:Huawei@12345".getBytes()));
        System.out.println(Base64.getEncoder().encodeToString("admin:Admin@12345".getBytes()));
        client.addTransportAddress(new TransportAddress(InetAddress.getByName(clusterAddress), 9300));
        return client;
    }

It doesn't seem to work. Where can I find an example of how to use the transport client to access an ES cluster with the Open Distro plugin?
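Not an official example, but one quick sanity check outside Java is to confirm that the header value being sent is standard Basic auth, i.e. the base64 of user:password:

```shell
# The Authorization header value is base64("user:password").
# For example, for the demo credentials admin:admin:
printf 'admin:admin' | base64
# -> YWRtaW46YWRtaW4=
```

You can then try the same credentials against the REST layer (e.g. curl -k -u admin:admin https://localhost:9200) to separate authentication problems from transport-client problems.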

Multiple host deploy examples

Hi, I was wondering if someone has already tried a multiple-host deployment and has any docker-compose examples for this.

It would be really helpful to have it on the docs.

Thanks!
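Not from the docs, but one common pattern for spanning hosts without an overlay network is host networking plus unicast discovery. A sketch per host (the IPs are placeholders):

```yaml
# docker-compose.yml on host A (10.0.0.11); mirror it on host B with 10.0.0.12
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:0.9.0
    network_mode: host                    # expose 9200/9300 directly on the host
    environment:
      - cluster.name=odfe-cluster
      - network.publish_host=10.0.0.11    # this host's routable address
      - discovery.zen.ping.unicast.hosts=10.0.0.11,10.0.0.12
      - discovery.zen.minimum_master_nodes=2
```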

Does opendistro support multiple instances of Elasticsearch?

I'm working with multiple instances of Elasticsearch on the same host.
It always loads the default settings when I start the service.

/etc/sysconfig/master-node-0_elasticsearch

ES_HOME=/usr/share/elasticsearch
CONF_DIR=/etc/elasticsearch/master-node-0
ES_PATH_CONF=/etc/elasticsearch/master-node-0
DATA_DIR=/var/lib/elasticsearch/10.49.113.9-master-node-0
LOG_DIR=/var/log/elasticsearch/10.49.113.9-master-node-0
PID_DIR=/var/run/elasticsearch/10.49.113.9-master-node-0
ES_JVM_OPTIONS=/etc/elasticsearch/master-node-0/jvm.options
ES_USER=elasticsearch
ES_GROUP=elasticsearch
ES_STARTUP_SLEEP_TIME=5
MAX_OPEN_FILES=65536
MAX_LOCKED_MEMORY=unlimited
MAX_MAP_COUNT=262144
MAX_THREADS=2048

/etc/elasticsearch/master-node-0/elasticsearch.yml

bootstrap.memory_lock: true
cluster.name: opendistro-cluster-preprod
cluster.routing.allocation.same_shard.host: true
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts:
- 10.49.113.9
http.compression: true
http.port: '9200'
network.bind_host:
- local:ipv4
- site:ipv4
network.publish_host: site:ipv4
node.data: false
node.ingest: false
node.master: true
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.authcz.admin_dn:
- CN=Administrator,OU=.,O=.,DC=.,DC=.,C=US
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.enterprise_modules_enabled: false
opendistro_security.nodes_dn:
- CN=10.49.,OU=.,O=.,DC=.,DC=.*,C=US
opendistro_security.restapi.roles_enabled:
- all_access
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: 10.49.113.9.pem
opendistro_security.ssl.http.pemkey_filepath: 10.49.113.9.key
opendistro_security.ssl.http.pemkey_password: compass
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.transport.pemcert_filepath: 10.49.113.9.pem
opendistro_security.ssl.transport.pemkey_filepath: 10.49.113.9.key
opendistro_security.ssl.transport.pemkey_password: compass
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.resolve_hostname: false
node.name: 10.49.113.9-master-node-0
path.conf: /etc/elasticsearch/master-node-0
path.data: /var/lib/elasticsearch/10.49.113.9-master-node-0
path.logs: /var/log/elasticsearch/10.49.113.9-master-node-0
action.auto_create_index: true

/etc/elasticsearch/master-node-0/jvm.options

-Xms31g
-Xmx31g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-server
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-Djdk.io.permissionsUseCanonicalPath=true
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
-XX:+HeapDumpOnOutOfMemoryError

The Elasticsearch process runs with these options:

495 30799 1 3 04:25 ? 00:00:37 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-5890626230362802382 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Dclk.tck=100 -Djdk.attach.allowAttachSelf=true -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=oss -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/10.49.113.9-master-node-0/master-node-0_elasticsearch.pid -d

You can see the invalid options: the process runs with -Xms1g -Xmx1g instead of the -Xms31g -Xmx31g configured in the custom jvm.options (the bold highlighting was lost in this paste).

Zen discovery doesn't work

Hi,

Big fan here. Been meaning to use ODFE since you guys released the first version, and now I just got the chance.

However, there's a very strange issue. I can't seem to use the docker-compose.yml here to produce a 2-node ODFE cluster on Docker running on a RHEL server (2 vCPU, 8 GB RAM).

https://aws.amazon.com/blogs/opensource/running-open-distro-for-elasticsearch/

Expected: 2-node cluster ODFE
What happened: 2 single-node clusters, with odfe-node2 not accessible from outside its container (due to an unbound port)

Here is the _cat/nodes?v output from the dev tools in Kibana:
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.21.0.3 41 97 44 2.96 4.99 6.67 mdi * iyJUkY9

I can docker exec -it <containerId> /bin/bash into each container and ping the other host (e.g. ping odfe-node1 from the odfe-node2 container and vice versa). However, zen discovery never even starts.
Thanks a lot!
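Not a confirmed fix, but for a 2-node compose cluster both containers usually need matching discovery settings. A sketch of the relevant environment entries (service names assumed to be odfe-node1/odfe-node2, as in the blog post's compose file):

```yaml
# On both services, so each node can find the other at startup:
environment:
  - cluster.name=odfe-cluster
  - discovery.zen.ping.unicast.hosts=odfe-node1,odfe-node2
  - discovery.zen.minimum_master_nodes=2   # quorum of 2 masters: (2 / 2) + 1
```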

This is the log for odfe-node2:

[root@sin0r3app2183xd odfe]# docker logs 1fc91edf4ee6
OpenDistro for Elasticsearch Security Demo Installer
 ** Warning: Do not use on production or public reachable systems **
Basedir: /usr/share/elasticsearch
Elasticsearch install type: rpm/deb on CentOS Linux release 7.6.1810 (Core)
Elasticsearch config dir: /usr/share/elasticsearch/config
Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
Elasticsearch bin dir: /usr/share/elasticsearch/bin
Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
Elasticsearch lib dir: /usr/share/elasticsearch/lib
Detected Elasticsearch Version: x-content-6.7.1
Detected Open Distro Security Version: 0.9.0.0

### Success
### Execute this script now on all your nodes and then start all nodes
### Open Distro Security will be automatically initialized.
### If you like to change the runtime configuration
### change the files in ../securityconfig and execute:
"/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh" -cd "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig" -icl -key "/usr/share/elasticsearch/config/kirk-key.pem" -cert "/usr/share/elasticsearch/config/kirk.pem" -cacert "/usr/share/elasticsearch/config/root-ca.pem" -nhnv
### or run ./securityadmin_demo.sh
### To use the Security Plugin ConfigurationGUI
### To access your secured cluster open https://<hostname>:<HTTP port> and log in with admin/admin.
### (Ignore the SSL certificate warning because we installed self-signed demo certificates)
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-06-06T15:59:31,791][INFO ][o.e.e.NodeEnvironment    ] [WbbXwRt] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [50gb], net total_space [79.4gb], types [xfs]
[2019-06-06T15:59:31,797][INFO ][o.e.e.NodeEnvironment    ] [WbbXwRt] heap size [495.3mb], compressed ordinary object pointers [true]
[2019-06-06T15:59:31,801][INFO ][o.e.n.Node               ] [WbbXwRt] node name derived from node ID [WbbXwRtEQwGkmkAjzk7Enw]; set [node.name] to override
[2019-06-06T15:59:31,801][INFO ][o.e.n.Node               ] [WbbXwRt] version[6.7.1], pid[1], build[oss/tar/2f32220/2019-04-02T15:59:27.961366Z], OS[Linux/3.10.0-957.1.3.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-06-06T15:59:31,802][INFO ][o.e.n.Node               ] [WbbXwRt] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-1856553924822379077, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-06-06T15:59:39,418][INFO ][c.a.o.e.p.c.PluginSettings] [WbbXwRt] loading config ...
[2019-06-06T15:59:39,419][INFO ][c.a.o.e.p.c.PluginSettings] [WbbXwRt] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1
[2019-06-06T15:59:42,252][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] ES Config path is /usr/share/elasticsearch/config
[2019-06-06T15:59:42,969][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] OpenSSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: io.netty.internal.tcnative.SSL
[2019-06-06T15:59:44,251][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] JVM supports TLSv1.3
[2019-06-06T15:59:44,252][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] Config directory is /usr/share/elasticsearch/config/, from there the key- and truststore files are resolved relatively
[2019-06-06T15:59:47,408][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] TLS Transport Client Provider : JDK
[2019-06-06T15:59:47,408][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] TLS Transport Server Provider : JDK
[2019-06-06T15:59:47,408][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] TLS HTTP Provider             : JDK
[2019-06-06T15:59:47,408][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] Enabled TLS protocols for transport layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-06-06T15:59:47,409][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [WbbXwRt] Enabled TLS protocols for HTTP layer      : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-06-06T15:59:50,894][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] Clustername: odfe-cluster
[2019-06-06T15:59:51,089][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] Directory /usr/share/elasticsearch/config has insecure file permissions (should be 0700)
[2019-06-06T15:59:51,089][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/elasticsearch.yml has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,097][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/log4j2.properties has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,097][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/kirk.pem has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,097][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/esnode.pem has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,098][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/root-ca.pem has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,098][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/esnode-key.pem has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,098][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] File /usr/share/elasticsearch/config/kirk-key.pem has insecure file permissions (should be 0600)
[2019-06-06T15:59:51,854][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [aggs-matrix-stats]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [analysis-common]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [ingest-common]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [ingest-geoip]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [ingest-user-agent]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [lang-expression]
[2019-06-06T15:59:51,855][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [lang-mustache]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [lang-painless]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [mapper-extras]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [parent-join]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [percolator]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [rank-eval]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [reindex]
[2019-06-06T15:59:51,856][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [repository-url]
[2019-06-06T15:59:51,857][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [transport-netty4]
[2019-06-06T15:59:51,874][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded module [tribe]
[2019-06-06T15:59:51,875][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded plugin [opendistro_alerting]
[2019-06-06T15:59:51,875][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded plugin [opendistro_performance_analyzer]
[2019-06-06T15:59:51,875][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded plugin [opendistro_security]
[2019-06-06T15:59:51,875][INFO ][o.e.p.PluginsService     ] [WbbXwRt] loaded plugin [opendistro_sql]
[2019-06-06T15:59:51,950][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] Disabled https compression by default to mitigate BREACH attacks. You can enable it by setting 'http.compression: true' in elasticsearch.yml
[2019-06-06T16:00:16,787][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Configured categories on rest layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-06-06T16:00:16,788][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Configured categories on transport layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-06-06T16:00:16,788][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Configured Users to ignore: [kibanaserver]
[2019-06-06T16:00:16,788][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Configured Users to ignore for read compliance events: [kibanaserver]
[2019-06-06T16:00:16,788][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Configured Users to ignore for write compliance events: [kibanaserver]
[2019-06-06T16:00:16,841][INFO ][c.a.o.s.a.i.AuditLogImpl ] [WbbXwRt] Message routing enabled: true
[2019-06-06T16:00:16,847][WARN ][c.a.o.s.c.ComplianceConfig] [WbbXwRt] If you plan to use field masking pls configure opendistro_security.compliance.salt to be a random string of 16 chars length identical on all nodes
[2019-06-06T16:00:16,847][INFO ][c.a.o.s.c.ComplianceConfig] [WbbXwRt] PII configuration [auditLogPattern=org.joda.time.format.DateTimeFormatter@65b73689,  auditLogIndex=null]: {}
[2019-06-06T16:00:18,323][INFO ][o.e.d.DiscoveryModule    ] [WbbXwRt] using discovery type [zen] and host providers [settings]
[2019-06-06T16:00:21,618][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [WbbXwRt] PerformanceAnalyzer Enabled: true
Registering Handler
[2019-06-06T16:00:21,838][INFO ][o.e.n.Node               ] [WbbXwRt] initialized
[2019-06-06T16:00:21,846][INFO ][o.e.n.Node               ] [WbbXwRt] starting ...
[2019-06-06T16:00:22,722][INFO ][o.e.t.TransportService   ] [WbbXwRt] publish_address {172.21.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-06-06T16:00:22,760][INFO ][o.e.b.BootstrapChecks    ] [WbbXwRt] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-06T16:00:22,815][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [WbbXwRt] Check if .opendistro_security index exists ...
[2019-06-06T16:00:23,476][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector MasterServiceMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,514][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NodeDetails is still in progress, so skipping this Interval
[2019-06-06T16:00:23,514][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NetworkInterfaceCollector is still in progress, so skipping this Interval
[2019-06-06T16:00:23,515][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector ThreadPoolMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,515][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector HeapMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,515][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,515][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NodeStatsMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,515][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector MasterServiceEventMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:23,516][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector CircuitBreaker is still in progress, so skipping this Interval
[2019-06-06T16:00:23,516][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector DisksCollector is still in progress, so skipping this Interval
[2019-06-06T16:00:23,516][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NetworkE2ECollector is still in progress, so skipping this Interval
[2019-06-06T16:00:24,480][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector MasterServiceMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,480][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NodeDetails is still in progress, so skipping this Interval
[2019-06-06T16:00:24,480][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NetworkInterfaceCollector is still in progress, so skipping this Interval
[2019-06-06T16:00:24,480][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector ThreadPoolMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,480][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector HeapMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NodeStatsMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector MasterServiceEventMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector CircuitBreaker is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector DisksCollector is still in progress, so skipping this Interval
[2019-06-06T16:00:24,481][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NetworkE2ECollector is still in progress, so skipping this Interval
[2019-06-06T16:00:25,482][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:25,482][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector NetworkE2ECollector is still in progress, so skipping this Interval
[2019-06-06T16:00:26,461][INFO ][o.e.c.s.MasterService    ] [WbbXwRt] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {WbbXwRt}{WbbXwRtEQwGkmkAjzk7Enw}{iqod2-4-Rl2JEtmTTwhiMg}{172.21.0.2}{172.21.0.2:9300}
[2019-06-06T16:00:26,542][INFO ][o.e.c.s.ClusterApplierService] [WbbXwRt] new_master {WbbXwRt}{WbbXwRtEQwGkmkAjzk7Enw}{iqod2-4-Rl2JEtmTTwhiMg}{172.21.0.2}{172.21.0.2:9300}, reason: apply cluster state (from master [master {WbbXwRt}{WbbXwRtEQwGkmkAjzk7Enw}{iqod2-4-Rl2JEtmTTwhiMg}{172.21.0.2}{172.21.0.2:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceEventMetrics (file:/usr/share/elasticsearch/plugins/opendistro_performance_analyzer/opendistro_performance_analyzer-0.9.0.0.jar) to field java.util.concurrent.ThreadPoolExecutor.workers
WARNING: Please consider reporting this to the maintainers of com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceEventMetrics
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[2019-06-06T16:00:27,061][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [WbbXwRt] .opendistro_security index does not exist yet, so we create a default config
[2019-06-06T16:00:27,136][INFO ][o.e.h.n.Netty4HttpServerTransport] [WbbXwRt] publish_address {172.21.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-06-06T16:00:27,136][INFO ][o.e.n.Node               ] [WbbXwRt] started
[2019-06-06T16:00:27,136][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [WbbXwRt] 4 Open Distro Security modules loaded so far: [Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions], Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl]]
[2019-06-06T16:00:27,208][INFO ][o.e.g.GatewayService     ] [WbbXwRt] recovered [0] indices into cluster_state
[2019-06-06T16:00:27,227][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [WbbXwRt] Will create .opendistro_security index so we can apply default config
[2019-06-06T16:00:27,492][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:27,688][WARN ][o.e.t.OutboundHandler    ] [WbbXwRt] send message failed [channel: Netty4TcpChannel{localAddress=/172.21.0.2:33268, remoteAddress=odfe-node1/172.21.0.3:9300}]
javax.net.ssl.SSLException: SSLEngine closed already
        at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source) ~[?:?]
[2019-06-06T16:00:28,221][INFO ][o.e.c.m.MetaDataCreateIndexService] [WbbXwRt] [.opendistro_security] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2019-06-06T16:00:28,325][INFO ][o.e.c.r.a.AllocationService] [WbbXwRt] updating number_of_replicas to [0] for indices [.opendistro_security]
[2019-06-06T16:00:29,504][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:00:31,267][INFO ][o.e.c.r.a.AllocationService] [WbbXwRt] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).
[2019-06-06T16:00:31,324][INFO ][c.a.o.s.s.ConfigHelper   ] [WbbXwRt] Will update 'config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
[2019-06-06T16:00:32,396][INFO ][o.e.c.m.MetaDataMappingService] [WbbXwRt] [.opendistro_security/0BCHDZE1T7yhtmoejzC0Ug] create_mapping [security]
[2019-06-06T16:00:34,282][INFO ][c.a.o.s.s.ConfigHelper   ] [WbbXwRt] Will update 'roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
[2019-06-06T16:00:34,427][INFO ][o.e.c.m.MetaDataMappingService] [WbbXwRt] [.opendistro_security/0BCHDZE1T7yhtmoejzC0Ug] update_mapping [security]
[2019-06-06T16:00:34,975][INFO ][c.a.o.s.s.ConfigHelper   ] [WbbXwRt] Will update 'rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
[2019-06-06T16:00:35,155][INFO ][o.e.c.m.MetaDataMappingService] [WbbXwRt] [.opendistro_security/0BCHDZE1T7yhtmoejzC0Ug] update_mapping [security]
[2019-06-06T16:00:35,535][INFO ][c.a.o.s.s.ConfigHelper   ] [WbbXwRt] Will update 'internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
[2019-06-06T16:00:35,668][INFO ][o.e.c.m.MetaDataMappingService] [WbbXwRt] [.opendistro_security/0BCHDZE1T7yhtmoejzC0Ug] update_mapping [security]
[2019-06-06T16:00:36,293][INFO ][c.a.o.s.s.ConfigHelper   ] [WbbXwRt] Will update 'actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
[2019-06-06T16:00:36,497][INFO ][o.e.c.m.MetaDataMappingService] [WbbXwRt] [.opendistro_security/0BCHDZE1T7yhtmoejzC0Ug] update_mapping [security]
[2019-06-06T16:00:37,179][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [WbbXwRt] Default config applied
[2019-06-06T16:00:38,436][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [WbbXwRt] Node 'WbbXwRt' initialized
[2019-06-06T16:00:39,504][INFO ][o.e.m.j.JvmGcMonitorService] [WbbXwRt] [gc][17] overhead, spent [363ms] collecting in the last [1s]
[2019-06-06T16:07:19,664][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector OSMetrics is still in progress, so skipping this Interval
[2019-06-06T16:08:36,360][INFO ][c.a.o.e.p.c.ScheduledMetricCollectorsExecutor] [WbbXwRt] Collector MasterServiceEventMetrics is still in progress, so skipping this Interval
[2019-06-06T16:08:36,368][WARN ][o.e.m.j.JvmGcMonitorService] [WbbXwRt] [gc][young][489][13] duration [1.5s], collections [1]/[1.9s], total [1.5s]/[3.6s], memory [212.8mb]->[82.1mb]/[495.3mb], all_pools {[young] [131.3mb]->[766.8kb]/[133.1mb]}{[survivor] [7.9mb]->[8mb]/[16.6mb]}{[old] [73.4mb]->[73.4mb]/[345.6mb]}
[2019-06-06T16:08:36,376][WARN ][o.e.m.j.JvmGcMonitorService] [WbbXwRt] [gc][489] overhead, spent [1.5s] collecting in the last [1.9s]

Update 1: It works as expected in my Docker for Windows. It's just the RHEL host that's acting funny.

Build instructions?

It would be useful to have build instructions in the documentation so that people can build this themselves.

Provide upgrade assistant feature

Provide a Kibana UI that runs validation checks on an existing 6.7 domain, flagging compatibility issues with 7.x so they can be corrected before upgrading.

LICENSE link in README.md is 404

The LICENSE link in README.md returns a 404.
Either place a LICENSE file in this directory, or point at the LICENSE file in the root of the repo.

Cannot change Kibana IP

I just installed opendistro via RPM. The default setting appears to have Kibana listening on 127.0.0.1. Where can I change this IP address so that it is accessible on my network as 0.0.0.0 or any other address? I noticed the /etc/kibana/kibana.yml file does not have the usual port and host entries. I added them, made sure they were uncommented, and restarted the services and even the server, but they had no effect. Is this setting saved elsewhere?

This is on a clean minimal CentOS 7 install with no firewall or other software installed. Thanks
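For reference, binding Kibana to all interfaces is normally controlled by `server.host` in /etc/kibana/kibana.yml; a minimal sketch (values are examples — restart the kibana service after editing):

```yaml
# /etc/kibana/kibana.yml — minimal network settings (example values)
server.host: "0.0.0.0"   # listen on all interfaces instead of 127.0.0.1
server.port: 5601        # default Kibana port
```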

Tenant indices migration failed

After rebooting my RPM install, I am now getting a Kibana Status is Yellow. It is showing Plugin Status as

plugin:[email protected] | Tenant indices migration failed

and will not let me switch to other pages within Kibana. How might I fix this error? Thanks

Error when indexing new data

Hi, I'm getting this error when trying to index into an existing index. Any ideas what could be happening?

[2019-06-13T15:27:03,585][INFO ][o.e.m.j.JvmGcMonitorService] [xTpc1RF] [gc][22863] overhead, spent [301ms] collecting in the last [1s]

Does opendistro support ILM

Hello :)

Does Open Distro support ILM (Index Lifecycle Management)?

I tried _ilm/status and _ilm/start but I get an error. I thought ILM worked without X-Pack.

opendistro version: "number" : "6.7.1",

Cheers :)

Tar ball releases

There is already an issue for alternative distro packages. In addition, tar/zip files are a must. These are Java-based products, meaning that for the most part they will just run without specific binary dependencies, so a simple zip file allows the broadest possible platform usage, including Windows. This is also the model that Elastic themselves use, alongside the other download formats they offer.

elasticsearch-oss versions causing dependency issues

Hi,

reference to -> opendistro/for-elasticsearch-docs#53 (comment)

There is an issue with installing/upgrading the opendistroforelasticsearch package via yum on CentOS 7, as the elasticsearch-oss repo is far ahead with its EL versions, which causes problems during the installation/upgrade process.

This is an attempt to get this addressed and solved properly without breaking dependencies of the OS package manager.

It would be great to get packages built and provided via
https://d3g5vo6xdbdb9a.cloudfront.net/yum/noarch/
(maybe a human-readable hostname could be used) that can handle elasticsearch-oss versions that are ahead and don't cause dependency issues.

Workaround

Please note that --skip-broken is not the correct solution to this issue!

The correct way, which avoids any side effects, is to install (upgrade/downgrade to) the specific version of elasticsearch-oss required by opendistroforelasticsearch before installing (upgrading) opendistroforelasticsearch.

check available versions of elasticsearch-oss

One can get the available versions via the repoquery command:

$ repoquery --show-duplicates elasticsearch-oss
elasticsearch-oss-0:6.3.0-1.noarch
elasticsearch-oss-0:6.3.1-1.noarch
elasticsearch-oss-0:6.3.2-1.noarch
elasticsearch-oss-0:6.4.0-1.noarch
elasticsearch-oss-0:6.4.1-1.noarch
elasticsearch-oss-0:6.4.2-1.noarch
elasticsearch-oss-0:6.4.3-1.noarch
elasticsearch-oss-0:6.5.0-1.noarch
elasticsearch-oss-0:6.5.1-1.noarch
elasticsearch-oss-0:6.5.2-1.noarch
elasticsearch-oss-0:6.5.3-1.noarch
elasticsearch-oss-0:6.5.4-1.noarch
elasticsearch-oss-0:6.6.0-1.noarch
elasticsearch-oss-0:6.6.1-1.noarch
elasticsearch-oss-0:6.6.2-1.noarch
elasticsearch-oss-0:6.7.0-1.noarch
elasticsearch-oss-0:6.7.1-1.noarch
elasticsearch-oss-0:7.0.0-1.x86_6
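To narrow that listing to the release line you need, the output can be filtered with grep; a small sketch (the version list is inlined here as sample data — in practice you would pipe `repoquery --show-duplicates elasticsearch-oss` straight into the `grep`):

```shell
# Filter a repoquery-style version list down to the 6.6.x line.
# The printf stands in for the real repoquery output.
printf '%s\n' \
  'elasticsearch-oss-0:6.5.4-1.noarch' \
  'elasticsearch-oss-0:6.6.0-1.noarch' \
  'elasticsearch-oss-0:6.6.2-1.noarch' \
  'elasticsearch-oss-0:6.7.1-1.noarch' \
  | grep -F 'elasticsearch-oss-0:6.6.'
```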

install proper matching version

in this case version 6.6.2

yum clean all
yum install elasticsearch-oss-6.6.2 logstash-6.6.2
yum install opendistroforelasticsearch opendistroforelasticsearch-kibana

downgrade to specific version

If one has used --skip-broken or installed a higher version by mistake, you can downgrade to a specific version via:

yum downgrade elasticsearch-oss-6.6.2 logstash-6.6.2

Keep in mind that all the *-oss components should be on the same version (Elasticsearch, Kibana, Logstash, etc.).

Please consider the compatibility matrix from Elastic -> https://www.elastic.co/support/matrix#matrix_compatibility

A compatibility matrix for Opendistro would be very much appreciated.

thanks

Kibana Console Requests Fail

Issue: After submitting a request to the console I receive the following as a response no matter what is submitted.

{
"statusCode": 504,
"error": "Gateway Time-out",
"message": "Client request timeout"
}

As I am not sure where the problem lies I am submitting the issue here. The requests that are failing look like: https://kibana.demo.flexentialps.com/api/console/proxy?path=_template&method=GET

It is the only thing thats currently failing. My cluster is setup with OIDC authentication and logins / roles / tenants work but this will just consistently fail. I enabled tracing on Kibana and see the following:

{"type":"log","@timestamp":"2019-06-14T16:42:58Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy POST:/api/console/proxy?path=_aliases&method=GET."}
{"type":"log","@timestamp":"2019-06-14T16:42:58Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy POST:/api/console/proxy?path=_mapping&method=GET."}
{"type":"log","@timestamp":"2019-06-14T16:42:58Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy POST:/api/console/proxy?path=_template&method=GET."}
{"type":"log","@timestamp":"2019-06-14T16:42:59Z","tags":["debug","legacy-proxy"],"pid":1,"message":"Event is being forwarded: connection"}
{"type":"log","@timestamp":"2019-06-14T16:42:59Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy POST:/api/console/proxy?path=sshannon-pr&method=PUT."}

It eventually fails with this:

{"type":"response","@timestamp":"2019-06-14T16:42:59Z","tags":[],"pid":1,"method":"post","statusCode":504,"req":{"url":"/api/console/proxy?path=sshannon-pr&method=PUT","method":"post","headers":{"host":"kibana.demo.x.com","x-request-id":"ced034f9010bbf32f80c9e9bd6e7f66f","x-real-ip":"10.3.1.123","x-forwarded-for":"10.3.1.123","x-forwarded-host":"kibana.demo.x.com","x-forwarded-port":"443","x-forwarded-proto":"https","x-original-uri":"/api/console/proxy?path=sshannon-pr&method=PUT","x-scheme":"https","content-length":"0","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:67.0) Gecko/20100101 Firefox/67.0","accept":"text/plain, */*; q=0.01","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate, br","referer":"https://kibana.demo.x.com/app/kibana","kbn-version":"6.7.1","securitytenant":"admin_tenant"},"remoteAddress":"10.3.2.134","userAgent":"10.3.2.134","referer":"https://kibana.demo.x.com/app/kibana"},"res":{"statusCode":504,"responseTime":36005,"contentLength":9},"message":"POST /api/console/proxy?path=sshannon-pr&method=PUT 504 36005ms - 9.0B"}

In the client container i see the following logs:

[2019-06-14T16:41:30,534][WARN ][c.a.o.s.a.BackendRegistry] [Cv0-NSC] Authentication finally failed for null from 10.3.1.79:51560
[2019-06-14T16:41:31,216][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:31,346][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:34,076][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:34,663][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:35,179][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:39,975][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:48,201][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:48,526][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:49,166][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:41:49,261][WARN ][c.a.o.s.h.HTTPBasicAuthenticator] [Cv0-NSC] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[2019-06-14T16:42:54,739][WARN ][c.a.o.s.a.BackendRegistry] [Cv0-NSC] Authentication finally failed for null from 10.3.1.79:53286

I believe the Kibana–ES connection is set up successfully despite this, as Kibana would not load otherwise. I have tested that much.

Unable to install

Hello,

I'm trying to install Open Distro on my local Ubuntu 16.04 system, following the instructions from here.

And getting below error:

apt install opendistroforelasticsearch
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libvulkan1 libvulkan1:i386
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  opendistroforelasticsearch
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
Need to get 702 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://d3g5vo6xdbdb9a.cloudfront.net/apt stable/main amd64 opendistroforelasticsearch amd64 0.9.0-1 [702 B]
Fetched 702 B in 1s (576 B/s)                      
Selecting previously unselected package opendistroforelasticsearch.
(Reading database ... 612923 files and directories currently installed.)
Preparing to unpack .../opendistroforelasticsearch_0.9.0-1_amd64.deb ...
Unpacking opendistroforelasticsearch (0.9.0-1) ...
Setting up elasticsearch-oss (6.7.1) ...
/usr/share/elasticsearch/bin/elasticsearch-env: line 71: /etc/default/elasticsearch: No such file or directory
dpkg: error processing package elasticsearch-oss (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of opendistroforelasticsearch:
 opendistroforelasticsearch depends on elasticsearch-oss (= 6.7.1); however:
  Package elasticsearch-oss is not configured yet.

dpkg: error processing package opendistroforelasticsearch (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
 elasticsearch-oss
 opendistroforelasticsearch
E: Sub-process /usr/bin/dpkg returned an error code (1)

Please help!
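The failing line (`elasticsearch-env: line 71: /etc/default/elasticsearch: No such file or directory`) suggests the postinst script only stumbles because the defaults file is missing. A commonly suggested workaround sketch, under that assumption — `DEFAULTS_FILE` is parameterized only so the file-creation step can be rehearsed outside /etc; on the real system it would be /etc/default/elasticsearch, created as root:

```shell
# Assumption: elasticsearch-env only needs the defaults file to exist;
# an empty file is enough for it to source.
DEFAULTS_FILE="${DEFAULTS_FILE:-$(mktemp -d)/elasticsearch}"
touch "$DEFAULTS_FILE"
echo "created $DEFAULTS_FILE"
# On the real box (as root), after creating /etc/default/elasticsearch:
#   dpkg --configure -a     # finish configuring elasticsearch-oss
#   apt-get install -f      # resolve the opendistroforelasticsearch dependency
```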

Default monitoring

I would like to know why the alerting and security plugins are included but not the graph and monitoring parts. They are default in ES, the non-OSS flavor.
It would be nice if we could get an opendistro_monitor plugin going. I know PerfTop is in OD, but for non-CLI users the default monitoring is good enough. And I can't really switch right now if that is not in place.

Version I'm running:
elasticsearch-oss.noarch 6.5.4-1
Plugins running 0.7.0.0

Cross-Cluster Replication

Cross-Cluster Replication is now available and production-ready in Elasticsearch 6.7.0 (LINK).

Is there currently anything in the Open Distro Roadmap for a Cross-Cluster Replication feature? If not, can there be?

OIDC support/documentation

The documentation mentions JWT support, but it is unclear whether that is enough to work with OIDC providers and only additional documentation is needed, or whether OIDC would require additional implementation work. Either way, OIDC support would be very useful.

ERROR Get EOF

Hi, I'm trying to build a test environment to see how the stack could help us with some log issues we are facing right now.

I used CentOS 7 and installed Kibana and Elasticsearch with RPM. Both of those are working fine, but I'm having some issues with Filebeat; no matter what I try, I always get this output.

[root@minint-qpcm0q4 modules.d]# filebeat test output
elasticsearch: http://172.16.21.3:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.16.21.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... ERROR Get http://172.16.21.3:9200: EOF

The filebeat.yml is almost default, except for this:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["172.16.21.3:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"

I tried installing Filebeat on the same machine and on another test server, and both of them give me the same error. If more information is needed, let me know.
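One likely cause of the `EOF` is that the Open Distro security plugin serves HTTPS with authentication on port 9200, while Filebeat is connecting over plain HTTP. A hedged filebeat.yml sketch under that assumption (the credentials and relaxed TLS verification are demo values for testing only, not a recommendation):

```yaml
output.elasticsearch:
  hosts: ["https://172.16.21.3:9200"]
  protocol: "https"
  username: "admin"            # demo credentials — replace with real ones
  password: "admin"
  ssl.verification_mode: none  # only for testing with the bundled demo certs
```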

Managed Service

Hi,

do you have any plans to add the new functionality to the existing AWS Elasticsearch Service, or to offer a new managed service with all these features? Would love to see the security additions added!

Error when creating image with s3 pluging

I'm getting this output when running

docker build -t custom_opendistro .

free(): invalid pointer
SIGABRT: abort
PC=0x7f0f5e840e97 m=0 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x4afd50, 0xc420057cc0, 0xc420057ce8)
	/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0xe2 fp=0xc420057c90 sp=0xc420057c50
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0x2742270)
	github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x41 fp=0xc420057cc0 sp=0xc420057c90
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0x2742270)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x60 fp=0xc420057cf8 sp=0xc420057cc0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x756060, 0xc420018360)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x217 fp=0xc420057da0 sp=0xc420057cf8
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x77e548, 0xc420057e88, 0x410022, 0xc4200182b0)
	<autogenerated>:4 +0x46 fp=0xc420057de0 sp=0xc420057da0
github.com/docker/docker-credential-helpers/credentials.List(0x756ba0, 0x77e548, 0x7560e0, 0xc42000e018, 0x0, 0x10)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x3e fp=0xc420057e68 sp=0xc420057de0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x756ba0, 0x77e548, 0x7ffc044ae79e, 0x4, 0x7560a0, 0xc42000e010, 0x7560e0, 0xc42000e018, 0x40e398, 0x4d35c0)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x16d fp=0xc420057ed8 sp=0xc420057e68
github.com/docker/docker-credential-helpers/credentials.Serve(0x756ba0, 0x77e548)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1cb fp=0xc420057f58 sp=0xc420057ed8
main.main()
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x4f fp=0xc420057f88 sp=0xc420057f58
runtime.main()
	/usr/lib/go-1.8/src/runtime/proc.go:185 +0x20a fp=0xc420057fe0 sp=0xc420057f88
runtime.goexit()
	/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc420057fe8 sp=0xc420057fe0

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
	/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1

rax    0x0
rbx    0x7ffc044ac660
rcx    0x7f0f5e840e97
rdx    0x0
rdi    0x2
rsi    0x7ffc044ac3f0
rbp    0x7ffc044ac760
rsp    0x7ffc044ac3f0
r8     0x0
r9     0x7ffc044ac3f0
r10    0x8
r11    0x246
r12    0x7ffc044ac660
r13    0x1000
r14    0x0
r15    0x30
rip    0x7f0f5e840e97
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
Sending build context to Docker daemon  39.94kB
Step 1/4 : FROM amazon/opendistro-for-elasticsearch:0.9.0
 ---> f2365bb02a57
Step 2/4 : RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-6.7.1.zip
 ---> Using cache
 ---> 242719e27ac1
Step 3/4 : RUN bin/elasticsearch-keystore add --force --stdin s3.client.default.access_key <<< "XXXXXXX"
 ---> Using cache
 ---> 76a530fd1503
Step 4/4 : RUN bin/elasticsearch-keystore add --force --stdin s3.client.default.secret_key <<< "XXXXXXX"
 ---> Using cache
 ---> f99c87cd0df2
Successfully built f99c87cd0df2
Successfully tagged custom_opendistro:latest

Running the server works fine and I wasn't able to find any problems, but I thought it might be important to point this error out.
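For reference, the `free(): invalid pointer` trace at the top comes from the docker credential helper on the client (note the `docker-credential-helpers` frames), not from the image build itself, which completes successfully. The four build steps in the output correspond to a Dockerfile along these lines (keys redacted, reconstructed from the step log):

```dockerfile
FROM amazon/opendistro-for-elasticsearch:0.9.0
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch \
    https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-6.7.1.zip
RUN bin/elasticsearch-keystore add --force --stdin s3.client.default.access_key <<< "XXXXXXX"
RUN bin/elasticsearch-keystore add --force --stdin s3.client.default.secret_key <<< "XXXXXXX"
```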

Where is the fork source code

Hello,

It is not clear to me where the actual source code of the Elasticsearch fork lives.
There seems to be no such repository in the "opendistro-for-elasticsearch" GitHub org. Did I miss something?

Packaged plugins for download and offline install

As much as I like the idea of providing the source so that whoever wants to can build it, I think it would be very useful to have prepackaged zip files of the plugins readily available for download, which could be used directly with the plugin installer on existing ES installations.

Thanks!
