
es-disk-rebalance's People

Contributors

datto-aparrill, dependabot[bot], johnseekins


es-disk-rebalance's Issues

tool doesn't work

Hi, I tried to install the tool with
pip3 install es-disk-rebalance/
and then ran it with the command
es-rebalance -v -u 'https://user:pwd@localhost:9200' --box-type hot --iterations 50

but the tool crashes with this error:
INFO:elasticsearch:GET https://localhost:9200/_cat/allocation?bytes=b&format=json [status:200 request:0.311s]
INFO:elasticsearch:GET https://localhost:9200/_cat/shards?bytes=b&format=json [status:200 request:0.854s]
INFO:elasticsearch:GET https://localhost:9200/_nodes?format=json [status:200 request:0.016s]
Traceback (most recent call last):
  File "/usr/local/bin/es-rebalance", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/es_rebalance/__main__.py", line 57, in main
    if not plan.plan_step():
  File "/usr/local/lib/python3.6/dist-packages/es_rebalance/rebalance.py", line 177, in plan_step
    current_pvariance = self.percent_used_variance()
  File "/usr/local/lib/python3.6/dist-packages/es_rebalance/rebalance.py", line 365, in percent_used_variance
    return statistics.pvariance(percentage(node) for node in self.nodes_by_size)
  File "/usr/lib/python3.6/statistics.py", line 636, in pvariance
    raise StatisticsError('pvariance requires at least one data point')
statistics.StatisticsError: pvariance requires at least one data point
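For context, the StatisticsError means statistics.pvariance received an empty sequence, which suggests no nodes ended up in nodes_by_size (possibly because no OpenSearch node carries the attribute the --box-type filter looks for). A minimal sketch reproducing the same failure, with a hypothetical empty node list:

import statistics

# Hypothetical: the node filter matched nothing, so there is no data to measure.
nodes_by_size = []

try:
    statistics.pvariance(node["disk_percent"] for node in nodes_by_size)
except statistics.StatisticsError as exc:
    print(exc)  # "pvariance requires at least one data point"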

I have also modified the connection to
es = Elasticsearch(args.url,verify_certs=False)
to connect to an HTTPS endpoint with a self-signed certificate.
I'm trying to use this tool with OpenSearch 1.2.
Can you please help me here?
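For reference, the connection change looks roughly like this, assuming the elasticsearch-py client (ssl_show_warn is an extra assumption; it only silences the insecure-connection warning on newer client versions):

from elasticsearch import Elasticsearch

# Skip certificate validation for the self-signed HTTPS endpoint.
# ssl_show_warn=False only quiets the resulting warning, if supported.
es = Elasticsearch(
    'https://user:pwd@localhost:9200',
    verify_certs=False,
    ssl_show_warn=False,
)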

Comparison to built-in rebalance?

I believe that ES will rebalance based on disk usage when a node breaches the high watermark threshold (https://www.elastic.co/guide/en/elasticsearch/reference/6.8/disk-allocator.html#disk-allocator).

Controls the high watermark. It defaults to 90%, meaning that Elasticsearch will attempt to relocate shards away from a node whose disk usage is above 90%.

This script seems like it would be useful in cases where that threshold cannot be adjusted (e.g. AWS Elasticsearch, I believe). Are there other cases where this script acts differently from the built-in re-allocator? See the sketch below for how the threshold is normally adjusted.
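On a self-managed cluster, the high watermark can be changed through the cluster settings API; a rough sketch with the elasticsearch-py client (the 92% value is just an illustration):

from elasticsearch import Elasticsearch

es = Elasticsearch('http://localhost:9200')

# Transient setting: Elasticsearch starts relocating shards away from a node
# once its disk usage crosses this threshold (default 90%).
es.cluster.put_settings(body={
    'transient': {
        'cluster.routing.allocation.disk.watermark.high': '92%'
    }
})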
