
burry.sh's Issues

aux.go file is invalid for windows

On Windows, aux is a reserved device name and cannot be used as a file name.
MSDN

Windows users cannot clone this repository without going through Bash for Windows.

A quick solution would be to rename the aux.go file to anything else.
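For context, Windows reserves a small set of device names (CON, PRN, AUX, NUL, COM1–COM9, LPT1–LPT9) regardless of extension or case, which is exactly why aux.go cannot be created by git on Windows. A minimal sketch of that rule (the helper name is mine, not part of burry):

```python
# Windows reserves certain device names; the extension is ignored,
# so "aux.go" is just as invalid as "aux".
RESERVED = {"con", "prn", "aux", "nul"} \
    | {f"com{i}" for i in range(1, 10)} \
    | {f"lpt{i}" for i in range(1, 10)}

def is_reserved_on_windows(filename: str) -> bool:
    # Compare the part before the first dot, case-insensitively.
    stem = filename.split(".")[0].lower()
    return stem in RESERVED
```

So any rename that changes the stem (e.g. auxiliary.go or helpers.go) fixes the clone.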

Znode data for kafka topic znode not being copied.

Hi,

Great tool, by the way. I've been experimenting with it a little to replicate Kafka config stored in Zookeeper (with Exhibitor), to basically allow a Kafka cluster to move onto new Zookeeper without losing information about topic partition assignment.

At the moment I'm running into an issue where the znode structure of the Kafka data is being replicated correctly, but the data stored in the topic znode is not copied.

My set up is:

  • Exhibitor cluster 1 with 5 nodes
  • Kafka cluster with 6 brokers connecting to Exhibitor cluster 1
  • Exhibitor cluster 2 with 5 nodes - this is the destination for the data I want to copy from Exhibitor cluster 1

Procedure:

  • set up the topology as above
  • create a Kafka topic called test with 6 partitions and replication factor 3. This results in the topic znode shown in the screenshot below, with the partition-to-broker mapping stored as data on the node.
    (screenshot: topic znode with partition assignment data, 2017-05-17 17:22:35)
  • copy Exhibitor cluster 1 data to local storage with burry -e xx.xx.xx.xx:2181 -t local
  • copy the backup dump to Exhibitor cluster 2 with burry -o restore -e yy.yy.yy.yy:2181 -t local -s <snapID>
  • Exhibitor cluster 2 now has the correct znode structure, but the data in the topic node is missing (as shown below).
    (screenshot: topic znode with empty data, 2017-05-17 17:22:52)

Is there anything obviously wrong with my usage here, or something else I'm missing? Thanks! :)

testing of etcd

Great tool. I have a question about the etcd backup scripts. Could you please help?

  1. What are the prerequisites for running an etcd backup?
  2. How did you verify that a backup restores all the key-value pairs stored in etcd?
    For example, if I delete some key-value pairs before taking a backup, how do you ensure those key-value pairs are restored after the etcd restore?

I tried the etcdctl command for v3 to retrieve all the key-value pairs stored before the backup and to verify data integrity after the restore, but I could not find a single command to achieve this.
https://coreos.com/etcd/docs/latest/dev-guide/interacting_v3.html
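One rough way to do this verification, assuming you can dump all v3 keys with `ETCDCTL_API=3 etcdctl get "" --prefix` before the backup and after the restore: parse each dump into a dict and diff the two. The helper below is a sketch of mine, not part of burry or etcdctl.

```python
def diff_kv(before: dict, after: dict) -> dict:
    """Compare two key-value dumps and report keys that are
    missing or changed after a restore."""
    missing = {k: v for k, v in before.items() if k not in after}
    changed = {k: (v, after[k]) for k, v in before.items()
               if k in after and after[k] != v}
    return {"missing": missing, "changed": changed}
```

An empty report (no missing, no changed keys) would indicate the restore round-tripped every pair that existed at backup time.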

S3 Instance Profile

Is it possible to add support for EC2 instance profile to use in AWS IAM authentication?

Unable to restore Zookeeper in Kubernetes while running Kafka

Hey
Great tool. I just have one issue with it.
I am running a Kubernetes cluster with ZooKeeper and Kafka, both as single-instance deployments. Creating the backup from ZooKeeper and pushing it to S3 works flawlessly, but I am unable to restore a fresh ZooKeeper from that backup while Kafka is running. This is what I ran:

burry --endpoint=localhost:2181 --operation=restore --target=s3 --snapshot=1534148038 --credentials=s3.amazonaws.com,ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID,SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY,BUCKET=Example_bucket

DEBU[0005] Visited /brokers/topics                       func=visitZKReverse
DEBU[0005] Attempting to insert /brokers/topics/ExampleTopic-Source as leaf znode  func=visitZKReverse
INFO[0010] Restored /brokers/topics/ExampleTopic-Source  func=visitZKReverse
DEBU[0010] Value: {"version":1,"partitions":{"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"15":[0],"11":[0],"9":[0],"22":[0],"26":[0],"13":[0],"24":[0],"16":[0],"5":[0],"10":[0],"21":[0],"6":[0],"1":[0],"17":[0],"25":[0],"14":[0],"31":[0],"0":[0],"20":[0],"27":[0],"2":[0],"18":[0],"30":[0],"7":[0],"29":[0],"3":[0],"28":[0]}}  func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source  func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source/content  func=visitZKReverse
DEBU[0010] Attempting to insert /brokers/topics/ExampleTopic-Source/partitions as leaf znode  func=visitZKReverse
ERRO[0011] zk: node already exists:/brokers/topics/ExampleTopic-Source/partitions  func=visitZKReverse
ERRO[0011] zk: node already exists                       func=restoreZK
ERRO[0012] Operation completed with error(s).            func=main

From what I can tell, burry first successfully restores /brokers/topics/ExampleTopic-Source, but before it manages to restore /brokers/topics/ExampleTopic-Source/partitions, Kafka has already created that node.

Is this a known limitation or am I doing something wrong? Thanks!
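If the diagnosis above is right, one way a restore could tolerate a concurrent writer is create-or-overwrite semantics instead of failing on "node already exists". This is a sketch of that idea, not burry's actual code; FakeZK is a hypothetical in-memory stand-in for a ZooKeeper client, used here only to make the example self-contained.

```python
class NodeExistsError(Exception):
    """Raised when creating a znode at a path that already exists."""

def upsert(client, path: str, value: bytes) -> str:
    """Create the znode, or overwrite its data if another writer
    (e.g. a running Kafka broker) created it first."""
    try:
        client.create(path, value)
        return "created"
    except NodeExistsError:
        client.set(path, value)  # fall back to overwriting the node
        return "overwritten"

class FakeZK:
    """Minimal in-memory stand-in for a ZooKeeper client."""
    def __init__(self):
        self.nodes = {}
    def create(self, path, value):
        if path in self.nodes:
            raise NodeExistsError(path)
        self.nodes[path] = value
    def set(self, path, value):
        self.nodes[path] = value
```

With this behavior, the race in the log above would end with the backup's data winning rather than the whole operation erroring out.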

SSL/TLS support?

Hello,

Wanted to give burry a try, but our etcd cluster is HTTPS-only.

When passing the endpoint hostname:2379, burry says:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

If I specifically mention https:// in front I get:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: dial tcp: lookup https: no such host
Is burry capable of SSL/TLS?

Thank you!
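The second error ("lookup https: no such host") suggests the endpoint string is being split naively on ":" so the scheme is treated as a hostname. A scheme-aware normalization step, sketched below with a hypothetical helper of mine, is the usual way clients avoid that; whether burry can then actually speak TLS is a separate question for the maintainers.

```python
def normalize_endpoint(ep: str, use_tls: bool = False) -> str:
    """Ensure the endpoint carries an explicit scheme; etcd clients
    typically expect http(s)://host:port, not a bare host:port."""
    if ep.startswith(("http://", "https://")):
        return ep
    return ("https://" if use_tls else "http://") + ep
```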

Implement restore capabilities

Currently, data can only be extracted from ZK/etcd and stored in a number of places, but other than looking at it, you can't use burry to restore the state of an infra service.

Temporary nodes in zk become persistent nodes after recovery

I use the following commands to back up and restore zk data. After the operation, I found that the data of the ephemeral (temporary) nodes was still there, which caused my application to be unable to reconnect to zk.

burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589
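ZooKeeper does expose the information needed to avoid this: a znode's Stat carries an ephemeralOwner field, which is the owning session ID for ephemeral nodes and 0 for persistent ones. A restore tool could filter on it; the sketch below is my own illustration (burry may not record this field in its dumps), modeling each node as (data, ephemeral_owner).

```python
def skip_ephemeral(nodes):
    """Drop nodes whose stat shows a non-zero ephemeralOwner; such
    znodes belong to a live session and should not be recreated
    as persistent nodes on restore."""
    return {path: data
            for path, (data, ephemeral_owner) in nodes.items()
            if ephemeral_owner == 0}
```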

Upload backup snapshot as latest/last also

For remote uploads, I would like a pointer or link to the latest/last snapshot uploaded.
We can automate this after burry runs by grepping the snapshot ID and copying the S3 object with that snapshot ID to a latest/last key.
But having burry do it itself would be great for automating both backup and restore.
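In the meantime, picking the newest object to copy is easy because burry's default snapshot IDs are Unix timestamps (see the 1522159952-style IDs in the logs above); the numerically largest ID is the most recent. A small helper, hypothetical and not part of burry:

```python
def latest_snapshot(snapshot_ids):
    """burry's default snapshot IDs are Unix timestamps, so the
    numerically largest ID is the newest snapshot."""
    return max(snapshot_ids, key=int)
```

A post-backup script could list the bucket keys, feed the IDs through this, and copy `<id>.zip` to a fixed `latest.zip` key.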

Add v3 data support

Hello, I've been using this tool for a while and it works great, but I recently upgraded to etcd v3 and it seems the v3 data is not being backed up.

It would be nice to have support for this

Thanks

Backup operation is taking too long

Awesome project! I just have one issue with it... I have an application orchestrated by ZooKeeper, and the backups of ZooKeeper take about 3 hours to run completely. Is there a way to make it faster? Is there a parameter to collect only specific znodes from ZooKeeper?
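As far as I can tell from the docs, burry has no flag for restricting the walk to specific znodes, which is what the question asks for. Until such a flag exists, one workaround is filtering a dumped list of znode paths by prefix before re-uploading; the helpers below are my own sketch, not part of burry.

```python
def under_prefix(path: str, prefix: str) -> bool:
    """True if path is the prefix itself or a descendant of it."""
    return path == prefix or path.startswith(prefix.rstrip("/") + "/")

def filter_paths(paths, prefixes):
    """Keep only znode paths that fall under one of the given prefixes."""
    return [p for p in paths
            if any(under_prefix(p, pre) for pre in prefixes)]
```

Restricting the backup to a few subtrees (e.g. just /brokers and /config for Kafka metadata) is also the most likely way to cut that 3-hour runtime.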

ZK authentication

Hello,
I'm not sure if this is a missing feature or if I'm doing something wrong, but I'm trying to back up some nodes which have ACLs enabled. I have the necessary credentials but can't figure out how/where to use them:

root@f2d067caa672:/go# burry -e docker.for.mac.localhost:2181 -t local
INFO[0000] Selected operation: BACKUP                    func=main
2019/03/25 19:36:48 Connected to 192.168.65.2:2181
2019/03/25 19:36:49 Authenticated: id=72312109170556965, timeout=4000
2019/03/25 19:36:49 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root                                func=store
ERRO[0000] zk: not authenticated                         func=visitZK
ERRO[0000] zk: not authenticated                         func=visitZK
ERRO[0000] zk: not authenticated                         func=visitZK
INFO[0000] Operation successfully completed. The snapshot ID is: 1553542608  func=main
root@f2d067caa672:/go#

Is this possible with burry?
Thanks
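The "Re-submitting `0` credentials after reconnect" line suggests burry never registers any auth with the ZooKeeper session, so this looks like a missing feature rather than misuse. For reference, ZooKeeper's digest ACL scheme identifies a user as `user:base64(sha1(user:password))`; the helper below (mine, for illustration) computes that identity string, which is useful when checking your credentials against the ACLs already set on the nodes.

```python
import base64
import hashlib

def zk_digest_id(user: str, password: str) -> str:
    """Compute the ZooKeeper 'digest' scheme identity string:
    user:base64(sha1(user:password)), as used in ACL entries."""
    raw = hashlib.sha1(f"{user}:{password}".encode()).digest()
    return f"{user}:{base64.b64encode(raw).decode()}"
```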

custom snapshot id does not work

It looks like the --snapshot flag has no effect. No matter what I put there, burry keeps using default snapshot IDs.

/tmp # burry -v
This is burry in version 0.4.0
/tmp # burry --endpoint=zk-1.zk:2181 --isvc=zk --target=local --snapshot="123"
INFO[0000] Selected operation: BACKUP                    func=main
INFO[0000] My config: {InfraService:zk Endpoint:zk-1.zk:2181 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}}  func=main
2018/03/27 14:12:32 Connected to 172.17.0.5:2181
2018/03/27 14:12:32 Authenticated: id=171807602279776271, timeout=4000
2018/03/27 14:12:32 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root                                func=store
INFO[0000] Operation successfully completed. The snapshot ID is: 1522159952  func=main
/tmp # ls -l
total 4
-rw-r--r--    1 root     root          1039 Mar 27 14:12 1522159952.zip

When restoring zk data, node data cannot be overwritten

I use the following commands to back up and restore zk data. For an existing path, the data cannot be overwritten, even when the -f flag is added.

burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589

Target parameter is misleading

The README says that --target can take any of these values: local, s3, minio or tty, so I assumed you must set "minio" if you want to use Minio as the target.

However, in the example "Back up etcd to Minio" you use --target s3:

$ ./burry --endpoint etcd.mesos:1026 --isvc etcd --credentials play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG --target s3

Is that correct? I'm asking because I'm not able to make burry work with Minio using --target minio, and I don't know if this might be the cause. I'm getting a "server gave HTTP response to HTTPS client" error:

time="2018-11-08T22:30:57Z" level=info msg="Selected operation: BACKUP" func=main
2018/11/08 22:30:57 Connected to 100.70.205.228:2181
2018/11/08 22:30:57 Authenticated: id=101037703899381862, timeout=4000
2018/11/08 22:30:57 Re-submitting `0` credentials after reconnect
time="2018-11-08T22:30:57Z" level=info msg="Rewriting root" func=store
time="2018-11-08T22:31:00Z" level=fatal msg="Get https://127.0.0.1:9000/my-backups-bucket/?location=: http: server gave HTTP response to HTTPS client" func=toremoteS3

I'm using burry 0.4.0 and minio RELEASE.2018-11-06T01-01-02Z just in case it helps. Thanks.
