mhausenblas / burry.sh
Cloud Native Infrastructure BackUp & RecoveRY
License: Apache License 2.0
On Windows systems, aux is reserved and cannot be used as a file name (see MSDN).
Windows users cannot clone this repository without going through Bash for Windows.
A quick solution would be to just rename the aux.go file to anything else.
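For anyone needing a local workaround before a rename lands upstream, a minimal sketch (assuming you clone inside WSL / Bash for Windows, where aux.go is a legal name; the new file name is arbitrary and the repo-root path is an assumption):
git clone https://github.com/mhausenblas/burry.sh.git
cd burry.sh
git mv aux.go zk_aux.go   # any non-reserved name works; the Go build does not depend on the file name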
Hi,
Great tool, by the way. I've been experimenting with it a little to replicate Kafka config stored in Zookeeper (with Exhibitor), basically to allow a Kafka cluster to move onto a new Zookeeper without losing information about topic partition assignment.
At the moment I'm running into an issue where the structure of the Kafka data is replicated correctly, but the data in the topic znode is not copied.
My setup and procedure: create a topic test with 6 partitions and a replication factor of 3. This results in the topic znode shown in the screenshot, with the partition-to-Kafka-broker mapping as data on the node. Then:
burry -e xx.xx.xx.xx:2181 -t local
burry -o restore -e yy.yy.yy.yy:2181 -t local -s <snapID>
Is there anything obviously wrong with my usage here, or something else I'm missing? Thanks! :)
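One way to see where the data gets lost (a sketch, assuming a standard ZooKeeper zkCli.sh and the topic name test from above) is to read the topic znode on both ensembles and compare:
zkCli.sh -server xx.xx.xx.xx:2181 get /brokers/topics/test   # source: should print the partition-to-broker assignment JSON
zkCli.sh -server yy.yy.yy.yy:2181 get /brokers/topics/test   # target after restore: compare the data with the source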
Great tool. I have a question about the etcd backup scripts. Could you please help?
I tried the etcdctl command for v3 to retrieve all the key-value pairs stored before the backup and verify data integrity after the backup, but I could not find a single command to achieve this.
https://coreos.com/etcd/docs/latest/dev-guide/interacting_v3.html
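If it helps, with the v3 API a single get over the empty key with --prefix returns every key-value pair, which can be captured before and after the backup and diffed (the endpoint address is a placeholder):
ETCDCTL_API=3 etcdctl --endpoints=http://xx.xx.xx.xx:2379 get "" --prefix > before.txt      # full key/value dump
ETCDCTL_API=3 etcdctl --endpoints=http://xx.xx.xx.xx:2379 get "" --prefix --keys-only       # keys only, if the values are large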
Is it possible to add support for EC2 instance profiles for AWS IAM authentication?
Hey
Great tool. I just have one issue with it.
I am running a Kubernetes cluster with Zookeeper and Kafka. Both of these are running as single-instance clusters. Creating the backup from Zookeeper and pushing it to S3 works flawlessly, but the problem I have is that I am unable to restore a fresh Zookeeper from that backup while Kafka is running. This is what I ran:
burry --endpoint=localhost:2181 --operation=restore --target=s3 --snapshot=1534148038 --credentials=s3.amazonaws.com,ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID,SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY,BUCKET=Example_bucket
DEBU[0005] Visited /brokers/topics func=visitZKReverse
DEBU[0005] Attempting to insert /brokers/topics/ExampleTopic-Source as leaf znode func=visitZKReverse
INFO[0010] Restored /brokers/topics/ExampleTopic-Source func=visitZKReverse
DEBU[0010] Value: {"version":1,"partitions":{"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"15":[0],"11":[0],"9":[0],"22":[0],"26":[0],"13":[0],"24":[0],"16":[0],"5":[0],"10":[0],"21":[0],"6":[0],"1":[0],"17":[0],"25":[0],"14":[0],"31":[0],"0":[0],"20":[0],"27":[0],"2":[0],"18":[0],"30":[0],"7":[0],"29":[0],"3":[0],"28":[0]}} func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source func=visitZKReverse
DEBU[0010] Visited /brokers/topics/ExampleTopic-Source/content func=visitZKReverse
DEBU[0010] Attempting to insert /brokers/topics/ExampleTopic-Source/partitions as leaf znode func=visitZKReverse
ERRO[0011] zk: node already exists:/brokers/topics/ExampleTopic-Source/partitions func=visitZKReverse
ERRO[0011] zk: node already exists func=restoreZK
ERRO[0012] Operation completed with error(s). func=main
From what I can tell, it seems like burry first successfully restores /brokers/topics/ExampleTopic-Source, but before it manages to restore /brokers/topics/ExampleTopic-Source/partitions, Kafka has already created that node.
Is this a known limitation or am I doing something wrong? Thanks!
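If it is the race described above, one workaround sketch (the statefulset name kafka is a placeholder for whatever your cluster uses) is to stop Kafka while the restore runs, so nothing recreates the znodes underneath burry:
kubectl scale statefulset kafka --replicas=0   # stop Kafka so it cannot recreate znodes mid-restore
# ... run the burry restore command shown above ...
kubectl scale statefulset kafka --replicas=1   # bring Kafka back once the restore completes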
Hello,
Wanted to give burry a try, but our etcd cluster is fully HTTPS.
When passing the endpoint hostname:2379, burry says:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
If I explicitly put https:// in front, I get:
ERRO[0000] client: etcd cluster is unavailable or misconfigured; error #0: dial tcp: lookup https: no such host
Is burry capable of SSL/TLS?
Thank you!
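For what it's worth, the \x15\x03\x01 bytes in the first error look like a TLS record, so the cluster does seem to be answering with TLS, and the second error suggests the https: scheme was parsed as a host name. A quick way to confirm the endpoint really is serving TLS (hostname and CA path are placeholders):
openssl s_client -connect hostname:2379 </dev/null | head            # prints the server certificate chain if TLS is served
curl --cacert /etc/etcd/ca.pem https://hostname:2379/version         # etcd answers /version on its client port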
Based on the cloud.google.com/go/storage package, add support for Google Cloud Storage as a storage target.
Currently, data can only be extracted from ZK/etcd and stored in a number of places, but other than looking at it, you can't use burry to restore the state of an infra service.
Backing up and restoring the Consul raft db and k/v store would be great.
Based on the github.com/Azure/azure-sdk-for-go/storage package, add support for Azure Storage as a storage target. See also the GoDocs ...
I use the following commands to back up and restore ZK data. After the operation, I found that the ephemeral node data was still there, which caused my application to be unable to reconnect to ZK.
burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589
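A quick way to check whether a given restored znode came back as ephemeral or persistent (the path is a placeholder) is to stat it; a persistent node shows ephemeralOwner = 0x0:
zkCli.sh -server 127.0.0.1:2181 stat /path/to/node   # ephemeralOwner = 0x0 means the node was restored as persistent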
In the case of remote upload, I would like to have a pointer or link to the latest/last snapshot uploaded.
We can automate this after the burry backup runs by grepping the snapshot ID and copying the S3 object with that snapshot ID to latest/last (see the sketch below).
But having burry do this itself would be great for automating both backup and restore.
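A minimal sketch of that automation, assuming the AWS CLI, a bucket name of your own, and that the uploaded object mirrors the local <snapshot-id>.zip naming (all placeholders here):
SNAP=$(burry -e 127.0.0.1:2181 -t s3 --credentials ... 2>&1 | grep -o 'snapshot ID is: [0-9]*' | grep -o '[0-9]*')   # grab the ID burry prints
aws s3 cp s3://my-burry-bucket/${SNAP}.zip s3://my-burry-bucket/latest.zip   # overwrite a stable "latest" object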
Hello, I've been using this tool for a while and it's working great. But I recently upgraded to etcd v3, and it seems like the v3 data is not being backed up.
It would be nice to have support for this.
Thanks
See https://godoc.org/github.com/docker/docker/client for the API; this needs scoping and use cases.
Awesome project! I just have one issue with it... I have an application being orchestrated by Zookeeper, and the backups of Zookeeper are taking about 3 hours to run completely. Is there a way to make it faster? Is there a parameter to collect only specific znodes from Zookeeper?
Hello,
I'm not sure if this is a missing feature or if I'm doing something wrong; however, I'm trying to back up some nodes which have ACLs enabled. I have the necessary credentials but can't understand how/where to use them:
root@f2d067caa672:/go# burry -e docker.for.mac.localhost:2181 -t local
INFO[0000] Selected operation: BACKUP func=main
2019/03/25 19:36:48 Connected to 192.168.65.2:2181
2019/03/25 19:36:49 Authenticated: id=72312109170556965, timeout=4000
2019/03/25 19:36:49 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root func=store
ERRO[0000] zk: not authenticated func=visitZK
ERRO[0000] zk: not authenticated func=visitZK
ERRO[0000] zk: not authenticated func=visitZK
INFO[0000] Operation successfully completed. The snapshot ID is: 1553542608 func=main
root@f2d067caa672:/go#
Is this possible with burry?
Thanks
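If it helps narrow it down: the znodes' ACL scheme can be inspected with zkCli (the path and digest credentials are placeholders); whatever scheme shows up there is what burry's ZK client would need to authenticate with:
zkCli.sh -server docker.for.mac.localhost:2181
addauth digest user:password       # inside the zkCli session: authenticate with the digest scheme
getAcl /protected/path             # shows the scheme (digest, sasl, ip, ...) the znode requires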
Looks like the --snapshot flag has no effect. No matter what I put there, burry keeps using the default snapshot IDs.
/tmp # burry -v
This is burry in version 0.4.0
/tmp # burry --endpoint=zk-1.zk:2181 --isvc=zk --target=local --snapshot="123"
INFO[0000] Selected operation: BACKUP func=main
INFO[0000] My config: {InfraService:zk Endpoint:zk-1.zk:2181 StorageTarget:local Creds:{StorageTargetEndpoint: Params:[]}} func=main
2018/03/27 14:12:32 Connected to 172.17.0.5:2181
2018/03/27 14:12:32 Authenticated: id=171807602279776271, timeout=4000
2018/03/27 14:12:32 Re-submitting `0` credentials after reconnect
INFO[0000] Rewriting root func=store
INFO[0000] Operation successfully completed. The snapshot ID is: 1522159952 func=main
/tmp # ls -l
total 4
-rw-r--r-- 1 root root 1039 Mar 27 14:12 1522159952.zip
I use the following commands to back up and restore ZK data. For an existing path, the data cannot be overwritten, even if the -f parameter is added.
burry -e 127.0.0.1:2181 -t local
burry -o restore -e 127.0.0.1:2181 -t local -f -s 1603378589
The README says that --target can take any of these values: local, s3, minio or tty. So I assume that you must set "minio" if you want to use Minio as the target.
However, in the example "Back up etcd to Minio" you use --target s3:
$ ./burry --endpoint etcd.mesos:1026 --isvc etcd --credentials play.minio.io:9000,ACCESS_KEY_ID=Q3AM3UQ867SPQQA43P2F,SECRET_ACCESS_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG --target s3
Is that OK? I'm asking because I'm not able to make burry work with Minio using --target minio, and I don't know if this might be the cause. I'm getting a "server gave HTTP response to HTTPS" error:
time="2018-11-08T22:30:57Z" level=info msg="Selected operation: BACKUP" func=main
2018/11/08 22:30:57 Connected to 100.70.205.228:2181
2018/11/08 22:30:57 Authenticated: id=101037703899381862, timeout=4000
2018/11/08 22:30:57 Re-submitting `0` credentials after reconnect
time="2018-11-08T22:30:57Z" level=info msg="Rewriting root" func=store
time="2018-11-08T22:31:00Z" level=fatal msg="Get https://127.0.0.1:9000/my-backups-bucket/?location=: http: server gave HTTP response to HTTPS client" func=toremoteS3
I'm using burry 0.4.0 and minio RELEASE.2018-11-06T01-01-02Z just in case it helps. Thanks.
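The error suggests burry's S3 client is speaking HTTPS to an endpoint that only serves plain HTTP (the play.minio.io example endpoint serves TLS, while a default local Minio often does not). A quick check against the address from the log:
curl -i http://127.0.0.1:9000/   # any plain-HTTP response here (even an AccessDenied XML) means the server is not serving TLS on that port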
Support Ceph storage as a target. It is S3-compatible, similar to AWS S3.