Comments (7)
Thanks. I have been wondering what happens when a remote transfer job takes more than an hour and the next job begins.
from btrfs-sxbackup.
thanks for the heads up. the expected result in this case is:
INFO preparing environment
ERROR Command '['bash', '-c', 'if [ -d "/.sxbackup/.temp"* ]; then btrfs sub del "/.sxbackup/.temp"*; fi']' returned non-zero exit status 1
ERROR: cannot delete '/.sxbackup/.temp.675e3049b71349b2aa8dc19cac6b349b' - Operation not permitted
As a temporary subvolume could exist for various reasons (e.g. unclean termination due to a kill, reset, or power outage), refusing to run would essentially delegate the cleanup to the user in all those cases, which is (imho) a bad idea. Best to use a kernel version which has the fix.
My suggestion is to let btrfs-sxbackup check for itself whether it is already running, because it should not depend on (the currently buggy) btrfs to do so. btrfs-sxbackup should not run twice; even if running it twice makes it fail at a lower level, that is imho not the clean way to handle it. The "Operation not permitted" message is not exactly a "btrfs-sxbackup is already running".
Also please note that Linux 4.0.3+ is rarely seen on production systems. Debian does not even offer a prebuilt package for amd64. This bug hit me three times on different systems with Linux 3.17 - 4.0.2, causing silent backup failures and reboots.
@bhelm has a strong argument.
Multiple instances of btrfs-sxbackup are allowed, since you could run multiple jobs at the same time.
I cannot replicate this on 4.0.1 (at all). I see delays, but no deadlocks. How do you replicate it?
The issue is already resolved on kernel level, I will not implement a workaround due to update woes of specific distros. But you could look into https://ma.ttias.be/prevent-cronjobs-from-overlapping-in-linux/ if you have to.
> Multiple instances of btrfs-sxbackup are allowed, since you could run multiple jobs at the same time.
That is right, but that is not the problem I'm talking about. The problem sometimes occurs when you run multiple instances of the SAME job; this can cause deadlocks on older kernels, and btrfs-sxbackup does not support this use case either.
btrfs itself should have no problem if you create multiple snapshots of the same volume and transfer them simultaneously, but btrfs-sxbackup deletes the first partially transferred snapshot before it starts another transfer (without noticing that the transfer belonging to that snapshot is still active).
This could be avoided by checking for already running btrfs-sxbackup instances on a per-job basis and refusing to start another transfer in that case. I'm now using the Linux flock utility to achieve this; having this functionality natively in btrfs-sxbackup would make job creation a bit easier and would prevent new users from running the same job twice by accident - and from hitting that btrfs deadlock bug on common stable kernels, which requires a reboot to fix.
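The per-job flock approach described above can be sketched roughly as follows. The lock path and job name are illustrative, and `echo` stands in for the real `btrfs-sxbackup run` invocation:

```shell
LOCK=/tmp/btrfs-sxbackup-myjob.lock
# -n: fail immediately instead of waiting if the lock is already held,
# so an overlapping cron run exits instead of stacking up.
# In a real crontab, the command after the lock path would be
# something like: btrfs-sxbackup run <job-url>
flock -n "$LOCK" echo "job started" || echo "another instance is running"
```

As long as the first invocation is still holding the lock, any second invocation with the same lock file takes the `||` branch and exits without touching the job's snapshots.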
ok. if you provide a good implementation using fcntl I will pull from you, but I don't have time to implement it for now.
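A native per-job lock along the lines the maintainer suggests could look roughly like this. This is a sketch, not btrfs-sxbackup code; the function name, job-name argument, and lock directory are all illustrative:

```python
import fcntl
import os


def acquire_job_lock(job_name, lock_dir="/run/lock"):
    """Try to take an exclusive, non-blocking per-job lock.

    Returns the open lock file on success (keep it open for the
    lifetime of the job run), or None if another instance of the
    same job already holds the lock.
    """
    path = os.path.join(lock_dir, "btrfs-sxbackup-%s.lock" % job_name)
    fh = open(path, "w")
    try:
        # LOCK_NB makes this fail immediately instead of blocking.
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        fh.close()
        return None
    # Record the owner pid for easier debugging; the lock itself is
    # what prevents concurrent runs, and it is released automatically
    # when the process exits or the file is closed.
    fh.write(str(os.getpid()))
    fh.flush()
    return fh
```

A job runner would call this once at startup and abort with a clear "job X is already running" message when it returns None, instead of surfacing a low-level btrfs error.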
Related Issues (20)
- Individual commands for local (`snapshot`) and remote transfer (`sync`) HOT 4
- No way to disable compression HOT 2
- Allow to disable ssh compression HOT 2
- Allow to specify ssh cypher HOT 5
- Backup not working unexpected EOF in stream HOT 4
- Incremental backup not working HOT 19
- Destination retention is not applied HOT 2
- sxbackup uses pv when invoked by systemd timer. HOT 3
- Run a pre- script on remote source? HOT 5
- Btrfs-send resiliency in the presence of tcp connection drops HOT 1
- btrfs-sxbackup requires (but does not document) "ionice" on the target HOT 1
- Enable compression on destination only HOT 4
- Command to make instant snapshot only, to transfer it later HOT 4
- Add cli argument to force disable pv for launching in cron scripts HOT 1
- Add transferred snapshot data size to logs output HOT 1
- Missing destination url in ssh source .sxbackup/.btrfs-sxbackup config file HOT 6
- btrfs-sxbackup fails to update backup job with ssh source HOT 1
- add webhook support HOT 1
- Add "--proto 0" to send/receive HOT 2
- File contains no section headers HOT 2