
zfsnap's Introduction

Note

This branch contains the new 2.0 code-base which is in beta. While 2.0 is a big step forward and has far better testing, it has not been used as widely in production as the zfSnap 1.x line.

Testing is most welcome, but use at your own risk.

Please use the "legacy" branch for the older, more battle-tested version of zfSnap.

About zfsnap

zfsnap makes rolling ZFS snapshots easy and — with cron — automatic.

The main advantages of zfsnap are its portability, simplicity, and performance. It is written purely in /bin/sh and does not require any additional software — other than a few core *nix utilities.

zfsnap stores all the information it needs about a snapshot directly in its name; no database or special ZFS properties are needed. The information is stored in a way that is human readable, making it much easier for a sysadmin to manage and audit backup schedules.

Snapshot names are in the format pool/fs@[prefix]Timestamp--TimeToLive (e.g. pool/fs@weekly-2014-04-07_05.30.00--6m). The prefix is optional but can be quite useful for filtering, Timestamp is the date and time when the snapshot was created, and TimeToLive (TTL) is how long the snapshot will be kept before it becomes eligible for deletion.
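Because the format is fixed, a snapshot name can be decomposed with plain shell parameter expansion. A minimal POSIX sh sketch (illustrative only, not zfsnap's actual parser; it assumes the prefix itself never contains a string of the form 20NN-):

# split pool/fs@[prefix]Timestamp--TTL into its parts
name='pool/fs@weekly-2014-04-07_05.30.00--6m'
snap=${name#*@}                               # weekly-2014-04-07_05.30.00--6m
ttl=${snap##*--}                              # 6m
rest=${snap%--*}                              # weekly-2014-04-07_05.30.00
timestamp=${rest#"${rest%%20[0-9][0-9]-*}"}   # 2014-04-07_05.30.00
prefix=${rest%"$timestamp"}                   # weekly-
echo "prefix=$prefix timestamp=$timestamp ttl=$ttl"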

Need help?

The wiki covers zfSnap 1.x. https://github.com/zfsnap/zfsnap/wiki

For information about zfsnap 2.0, please refer to the manpage or the zfsnap website.

We have a mailing list ([email protected]) for questions, suggestions, and discussion. It can also be found at gmane.comp.sysutils.zfsnap on gmane.

Will zfsnap run on my system?

zfsnap is written with portability in mind, and our aim is for it to run on any and every OS that supports ZFS.

Currently, zfsnap supports FreeBSD, Solaris (and Solaris-like OSs), Linux, GNU/kFreeBSD, and OS X. It should run on your system as long as:

  • ZFS is installed
  • your Bourne shell is POSIX compliant and supports "local" variables (all modern systems should)
  • your system provides at least the most basic of POSIX utilities (uname, head, etc)
  • your system uses the Gregorian calendar

See the PORTABILITY file for additional information on specific shells and OSs.

zfsnap's People

Contributors

aqw, bee27, graudeejs, gsadams, madssj, mmatuska, ppreeper

zfsnap's Issues

Request: Option to send snapshots to alternative pool

Hi,

First of all, thank you for creating this script; it's been really useful.

I have decided to create a second pool to fully back up a filesystem on my main pool, and was wondering if you could add an option (-b?) to back up the snapshot to an alternative pool, e.g. zfSnap -a 5d -r zpool -b bpool

This would first check whether the current filesystem in the loop (e.g. xyz) exists in bpool. If it doesn't, it would simply run zfs send zpool/xyz@n | zfs recv bpool/xyz. If it does exist, the script would take the n (just created) snapshot and the n-1 snapshot and send an incremental backup: zfs send -i zpool/xyz@n-1 zpool/xyz@n | zfs recv bpool/xyz
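For reference, a minimal sketch of that logic (dataset names are illustrative, and note that zfs send -i takes the older snapshot first):

# mirror the newest snapshot of zpool/xyz to bpool/xyz: full send the
# first time, incremental send (-i) afterwards
src='zpool/xyz'; dst='bpool/xyz'
newest=$(zfs list -H -o name -t snapshot -d 1 -s creation "$src" | tail -1)
prev=$(zfs list -H -o name -t snapshot -d 1 -s creation "$src" | tail -2 | head -1)

if ! zfs list -H -o name "$dst" >/dev/null 2>&1; then
    zfs send "$newest" | zfs recv "$dst"              # no copy yet: full send
elif [ "$prev" != "$newest" ]; then
    zfs send -i "$prev" "$newest" | zfs recv "$dst"   # incremental update
fi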

Is this achievable?

Thanks,

Comm

zfSnap reports trying to delete pools

Here is the output that gets mailed to me:

Scrubbing of zfs pools:
   starting scrub of pool 'ssdmirror':
      consult 'zpool status ssdmirror' for the result
   starting scrub of pool 'usbenc':
      consult 'zpool status usbenc' for the result
   skipping scrubbing of pool 'zfsbak':
      last scrubbing is 29 days ago, threshold is set to 30 days
   skipping scrubbing of pool 'zfsraid2':
      last scrubbing is 29 days ago, threshold is set to 30 days
   starting scrub of pool 'zroot':
      consult 'zpool status zroot' for the result
FATAL: trying to delete zfs pool or filesystem? WTF?
  This is bug, we definitely don't want that.
  Please report it to https://github.com/graudeejs/zfSnap/issues
  Don't panic, nothing was deleted :)

And here is my config:

daily_zfsnap_enable="YES"

daily_zfsnap_delete_enable="YES"

daily_zfsnap_fs="zfsraid2/shows zfsbak/movies"
daily_zfsnap_flags="-s -S"
daily_zfsnap_verbose="YES"
daily_zfsnap_delete_flags="-s -S"
daily_zfsnap_delete_verbose="YES"


daily_scrub_zfs_enable="YES"
daily_scrub_zfs_zroot_threshold="7"
daily_scrub_zfs_ssdmirror_threshold="7"
daily_scrub_zfs_zfsbak_threshold="30"
daily_scrub_zfs_usbenc_threshold="14"
daily_scrub_zfs_zfsraid2_threshold="30"

As stated in the email output, it told me to report the issue, so here I am :) I am running FreeBSD 10.1 with its associated ZFS version.

ISO8601 timestamps

Would it be possible to use standard ISO 8601 timestamps instead of yet another custom format?

Write test script

This script should populate a filesystem with snapshots with various timestamps and prefixes, then execute zfSnap, and then check the output.
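A rough outline of what such a test could look like (pool name, timestamps, and assertions are all illustrative):

# create a grid of snapshots with assorted prefixes and timestamps
for ts in 2011-01-01_00.00.00 2012-06-15_12.30.00; do
    for p in '' hourly- daily-; do
        zfs snapshot "testpool/fs@${p}${ts}--1w"
    done
done
# run the code under test, then assert on what survived
zfSnap -d -v
zfs list -H -o name -t snapshot -d 1 testpool/fs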

unexpected behaviour

Hi,

I'm trying to clean up old snapshots in zpool/test, but the script wants to destroy all the old snapshots from the whole pool:

# zfSnap -d -F 1d -v -n -p daily- -D zpool/test
/sbin/zfs destroy -r zpool@2012-03-26_17.21.00--3m
/sbin/zfs destroy -r zpool@daily-2012-03-27_03.14.00--3d
/sbin/zfs destroy -r zpool@daily-2012-03-28_03.14.00--3d
/sbin/zfs destroy zpool/test@2012-03-26_17.21.00--3m
/sbin/zfs destroy zpool/test@daily-2012-03-27_03.14.00--3d
/sbin/zfs destroy zpool/test@daily-2012-03-28_03.14.00--3d
/sbin/zfs destroy zpool/test@daily-2012-03-29_03.14.00--3d

The behaviour is different when omitting the -F 1d parameter.

Am I doing something wrong?
Is this correct?

thanks
jan

Feature to exclude certain filesystems

I am using recursive_fs to snapshot my jails, but for some jails I would like to disable snapshots. I want to keep using recursive_fs so I don't miss new jails, but would like an option to exclude a filesystem.

daily_zfsnap_enable="YES"
daily_zfsnap_recursive_fs="tank/root tank/jails"
daily_zfsnap_verbose="YES"
daily_zfsnap_flags="-s -S"

FATAL: trying to delete zfs pool or filesystem? WTF?

Saw this in my weekly run output, so opening an issue as directed.

/sbin/zfs snapshot -r pool0/home@weekly-2014-02-22_04.44.26--1m ... DONE
/sbin/zfs snapshot -r zroot@weekly-2014-02-22_04.44.26--1m ... DONE
NOTE: No action will be performed on 'pool0@daily-2014-02-14_04.15.58--1w'. Scrub is running on pool.
NOTE: No action will be performed on 'pool0@daily-2014-02-15_04.02.02--1w'. Scrub is running on pool.
FATAL: trying to delete zfs pool or filesystem? WTF?
  This is bug, we definitely don't want that.
  Please report it to https://github.com/graudeejs/zfSnap/issues
  Don't panic, nothing was deleted :)

I'm not really sure what went wrong, or what additional information I should provide, so please let me know how I can help.

Is the example wrong?

At https://github.com/graudeejs/zfSnap/wiki/zfSnap there is an example which I think is wrong. It claims to be an hourly snapshot; I think it's every 5 minutes.

Hourly recursive snapshots of an entire pool kept for 5 days

Minute  Hour  Day of month  Month  Day of week  Who   Command
5       *     *             *      *            root  /usr/local/sbin/zfSnap -a 5d -r zpool

Solaris 11 Path

I tried the script on Solaris 11 and had several issues with sed, grep and date.
Looks like an easy solution is to add the GNU bin directory to the path:

16a17

22c23
< 'SunOS' | 'Linux')
---
> 'Linux')
24a26,29
> 'SunOS')
>     ESED='sed -r'
>     [ "$(uname -a|grep 5.11)" ] && export PATH=/usr/gnu/bin:$PATH
>     ;;

Tag next release (2.0.0.beta3?)

Hi, it would be great to tag 2.0.0.beta3 (or something higher if desired); there have been important changes since 2.0.0.beta2.

Multiple filesystem arguments not respected

I'm using the 2.0 beta and have found that passing multiple filesystems at the command line results in only the first one listed being snapshotted.

Example:

# /usr/sbin/zfsnap snapshot -s -S -v -a 1d tank/home tank/apps
/sbin/zfs snapshot  tank/home@2014-11-21_21.14.05--1d ... DONE

I've tried placing the "-a" option in front of each filesystem argument, but the result is the same.

Add Support for Time Zones

Currently, zfsnap doesn't store time zone information in the snapshot name. This isn't a huge problem locally, but it does cause problems around DST transitions, where we can gain or lose an hour (which could affect those who use short TTLs).

A bigger problem is if the snapshots are sent to another server in another time zone, where one could easily end up 6+ hours off. Granted, trans-oceanic zfs sends of short-TTL snapshots probably aren't /that/ common.... but this would be nice for the sake of accuracy.

The important part of this task is:

  • Get timezone offset in a portable way (date %z isn't POSIX, but if everyone supports it, I don't care)
  • Don't break backwards compatibility (i.e. support snapshot names both with and without TZ info in the name).
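As a starting point for the first point, a sketch of capturing the offset with a fallback (illustrative only; the naming scheme itself would still need to be designed):

# %z is not POSIX but is supported nearly everywhere; validate the output
# and fall back to omitting TZ info when it is not available
offset=$(date +%z 2>/dev/null)
case "$offset" in
    [+-][0-9][0-9][0-9][0-9]) ;;   # looks like +0200 / -0700: use it
    *) offset='' ;;                # unsupported: keep TZ-less names
esac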

Apply -d to only certain filesystems

It would be nice to be able to apply the regular delete policy, but only to certain filesystems.

This can be useful, for instance, for maintaining backups.

Installation on Mac problems

I ran the tests included with the latest release and they all passed, but for some reason when I try to actually use zfsnap with this command:

./zfsnap.sh snapshot -rv -a 6w zfspartition/Home

I get the following error:

/../share/zfsnap/core.sh: line 190: /usr/sbin/zfs: No such file or directory
FATAL: 'zfspartition/Home' does not exist!

If I run zfs mount I get the following output


zfspartition                    /Volumes/zfspartition
zfspartition/Home               /Volumes/zfspartition/Home
zfspartition/VideoFiles         /Volumes/zfspartition/VideoFiles

So I think it should work, especially with all the tests passing... I've tried running the script as the superuser and tried using sh, bash, and fish, and I always get the same results. Any suggestions on what to do would be much appreciated.

-D & -p options behavior oddities...

Hello. I ran into a situation where a filesystem ran out of space and I needed to blow away all of the snapshots. Issuing a command like:

zfSnap -D tank/dataset

Doesn't actually work because the regexp used ends up being:

grep -E -e '^(|tank/dataset)@()?20[0-9][0-9]-[01][0-9]-[0-3][0-9]_[0-2][0-9]\.[0-5][0-9]\.[0-5][0-9]--([0-9]+y)?([0-9]+m)?([0-9]+w)?([0-9]+d)?([0-9]+h)?([0-9]+M)?([0-9]+[s]?)?$'

Which fails because of the empty parenthesis after the @.

Looking through the script, you realize that you need to specify the -p flag and capture all of the hourly prefixes, got it:

zfSnap -D tank/dataset -p hourly

But no:

grep -E -e '^(|tank/dataset)@(hourly)?20[0-9][0-9]-[01][0-9]-[0-3][0-9]_[0-2][0-9]\.[0-5][0-9]\.[0-5][0-9]--([0-9]+y)?([0-9]+m)?([0-9]+w)?([0-9]+d)?([0-9]+h)?([0-9]+M)?([0-9]+[s]?)?$'

The hyphen is missing at the end of the hourly prefix. Janky, but the following works:

zfSnap -D tank/dataset -p hourly-
grep -E -e '^(|tank/dataset)@(hourly-)?20[0-9][0-9]-[01][0-9]-[0-3][0-9]_[0-2][0-9]\.[0-5][0-9]\.[0-5][0-9]--([0-9]+y)?([0-9]+m)?([0-9]+w)?([0-9]+d)?([0-9]+h)?([0-9]+M)?([0-9]+[s]?)?$'

I think the desired regexp is:

grep -E -e '^(|tank/dataset)@(hourly)?-20[0-9][0-9]-[01][0-9]-[0-3][0-9]_[0-2][0-9]\.[0-5][0-9]\.[0-5][0-9]--([0-9]+y)?([0-9]+m)?([0-9]+w)?([0-9]+d)?([0-9]+h)?([0-9]+M)?([0-9]+[s]?)?$'

Handle spaces in component names

So, despite the space not being listed as an acceptable ZFS character, it is in fact allowed. I tested it, and I can create pools, filesystems, and snapshots with spaces in their names.

The worst-case scenario is that a space will be split incorrectly when requesting ZFS to perform an action (say delete a snapshot). There are no known bugs that are affected by this, and zfsnap currently protects against many such scenarios. But it would be preferable to properly support spaces in component names across zfsnap for completeness and protection.

  • All loops which operate on ZFS components should split by newline rather than the default IFS (newline, space, and tab)
  • Lists which are built by zfsnap (such as snapshots to delete) should be newline delimited rather than space delimited (better to not switch field delimiters throughout the code base)
  • Verify that the main loops in zfsnap commands (which operate on user-supplied targets) properly handle quoted and escaped spaces
  • Make sure commands issued to ZFS are quoted properly
  • Tests, tests, and more tests
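For the loop case above, a small sketch of newline-based splitting (illustrative):

# iterate snapshots one-per-line so names containing spaces stay intact
zfs list -H -o name -t snapshot | while IFS= read -r snap; do
    printf 'would examine: %s\n' "$snap"
done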

Tab Completion

It'd be nice to have tab completion support for zfsnap in some common shells. Bash, zsh, and tcsh come to mind.

com.sun:auto-snapshot* support

Hi all

ZFS normally uses some custom properties to control automatic snapshotting. These are

com.sun:auto-snapshot
com.sun:auto-snapshot:frequent
com.sun:auto-snapshot:hourly
com.sun:auto-snapshot:daily
com.sun:auto-snapshot:weekly
com.sun:auto-snapshot:monthly
(and possibly others)

I can't see any support for these in zfsnap, and it would be rather easy to add, so that inclusion in the snapshots could easily be determined per dataset.
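For example, the per-dataset check could be as small as this sketch (dataset name illustrative):

# skip datasets that opt out via the property
val=$(zfs get -H -o value com.sun:auto-snapshot tank/data)
[ "$val" = 'false' ] && echo 'skipping tank/data'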

I can write the code; I'm just curious why it hasn't been done already, and I wouldn't like to waste time coding something no one will use :P

roy

zfs holds break snapshot deletion

I put holds on the snapshots that I use to replicate to my backup box, so that if the box is unavailable for longer than the snapshot removal time, they won't get deleted. However, having a hold breaks deletion entirely; I just want to skip deleting anything with a hold on it. I made the following change to do this. If you think this is useful (I consider it a bug, because holds are part of ZFS and having one somewhere breaks deletion), feel free to rewrite it better.


if [ "$rm_snapshot_pattern" != '' ]; then
        rm_snapshots=$(echo $zfs_snapshots | xargs printf '%s\n' | grep -E -e "@`echo $rm_snapshot_pattern | sed -e 's/ /|/g'`" | sort -u)
        for i in $rm_snapshots; do
            ###aaron added - don't delete if it's the last sync
                #check for holds
                holds=$(`echo zfs holds -r $i | grep $i`)
                if echo $holds | grep -q -e '@'; then
                    echo "not deleting snapshot as it has a hold"
                else
                    rm_zfs_snapshot -r $i
                fi
            #rm_zfs_snapshot -r $i
        ###end aaron
        done
    fi 

Implement a 'keep min snapshots' option

I think it would be useful to have an option to keep a minimum number of snapshots regardless of the TTL (like '/sbin/zfsnap destroy -r keep-min=3 zpool').
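A sketch of the selection logic such an option implies (pool name and keep count illustrative):

# list a dataset's snapshots oldest-first and print all but the newest 3
# as deletion candidates
zfs list -H -o name -t snapshot -d 1 -s creation zpool |
    awk -v keep=3 '{ lines[NR] = $0 } END { for (i = 1; i <= NR - keep; i++) print lines[i] }'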

What do you think about it?

Change TTL's y and m from fixed-length to calendar

Currently, zfsnap defines 1 month = 30 days and 1 year = 365 days. While this makes the math easier, it is unintuitive for the sysadmin and limits flexibility. A snapshot created on Feb 2 with 1m TTL will expire March 4th (or March 3rd on a leap year). IMO, 1m should expire March 2nd and 30d should expire March 4th (or 3rd on a leap year).

This limitation is largely due to the way zfsnap uses date. It is the least portable portion of the code (-j, -f, --date, and %s are all non-POSIX), the slowest (outside of the zfs calls), is used differently per OS, and limits the flexibility of zfsnap.

Replacing snapshot date comparison and TTL addition with a purely sh solution will allow zfsnap to

  • Use calendar y and m
  • Avoid calling date for every snapshot checked
  • Limit bugs by performing the exact same way on each OS
  • Make zfsnap truly portable to any system that can run ZFS.
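For illustration, calendar-aware month addition is small enough to express in pure POSIX sh; a sketch (not zfsnap code) that clamps the day when the target month is shorter (e.g. Jan 31 + 1m = Feb 28):

add_months() {  # $1=year $2=month $3=day $4=months to add
    y=$1 m=${2#0} d=${3#0}          # strip a leading zero to avoid octal parsing
    m=$((m + $4)); y=$((y + (m - 1) / 12)); m=$(((m - 1) % 12 + 1))
    case $m in
        2) max=28
           [ $((y % 4)) -eq 0 ] && { [ $((y % 100)) -ne 0 ] || [ $((y % 400)) -eq 0 ]; } && max=29 ;;
        4|6|9|11) max=30 ;;
        *) max=31 ;;
    esac
    [ "$d" -gt "$max" ] && d=$max   # clamp into the target month
    printf '%04d-%02d-%02d\n' "$y" "$m" "$d"
}
add_months 2014 02 02 1   # 2014-03-02: a calendar month, not 30 days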

shorten code reviews with Travis and shellcheck

I was reading through a couple of PRs and noticed quite a few lint-type issues being pointed out. Basically, stuff a human shouldn't have to be bothered with...

This could be completely automated away, such that when a developer opens a PR, a hook calls Travis which would check out a version of the code and run shellcheck against it.
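The check itself is simple; a sketch of what the CI job could run (file layout illustrative, shellcheck invocation only):

# lint every shell file in the repository as POSIX sh
find . -name '*.sh' -exec shellcheck -s sh {} +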

benefits

  • lint issues are caught automatically, and developers can see what the issues are
  • it makes the continuous integration service (Travis) the strict lint police
  • GitHub will let you prevent PRs that break the tests from being merged ("sorry, can't merge it until the tests pass")

costs

  • you'll need to merge the PR I'll create
  • you'll need to set up Travis access to your repo (so they get notified of PRs, and can post results back to the PRs)
  • there will likely be some lint issues to clean up before tests pass. Basically, a developer will go through the issues raised and decide whether to suppress that particular warning, or alter the code to "fix" it.

Interested?

Feature to "promote" existing snapshots instead of creating new ones?

I'm setting up a ZFS-based backup server for my home, and my ideal scenario is that all the other machines will sync their important data to the backup server every hour, and then zfsnap will keep snapshots of all that data. However, if I set up separate cron jobs for the sync and the snapshotting, there's no guarantee that some snapshots won't happen in the middle of a sync. I could solve this problem for the hourly snapshots by putting the zfsnap command and sync commands in the same file. However, the daily, weekly, etc. snapshots must be in separate scripts, so that wouldn't solve the problem.

Ideally, I would like to be able to "promote" snapshots from one prefix to another. For example, the daily snapshot script would find the latest snapshot with a prefix of "hourly-" that is at least 24 hours old and rename it to have the "daily-" prefix instead, also changing the TTL at the same time. Similarly, the weekly script would promote a 7-day-old daily snapshot to a weekly. With this scheme, snapshots are only ever created in one place (the hourly script), so there is no chance of daily/weekly/etc. snapshots happening in the middle of a sync.
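Mechanically, promotion would map onto zfs rename; a hypothetical sketch (dataset name, prefixes, and the new TTL are illustrative, and a real implementation would also verify the snapshot's age):

# promote the oldest hourly- snapshot to daily- with a 1w TTL
old=$(zfs list -H -o name -t snapshot -d 1 -s creation tank/data |
      grep '@hourly-' | head -1)
[ -n "$old" ] &&
    zfs rename "$old" "$(echo "$old" | sed -e 's/@hourly-/@daily-/' -e 's/--.*$/--1w/')"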

An alternative solution to the problem would be to have an option to only create a snapshot if no existing snapshot with the same prefix is younger than a certain age. For example, "create a daily snapshot only if there is no currently existing daily snapshot is less than 23 hours old". Then I could put all the snapshot creation commands in the hourly script after the sync commands, so they would all never happen during a sync.

Would there be any interest in implementing either feature?

Disable filtering when destroying

I just realized my hourly snapshots have been kept for 15 months .. currently deleting ...

My mistake was that my cron job ran destroy -r pool/ds01, which did not destroy anything; I assume the prefix filtering did not match.

Trying to disable filtering with destroy -n -v -P pool/ds01 also did not select any snapshots (empty result). I expected that all snapshots beyond their TTL, regardless of their prefix, would get selected. I have to explicitly run destroy -p hourly && destroy -p daily && destroy -p weekly && ...

I think either I misunderstood -P, which should disable all filtering, or it does not work.

I used v2.0.0.beta2 with my cron, and v2.0.0.beta3 for testing the filtering.

Thanks,

bc: bc: cannot find dc: No such file or directory

When running zfSnap.sh -d -s -S -zpool28fix, I get this error:

bc: bc: cannot find dc: No such file or directory
cannot open /usr/share/misc/bc.library
_third-party-tools/zfSnap.sh: line 420: 1394130300 + : syntax error: operand expected (error token is "+ ")

This is preventing me from removing old backups. I did not have this problem back in 2011.

uname -a

FreeBSD [HIDDEN] 9.2-RELEASE-p4 FreeBSD 9.2-RELEASE-p4 #0 r264973M: Sun Apr 27 13:37:49 CEST 2014     [email protected]:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64  amd64

I'm on zpool version 28 and zfs version 5.

FATAL: trying to delete zfs pool or filesystem? WTF?

Scrubbing of zfs pools:
   starting scrub of pool 'data':
      consult 'zpool status data' for the result
   skipping scrubbing of pool 'ssd':
      last scrubbing is 23 days ago, threshold is set to 35 days
NOTE: No action will be performed on 'data@daily-2016-12-21_03.01.19--1w'. Scrub is running on pool.
FATAL: trying to delete zfs pool or filesystem? WTF?
  This is bug, we definitely don't want that.
  Please report it to https://github.com/graudeejs/zfSnap/issues
  Don't panic, nothing was deleted :)
# zfSnap --help
zfSnap v1.11.1 by Aldis Berjoza

Use of prefixes for destroy not clear in the docs

I have snapshots like:

rpool/data/backups@zfsnap-hourly-2016-01-16_22.13.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-16_22.43.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-16_23.05.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-16_23.06.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-16_23.13.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-16_23.43.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-17_00.13.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-17_00.43.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-17_01.13.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-17_01.43.00--2w                 0      -   121M  -
rpool/data/backups@zfsnap-hourly-2016-01-17_02.13.00--2w                 0      -   121M  -

But, these commands display nothing and do nothing:

sudo zfsnap destroy -v rpool/data/backups
sudo zfsnap destroy -rv rpool/
sudo zfsnap destroy -nvF 3w rpool/data/backups

Am I missing something here? Version:

sudo zfsnap --version
zfsnap v2.0.0.beta2

Thanks.

zfSnap -d does not remove old snapshots

Hello,

My problem may be between the chair and the keyboard, but I can't find what is wrong.

On FreeBSD (8.2-RELEASE-p5), with your zfSnap.sh (VERSION=1.11.1), I cannot get it to delete old snapshots that were created by the same script (creation itself works perfectly):

# date
Wed Apr  3 16:23:22 CEST 2013
# zfs list -t snapshot
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
data/rsync/gw@2013-03-15_14.47.02--d1   70.3G      -  91.1G  -
...
data/rsync/gw@2013-04-01_03.15.01--d14  49.4G      -  88.3G  -
data/rsync/gw@2013-04-03_03.15.01--d8   26.9G      -  80.6G  -
# ./zfSnap.sh -d -v
# zfs list -t snapshot
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
data/rsync/gw@2013-03-15_14.47.02--d1   70.3G      -  91.1G  -
...
data/rsync/gw@2013-04-01_03.15.01--d14  49.4G      -  88.3G  -
data/rsync/gw@2013-04-03_03.15.01--d8   26.9G      -  80.6G  -

Could you advise me on what I'm doing wrong, or tell me what further information you need to resolve this, please?

Thank you and
Best Regards
Marek

Recursive snapshot without parent directory

I'm using zfsnap create -a 1w -r zroot/usr/home

Snapshots are taken of all ZFS filesystems within home and of home itself (I do not need the latter). There is nothing in the home directory besides a directory for each user. How can I disable snapshots of home while still taking snapshots recursively of all ZFS filesystems inside it?

Integration tests

Here are my 2 cents about how we could run integration tests. It's really pretty simple.

To be able to test zfsnap on various platforms we need a bunch of VirtualBox instances. All these instances should have a specific zpool dedicated to testing (maybe even two zpools, to test that snapshots are not deleted from the 2nd zpool under various conditions).

Logging in to the virtual machines (VMs) should be done over ssh using the private/public key mechanism. That way tests could log in to a VM automatically, do their work, and assert the results.

It would be possible to fully automate the test scripts: start a VM, run the tests, shut the VM down, and roll back changes (revert to the previous VM state).

It should also be possible to run individual tests on one or all VMs.

redirection to stderr prevents use in systemd-services

Redirecting stdout to stderr in lines like:

365: if $zfs_snapshot > /dev/stderr ; then
366: .......

causes the shell to open /dev/stderr as a file, which is not possible if it is a socket (as is the case in a systemd environment). The error message displays as follows:

 /usr/sbin/zfSnap: 365: /usr/sbin/zfSnap: cannot create /dev/stderr: No such device or address
Dec 31 13:45:18 HOSTNAME zfSnap[29682]: /sbin/zfs snapshot -r data@prefix2016-12-31_13.00.00--2h01s ... FAIL

My proposed solution is to replace every occurrence of > /dev/stderr with 1>&2. This also works for sockets, as discussed here.
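The difference in isolation (illustrative):

# before: the shell opens /dev/stderr as a file, which fails when stderr
# is a socket (e.g. under systemd)
echo 'snapshot output' > /dev/stderr
# after: duplicate the existing stderr file descriptor instead
echo 'snapshot output' 1>&2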

Date2Timestamp test fails... 25200 seconds (7hrs) off?

./run.sh 

`Date2Timestamp '2014-01-29_02.03.00'` echos `1390953780` ... failed
  expected result: 1390953780
    actual result: 1390978980

`Date2Timestamp '2013-12-28_22.13.01'` echos `1388261581` ... failed
  expected result: 1388261581
    actual result: 1388286781

I think there is something different about date or sed on Arch Linux. I just switched from Ubuntu, where all the tests passed.
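One plausible explanation (an assumption, not a confirmed diagnosis): 25200 seconds is exactly 7 hours, i.e. a UTC-7 offset, which suggests the expected values were computed under a different TZ setting than the machine running the tests. With GNU date this is easy to demonstrate:

# the same wall-clock string maps to different epoch values per TZ
TZ=UTC date -d '2014-01-29 02:03:00' +%s
TZ=America/Denver date -d '2014-01-29 02:03:00' +%s   # 25200s larger (MST = UTC-7)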

Is this still an active project?

Hello all,

Apologies if the title sounds critical; that's not the intent. I'm just wondering if there are still plans to support this tool going forward. It looks like a while back there was a push for version 2.0.0, and even 2.1.0 with send/recv support, but it seems that was never finished.

I'm just now setting up my first ZFS deployment with offsite backup (using OpenZFS on macOS), and I'm looking at the current state of tools to manage as much of the automated snapshotting and transfer as possible.

I'm just trying to understand where this project is in its lifecycle.

Thanks,
-- Arron

ver2.0.0 solaris 11

I get an error with local variables in the Bourne shell. Are they not supported in the Bourne shell?
I also got an error with Exit 0 in share/commands/snapshot.sh and destroy.sh.
The capital E causes an error; exit 0 works fine.

Create zfsnap send

This is the most often requested feature: create a zfsnap command to send zfsnap-generated snapshots to another server. Ideally it will

  • Automatically determine the most recent zfsnap snapshot on the destination server
  • Ideally not require zfsnap to be installed on the destination server
  • Be flexible and not reinvent the wheel; work with, rather than reimplement, all possible send scenarios (netcat, ssh, remote, local, compressed, etc)
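A sketch of the first bullet (host and dataset names illustrative; only plain zfs over ssh is assumed on the remote side):

# find the newest snapshot present on both sides, then send incrementally
remote=$(ssh backup zfs list -H -o name -t snapshot -d 1 tank/data | sed 's/.*@//')
base=''
for s in $(zfs list -H -o name -t snapshot -d 1 -s creation tank/data | sed 's/.*@//'); do
    echo "$remote" | grep -qxF "$s" && base=$s    # newest common so far
done
newest=$(zfs list -H -o name -t snapshot -d 1 -s creation tank/data | tail -1)
[ -n "$base" ] && zfs send -i "@$base" "$newest" | ssh backup zfs recv tank/data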

Change License to BSD 3-Clause

As discussed before, this is a public record of our desire to change the license from Beer-ware to BSD-3-Clause. http://opensource.org/licenses/BSD-3-Clause

Many organizations do not recognize the Beer-ware license as it is not OSI approved. The BSD-3-Clause is popular, OSI approved, and is ideologically very similar to Beer-ware.

To date, two authors (graudeejs and Alexqw) have written all but one line of zfSnap. The one line modified by another contributor has been rewritten/replaced through normal program evolution.

Thus we feel that, with both of us consenting, we can relicense the whole project.

To state it explicitly. I license all of my past and future contributions to zfSnap under the BSD-3-Clause License.

---Alex

possible bug found

Hi again,

I've found a strange delete_specific_snapshots variable in the source:

181:delete_specific_snapshots=0     # Delete specific snapshots? 0 = NO
371:if [ "$delete_specific_snapshots" != '' ]; then

This is not correct usage:

  1. there is no other place in the script that modifies this variable
  2. comparing 0 (an int) to '' (a string) is not a good idea in shell

This code should be replaced with something like the following (as written, the condition is always true):

diff --git a/zfSnap.sh b/zfSnap.sh
index 54318bc..48a6443 100755
--- a/zfSnap.sh
+++ b/zfSnap.sh
@@ -178,7 +178,7 @@ skip_pool() {
 ttl='1m'       # default snapshot ttl
 force_delete_snapshots_age=-1  # Delete snapshots older than x seconds. -1 means NO
 delete_snapshots=0                             # Delete old snapshots? 0 = NO
-delete_specific_snapshots=0            # Delete specific snapshots? 0 = NO
+delete_specific_snapshots=1            # Delete specific snapshots? 0 = NO
 verbose=0                                              # Verbose output? 0 = NO
 dry_run=0                                              # Dry run? 0 = NO
 prefx=""                                               # Default prefix
@@ -368,7 +368,7 @@ if [ $delete_snapshots -ne 0 -o $force_delete_snapshots_age -ne -1 ]; then
 fi

 # delete all snap
-if [ "$delete_specific_snapshots" != '' ]; then
+if [ $delete_specific_snapshots -eq 1 ]; then
        if [ "$delete_specific_fs_snapshots" != '' ]; then
                rm_snapshots=`$zfs_cmd list -H -o name -t snapshot | grep -E -e "^($(echo "$delete_specific_fs_snapshots" | tr ' ' '|'))@(${prefxes})?${date_pattern}--${htime_pattern}$"`
                for i in $rm_snapshots; do

have a nice day
jan

zfsnap2 does not delete stale snapshots

Dear All,
This seems to be an old or recurring issue, but I am not sure what I am missing.
My setup: FreeBSD 10.3, zfsnap2 (version v2.0.0-beta2) from ports.
Here is a snippet of my crontab:

# Create and manage regular ZFS snapshots
# Create hourly snapshots of the server zpool zroot and retain 25 snapshots
@hourly                                 root    zfsnap snapshot -a 25h -p "server-hourly-" -r -z zroot
# Create daily snapshots of the server zpool zroot and retain 8 snapshots
@daily                                  root    zfsnap snapshot -a 8d -p "server-daily-" -r -z zroot
# Create weekly snapshots of the server zpool and retain 5 snapshots
@weekly                                 root    zfsnap snapshot -a 5w -p "server-weekly-" -r -z zroot
# Create monthly snapshots of the server zroot and retain 13 snapshots
@monthly                                root    zfsnap snapshot -a 13m -p "server-monthly-" -r -z zroot
# Delete obsolete snapshots once a day
45      1       *       *       *       root    zfsnap destroy -r zroot

Alas, nothing gets deleted, and my snapshots are accumulating happily.
Running

zfsnap destroy -D -r

does not do anything, either.
Any idea what may be going wrong?
Thanks a lot!
Chris
