
znapzend's Introduction

ZnapZend


ZnapZend is a ZFS-centric backup tool that creates snapshots and sends them to backup locations. It relies on the ZFS tools snapshot, send and receive to do its work. It has the built-in ability to manage both local snapshots and remote copies by thinning them out as time progresses.

The ZnapZend configuration is stored as properties in the ZFS filesystem itself. Keep in mind that only the local ZFS properties of each configured dataset are considered (not "inherited", not "received" ones), and that there is some domain-specific handling of recursion for certain settings, based on the presence and value of an org.znapzend:recursive property.
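Since the configuration lives in ZFS properties, standard ZFS tooling can show it; for example (the dataset name is illustrative), to list only the locally set ZnapZend properties:

zfs get -s local all tank/home | grep org.znapzend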

Note that while recursive configurations are well supported for setting up backup and retention policies for a whole dataset subtree under the dataset that carries the explicit configuration, pruning of such trees ("I want every dataset under var except var/tmp") is currently experimental: it works, but there may be rough edges that would require more development.

Because of that, you probably do not want to enable ZnapZend on the root datasets of your pools, but should be more fine-grained in your setup. This is consistent with (and due to) the use of recursive ZFS snapshots, where the command targets one dataset and affects it and all its children, allowing you to get a consistent point-in-time set of snapshots across multiple datasets.

That said, for several years ZnapZend has supported setting a local ZFS property org.znapzend:enabled=off (and only that property) on datasets which descend from the one with a full backup retention schedule configuration (which in turn declares that its descendants should be handled per org.znapzend:recursive=on); exactly these "not-enabled" datasets with the enabled=off setting are then not tracked with a long-term history, locally or remotely.

NOTE: Implementation-wise, snapshots of the dataset with a full backup retention schedule configuration are taken recursively so as to be a reliable atomic operation; snapshots of "not-enabled" datasets are pruned afterwards. Different ZnapZend versions have varied on whether such snapshots are sent to a remote destination (e.g. as part of a recursive ZFS send stream) and pruned there afterwards, or whether such sending is avoided.

An important take-away is that there may temporarily be a storage and traffic cost associated with "not-enabled" dataset snapshots, and that their creation and deletion are separated in time: if the host reboots (or the ZnapZend process is otherwise interrupted) at the wrong moment, such snapshots may linger indefinitely and "unexpectedly" consume disk space for their uniquely referenced blocks.

Current ZnapZend releases extend this support with the ability to also set a local ZFS property org.znapzend:recursive=on on such datasets (so there would be two properties: one to disable tracking and one to make that recursive), with the effect that whole sub-trees of ZFS datasets can be excluded from ZnapZend retention handling by one configuration on their common ancestor dataset (previously this required enabled=off on each excluded dataset).

This behavior can be useful, for example, on CI build hosts, where you would generally enable backups of rpool/home but would exclude the location for discardable bulk data like build roots or caches in the worker account's home.
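For example, assuming rpool/home carries the full backup configuration and rpool/home/worker/builds holds the discardable data (the dataset names are illustrative), the whole builds subtree can be excluded with:

zfs set org.znapzend:enabled=off rpool/home/worker/builds
zfs set org.znapzend:recursive=on rpool/home/worker/builds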

NOTE: Technically, the code allows you to further set enabled=on on certain sub-datasets of the not-enabled tree to re-enable snapshot tracking for that dataset (possibly recursively for its descendants), but this feature has not yet seen much use and feedback in real-life situations. You may, for example, have to pre-create the parent datasets (those disabled on the source) on remote destinations so they can receive regular backups from ZnapZend, etc.

Compilation and Installation from source Inztructionz

If your distribution does not provide a packaged version of znapzend, or if you want a custom-made copy of znapzend, you will need a compiler and related tooling to build some of the prerequisite perl modules into binary libraries for the target OS and architecture. At run time you will need just perl.

For a long time the znapzend build required a GNU Make implementation. While this is no longer strictly the case, and at least Sun Make (as of OpenIndiana) and BSD Make (as of FreeBSD) are also known to work, the instructions below still suggest it as a fallback (if the system-provided tools fail, fall back to gmake).

The Git checkout includes a pregenerated configure script. To rebuild a checkout from scratch you may also want to run ./bootstrap.sh, which requires the autoconf/automake stack.

  • On RedHat you get the necessaries with:
yum install perl-core
  • On Ubuntu / Debian with:
apt-get install perl unzip

To also bootstrap on Ubuntu / Debian you may need:

apt-get install autoconf carton
  • On Solaris 10 you may need the C compiler from Solaris Studio and gnu-make, since the installed perl version is probably very old and you would likely have to build some dependency modules. GNU make may be needed instead of Sun make due to syntax differences accumulated over the years. Notably, you may have to reference it when bootstrapping the code workspace from scratch (and later use it for gmake install as suggested by the configure script):
MAKE=gmake ./bootstrap.sh

Note also that the perl version 5.8.4 provided with Solaris 10 is too old for the syntax and dependencies of znapzend. As one alternative, take a look at the CSW packaging of perl-5.10.1 or newer, its modules, and other dependencies. To use a non-default perl, set the PERL environment variable to the path of your favorite perl interpreter prior to running configure, e.g.:

PERL=/opt/perl-32/bin/perl5.32.1 ./configure
  • On OmniOS/SmartOS you will need perl and optionally gnu-make packages.

  • On macOS, if you have not already installed the Xcode command line tools, you can get them from the command line (Terminal app) with:

xcode-select --install  ### ...or just install the full Xcode app from the Apple app store

With that in place you can now utter:

ZNAPVER=0.22.1
wget https://github.com/oetiker/znapzend/releases/download/v${ZNAPVER}/znapzend-${ZNAPVER}.tar.gz
tar zxvf znapzend-${ZNAPVER}.tar.gz
cd znapzend-${ZNAPVER}
### ./bootstrap.sh
./configure --prefix=/opt/znapzend-${ZNAPVER}

NOTE: to get the current state of master branch without using git tools, you should fetch https://github.com/oetiker/znapzend/archive/master.zip

If the configure script finds anything noteworthy, it will tell you about it.

If any perl modules are found to be missing, they get installed locally into the znapzend installation. Your system perl installation will not be modified!

make
make install

Optionally (but recommended), put symbolic links to the installed binaries into the system PATH, e.g.:

ZNAPVER=0.22.1
for x in /opt/znapzend-${ZNAPVER}/bin/*; do ln -fs ../../../$x /usr/local/bin/; done

Verification Inztructionz

To make sure your resulting set of znapzend code and dependencies plays well together, you can run unit-tests with:

make check

or

./test.sh

NOTE: the two methods run the same test scripts with different handling, so they might behave differently. While that can happen in practice, it would be a bug to report and pursue fixing.

Packages

Debian control files, a guide on using them, and experimental Debian packages can be found at https://github.com/Gregy/znapzend-debian

An RPM spec file can be found at https://github.com/asciiphil/znapzend-spec

For recent versions of Fedora and RHEL 7-9 there's also a copr repository by spike (sources at https://gitlab.com/copr_spike/znapzend):

dnf copr enable spike/znapzend
dnf install znapzend

For Gentoo there's an ebuild in the gerczei overlay.

For OpenIndiana there is an IPS package at http://pkg.openindiana.org/hipster/en/search.shtml?token=znapzend&action=Search made with the recipe at https://github.com/OpenIndiana/oi-userland/tree/oi/hipster/components/sysutils/znapzend

pkg install backup/znapzend

Configuration

Use the znapzendzetup program to define your backup settings. They will be stored directly in dataset properties, and will cover both local snapshot schedule and any number of destinations to send snapshots to (as well as potentially different retention policies on those destinations). You can enable recursive configuration, so the settings would apply to all datasets under the one you configured explicitly.

Example:

znapzendzetup create --recursive\
   --pre-snap-command="/bin/sh /usr/local/bin/lock_flush_db.sh" \
   --post-snap-command="/bin/sh /usr/local/bin/unlock_db.sh" \
   SRC '7d=>1h,30d=>4h,90d=>1d' tank/home \
   DST:a '7d=>1h,30d=>4h,90d=>1d,1y=>1w,10y=>1month' root@bserv:backup/home

See the znapzendzetup manual for the full description of the configuration options.

For remote backup, znapzend uses ssh. Make sure to configure password-less login (authorized keys) for ssh to the backup target host, with an account sufficiently privileged to manage its ZFS datasets under a chosen destination root.
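For example (the host name and key file are illustrative), you might generate a dedicated key on the source host and install its public part on the backup target:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519-znapdest
ssh-copy-id -i ~/.ssh/id_ed25519-znapdest.pub root@bserv
ssh -i ~/.ssh/id_ed25519-znapdest root@bserv zfs list   ### should not ask for a password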

For local or remote backup, znapzend can use mbuffer to level out the bursty nature of ZFS send and ZFS receive, so it is quite beneficial even for local backups into another pool (e.g. on removable media or a NAS volume). It is also configured among the options set by znapzendzetup per dataset. Note that in order to use larger (multi-gigabyte) buffers you should point your configuration to a 64-bit binary of the mbuffer program. Sizing the buffer is a practical art, depending on the size and number of your datasets and the I/O speeds of the storage and networking involved. As a rule of thumb, let it absorb at least a minute of I/O, so that while one side of the ZFS dialog is deeply thinking, the other can do its work.
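As a rough, hedged illustration of that rule of thumb (the throughput figure and dataset name are assumptions, not recommendations): a source sustaining about 100 MB/s needs roughly 100 MB/s x 60 s, i.e. about a 6G buffer, which can be recorded per dataset either via znapzendzetup --mbuffersize=6G or directly as a property:

zfs set org.znapzend:mbuffer_size=6G tank/home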

NOTE: Due to backwards-compatibility considerations, the legacy --mbuffer=... setting applies by default to all destination datasets (and to the sender, in the case of the --mbuffer=/path/to/mbuffer:port variant). This might work if the needed programs are all found in PATH by the same short name, but it fails miserably if custom full path names are required on different systems.

To avoid this limitation, ZnapZend now allows you to specify custom path and buffer size settings individually for each source and destination dataset in each backup/retention schedule configuration (using the znapzendzetup program, or the org.znapzend:src_mbuffer etc. ZFS dataset properties directly). The legacy configuration properties are now used as fallback defaults and may emit warnings whenever they are applied as such.

With this feature in place, the sender may have the only mbuffer running, without requiring one on the receiver (e.g. to limit impact to RAM usage on the backup server). You may also run an mbuffer on each side of the SSH tunnel, if networking latency is random and carries a considerable impact.

The remote system does not need anything other than ZFS functionality, an SSH server, a user account with prepared SSH key based log-in (optionally an unprivileged one with zfs allow settings on a particular target dataset dedicated to receiving your trees of backed-up datasets), and optionally a local implementation of the mbuffer program. Namely, as a frequently asked concern: the remote system requires neither ZnapZend nor its dependencies (perl, etc.) to be installed. (It may, however, be installed, e.g. if it is used for snapshots of that remote system's own datasets.)

Running

The znapzend daemon is responsible for doing the actual backups.

To see if your configuration is any good, run znapzend in noaction mode first.

znapzend --noaction --debug

If you don't want to wait for the scheduler to actually schedule work, you can also force immediate action by calling

znapzend --noaction --debug --runonce=<src_dataset>

Then, when you are happy with what you got, start it in daemon mode.

znapzend --daemonize

Best practice is to integrate znapzend into your system startup sequence, but you can also run it by hand. See the init/README.md for some inspiration.
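As one minimal sketch of such integration on a systemd-based Linux host (the unit path, binary path and options here are illustrative assumptions, not the file shipped in init/):

# /etc/systemd/system/znapzend.service (illustrative sketch)
[Unit]
Description=ZnapZend ZFS backup daemon
After=zfs.target

[Service]
# note: no --daemonize, systemd keeps the process in the foreground
ExecStart=/opt/znapzend-0.22.1/bin/znapzend --logto=/var/log/znapzend.log
Restart=on-failure

[Install]
WantedBy=multi-user.target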

Running by an unprivileged user

In order to allow a non-privileged user to use it, the following permissions are required on the ZFS filesystems (which you can assign with zfs allow):

Sending end: destroy,hold,mount,send,snapshot,userprop

Receiving end: create,destroy,mount,receive,userprop
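As a minimal sketch (the user name and dataset names are illustrative, adjust them to your layout), the delegations could be granted like this:

### on the sending host:
zfs allow -u znapzend destroy,hold,mount,send,snapshot,userprop tank/home
### on the receiving host (-d also covers descendant datasets):
zfs allow -du znapzend-server1 create,destroy,mount,receive,userprop backup/server1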

Caveat emptor: with some ZFS implementations the receiver may have further technical constraints. For example, non-root users with ZFS on Linux (as of 2022) may not write into a dataset with the property zoned=on (including one that is inherited or just received; zfs recv -x zoned and similar options have no effect to avoid replicating it), so this property has to be removed as soon as it appears on such a destination host with the initial replication stream, e.g. leave a snippet like this running on the receiving host before populating (zfs send -R ...) the destination for the first time:

while ! zfs inherit zoned backup/server1/rpool/rpool/zones/zone1/ROOT ; do sleep 0.1; done

You may also have to zfs allow by name all standard ZFS properties which your original datasets customize and which you want applied to the copy (e.g. to eventually restore them), so the non-privileged user may zfs set them on that dataset and its descendants, e.g.: compression,mountpoint,canmount,setuid,atime,exec,dedup, or, if you optimized the original storage, the likes of: logbias,primarycache,secondarycache,sync. Note that other options may be problematic long-term if actually used by the receiving server, e.g.: refreservation,refquota,quota,reservation,encryption
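For example (again with illustrative user and dataset names), such a property delegation on the destination might look like:

zfs allow -du znapzend-server1 compression,mountpoint,canmount,setuid,atime,exec,dedup backup/server1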

Generally, check the ZnapZend service (or manual run) logs for any errors and adapt the dataset permissions on the destination pool to satisfy its implementation specifics.

Running with restricted shell

A further security twist on using a non-privileged user on the receiving host is to restrict its shell so that only a few commands may be executed. After all, you leave its gates open with remote SSH access and a private key without a passphrase lying around somewhere. Several popular shells offer a restricted option; for example, BASH has a -r command line option and rbash symlink support.

NOTE: Some SSH server versions also allow to constrain the commands which a certain key-based session may use, and/or limit from which IP addresses or DNS names such sessions may be initiated. See documentation on your server's supported authorized_keys file format and key words for that extra layer.
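As one illustrative sketch of that extra layer (the address and the key material are placeholders; the available options depend on your SSH server, OpenSSH understands these), an authorized_keys entry can be prefixed with restrictions:

# ~znapzend-server1/.ssh/authorized_keys
from="192.0.2.10",no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...yourkey... root@server1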

On the original server, run ssh-keygen to generate an SSH key for the sending account (root or otherwise), possibly into a uniquely named file used just for this connection. You can specify a custom key file name, a non-standard port, acceptable encryption algorithms and other options in the SSH config:

# ~/.ssh/config
Host znapdest
        # "HostName" to access may even be "localhost" if the backup storage
        # system can dial in to the systems it collects data from (with SSH
        # port forwarding back to itself) -- e.g. running without a dedicated
        # public IP address (consumer home network, corporate firewall).
        #HostName localhost
        HostName znapdest.domain.org
        Port 22123
        # May list several SSH keys to try:
        IdentityFile /root/.ssh/id_ecdsa-znapdest
        IdentityFile /root/.ssh/id_rsa-znapdest
        User znapzend-server1
        IdentitiesOnly yes

On receiving server (example for Proxmox/Debian with ZFS on Linux):

  • Create receiving user with rbash as the shell, and a home directory:
useradd -m -s `which rbash` znapzend-server1
  • A restricted shell denies running programs and redirecting to path names that contain a path separator (the slash character, so even >/dev/null quiescing is blocked). Only shell built-ins and whatever is resolved via PATH (which is read-only after the profile file is interpreted) can be run. Typically a bin directory is crafted with the programs you allow to run, but unlike with chroot jails you don't have to fiddle with dynamic libraries, etc., to make the login usable for its purpose.

    • Prepare restricted shell profile (made and owned by root) in the user home directory:

      # ~znapzend-server1/.rbash_profile
      # Restricted BASH settings
      # https://www.howtogeek.com/718074/how-to-use-restricted-shell-to-limit-what-a-linux-user-can-do/
      PATH="$HOME/bin"
      export PATH
    • Neuter all other shell profiles so only the restricted one is consulted no matter how the user logs in (to avoid confusion):

      cd ~znapzend-server1/ && (
        rm -f .bash_history .bash_logout .bash_profile .bashrc .profile
        ln -s .rbash_profile .profile
        ln -s .rbash_profile .bashrc
        touch .hush_login )
    • (As root) Prepare ~/bin for the user:

      mkdir -p ~znapzend-server1/bin
      cd ~znapzend-server1/bin
      for CMD in mbuffer zfs ; do ln -frs "`which "$CMD"`" ./ ; done
      # NOTE: If this user also receives other backups, you can
      # symlink commands needed for that e.g. "rsync" or "git"
    • Maybe go as far as to make the home directory not writable by the user?

  • Prepare SSH login:

mkdir -p ~znapzend-server1/.ssh
vi ~znapzend-server1/.ssh/authorized_keys
### Paste public keys from IdentityFile you used on the original server
  • Restrict access to SSH files (they are ignored otherwise):
chown -R znapzend-server1: ~znapzend-server1/.ssh
chmod 700 ~znapzend-server1/.ssh
chmod 600 ~znapzend-server1/.ssh/authorized_keys
  • Unlock the user so it can log in (an SSH key will be used in practice, but unlocking in general may require a password to be set):
#usermod znapzend-server1 -p "`cat /dev/random | base64 | cut -b 1-20 | head -1`"
usermod -U znapzend-server1
  • Now is a good time to check that you can log in from the original backed-up system to the backup server (using the same account that the znapzend daemon would use, so the known SSH host keys get saved), e.g. that keys and encryption algorithms are trusted, names are known, ports are open... If you defined a Host znapdest like above, just run:
# Interactive login?
:; ssh znapdest

# Gets PATH to run stuff?
:; ssh znapdest zfs list
  • Dedicate a dataset (or several) you would use as destination for the znapzend daemon, and set ZFS permissions (see suggestions above), e.g.:
zfs create backup/server1
zfs allow -du znapzend-server1 create,destroy,mount,receive,userprop backup/server1

NOTE: When defining a "backup plan" you have to specify a basename for mbuffer, since the restricted shell forbids running a fully specified path name, e.g.:

znapzendzetup edit --mbuffer=mbuffer \
   SRC '6hours=>30minutes,1week=>6hours' rpool/export \
   DST '6hours=>30minutes,1week=>6hours,2weeks=>1day,4months=>1week,10years=>1month' \
       znapdest:backup/server1/rpool/export

Running in Container

znapzend is also available as a Docker container image. Depending on the permissions required, it may need to run as a privileged container.

docker run -d --name znapzend --device /dev/zfs --privileged \
    oetiker/znapzend:master

To configure znapzend, run in interactive mode:

docker exec -it znapzend /bin/sh
$ znapzendzetup create ...
# After exiting, restart znapzend container or send the HUP signal to
# reload config

By default, znapzend in the container runs with --logto /dev/stdout. If you wish to pass different arguments, override them at the end of the command:

docker run --name znapzend --device /dev/zfs --privileged \
    oetiker/znapzend:master znapzend --logto /dev/stdout --runonce --debug

Be sure not to daemonize znapzend in the container, as that exits the container immediately.

Troubleshooting

By default the znapzend daemon logs its progress and any problems to the local syslog as the daemon facility, so if the service misbehaves that is the first place to look. Alternatively, you can set up the service manifest to start the daemon with a different logging configuration (e.g. to a file or to stderr), perhaps with debug level enabled.
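For example (the log file path is just an illustration), the daemon could be started with verbose logging to a file instead of syslog, using the same flags mentioned elsewhere in this document:

znapzend --daemonize --logto=/var/log/znapzend.log --loglevel=debug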

If snapshots on the source dataset begin to pile up and are not cleaned up according to your expectations from the schedule you have defined, look in the logs particularly for summaries like ERROR: suspending cleanup source dataset because X send task(s) failed, followed by each failed dataset name and a short verdict (e.g. snapshot(s) exist on destination, but no common found on source and destination). Look further up in the logs for more details, and/or disable the znapzend service temporarily (to avoid run-time conflicts) and run a manual replication:

znapzend --debug --runonce=<src_dataset>/failed/child --inherited

...to collect even more under-the-hood details about what is happening and to get ideas about fixing that. See the manual page about --recursive and --inherited modifiers to --runonce mode for more information.

Typical issues include:

  • At least one destination is offline;
  • At least one destination is full and can not be written into;
  • A destination on SAN (iSCSI) or local device had transport issues and ZFS suspended all write operations until you fix and zpool clear it;
  • Source is full (or exceeded quota) and can not be written into, so the new snapshots to send can not be made until you delete older ones;
  • There are too many snapshots to clean up on the source or destination, and the operation fails because the command line becomes too long. You can try running with --features=oracleMode to process each snapshot name as a separate command, which is slower but more reliable in such a situation (see the example after this list);
  • There are snapshots on the destination, but none in common with the source, so incremental replication can not proceed without destroying much or all of the destination. Note that znapzend only looks at snapshot names filtered by the pattern your backup schedule would create; other znapzend options and/or a run of native zfs send|zfs recv may help if your destination has manually named snapshots that are in fact common with your source.
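For example (the dataset name is a placeholder), a manual retry of one source dataset with per-snapshot destroy commands and full debugging could look like:

znapzend --features=oracleMode --debug --runonce=<src_dataset>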

NOTE: Do not forget to re-enable the znapzend service after you have rectified the problem that prevented normal functionality.

One known problem relates to automated backups of datasets whose source can get cloned, renamed and promoted, typically boot environments (the rootfs of your OS installation, and the ZBE of local zones on illumos/Solaris systems, behave this way to benefit from snapshots during upgrades and to allow easily switching back to an older version if an update went bad). At this time (see issue #503) znapzend does not handle such datasets as branches of a larger ZFS tree and, with --autoCreation mode in place, just makes new complete datasets on the destination pool. On one hand this is wasteful for space (unless you use deduplication, which comes with other costs), and on the other hand the histories of snapshots seen in the same-named source and destination datasets can eventually no longer expose a "last-common snapshot", which causes an error like snapshot(s) exist on destination, but no common found on source and destination.

In case you have tinkered with the ZFS attributes that store ZnapZend retention policies, or if you have a severe version mismatch of ZnapZend (e.g. an update from a PoC or very old version), znapzendzetup list is quite useful to non-intrusively discover whatever your current version considers to be discrepancies in your active configuration.

Finally, note that yet-unreleased code from the master branch may include fixes to problems you face (see recent commits and closed pull requests), but may also introduce new bugs.

Statistics

If you want to know how much space your backups are using, try the znapzendztatz utility.

Support and Contributions

If you find a problem with znapzend, please open an issue on GitHub, but first check whether somebody has already posted similar symptoms or suggestions, and chime in with your +1 there.

If you'd like to get in touch, come to Gitter.

And if you have a code or documentation contribution, please send a pull request.

Enjoy!

Dominik Hassler & Tobi Oetiker 2024-05-03


znapzend's Issues

I am not able to remove a DST

"znapzendzetup edit" is not removing a DST. I am able to add a new one, edit a current one, but removal is not being saved.

no new remote filesets when running with --debug

I noticed when running znapzend in debug mode to watch the output that it won't create new filesystems on the remote end. If I restart it with "runonce", everything is created remotely as expected.
It isn't a big deal since I don't have a huge number of new filesystems, but I thought I'd ask.

DST plan names strip out '-' character.

I tried to use the destination machine name in the name of the plan (no particular reason, it just seemed natural), but we use '-' in machine names. The regular expression that builds the backup plan tags that it puts into zfs seems to strip everything after and including the first '-' found. Since '-' is an allowed character in a zfs user property, it seems overly aggressive to filter in this way.

znapzend 0.14 on RHEL7/OL7

Hi,

I am trying to compile znapzend-0.14 on one of my OL7 VMs and had to run the
/usr/bin/gmake get-thirdparty-modules command, due to missing dependencies. After installing ExtUtilsMakeMaker through yum that worked and resulted in this:

Building Mojolicious-5.69 ... OK
Successfully installed Mojolicious-5.69
--> Working on Mojo::IOLoop::ForkCall
Fetching http://www.cpan.org/authors/id/J/JB/JBERGER/Mojo-IOLoop-ForkCall-0.15.tar.gz ... OK
Configuring Mojo-IOLoop-ForkCall-0.15 ... OK
==> Found dependencies: IO::Pipely
--> Working on IO::Pipely
Fetching http://www.cpan.org/authors/id/R/RC/RCAPUTO/IO-Pipely-0.005.tar.gz ... OK
Configuring IO-Pipely-0.005 ... OK
Building IO-Pipely-0.005 ... OK
Successfully installed IO-Pipely-0.005
Building Mojo-IOLoop-ForkCall-0.15 ... OK
Successfully installed Mojo-IOLoop-ForkCall-0.15
21 distributions installed

However, when I am trying to run configure for znapzend-0.14 it errors out with this message:

checking checking for perl module 'Mojolicious'... Failed
checking checking for perl module 'Mojo::IOLoop::ForkCall'... Failed
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating lib/Makefile
config.status: creating init/znapzend.xml
config.status: WARNING: 'init/znapzend.xml.in' seems to ignore the --datarootdir setting

** SOME PERL MODULES ARE MISSING ******************************

If you know where perl can find the missing modules, set
the PERL5LIB environment variable accordingly.

You can also install a local copy of the Perl modules by running

/usr/bin/gmake get-thirdparty-modules

You may want to run configure again after that to make sure
all modules are installed

So, I did that, but to no avail. Any hints on how to get that resolved?

Thanks,
Stephan

How to snapshot SRC frequently and send to DST infrequently?

The use case here is that a dst goes into low-power mode when not poked for several minutes at a time, and the src is undergoing rapid evolution that may require use of a snapshot (say, to rollback, or to clone). Additionally, in this particular case, the deltas between snapshots are quite large compared to the foo pool.

src             = foo/bar
src_plan        = 6hours=>10minutes,1day=>20minutes,3days=>1hour,7days=>2hours,2weeks=>1day
dst_0           = user@remote:remotebackup/bar
dst_0_plan      = 7days=>6hours,30days=>12hours,90days=>1day,1year=>1week,10years=>1month

... but this causes a zfs send/recv every 10 minutes, and some of the 10-minute snapshots' sendstreams are big enough that the zfs recv does not finish within 10 minutes.

With this configuration, I think znapzend should only talk to remote every six hours.

(Also, but less importantly, sending an -i stream rather than sending an -I stream would reduce the traffic on the network and the extra work on the dst of receiving the intermediate snapshots only to have many of them destroyed by znapzend.)

per-dst ssh options

The defaults in ZFS.pm (-o Compression=yes -o CompressionLevel=1 -o Cipher=arcfour -o batchMode=yes -o ConnectTimeout=3) might be inappropriate for some dsts, so it may be useful to override them on a per-dst basis.

For instance, I get an almost order of magnitude greater throughput using -o Compression=no -o Cipher=[email protected] between two Mac OS X boxes using OpenSSH_6.6.1p1, OpenSSL 1.0.1i 6 Aug 2014. I obviously can't use that cipher with an OmniOS box. Also, some destinations might not allow arcfour.

Skip network replication if remote destination is not available

It would be nice to have the daemon skip the network replication of a snapshot if the remote destination is not available. It may be needed in the case of znapzend running on laptops without a network connection available: the daemon can continue to snapshot the datasets, and as soon as the destination becomes reachable, the replication can occur.

Now it stops and exits with error (while checking for mbuffer on the remote side):
sudo ./bin/znapzend --debug
ssh: connect to host port 22: Connection timed out
ssh: connect to host port 22: No route to host
ERROR: executable '/usr/local/bin/mbuffer' does not exist on root@

This change is also needed on the znapzendztatz command.

znapzend doesn't have an option to keep at least "x" snapshots on source

I am looking into possibly switching from https://github.com/jollyjinx/ZFS-TimeMachine to znapzend. I realized, though, that znapzend doesn't seem to have an option to at least keep x number of snapshots which is helpful in the following circumstance:

  • The computer was asleep for a longer time (say a week's vacation) and is now woken up and cleans up old snapshots according to time intervals, but the file I just modified was in the last 10 existing snapshots and now those snapshots are gone. This option essentially mitigates edge cases where the usual clearing schedule shouldn't be applied because there was a downtime/period where no snapshots were created.

znapzend doesn't ship snapshots to remote DST, if a clone is present on the DST dataset

It seems that znapzend tries to clear out all other snapshots on the destination which don't "belong" to its backup plan. The issue is that I am using a clone to back up files from the NFS, which takes considerably longer than the interval is set up for. I only found that out through trial and error, though.

If a clone is present on the destination, the removal of the underlying snapshot is prevented and trying to remove it will result in an error from zfs destroy. However, znapzend then bails out and doesn't even attempt to continue with its actions according to its backup plan.

What is the reasoning behind this? I don't see where an existing clone would interfere with znapzend's backup in any way.

Having the clone on the source also doesn't do any good, since znapzend seems not capable of determining that the dataset it's about to chew on is actually a clone, and thus starts to perform a complete zfs send/recv to the destination.

-Stephan

Feature: znapreceive

Hi, could there be a possibility to make znapzend pull ZFS filesystems instead of sending them?

thank you
frederic

use ISO8601 timestamp by default

It really helps if the default tsformat is ISO8601 format. In znapzendzetup:
$cfg{tsformat} = $opts->{tsformat} || '%Y-%m-%dT%H:%M:%S%z';

scaling recursive to a large number of datasets

I have several thousands of filesystems (user home directories) and would like to use znapzend recursively to back those up to a remote host. I configured the dataset with recursive=on, but running znapzend with --debug --no-action --runonce I can see that it is opening a new ssh connection to the remote host for every child dataset just to list their snapshots (the same is also done locally). This is not feasible with a large number of child datasets; the same information could be acquired with a single 'zfs list -r' instead.

It hasn't gotten to the sending part yet, so I can't speak as to issues there yet. Our current backup script uses a combination of zfs-auto-snapshot (which does snapshot -r, destroy -r), zfs list -r (both locally and remotely) as well as zfs send -R (because -r is not available in illumos zfs) to limit the number of ssh connections.

hashbangs should point to perl used at install-time

I don't think it's a good idea to assume that the perl used to build is the same one that will be first in PATH at runtime. We package our own perl into a prefix that is not in runtime PATH on OmniOS but znapzend hashbangs use '/usr/bin/env perl' which will be different depending on PATH.

I think it might be a good idea to replace that with the absolute path to the correct perl, which the configure script knows at build time.

cannot destroy snapshot(s) and cannot send snapshots errors

Hello,

using znapzend last version 0.13.0 everything running well and smooth
The source machine (10.23.28.100) connects to the target machine (10.23.28.101) via a crossover 1Gbps cable; I'm using a separate NIC on each machine for replication purposes.

but sometimes I'm getting errors like these

why do these errors happen?

regards

Hafiz

Oct 17 12:02:08 canomnios znapzend[19771]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 18 00:00:07 canomnios znapzend[15689]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Read failed: Event "close" failed: ERROR: executing receive process at /opt/znapzend/bin/../lib/ZnapZend/ZFS.pm line 377.
Oct 18 00:00:39 canomnios znapzend[15672]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer 59c11d26092d1f562b4019fdc96592c8 failed: ERROR: cannot send snapshots to reppool/ora128k on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 18 00:00:39 canomnios znapzend[15696]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer e2e93b10152725d877f145df1103dde4 failed: ERROR: cannot send snapshots to reppool/canvmdk21 on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 18 00:02:35 canomnios znapzend[15693]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 18 12:01:15 canomnios znapzend[10636]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 19 00:00:38 canomnios znapzend[5492]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer 3f3726abfb7f58799984da6bd6622fdd failed: ERROR: cannot send snapshots to reppool/canvmdk24 on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 19 00:00:38 canomnios znapzend[5485]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer a749c9cf11bc89b0362d7ea6045ee984 failed: ERROR: cannot send snapshots to reppool/ora8k on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 19 00:01:10 canomnios znapzend[5490]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 19 12:01:21 canomnios znapzend[413]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 20 00:01:45 canomnios znapzend[25239]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000

Oct 20 12:00:37 canomnios znapzend[22854]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer b229711bf55b77f2d64bfbbb346035c7 failed: ERROR: cannot send snapshots to reppool/ora128k on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 20 12:02:12 canomnios znapzend[22850]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 21 00:00:37 canomnios znapzend[20666]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer e8e71c24bfb1bff1049cfc5600c65aac failed: ERROR: cannot send snapshots to reppool/orgatex on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 21 00:02:01 canomnios znapzend[20668]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 21 12:02:10 canomnios znapzend[17486]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 22 00:00:38 canomnios znapzend[12360]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer 6e5c9e53e122e8f87e1bf019b5bd4af4 failed: ERROR: cannot send snapshots to reppool/ora128k on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 22 00:00:38 canomnios znapzend[12351]: [ID 702911 daemon.warning] Mojo::Reactor::Poll: Timer 1c3201908d1224bcafd2dcc8a17de43e failed: ERROR: cannot send snapshots to reppool/ora8k on 10.23.28.101 at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.
Oct 22 00:02:18 canomnios znapzend[12335]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000
Oct 22 12:02:04 canomnios znapzend[7250]: [ID 702911 daemon.warning] ERROR: cannot destroy snapshot(s) zpool1/ZIMBRA@2014-10-09-120000

use mbuffer with timeout

I'd suggest implementing mbuffer's timeout option, since I found that once mbuffer gets messed up, subsequent snapshots are not transferred over to the remote box. Just today I was wondering why the shipping of snapshots had ceased, and found the following on the receiver:

root@nfsvmpool02:/root# ps -ef | grep mbuffer
root 11710 11705 0 23:59:25 ? 0:00 bash -c /opt/csw/bin/mbuffer -q -s 128k -m 2G -4 -I 10000|zfs recv -F sasTank/n
root 7436 1 0 20:23:45 ? 0:00 bash -c /opt/csw/bin/mbuffer -q -s 128k -m 2G -4 -I 10000|zfs recv -F sasTank/n
root 7437 7436 13 20:23:45 ? 722:39 /opt/csw/bin/mbuffer -q -s 128k -m 2G -4 -I 10000
root 11784 11779 0 23:59:38 ? 0:00 bash -c /opt/csw/bin/mbuffer -q -s 128k -m 4G -4 -I 10001|zfs recv -F sataTank/
root 11711 11710 13 23:59:25 ? 507:01 /opt/csw/bin/mbuffer -q -s 128k -m 2G -4 -I 10000
root 20974 20955 0 08:26:43 pts/1 0:00 grep mbuffer
root 11785 11784 13 23:59:38 ? 497:36 /opt/csw/bin/mbuffer -q -s 128k -m 4G -4 -I 10001

After I had killed all mbuffer processes, znapzend started shipping snapshots again. However, I'd like to give mbuffer the -W option so it exits by itself.

-Stephan

Doesn't run on FreeBSD 9.3

Hello, I tried to install Znapzend 0.14.0 following both the binary installation and the compile installation process (including the make get-thirdparty-modules step), as per the readme file, and in both cases when trying to run /opt/znapzend-0.14.0/bin/znapzendzetup I get the following error message:

root@abch026:/opt/znapzend-0.14.0/bin # /opt/znapzend-0.14.0/bin/znapzendzetup
Can't locate Mojo/Base.pm in @inc (you may need to install the Mojo::Base module) (@inc contains: /opt/znapzend-0.14.0/bin/../thirdparty/lib/perl5 /opt/znapzend-0.14.0/bin/../lib /usr/local/lib/perl5/site_perl/mach/5.18 /usr/local/lib/perl5/site_perl /usr/local/lib/perl5/5.18/mach /usr/local/lib/perl5/5.18 /usr/local/lib/perl5/site_perl/5.18 /usr/local/lib/perl5/site_perl/5.18/mach .) at /opt/znapzend-0.14.0/bin/znapzendzetup line 12.
BEGIN failed--compilation aborted at /opt/znapzend-0.14.0/bin/znapzendzetup line 12.

root@abch026:/opt/znapzend-0.14.0/bin # perl --version
This is perl 5, version 18, subversion 4 (v5.18.4) built for amd64-freebsd-thread-multi

I haven't found a way to resolve that on my own. Can someone help?

Thank you and have a great and Happy New Year !

Cedric Tineo

-snap-command API

It would be useful if scripts run from pre and post commands could get current backup parameters, for example current snapshot name, etc.

Replication failed in network mode

Hello!
Faced a problem with direct replication via mbuffer. Everything works just fine in "runonce" mode for every dataset, but some of them (a different one every time) fail in daemon mode.
I've read the suggestion about timeout and now the command looks this way:
/opt/znapzend/bin/znapzend --daemonize --pidfile=/dev/null --connectTimeout=180 --logto=/var/log/znapzend.log --loglevel=debug.
Increasing the timeout to 3600 doesn't help. Buffer size doesn't matter either - I changed it from 100M to 4GB.
Configurations of the datasets are similar and look like this:

dst_uranus=root@uranus:backup/users/gate
dst_uranus_plan=3days=>1days,3months=>1months
enabled=on
mbuffer=/usr/sbin/mbuffer:33333
mbuffer_size=100M
post_znap_cmd=off
pre_znap_cmd=off
recursive=off
src=tank/users/gate
src_plan=3days=>1days,3months=>1months
tsformat=%Y-%m-%d-%H%M%S

On both sides OmniOS r151012 and ZnapZend 0.14 are installed.
And the log contains records like these:

[Wed Jun  3 21:09:45 2015] [info] refreshing backup plans...
[Wed Jun  3 21:09:55 2015] [info] found a valid backup plan for tank/familyArch/photo...
[Wed Jun  3 21:09:55 2015] [info] found a valid backup plan for tank/familyArch/video...
[Wed Jun  3 21:09:55 2015] [info] found a valid backup plan for tank/users/gate...
[Wed Jun  3 21:09:55 2015] [info] found a valid backup plan for tank/users/nata...
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/familyArch/photo spawned (187)
[Thu Jun  4 00:00:00 2015] [info] creating snapshot on tank/familyArch/photo
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/users/gate spawned (188)
[Thu Jun  4 00:00:00 2015] [info] creating snapshot on tank/users/gate
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/familyArch/video spawned (190)
[Thu Jun  4 00:00:00 2015] [info] creating snapshot on tank/familyArch/video
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/users/nata spawned (192)
[Thu Jun  4 00:00:00 2015] [info] creating snapshot on tank/users/nata
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/familyArch/photo done (187)
[Thu Jun  4 00:00:00 2015] [debug] send/receive worker for tank/familyArch/photo spawned (196)
[Thu Jun  4 00:00:00 2015] [info] starting work on backupSet tank/familyArch/photo
[Thu Jun  4 00:00:00 2015] [debug] sending snapshots from tank/familyArch/photo to root@uranus:backup/familyArch/photo
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/familyArch/video done (190)
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/users/nata done (192)
[Thu Jun  4 00:00:00 2015] [debug] snapshot worker for tank/users/gate done (188)
[Thu Jun  4 00:00:00 2015] [debug] send/receive worker for tank/users/gate spawned (202)
[Thu Jun  4 00:00:00 2015] [info] starting work on backupSet tank/users/gate
[Thu Jun  4 00:00:00 2015] [debug] sending snapshots from tank/users/gate to root@uranus:backup/users/gate
[Thu Jun  4 00:00:00 2015] [debug] send/receive worker for tank/users/nata spawned (203)
[Thu Jun  4 00:00:00 2015] [info] starting work on backupSet tank/users/nata
[Thu Jun  4 00:00:00 2015] [debug] sending snapshots from tank/users/nata to root@uranus:backup/users/nata
[Thu Jun  4 00:00:00 2015] [debug] send/receive worker for tank/familyArch/video spawned (205)
[Thu Jun  4 00:00:00 2015] [info] starting work on backupSet tank/familyArch/video
[Thu Jun  4 00:00:00 2015] [debug] sending snapshots from tank/familyArch/video to root@uranus:backup/familyArch/video
[Thu Jun  4 00:00:01 2015] [debug] receive process on uranus spawned (220)
[Thu Jun  4 00:00:01 2015] [debug] receive process on uranus spawned (222)
[Thu Jun  4 00:00:01 2015] [debug] receive process on uranus spawned (224)
[Thu Jun  4 00:00:02 2015] [debug] receive process on uranus spawned (227)
[Thu Jun  4 00:00:05 2015] [debug] receive process on uranus done (220)
[Thu Jun  4 00:00:05 2015] [warn] Mojo::Reactor::Poll: Read failed: ERROR: executing receive process at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.

[Thu Jun  4 00:00:06 2015] [debug] cleaning up snapshots on root@uranus:backup/familyArch/photo
[Thu Jun  4 00:00:06 2015] [debug] cleaning up snapshots on tank/familyArch/photo
[Thu Jun  4 00:00:06 2015] [info] done with backupset tank/familyArch/photo in 6 seconds
[Thu Jun  4 00:00:06 2015] [debug] send/receive worker for tank/familyArch/photo done (196)
[Thu Jun  4 00:03:18 2015] [warn] Mojo::Reactor::Poll: Timer 236db45a14b3b94e68d2de5252a9cfa3 failed: ERROR: cannot send snapshots to backup/familyArch/video on uranus at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.

[Thu Jun  4 00:03:18 2015] [warn] Mojo::Reactor::Poll: Timer 5e36a4ef53aadd1b78a27e5bb1c8370a failed: ERROR: cannot send snapshots to backup/users/nata on uranus at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.

[Thu Jun  4 00:03:18 2015] [warn] Mojo::Reactor::Poll: Timer eb1ed86915d310f938a038cbec7533d1 failed: ERROR: cannot send snapshots to backup/users/gate on uranus at /opt/znapzend/bin/../thirdparty/lib/perl5/Mojo/IOLoop.pm line 25.

[Thu Jun  4 00:03:18 2015] [debug] cleaning up snapshots on root@uranus:backup/familyArch/video
[Thu Jun  4 00:03:18 2015] [debug] cleaning up snapshots on tank/familyArch/video
[Thu Jun  4 00:03:19 2015] [info] done with backupset tank/familyArch/video in 199 seconds
[Thu Jun  4 00:03:19 2015] [debug] send/receive worker for tank/familyArch/video done (205)
[Thu Jun  4 00:03:19 2015] [debug] cleaning up snapshots on root@uranus:backup/users/nata
[Thu Jun  4 00:03:19 2015] [debug] cleaning up snapshots on root@uranus:backup/users/gate
[Thu Jun  4 00:03:20 2015] [debug] cleaning up snapshots on tank/users/nata
[Thu Jun  4 00:03:20 2015] [debug] cleaning up snapshots on tank/users/gate
[Thu Jun  4 00:03:20 2015] [info] done with backupset tank/users/nata in 200 seconds
[Thu Jun  4 00:03:20 2015] [debug] send/receive worker for tank/users/nata done (203)
[Thu Jun  4 00:03:20 2015] [info] done with backupset tank/users/gate in 200 seconds
[Thu Jun  4 00:03:20 2015] [debug] send/receive worker for tank/users/gate done (202)

I have no idea what could it be. Please help! :-)

ZnapZend exits if DST is offline

Running ZnapZend on SmartOS. I have a definition like:

/opt/znapzend/bin/znapzendzetup create --recursive --mbuffer=/opt/local/bin/mbuffer
SRC '1hour=>5minute,24hour=>1hour,7day=>1day,5week=>1week,6month=>1month' zones/data/shareddata1
DST:backup '1hour=>5minute,24hour=>1hour,7day=>1day,5week=>1week,6month=>1month' [email protected]:zones/data/shareddata1

So it is taking periodic snapshots and also sending the snapshots to a backup computer. Usually the backup computer is on. The issue I've found is that, if the backup computer is off for some reason but the primary computer turns on, ZnapZend times out on SSH and exits when it can't talk to the backup computer. SmartOS tries restarting it a few times but eventually gives up.

So then I have no local snapshotting done either. Is there a way around this? Like ideally it would just snapshot locally if it times out with SSH, and then resume the DST part once it can connect to the backup server.

Thanks.

mbuffer vs ssh

I noticed my znapzend was only getting about 1.2MB/sec and thought that was pretty low, so I went to look, and it looks like ssh is being used for the actual transport, with mbuffer only being used on the far end, e.g. client side zfs -> ssh --------- server ssh -> mbuffer -> zfs

Is there a reason that it doesn't use mbuffer TCP directly?

Here's my evidence:
server side:
lots of reads on fd0 of mbuffer, so pfiles on mbuffer pid:
root@x4275-3-15-20:/root# pfiles 3603
3603: /opt/omni/bin/mbuffer -q -s 128k -W 60 -m 10M
Current rlimit: 256 file descriptors
0: S_IFSOCK mode:0666 dev:532,0 ino:5813 uid:0 gid:0 rdev:0,0
O_RDWR
SOCK_STREAM
SO_SNDBUF(16384),SO_RCVBUF(5120)
sockname: AF_UNIX
peer: sshd[3597] zone: global[0]
1: S_IFSOCK mode:0666 dev:532,0 ino:5812 uid:0 gid:0 rdev:0,0
O_RDWR
SOCK_STREAM
SO_SNDBUF(16384),SO_RCVBUF(5120)
sockname: AF_UNIX
peer: sshd[3597] zone: global[0]
2: S_IFSOCK mode:0666 dev:532,0 ino:5812 uid:0 gid:0 rdev:0,0
O_RDWR
SOCK_STREAM
SO_SNDBUF(16384),SO_RCVBUF(5120)
sockname: AF_UNIX
peer: sshd[3597] zone: global[0]
3: S_IFIFO mode:0000 dev:521,0 ino:3278 uid:0 gid:0 rdev:0,0
O_RDWR

client side:
root@x4275-1-4-35:/export/d# ptree 11572
6278 screen
6279 /usr/bin/bash
11572 /opt/niksula/perl5/bin/perl /opt/niksula/bin/znapzend --connectTimeou
11585 /opt/niksula/perl5/bin/perl /opt/niksula/bin/znapzend --connectTime
11588 sh -c zfs send -I zpool1/tech-0@2015-05-15-172248 zpool1/tech-0@2
11589 zfs send -I zpool1/tech-0@2015-05-15-172248 zpool1/tech-0@2015-
11590 ssh -o Compression=yes -o CompressionLevel=1 -o batchMode=yes -

I've done a previous mbuffer over the WAN and was getting 70MB/sec on a snapshot transfer. Since this is a few GB of stuff at 1MB/sec, it's taking a long while.

Preventing sending snapshots to a remote host in certain situations

I'm testing using znapzend to sync a master SmartOS server to a slave, for backup purposes. It seems to have a lot of potential for this application. Once the VM is created on both servers, I have znapzend sending the snapshots to the backup server every 5 minutes, so in the event of a catastrophic hardware failure (say the power supply starts on fire!), it is just a matter of doing "vmadm start" to start the VMs on the slave server.

The problem scenario I see, though, is this: the master server has a failure, say a network card goes down, and so you start up the VMs on the slave server. Now the operator replaces the network card in the master and starts up that server, and then znapzend will send the snapshots over to the backup server (which has been running and updating data for a couple of days now), and poof, all your changes are gone as it overwrites all the snapshots.

Now, in practice this is probably rare because it depends on someone not really thinking about the replacement process, and also the VMs would need to be stopped on the backup server or else the snapshots are mounted and won't be overwritten...but I think it could still happen pretty easily.

Can you think of a way I can get znapzend to not send snapshots to a remote host in certain situations? I think the best situation would be if it could somehow detect if a remote FS has been updated to be newer than the one it is trying to send over, and error out if that is the case as you wouldn't want to overwrite a newer FS with an older one. A lesser alternative would be to have it check some sort of filesystem property that you could set to indicate that you don't want it overwritten by znapzend. That would rely on the operator remembering to set that property, so not as automatic, but still could be functional.

Thoughts? Thanks for this tool.

Don't remove snaps on SRC if this would break further zfs sends

Hi,

I have encountered two instances where znapzend removed the last common snapshot from SRC, breaking the chain needed to zfs send/recv with DST any further. I have not yet determined what exactly happened on DST, but I assume some issues with mbuffer or the network. Anyway, my schedules are configured to keep 2 days' worth of snaps with 6 snaps a day on SRC, while on the DST there shall be kept 6 days' worth of snaps with 6 snaps a day.

Now, whatever made znapzend on SRC think that it was successful in transferring the current snapshot over to DST, it actually wasn't, and after a couple of days went by, znapzend was unable to send any snaps over, since there was no common snapshot any longer.

So, wouldn't it be better if znapzend actually checked with DST before removing the potentially last common snapshot from SRC?

Cheers,
Stephan

znapzend with mbuffer, remote exits and hangs the sender

The tree looks something like this

 4186 ?        Ss     0:00 /software/tools/packages/plenv/versions/5.18.2/bin/perl /software/packages/znapzend-0.14.0/bin/znapzend
 4840 ?        S      0:00  \_ /software/tools/packages/plenv/versions/5.18.2/bin/perl /software/packages/znapzend-0.14.0/bin/znapzend
 7973 ?        Z      0:00      \_ [znapzend] <defunct>
 7999 ?        S      0:00      \_ sh -c zfs send -I bioinfo2/backup@2015-05-23-000000 bioinfo2/backup@2015-05-27-084500|/software/packages/mbuffer-20150412/bin/mbuffer -q -s 128k -W 60 -m 1G -O '192.168.170.8:9
 8001 ?        Sl    19:01          \_ /software/packages/mbuffer-20150412/bin/mbuffer -q -s 128k -W 60 -m 1G -O 192.168.170.8:9092

problem starting binary

hi, i get this message when starting znapzendzetup binary on openindiana oi_151a7:

./znapzendzetup
Version mismatch between Carp 1.08 (/usr/perl5/5.10.0/lib/Carp.pm) and Carp::Heavy 1.3301 (/home/frederic/znapzend-prebuilt-0.14.0/bin/../thirdparty/lib/perl5/Carp/Heavy.pm). Did you alter @inc after Carp was loaded?
Compilation failed in require at /usr/perl5/5.10.0/lib/File/Temp.pm line 152.
Compilation failed in require at ./znapzendzetup line 10.
BEGIN failed--compilation aborted at ./znapzendzetup line 10.

can somebody help?
thank you
frederic

exceptions under root of --recursive

It would be nice if one could change one or more of the inherited properties under a dataset with org.znapzend:recursive=on and have znapzend and znapzendzetup accept that.

Znapzend doesn't create ZFS bookmarks accompanying the ZFS Snapshots

As discussed here: https://serverfault.com/questions/714238/is-it-possible-to-use-bookmarks-with-znapzend?noredirect=1#,

for use cases where the destination isn't always available, it is (probably) a cheap and sensible thing to create bookmarks alongside the snapshots, to be able to make incremental backups after extended periods of time even when no common snapshot between source and target exists.

I'll have a look into the code (to see whether I can wrap my head around it) and report back.

week becoming weeks and day becoming days

When I create a backup as suggested by the manual using week and day as the options, znapzendzetup writes weeks and days to the znapzend attributes on the dataset:

root@soln40l:~# znapzendzetup create --mbuffer=/opt/csw/bin/mbuffer --recursive --mbuffersize=1G --tsformat='%Y-%m-%d-%H%M%S' SRC '1week=>1day' tank DST:local '1week=>1day' backupTank
*** backup plan: tank ***
dst_local = backupTank
dst_local_plan = 1week=>1day
enabled = on
mbuffer = /opt/csw/bin/mbuffer
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = on
src = tank
src_plan = 1week=>1day
tsformat = %Y-%m-%d-%H%M%S

Do you want to save this backup set [y/N]? y

root@soln40l:~# zfs get all tank | grep znap
tank org.znapzend:src_plan 1weeks=>1days local
tank org.znapzend:post_znap_cmd off local
tank org.znapzend:mbuffer_size 1G local
tank org.znapzend:dst_local backupTank local
tank org.znapzend:recursive on local
tank org.znapzend:pre_znap_cmd off local
tank org.znapzend:mbuffer /opt/csw/bin/mbuffer local
tank org.znapzend:tsformat %Y-%m-%d-%H%M%S local
tank org.znapzend:dst_local_plan 1weeks=>1days local
tank org.znapzend:enabled on local

However, running znapzendzetup list against the dataset again returns "day" and "week":

root@soln40l:~# znapzendzetup list tank
*** backup plan: tank ***
dst_local = backupTank
dst_local_plan = 1week=>1day
enabled = on
mbuffer = /opt/csw/bin/mbuffer
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = on
src = tank
src_plan = 1week=>1day
tsformat = %Y-%m-%d-%H%M%S

two way replication with mbuffer

Hello,

I'm running one-way mbuffer replication with success.

Is there any way to do two-way replication with znapzend?

Like this:
Node1 has source1 and dest2 (replicated from Node2) filesystems; Node2 has dest1 (replicated from Node1) and source2 filesystems.

node1 source1 to node2 dest1

node2 source2 to node1 dest2
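A sketch of such a setup (hypothetical retention plans; each node runs its own znapzend daemon and pushes its local source pool to the other node):

# on node1: replicate source1 to dest1 on node2
znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer --mbuffersize=1G \
  --tsformat='%Y-%m-%d-%H%M%S' SRC '7d=>1d' source1 DST:node2 '30d=>1d' root@node2:dest1
# on node2: replicate source2 to dest2 on node1
znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer --mbuffersize=1G \
  --tsformat='%Y-%m-%d-%H%M%S' SRC '7d=>1d' source2 DST:node1 '30d=>1d' root@node1:dest2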

Thanks

Usage of less privileged account

We have been using zfs replication with our home-grown scripts for several years.
Some weeks ago I started integrating mbuffer into our scripts.
Last week I found this project and it looks very promising.
But I would like to use a less privileged account instead of root, as we did before.

I changed ZFS.pm and added pfexec to some zfs commands.
znapzend is started via SMF through a script of our own. Essentially it uses
"sudo -u $APPUSER $APP --features=oracleMode,recvu --logto=$LOGFILE --loglevel=info --daemonize --pidfile=$PIDFILE"
where the variables are defined in the script.

When I use the line above interactively as user APPUSER, but without --daemonize, it works.
As a service with --daemonize I get the following errors:
[Sun Dec 14 07:00:01 2014] [info] starting work on backupSet pool1/nfs/nfs1
[Sun Dec 14 07:00:02 2014] [warn] Can't use string ("pfexec") as an ARRAY ref while "strict refs" in use at /opt/bin/../lib/ZnapZend/ZFS.pm line 36.
[Sun Dec 14 07:00:02 2014] [warn] ERROR: suspending cleanup source dataset because at least one send task failed

My changes are in https://github.com/grueni/znapzend/tree/upstream1.
Could you give me a hint why there are differences between the interactive and background mode?
If we can solve the problem and you are interested, I will prepare a proper pull request.
OS is Oracle Solaris 11.2.
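As an aside (not from the original report), ZFS delegated administration is another way to avoid running as root; a minimal sketch with hypothetical user and destination dataset names:

# grant a backup user the rights znapzend needs on the source side
zfs allow -u appuser send,snapshot,hold,destroy,mount pool1/nfs/nfs1
# and on the receiving side
zfs allow -u appuser receive,create,mount,destroy backup/nfs1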

Feature request: remote source

For various reasons, I'd like to be able to do a "pull"-style backup, like this (dev environment):

root@kotick:/opt/znapzend-0.14.0/bin# znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer
--mbuffersize=1G --tsformat='%Y-%m-%d-%H%M%S'
SRC '7d=>1h,30d=>4h,90d=>1d' root@mowgli:zfs1/test
DST:a '7d=>1h,30d=>4h,90d=>1d,1y=>1w,10y=>1month' zfs1/backup/test
*** backup plan: root@mowgli:zfs1/test ***
dst_a = zfs1/backup/test
dst_a_plan = 7days=>1hour,30days=>4hours,90days=>1day,1year=>1week,10years=>1month
enabled = on
mbuffer = /usr/bin/mbuffer
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = on
src = root@mowgli:zfs1/test
src_plan = 7days=>1hour,30days=>4hours,90days=>1day
tsformat = %Y-%m-%d-%H%M%S

Do you want to save this backup set [y/N]? y
cannot open 'root@mowgli:zfs1/test': invalid dataset name
ERROR: could not set property post_znap_cmd on root@mowgli:zfs1/test

Some of my backup machines are not online 24/7 and are brought online only during certain times of the day (cold storage), and I would love to be able to execute backups from those hosts. How much work would be needed to implement that?
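For comparison, a manual pull-style transfer outside znapzend (hypothetical snapshot names) would look roughly like this, with the backup host initiating the connection:

# the backup host pulls an incremental stream from the source and receives locally
ssh root@mowgli "zfs send -I zfs1/test@2015-08-01-000000 zfs1/test@2015-08-11-000000" \
  | zfs recv -F zfs1/backup/test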

znapzendzetup without options --pfexec --sudo

With 1705b8c pfexec/sudo can be used. Thanks for this commit.
But there is some other work to do.

znapzendzetup and znapzendztatz must understand the new options too. At least znapzendzetup connects to the remote system and checks whether mbuffer exists; this must fail if used without pfexec.

Install with brew on OSX

It would be very convenient for znapzend users on OS X to use brew for installation and updates.
Something like the following Homebrew formula, znapzend.rb, needs to be shipped to @Homebrew.

https://gist.github.com/lenada/e773a65aa9d37c0eb42c
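For reference, once such a formula is accepted, installation and updates would presumably follow the usual Homebrew workflow (hypothetical until the formula actually ships):

brew install znapzend
brew upgrade znapzend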

@oetiker, what is your general take on all those package managers out there? Would you be willing to maintain and update some of them yourself, for example the Homebrew formula, Linux apt, etc., or are you rather looking for external maintainers?
Should I open a PR with my formula against https://github.com/Homebrew/homebrew? Who should do so on the next release?

by the way, znapzend is a really great tool :) appreciated!

Report date of last successful round and any unconfigured src_datasets

Enhancement: znapzendztatz

A report showing at what time, or how long ago, the last successful snapshot round completed per src_dataset would help me verify efficiently that everything is OK.

A report listing all src_datasets that are not configured in znapzend would be a helpful safeguard so that nothing gets forgotten.
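Until such a report exists, a rough manual check of the newest snapshot per src_dataset can be done with zfs itself (hypothetical dataset name):

# show the newest snapshot and its creation time for one src_dataset
zfs list -H -t snapshot -o name,creation -s creation -d 1 tank/data | tail -1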

Maintenance state with svcadm and Oracle 11.2

Hi there,
Since upgrading to Oracle 11.2 from 11.1, znapzend (pre-built) goes into maintenance state with the following message: "Restarting too quickly, changing state to maintenance."
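For triage (not from the original report), the usual SMF steps apply; the service name and log path below are assumptions and depend on how the manifest was installed:

# show why the service entered maintenance, inspect its log, then clear it
svcs -xv znapzend
tail -50 /var/svc/log/application-znapzend:default.log
svcadm clear znapzend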

mbuffer: fatal: unknown option "-W" with mbuffer version 20110119

Ubuntu 14.04.3, fully updated. When running any znapzend job with the mbuffer option specified, the job dies with the following error:

mbuffer: fatal: unknown option "-W"
cannot receive: failed to read from stream
warning: cannot send 'zfs1/backup/temp@2015-08-11-150546': Broken pipe

See below:

root@kotick:/opt/znapzend-0.14.0/bin# ./znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer --mbuffersize=1G --tsformat='%Y-%m-%d-%H%M%S' SRC '7d=>1h,30d=>4h,90d=>1d' zfs1/backup/temp DST:a '7d=>1h,30d=>4h,90d=>1d,1y=>1w,10y=>1month' zfs1/backup/temptest
*** backup plan: zfs1/backup/temp ***
dst_a = zfs1/backup/temptest
dst_a_plan = 7days=>1hour,30days=>4hours,90days=>1day,1year=>1week,10years=>1month
enabled = on
mbuffer = /usr/bin/mbuffer
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = on
src = zfs1/backup/temp
src_plan = 7days=>1hour,30days=>4hours,90days=>1day
tsformat = %Y-%m-%d-%H%M%S

Do you want to save this backup set [y/N]? y
root@kotick:/opt/znapzend-0.14.0/bin# ./znapzendzetup list
*** backup plan: zfs1/backup/temp ***
dst_a = zfs1/backup/temptest
dst_a_plan = 7days=>1hour,30days=>4hours,90days=>1day,1year=>1week,10years=>1month
enabled = on
mbuffer = /usr/bin/mbuffer
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = on
src = zfs1/backup/temp
src_plan = 7days=>1hour,30days=>4hours,90days=>1day
tsformat = %Y-%m-%d-%H%M%S

root@kotick:/opt/znapzend-0.14.0/bin# ./znapzend --noaction --debug --runonce=zfs1/backup/temp
[Tue Aug 11 15:04:10 2015] [info] refreshing backup plans...
[Tue Aug 11 15:04:10 2015] [info] found a valid backup plan for zfs1/backup/temp...
[Tue Aug 11 15:04:10 2015] [debug] snapshot worker for zfs1/backup/temp spawned (9569)
[Tue Aug 11 15:04:10 2015] [info] creating recursive snapshot on zfs1/backup/temp

zfs snapshot -r zfs1/backup/temp@2015-08-11-150410

[Tue Aug 11 15:04:10 2015] [debug] snapshot worker for zfs1/backup/temp done (9569)
[Tue Aug 11 15:04:10 2015] [debug] send/receive worker for zfs1/backup/temp spawned (9570)
[Tue Aug 11 15:04:10 2015] [info] starting work on backupSet zfs1/backup/temp

zfs list -H -r -o name zfs1/backup/temp

[Tue Aug 11 15:04:10 2015] [debug] sending snapshots from zfs1/backup/temp to zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temp

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temptest

[Tue Aug 11 15:04:10 2015] [debug] cleaning up snapshots on zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temp

[Tue Aug 11 15:04:10 2015] [debug] cleaning up snapshots on zfs1/backup/temp
[Tue Aug 11 15:04:10 2015] [info] done with backupset zfs1/backup/temp in 0 seconds
[Tue Aug 11 15:04:10 2015] [debug] send/receive worker for zfs1/backup/temp done (9570)
root@kotick:/opt/znapzend-0.14.0/bin# ./znapzend --debug --runonce=zfs1/backup/temp
[Tue Aug 11 15:05:46 2015] [info] refreshing backup plans...
[Tue Aug 11 15:05:46 2015] [info] found a valid backup plan for zfs1/backup/temp...
[Tue Aug 11 15:05:46 2015] [debug] snapshot worker for zfs1/backup/temp spawned (9649)
[Tue Aug 11 15:05:46 2015] [info] creating recursive snapshot on zfs1/backup/temp

zfs snapshot -r zfs1/backup/temp@2015-08-11-150546

[Tue Aug 11 15:05:46 2015] [debug] snapshot worker for zfs1/backup/temp done (9649)
[Tue Aug 11 15:05:46 2015] [debug] send/receive worker for zfs1/backup/temp spawned (9654)
[Tue Aug 11 15:05:46 2015] [info] starting work on backupSet zfs1/backup/temp

zfs list -H -r -o name zfs1/backup/temp

[Tue Aug 11 15:05:46 2015] [debug] sending snapshots from zfs1/backup/temp to zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temp

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temptest

zfs send zfs1/backup/temp@2015-08-11-150546|/usr/bin/mbuffer -q -s 128k -W 60 -m 1G|zfs recv -F zfs1/backup/temptest

mbuffer: fatal: unknown option "-W"
cannot receive: failed to read from stream
warning: cannot send 'zfs1/backup/temp@2015-08-11-150546': Broken pipe
[Tue Aug 11 15:05:46 2015] [warn] ERROR: cannot send snapshots to zfs1/backup/temptest

zfs list -H -o name -t snapshot -s creation -d 1 zfs1/backup/temptest

[Tue Aug 11 15:05:46 2015] [debug] cleaning up snapshots on zfs1/backup/temptest
[Tue Aug 11 15:05:46 2015] [warn] ERROR: suspending cleanup source dataset because at least one send task failed
[Tue Aug 11 15:05:46 2015] [info] done with backupset zfs1/backup/temp in 0 seconds
[Tue Aug 11 15:05:46 2015] [debug] send/receive worker for zfs1/backup/temp done (9654)

root@kotick:/opt/znapzend-0.14.0/bin# mbuffer -V
mbuffer version 20110119
Copyright 2001-2011 - T. Maier-Komor
License: GPLv3 - see file LICENSE
This program comes with ABSOLUTELY NO WARRANTY!!!
Donations via PayPal to [email protected] are welcome and support this work!

Installed from default ubuntu repository. What does "-W" do?
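For context (not part of the original report): -W sets mbuffer's watchdog timeout in seconds, aborting a transfer that stalls for that long. The 20110119 release shipped by Ubuntu 14.04 predates that option, so a newer mbuffer build is needed for znapzend's default invocation:

# confirm the installed version and see what the distribution offers;
# a release as old as 20110119 will reject -W
mbuffer -V
apt-cache policy mbuffer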

znapzendzetup disable on child

I've created a znapzend policy on 'mypoolname/user1' with recursive=ON.
I have several children on it :

mypoolname/user1/webdata
mypoolname/user1/sqldata
mypoolname/user1/olddata

I would like to disable (temporarily or not) znapzend on a specific child, let's say "olddata". This cannot be done:

znapzendzetup disable mypoolname/user1/olddata
ERROR: cannot disable backup config for mypoolname/user1/olddata. Did you create it?

If I try to set the property myself, it does not work:

zfs set org.znapzend:enabled=off mypoolname/user1/olddata

And now "disable" shows an other message :

znapzendzetup disable mypoolname/user1/olddata
ERROR: property recursive not set on backup for mypoolname/user1/olddata

Solaris Destroy Support

Solaris zfs destroy can only deal with one snapshot at a time; the efficient comma-separated syntax of OpenZFS is not supported.
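For illustration, the difference between the two syntaxes (hypothetical snapshot names):

# OpenZFS accepts a comma-separated list of snapshots in a single destroy:
zfs destroy tank/data@2015-01-01-000000,2015-01-02-000000,2015-01-03-000000
# Solaris zfs has to be driven one snapshot at a time instead:
for s in 2015-01-01-000000 2015-01-02-000000 2015-01-03-000000; do
    zfs destroy tank/data@$s
done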

--loglevel not behaving as expected

Shouldn't this suppress the debug and info messages on output/error?

znapzend --connectTimeout=10 --runonce=zpool1/tech-0 --loglevel=warning

[Sun May 17 21:18:50 2015] [info] refreshing backup plans...
[Sun May 17 21:18:52 2015] [info] found a valid backup plan for zpool1/tech-0...
[Sun May 17 21:18:52 2015] [debug] snapshot worker for zpool1/tech-0 spawned (14664)
[Sun May 17 21:18:52 2015] [info] running pre snapshot command on zpool1/tech-0
[Sun May 17 21:18:52 2015] [info] creating snapshot on zpool1/tech-0
[Sun May 17 21:18:52 2015] [info] running post snapshot command on zpool1/tech-0
[Sun May 17 21:18:52 2015] [debug] snapshot worker for zpool1/tech-0 done (14664)
[Sun May 17 21:18:52 2015] [debug] send/receive worker for zpool1/tech-0 spawned (14670)
[Sun May 17 21:18:52 2015] [info] starting work on backupSet zpool1/tech-0
[Sun May 17 21:18:52 2015] [debug] starting work on dst a
[Sun May 17 21:18:52 2015] [debug] sending snapshots from zpool1/tech-0 to root@x4275-3-15-20:zpool1/tech-0
[Sun May 17 21:18:53 2015] [debug] receive process on x4275-3-15-20 spawned (14673)

0.12.3 doesn't remove any snapshots if there is a zfs hold on one of them

If there is a zfs hold on a snapshot of the right format on the destination, the combined zfs destroy will fail, and snapshots will accumulate on the destination.

The "dataset is busy" below is caused by "zfs hold keep pool/from_src/user@ 2013-04-03-072351" (for example).

The target is OmniOS r151010.

Ideally, a failure of a combined destroy would try again omitting the snapshots complained about in the error message.

Alternatively (or additionally), a failure of a combined destroy should try again with a single zfs destroy per snapshot.

(Snapshot destruction may fail for other reasons, for instance if one is cloned or mounted; combined zfs destroys will destroy nothing in those cases too.)

Presumably this affects removal of snapshots on the source as well.

ssh -o Compression=yes -o CompressionLevel=1 -o Cipher=arcfour -o batchMode=yes -o ConnectTimeout=3 [email protected] zfs destroy 'pool/from_src/user@2013-04-03-072351,2013-11-15-145353,2013-11-16-145351,2013-11-17-145349,2013-11-18-145347,2013-11-19-145345,2013-11-20-145342,2013-11-26-152014,2013-12-07-013608,2013-12-08-013606,2013-12-11-133558,2013-12-16-175846,2013-12-17-175844,2013-12-18-175842,2014-01-15-223040,2014-01-16-103039,2014-01-16-134236,2014-01-16-151116,2014-01-16-211115,2014-01-17-031114,2014-01-17-091114,2014-01-17-211112,2014-01-18-031113,2014-01-18-091111,2014-01-18-151111,2014-01-18-211110,2014-01-19-031110,2014-01-19-091109,2014-01-19-151108,2014-01-19-211108,2014-01-20-031107,2014-01-20-091107,2014-01-20-151106,2014-01-20-211106,2014-01-21-031105,2014-01-21-091104,2014-01-21-111946,2014-01-22-032608,2014-01-22-070446,2014-01-22-075959,2014-01-22-121843,2014-01-22-150741,2014-01-22-161841,2014-01-22-175854,2014-01-23-014254,2014-01-23-033448,2014-01-23-033536,2014-01-23-033639,2014-01-23-033843,2014-01-23-233752,2014-01-24-033751,2014-01-24-043455,2014-01-24-083454,2014-01-24-123454,2014-01-24-163454,2014-01-24-203453,2014-01-25-003453,2014-01-25-043452,2014-01-25-073550,2014-01-25-113550,2014-01-25-153549,2014-01-25-193549,2014-01-25-233548,2014-01-26-033548,2014-01-26-073548,2014-01-26-113547,2014-01-26-153547,2014-01-26-193547,2014-01-26-233546,2014-01-27-033546,2014-01-27-073546,2014-01-27-113545,2014-01-27-153545,2014-01-27-193544,2014-01-27-233544,2014-01-28-012309,2014-01-28-025401,2014-01-28-065401,2014-01-28-105401,2014-01-28-145400,2014-01-28-185400,2014-01-28-225359,2014-01-29-025359,2014-01-29-065359,2014-01-29-105358,2014-01-29-145358,2014-01-29-185358,2014-01-29-225357,2014-01-30-065357,2014-01-30-105356,2014-01-30-145356,2014-01-30-172635,2014-01-30-212635,2014-01-31-012634,2014-01-31-052634,2014-01-31-092634,2014-01-31-132633,2014-01-31-172633,2014-01-31-212633,2014-02-01-012632,2014-02-01-052632,2014-02-01-092632,2014-02-01-132631,2014-02-01-172631,2014-02-01-212631,2014-02-02-012630,2014-02-02-052630,2014-02-02-092630,2014-02-02-132629,2014-02-02-172629,2014-02-02-212628,2014-02-02-230350,2014-02-03-030350,2014-02-03-070350,2014-02-03-110349,2014-02-03-150349,2014-02-03-190349,2014-02-03-230348,2014-02-04-030348,2014-02-04-070348,2014-02-04-110347,2014-02-04-150347,2014-02-04-190346,2014-02-04-230346,2014-02-05-030346,2014-02-05-070345,2014-02-05-110345,2014-02-05-150345,2014-02-05-190344,2014-02-05-230344,2014-02-06-063619,2014-02-06-103619,2014-02-06-143618,2014-02-06-183618,2014-02-06-223618,2014-02-07-023617,2014-02-07-063617,2014-02-07-103616,2014-02-07-143616,2014-02-07-183616,2014-02-07-223615,2014-02-08-023615,2014-02-08-063615,2014-02-08-103614,2014-02-08-143614,2014-02-08-165652,2014-02-08-205652,2014-02-09-005651,2014-02-09-045651,2014-02-09-085651,2014-02-09-125650,2014-02-09-165650,2014-02-09-205650,2014-02-10-031215,2014-02-10-045649,2014-02-10-085649,2014-02-10-125648,2014-02-10-165648,2014-02-10-205647,2014-02-11-005647,2014-02-11-045647,2014-02-11-085646,2014-02-11-125646,2014-02-11-165646,2014-02-11-205645,2014-02-12-005645,2014-02-12-045645,2014-02-12-085644,2014-02-12-125644,2014-02-12-165643,2014-02-12-205643,2014-02-13-035825,2014-02-13-075825,2014-02-13-115824,2014-02-13-155824,2014-02-13-195824,2014-02-13-235823,2014-02-14-035823,2014-02-14-075823,2014-02-14-115822,2014-02-14-155822,2014-02-14-195822,2014-02-14-235821,2014-02-15-035821,2014-02-15-075820,2014-02-15-115820,2014-02-15-155820,2014-02-15-195819,2014-02-15-235819,201
4-02-16-035819,2014-02-16-075818,2014-02-16-115818,2014-02-16-155818,2014-02-16-163011,2014-02-16-180743,2014-02-16-204650,2014-02-17-004650,2014-02-17-044650,2014-02-17-084649,2014-02-17-124649,2014-02-17-164649,2014-02-17-204648,2014-02-18-004648,2014-02-18-044648,2014-02-18-084647,2014-02-18-124647,2014-02-18-164647,2014-02-18-204646,2014-02-19-004646,2014-02-19-044646,2014-02-19-084645,2014-02-19-124645,2014-02-19-164644,2014-02-19-173601,2014-02-19-180842,2014-02-19-201717,2014-02-20-041716,2014-02-20-081716,2014-02-20-121715,2014-02-20-161715,2014-02-20-201715,2014-02-21-001714,2014-02-21-041714,2014-02-21-081714,2014-02-21-121713,2014-02-21-161713,2014-02-21-201713,2014-02-22-001712,2014-02-22-041712,2014-02-22-081711,2014-02-22-121711,2014-02-22-161711,2014-02-22-201710,2014-02-23-001710,2014-02-23-041710,2014-02-23-081709,2014-02-23-121709,2014-02-23-161709,2014-02-23-201708,2014-02-24-001708,2014-02-24-041708,2014-02-24-142057,2014-02-24-182057,2014-02-24-222056,2014-02-25-022056,2014-02-25-062056,2014-02-25-102055,2014-02-25-142055,2014-02-25-182055,2014-02-25-222054,2014-02-26-022054,2014-02-26-062054,2014-02-26-102053,2014-02-26-142053,2014-02-26-182053,2014-02-26-224400,2014-02-27-030442,2014-02-27-042940,2014-02-27-142856,2014-02-27-145902,2014-02-27-155153,2014-02-27-195152,2014-02-27-235152,2014-02-28-035151,2014-02-28-075151,2014-02-28-115151,2014-02-28-155150,2014-02-28-195150,2014-02-28-235150,2014-03-01-035149,2014-03-01-075149,2014-03-01-115148,2014-03-01-155148,2014-03-01-195148,2014-03-01-235147,2014-03-02-035147,2014-03-02-075147,2014-03-02-115146,2014-03-02-155146,2014-03-02-195146,2014-03-02-235145,2014-03-03-035145,2014-03-03-075145,2014-03-03-115144,2014-03-03-155144,2014-03-03-195144,2014-03-03-235143,2014-03-04-035143,2014-03-04-075143,2014-03-04-115142,2014-03-04-155142,2014-03-04-195141,2014-03-04-235141,2014-03-05-035141,2014-03-05-075141,2014-03-05-115140,2014-03-05-134721,2014-03-05-174721,2014-03-05-214720,2014-03-06-054719,2014-03-06-094719,2014-03-06-134719,2014-03-06-174718,2014-03-06-214718,2014-03-07-014718,2014-03-07-054717,2014-03-07-094717,2014-03-07-134717,2014-03-07-174716,2014-03-07-214716,2014-03-08-014716,2014-03-08-054715,2014-03-08-094715,2014-03-08-134715,2014-03-08-174714,2014-03-08-214714,2014-03-09-014713,2014-03-09-054713,2014-03-09-094713,2014-03-09-134712,2014-03-09-174712,2014-03-09-214712,2014-03-10-014711,2014-03-10-054711,2014-03-10-094711,2014-03-10-134710,2014-03-10-174710,2014-03-10-214710,2014-03-11-014709,2014-03-11-054709,2014-03-11-094708,2014-03-11-134708,2014-03-11-174708,2014-03-11-214707,2014-03-12-014707,2014-03-12-054707,2014-03-12-094706,2014-03-12-134706,2014-03-12-174706,2014-03-12-214705,2014-03-13-065510,2014-03-13-105509,2014-03-13-145509,2014-03-13-185508,2014-03-13-225508,2014-03-14-025508,2014-03-14-065507,2014-03-14-105507,2014-03-14-145507,2014-03-14-185506,2014-03-14-225506,2014-03-15-025506,2014-03-15-065505,2014-03-15-105505,2014-03-15-145504,2014-03-15-185504,2014-03-15-225504,2014-03-16-025503,2014-03-16-065503,2014-03-16-105503,2014-03-16-145502,2014-03-16-185502,2014-03-16-225502,2014-03-17-025501,2014-03-17-065501,2014-03-17-105501,2014-03-17-145500,2014-03-17-185500,2014-03-17-225459,2014-03-18-025459,2014-03-18-065459,2014-03-18-105458,2014-03-18-134501,2014-03-18-174501,2014-03-18-214500,2014-03-18-225010,2014-03-19-025009,2014-03-19-065009,2014-03-19-105008,2014-03-19-145008,2014-03-19-185007,2014-03-19-225007,2014-03-20-065006,2014-03-20-105006,2014-03-20-145006,2014-03-20-185005,2014-03-20-2
25005,2014-03-21-025004,2014-03-21-065004,2014-03-21-105004,2014-03-21-145003,2014-03-21-185003,2014-03-21-225002,2014-03-22-025002,2014-03-22-065002,2014-03-22-105001,2014-03-22-145001,2014-03-22-185001,2014-03-22-225000,2014-03-23-025000,2014-03-23-065000,2014-03-23-104959,2014-03-23-144959,2014-03-23-184958,2014-03-23-224958,2014-03-24-024958,2014-03-24-070009,2014-03-24-110008,2014-03-24-150008,2014-03-24-190007,2014-03-24-230007,2014-03-25-030007,2014-03-25-070006,2014-03-25-110006,2014-03-25-150006,2014-03-25-190005,2014-03-25-230005,2014-03-26-030005,2014-03-26-070004,2014-03-26-110004,2014-03-26-150003,2014-03-26-190003,2014-03-26-230003,2014-03-27-070002,2014-03-27-110002,2014-03-27-150001,2014-03-27-190001,2014-03-27-230000,2014-03-28-030000,2014-03-28-070000,2014-03-28-105959,2014-03-28-145959,2014-03-28-185959,2014-03-28-225958,2014-03-29-025958,2014-03-29-065957,2014-03-29-105957,2014-03-29-145957,2014-03-29-185956,2014-03-29-195607,2014-03-29-235607,2014-03-30-045607,2014-03-30-085606,2014-03-30-125606,2014-03-30-165606,2014-03-30-205605,2014-03-31-005607,2014-03-31-045605,2014-03-31-085604,2014-03-31-125604,2014-03-31-165603,2014-03-31-205604,2014-04-01-075034,2014-04-01-115002,2014-04-01-155002,2014-04-01-201211,2014-04-02-001154,2014-04-02-011006,2014-04-02-051006,2014-04-02-091008,2014-04-02-113010,2014-04-02-153009,2014-04-02-193009,2014-04-02-233009,2014-04-03-073008,2014-04-03-113008,2014-04-03-153007,2014-04-03-193007,2014-04-03-233006,2014-04-04-033006,2014-04-04-073005,2014-04-04-113005,2014-04-04-153005,2014-04-04-193004,2014-04-04-233004,2014-04-05-033008,2014-04-05-073003,2014-04-05-122322,2014-04-05-142944,2014-04-05-144458,2014-04-05-193628,2014-04-05-224457,2014-04-06-024456,2014-04-06-064456,2014-04-06-104456,2014-04-06-144455,2014-04-06-184455,2014-04-06-224455,2014-04-07-024454,2014-04-07-064454,2014-04-07-104453,2014-04-07-144453,2014-04-07-184453,2014-04-07-224452,2014-04-08-024452,2014-04-08-064451,2014-04-08-104451,2014-04-08-144451,2014-04-08-184450,2014-04-08-224450,2014-04-09-071355,2014-04-09-104449,2014-04-09-144449,2014-04-10-082943,2014-04-10-104446,2014-04-10-144446,2014-04-10-184446,2014-04-11-023220,2014-04-11-063220,2014-04-11-103220,2014-04-11-143219,2014-04-11-183218,2014-04-11-223218,2014-04-12-023218,2014-04-12-063217,2014-04-12-103217,2014-04-12-143217,2014-04-12-183216,2014-04-12-223216,2014-04-13-023216,2014-04-13-063215,2014-04-13-103215,2014-04-13-143215,2014-04-13-183214,2014-04-13-223214,2014-04-14-023213,2014-04-14-063213,2014-04-14-103213,2014-04-14-143212,2014-04-14-183212,2014-04-14-223212,2014-04-15-023211,2014-04-15-063211,2014-04-15-103211,2014-04-15-143210,2014-04-15-183210,2014-04-15-223210,2014-04-16-023209,2014-04-16-063209,2014-04-16-103208,2014-04-16-143208,2014-04-16-183208,2014-04-16-223207,2014-04-17-063207,2014-04-17-103206,2014-04-17-143206,2014-04-17-183206,2014-04-17-223205,2014-04-18-023205,2014-04-18-063204,2014-04-18-103204,2014-04-18-143204,2014-04-18-183203,2014-04-18-223203,2014-04-19-023203,2014-04-19-063202,2014-04-19-103202,2014-04-19-143202,2014-04-19-183201,2014-04-19-223201,2014-04-20-023200,2014-04-20-063201,2014-04-20-103200,2014-04-20-153916,2014-04-20-193916,2014-04-20-233915,2014-04-21-033915,2014-04-21-073914,2014-04-21-113914,2014-04-21-153914,2014-04-21-193913,2014-04-21-233913,2014-04-22-033913,2014-04-22-073912,2014-04-22-113912,2014-04-22-153912,2014-04-22-193911,2014-04-22-233911,2014-04-23-033910,2014-04-23-073910,2014-04-23-113910,2014-04-23-153911,2014-04-23-193909,2014-04-23-233909,201
4-04-24-073908,2014-04-24-113908,2014-04-24-153907,2014-04-24-193907,2014-04-24-233906,2014-04-25-033906,2014-04-25-073906,2014-04-25-113905,2014-04-25-153905,2014-04-25-193905,2014-04-25-233904,2014-04-26-033904,2014-04-26-073904,2014-04-26-113903,2014-04-26-153903,2014-04-26-193903,2014-04-26-233902,2014-04-27-033902,2014-04-27-073901,2014-04-27-113901,2014-04-27-153901,2014-04-27-193900,2014-04-27-233900,2014-04-28-033900,2014-04-28-073859,2014-04-28-113859,2014-04-28-153859,2014-04-28-193858,2014-04-28-233858,2014-04-29-033857,2014-04-29-073857,2014-04-29-113857,2014-04-29-153856,2014-04-29-193856,2014-04-29-233856,2014-04-30-033855,2014-04-30-073855,2014-04-30-113854,2014-04-30-153854,2014-04-30-193854,2014-04-30-211427,2014-05-01-044205,2014-05-01-055041,2014-05-01-095041,2014-05-01-135040,2014-05-01-175040,2014-05-01-215040,2014-05-02-015039,2014-05-02-055039,2014-05-02-095039,2014-05-02-135038,2014-05-02-175038,2014-05-02-215038,2014-05-03-015037,2014-05-03-055037,2014-05-03-095036,2014-05-03-135036,2014-05-03-175036,2014-05-03-215035,2014-05-04-015035,2014-05-04-055035,2014-05-04-095034,2014-05-04-135034,2014-05-04-175034,2014-05-04-215033,2014-05-05-015033,2014-05-05-115640,2014-05-05-155640,2014-05-05-195639,2014-05-05-235639,2014-05-06-035639,2014-05-06-075638,2014-05-06-115638,2014-05-06-155637,2014-05-06-195637,2014-05-06-235637,2014-05-07-035636,2014-05-07-075636,2014-05-07-115635,2014-05-07-155635,2014-05-07-195635,2014-05-07-235634,2014-05-08-075634,2014-05-08-115633,2014-05-08-150504,2014-05-08-190504,2014-05-08-230503,2014-05-09-030503,2014-05-09-070503,2014-05-09-110502,2014-05-09-150502,2014-05-09-190501,2014-05-09-230501,2014-05-10-030501,2014-05-10-070500,2014-05-10-110500,2014-05-10-150550,2014-05-15-203842,2014-05-16-003842,2014-05-16-043841,2014-05-16-083841,2014-05-16-123840,2014-05-17-020702,2014-05-17-134509,2014-05-17-163838,2014-05-17-220548,2014-05-18-003837,2014-05-18-043836,2014-05-18-083836,2014-05-18-123836,2014-05-18-163835,2014-05-18-203835,2014-05-19-003834,2014-05-19-043834,2014-05-19-083834,2014-05-19-123833,2014-05-19-163833,2014-05-19-203833,2014-05-20-003832,2014-05-20-022227,2014-05-20-025606,2014-05-20-033453,2014-05-20-044452,2014-05-20-084407,2014-05-20-124421,2014-05-20-164359,2014-05-20-204421,2014-05-21-004458,2014-05-21-044602,2014-05-21-084435,2014-05-21-124403,2014-05-21-164444,2014-05-21-204701,2014-05-22-232144,2014-05-23-025041,2014-05-23-070746,2014-05-23-110745,2014-05-23-150745,2014-05-24-131451,2014-05-24-134344,2014-05-24-154907,2014-05-24-173608,2014-05-24-220807,2014-05-24-234906,2014-05-25-034906,2014-05-25-074905,2014-05-25-114905,2014-05-25-154905,2014-05-25-194904,2014-05-25-234904,2014-05-26-034904,2014-05-26-074903,2014-05-26-114903,2014-05-26-154902,2014-05-26-194902,2014-05-26-234902,2014-05-27-013957,2014-05-27-053956,2014-05-27-093956,2014-05-27-133956,2014-05-27-173955,2014-05-27-213955,2014-05-28-013954,2014-05-28-053954,2014-05-28-055941,2014-05-28-084226,2014-05-28-171453,2014-05-28-173953,2014-05-28-213953,2014-05-28-221348,2014-05-29-061347,2014-05-29-101347,2014-05-29-141346,2014-05-29-181346,2014-05-29-221345,2014-05-30-021345,2014-05-30-061345,2014-05-30-101344,2014-05-30-141344,2014-05-30-221343,2014-05-31-030704,2014-05-31-070703,2014-05-31-112804,2014-05-31-150703,2014-05-31-190702,2014-05-31-230702,2014-06-01-070701,2014-06-01-110701,2014-06-01-150700,2014-06-01-190700,2014-06-01-230700,2014-06-02-053033,2014-06-02-110913,2014-06-02-184417,2014-06-03-030657,2014-06-03-070657,2014-06-03-110656,2014-06-03-1
50656,2014-06-03-190656,2014-06-03-230655,2014-06-04-054143,2014-06-04-074949,2014-06-04-114949,2014-06-04-154949,2014-06-04-194948,2014-06-04-234948,2014-06-05-052448,2014-06-05-092447,2014-06-05-132447,2014-06-05-172446,2014-06-05-234848,2014-06-06-001545,2014-06-06-010227,2014-06-06-014618,2014-06-06-045245,2014-06-06-063146,2014-06-06-120140,2014-06-06-122759,2014-06-06-162759,2014-06-06-202758,2014-06-07-042757,2014-06-07-064657,2014-06-07-122257,2014-06-07-162257,2014-06-07-172559,2014-06-07-212558,2014-06-08-052558,2014-06-08-092557,2014-06-08-132557,2014-06-08-172556,2014-06-08-212556,2014-06-09-052555,2014-06-09-092555,2014-06-09-132554,2014-06-09-172554,2014-06-09-212554,2014-06-10-052553,2014-06-10-092552,2014-06-10-132552,2014-06-10-181414,2014-06-11-024741,2014-06-11-052551,2014-06-11-092550,2014-06-11-163212,2014-06-11-201225,2014-06-12-045224,2014-06-12-083948,2014-06-12-092732,2014-06-12-100301,2014-06-12-121224,2014-06-12-161223,2014-06-12-201223,2014-06-13-041222,2014-06-13-081222,2014-06-13-121221,2014-06-13-161221,2014-06-13-201220,2014-06-14-041220,2014-06-14-081219,2014-06-14-121219,2014-06-14-161218,2014-06-14-201218,2014-06-15-041217,2014-06-15-081217,2014-06-15-121216,2014-06-15-161216,2014-06-15-201216,2014-06-16-041215,2014-06-16-081215,2014-06-16-121215,2014-06-16-133531,2014-06-16-175358,2014-06-16-215357,2014-06-17-055356,2014-06-17-064555,2014-06-17-104555,2014-06-17-144555,2014-06-17-162559,2014-06-17-202559,2014-06-18-042558,2014-06-18-082558,2014-06-18-122557,2014-06-18-162557,2014-06-18-202557,2014-06-19-042556,2014-06-19-082555,2014-06-19-122555,2014-06-19-162555,2014-06-19-202554,2014-06-20-042553,2014-06-20-082553,2014-06-20-122553,2014-06-20-162600,2014-06-20-181821,2014-06-20-221821,2014-06-21-061819,2014-06-21-101819,2014-06-21-141819,2014-06-21-181818,2014-06-21-221818,2014-06-22-061817,2014-06-22-101817,2014-06-22-141816,2014-06-22-181816,2014-06-22-221815,2014-06-23-052156,2014-06-23-092156,2014-06-23-132155,2014-06-23-172155,2014-06-23-212155,2014-06-24-052154,2014-06-24-105406,2014-06-24-145406,2014-06-24-185405,2014-06-24-225405,2014-06-25-065404,2014-06-25-105404,2014-06-25-145403,2014-06-25-190657,2014-06-25-195956,2014-06-25-235955,2014-06-26-075955,2014-06-26-115954,2014-06-26-155954,2014-06-26-195953,2014-06-26-235953,2014-06-27-075952,2014-06-27-115952,2014-06-27-135212,2014-06-27-175211,2014-06-27-223708,2014-06-28-055210,2014-06-28-095210,2014-06-28-135209,2014-06-28-175209,2014-06-28-215209,2014-06-29-055208,2014-06-29-095207,2014-06-29-135207,2014-06-29-175207,2014-06-29-215206,2014-06-30-064432,2014-06-30-104431,2014-06-30-144431,2014-06-30-184430,2014-06-30-224430,2014-07-01-124957,2014-07-01-150113,2014-07-01-184428,2014-07-01-205829,2014-07-02-045829,2014-07-02-085828,2014-07-02-125828,2014-07-02-165828,2014-07-02-173504,2014-07-02-213504,2014-07-03-050659,2014-07-03-090659,2014-07-03-130658,2014-07-03-170658,2014-07-03-183117,2014-07-03-223117,2014-07-04-063116,2014-07-04-103115,2014-07-04-143115,2014-07-04-183115,2014-07-05-183112,2014-07-05-223112,2014-07-06-063111,2014-07-06-103111,2014-07-06-143110,2014-07-06-183110,2014-07-06-223110,2014-07-07-042416,2014-07-07-082415,2014-07-07-122415,2014-07-07-162415,2014-07-07-202414,2014-07-08-042414,2014-07-08-182842,2014-07-08-202412,2014-07-09-042411,2014-07-09-082411,2014-07-09-120830,2014-07-09-160829,2014-07-09-200829,2014-07-10-040828,2014-07-10-080828,2014-07-10-120827,2014-07-10-160827,2014-07-10-200826,2014-07-11-040826,2014-07-11-080825,2014-07-11-120825,2014-07-11-160824,201
4-07-11-200824,2014-07-12-152311,2014-07-12-192310,2014-07-12-232310,2014-07-13-072309,2014-07-13-112309,2014-07-13-152321,2014-07-13-194121,2014-07-13-232308,2014-07-14-072307,2014-07-14-112306,2014-07-14-140720,2014-07-14-211859,2014-07-15-083158,2014-07-15-123158,2014-07-15-163157,2014-07-15-203157,2014-07-16-043156,2014-07-16-083156,2014-07-16-174917,2014-07-16-203155,2014-07-17-043154,2014-07-17-083153,2014-07-17-123153,2014-07-17-163153,2014-07-17-203152,2014-07-18-013602,2014-07-18-053602,2014-07-18-093601,2014-07-18-133606,2014-07-21-053417,2014-07-21-093417,2014-07-21-133416,2014-07-21-173416,2014-07-21-213415,2014-07-22-070019,2014-07-22-093414,2014-07-22-133414,2014-07-22-173413,2014-07-22-192208,2014-07-22-232208,2014-07-23-073958,2014-07-23-112207,2014-07-23-152207,2014-07-23-210011,2014-07-24-063029,2014-07-24-090010,2014-07-24-130010,2014-07-24-162301,2014-07-24-202301,2014-07-25-042300,2014-07-25-082259,2014-07-25-122259,2014-07-25-162259,2014-07-25-202258,2014-07-26-042257,2014-07-26-082257,2014-07-26-122257,2014-07-26-162256,2014-07-26-202256,2014-07-27-042255,2014-07-27-082255,2014-07-27-122254,2014-07-27-162254,2014-07-27-202253,2014-07-28-042253,2014-07-28-082252,2014-07-28-122252,2014-07-28-173027,2014-07-28-213026,2014-07-29-053025,2014-07-29-093025,2014-07-29-133025,2014-07-29-213024,2014-07-30-053023,2014-07-30-093023,2014-07-30-173022,2014-07-30-213021,2014-07-31-053021,2014-07-31-093022,2014-07-31-165144,2014-07-31-205144,2014-08-01-045144,2014-08-01-085143,2014-08-01-165142,2014-08-01-205142,2014-08-02-045142,2014-08-02-085142,2014-08-02-165141,2014-08-02-205141,2014-08-03-045139,2014-08-03-085138,2014-08-03-165137,2014-08-03-205137,2014-08-03-235540,2014-08-04-075540,2014-08-04-115539,2014-08-04-195538,2014-08-04-235538,2014-08-05-075537,2014-08-05-115537,2014-08-05-195536,2014-08-05-235536,2014-08-06-075535,2014-08-06-115535,2014-08-06-195534,2014-08-06-235533,2014-08-07-063523,2014-08-07-103522,2014-08-07-183522,2014-08-07-223521,2014-08-08-063521,2014-08-08-103520,2014-08-08-183519,2014-08-08-223519,2014-08-09-063518,2014-08-09-103517,2014-08-09-183517,2014-08-09-223516,2014-08-10-063516,2014-08-10-103515,2014-08-10-183514,2014-08-10-223514,2014-08-11-063513,2014-08-11-103513,2014-08-11-201404,2014-08-12-041404,2014-08-12-081403,2014-08-12-180000,2014-08-12-190000,2014-08-12-200000,2014-08-12-210000,2014-08-12-220000,2014-08-12-230000,2014-08-13-010000,2014-08-13-020000,2014-08-13-030000,2014-08-13-040000,2014-08-13-050000,2014-08-13-060000,2014-08-13-070000,2014-08-13-080000,2014-08-13-090000,2014-08-13-100000,2014-08-13-110000,2014-08-13-130000,2014-08-13-140000,2014-08-13-150000,2014-08-13-160000,2014-08-13-170000,2014-08-13-180000,2014-08-13-190000,2014-08-13-200000,2014-08-13-210000,2014-08-13-220000,2014-08-13-230000,2014-08-14-010000,2014-08-14-020000,2014-08-14-030000,2014-08-14-040000,2014-08-14-050000,2014-08-14-060000,2014-08-14-070000,2014-08-14-080000,2014-08-14-090000,2014-08-14-100000,2014-08-14-110000,2014-08-14-130000,2014-08-14-140000,2014-08-14-150000,2014-08-14-160000,2014-08-14-170000,2014-08-14-180000,2014-08-14-190000,2014-08-14-200000,2014-08-14-210000,2014-08-14-220000,2014-08-14-230000,2014-08-15-010000,2014-08-15-020000,2014-08-15-030000,2014-08-15-040000,2014-08-15-050000,2014-08-15-060000,2014-08-15-070000,2014-08-15-080000,2014-08-15-090000,2014-08-15-100000,2014-08-15-110000,2014-08-15-130000,2014-08-15-140000,2014-08-15-150000,2014-08-15-160000,2014-08-15-170000,2014-08-15-180000,2014-08-15-190000,2014-08-15-200000,2014-08-15-2
10000,2014-08-15-220000,2014-08-15-230000,2014-08-16-010000,2014-08-16-020000,2014-08-16-030000,2014-08-16-040000,2014-08-16-050000,2014-08-16-060000,2014-08-16-070000,2014-08-16-080000,2014-08-16-090000,2014-08-16-100000,2014-08-16-110000,2014-08-16-130000,2014-08-16-150000,2014-08-16-160000,2014-08-16-170000,2014-08-16-180000,2014-08-16-190000,2014-08-16-200000,2014-08-16-210000,2014-08-16-220000,2014-08-16-230000,2014-08-17-010000,2014-08-17-020000,2014-08-17-030000,2014-08-17-150000,2014-08-17-160000,2014-08-17-170000,2014-08-17-180000,2014-08-17-190000,2014-08-17-200000,2014-08-17-210000,2014-08-17-220000,2014-08-17-230000,2014-08-18-010000,2014-08-18-020000,2014-08-18-030000,2014-08-18-040000,2014-08-18-050000,2014-08-18-060000,2014-08-18-070000,2014-08-18-080000,2014-08-18-090000,2014-08-18-100000,2014-08-18-110000,2014-08-18-130000,2014-08-18-140000,2014-08-18-150000,2014-08-18-160000,2014-08-18-170000,2014-08-18-180000,2014-08-18-190000,2014-08-18-200000,2014-08-18-210000,2014-08-18-220000,2014-08-18-230000,2014-08-19-010000,2014-08-19-020000,2014-08-19-030000,2014-08-19-040000,2014-08-19-050000,2014-08-19-060000,2014-08-19-070000,2014-08-19-080000,2014-08-19-090000,2014-08-19-100000,2014-08-19-110000,2014-08-19-130000,2014-08-19-140000,2014-08-19-150000,2014-08-19-160000,2014-08-19-170000,2014-08-19-180000,2014-08-19-190000,2014-08-19-200000,2014-08-19-210000,2014-08-19-220000,2014-08-19-230000,2014-08-20-010000,2014-08-20-020000,2014-08-20-030000,2014-08-20-040000,2014-08-20-050000,2014-08-20-060000,2014-08-20-070000,2014-08-20-080000,2014-08-20-090000,2014-08-20-100000,2014-08-20-110000,2014-08-20-130000,2014-08-20-140000,2014-08-20-150000,2014-08-20-160000,2014-08-20-170000,2014-08-20-180000,2014-08-20-190000,2014-08-20-200000,2014-08-20-210000,2014-08-20-220000,2014-08-20-230000,2014-08-21-010000,2014-08-21-030000,2014-08-21-040000,2014-08-21-050000,2014-08-21-060000,2014-08-21-070000,2014-08-21-080000,2014-08-21-090000,2014-08-21-100000,2014-08-21-110000,2014-08-21-130000,2014-08-21-140000,2014-08-21-150000,2014-08-21-170000,2014-08-21-180000,2014-08-21-190000,2014-08-21-210000,2014-08-21-220000,2014-08-21-230000,2014-08-22-010000,2014-08-22-020000,2014-08-22-030000,2014-08-22-050000,2014-08-22-060000,2014-08-22-070000,2014-08-22-090000,2014-08-22-100000,2014-08-22-190000,2014-08-22-210000,2014-08-22-220000,2014-08-22-230000,2014-08-23-010000,2014-08-23-020000,2014-08-23-030000,2014-08-23-050000,2014-08-23-060000,2014-08-23-070000,2014-08-23-090000,2014-08-23-100000,2014-08-23-110000,2014-08-23-130000,2014-08-23-140000,2014-08-23-150000,2014-08-23-170000,2014-08-23-180000,2014-08-23-190000,2014-08-23-210000,2014-08-23-220000,2014-08-23-230000,2014-08-24-010000,2014-08-24-020000,2014-08-24-030000,2014-08-24-050000,2014-08-24-060000,2014-08-24-070000,2014-08-24-090000,2014-08-24-100000,2014-08-24-110000,2014-08-24-130000,2014-08-24-140000,2014-08-24-150000,2014-08-24-170000,2014-08-24-180000,2014-08-24-190000,2014-08-24-210000,2014-08-24-220000,2014-08-24-230000,2014-08-25-010000,2014-08-25-020000,2014-08-25-030000,2014-08-25-050000,2014-08-25-060000,2014-08-25-070000,2014-08-25-090000,2014-08-25-100000,2014-08-25-110000,2014-08-25-130000,2014-08-25-140000,2014-08-25-150000,2014-08-25-170000,2014-08-25-180000,2014-08-25-190000,2014-08-25-210000,2014-08-25-220000,2014-08-25-230000,2014-08-26-010000,2014-08-26-020000,2014-08-26-030000,2014-08-26-050000,2014-08-26-060000,2014-08-26-070000,2014-08-26-090000,2014-08-26-100000,2014-08-26-110000,2014-08-26-130000,2014-08-26-140000,201
4-08-26-150000,2014-08-26-170000,2014-08-26-180000,2014-08-26-190000,2014-08-26-210000,2014-08-26-220000,2014-08-26-230000,2014-08-27-010000,2014-08-27-020000,2014-08-27-030000,2014-08-27-050000,2014-08-27-060000,2014-08-27-070000,2014-08-27-090000,2014-08-27-100000,2014-08-27-110000,2014-08-27-130000,2014-08-27-140000,2014-08-27-150000,2014-08-27-170000,2014-08-27-180000,2014-08-27-190000,2014-08-27-210000,2014-08-27-220000,2014-08-27-230000,2014-08-28-010000,2014-08-28-020000,2014-08-28-030000,2014-08-28-050000,2014-08-28-060000,2014-08-28-070000,2014-08-28-090000,2014-08-28-100000,2014-08-28-110000,2014-08-28-140000,2014-08-28-150000'
cannot destroy snapshot pool/from_src/user@2013-04-03-072351: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-15-145353: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-16-145351: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-17-145349: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-18-145347: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-19-145345: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-20-145342: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-11-26-152014: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-07-013608: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-08-013606: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-11-133558: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-16-175846: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-17-175844: dataset is busy
cannot destroy snapshot pool/from_src/user@2013-12-18-175842: dataset is busy
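For reference, the hold that blocks these destroys can be inspected and released by hand, using the names from the example above:

# list holds on the offending snapshot, then release the 'keep' tag
zfs holds pool/from_src/user@2013-04-03-072351
zfs release keep pool/from_src/user@2013-04-03-072351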

mbuffer on only one side

It would be nice if one could fire up an mbuffer on a remote dst, opportunistically or by configuration.

If mbuffer exists, or if the dataset is configured to use it only on the dst end, then instead of ssh invoking zfs recv directly, it could invoke sh -c 'mbuffer -s ... | zfs recv ...'

Even with the overhead of ssh, this will generally improve the throughput of the zfs send/recv pair.
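A sketch of the suggested receive-side-only pipeline (hypothetical host and dataset names):

# run mbuffer only on the destination, fed directly by zfs send over ssh
zfs send tank/data@2015-08-11-150546 \
  | ssh backuphost "mbuffer -q -s 128k -m 1G | zfs recv -F backup/data"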

For symmetry, if mbuffer exists only on the sending side, that could still be useful.

More ambitiously, pairing socat or netcat with mbuffer might be practical to eliminate the ssh overhead as well.

znapzend creates recursive snapshots, although recursive is set to off

I was chasing an issue where znapzend won't ship snapshots to the remote destination when (probably) a clone is configured on the receiving dataset. To rule out issues with recursion, I configured znapzend with recursive=off, like this:

root@nfsvmpool01:# znapzendzetup list sasTank
root@nfsvmpool01:# znapzendzetup list sasTank/nfsvmpool01sas
*** backup plan: sasTank/nfsvmpool01sas ***
dst_nfsvmpool02 = [email protected]:sasTank/nfsvmpool01sas
dst_nfsvmpool02_plan = 6days=>4hours
enabled = on
mbuffer = /opt/csw/bin/mbuffer:10003
mbuffer_size = 1G
post_znap_cmd = off
pre_znap_cmd = off
recursive = off
src = sasTank/nfsvmpool01sas
src_plan = 1day=>4hours
tsformat = %Y-%m-%d-%H%M%S

So, no znapzend config on sasTank and a non-recursive config on sasTank/nfsvmpool01sas.
However, znapzend creates recursive snapshots anyway, as shown in the znapzend log:

[Mon Dec 15 20:00:00 2014] [debug] snapshot worker for sasTank spawned (16812)
[Mon Dec 15 20:00:00 2014] [info] creating recursive snapshot on sasTank
[Mon Dec 15 20:00:00 2014] [debug] snapshot worker for sataTank spawned (16815)
[Mon Dec 15 20:00:00 2014] [info] creating recursive snapshot on sataTank
[Mon Dec 15 20:00:00 2014] [debug] snapshot worker for sasTank done (16812)
[Mon Dec 15 20:00:00 2014] [debug] send/receive worker for sasTank spawned (16817)
[Mon Dec 15 20:00:00 2014] [info] starting work on backupSet sasTank
[Mon Dec 15 20:00:00 2014] [debug] sending snapshots from sasTank to [email protected]:sasTank
[Mon Dec 15 20:00:00 2014] [debug] snapshot worker for sataTank done (16815)
[Mon Dec 15 20:00:00 2014] [debug] send/receive worker for sataTank spawned (16822)
[Mon Dec 15 20:00:00 2014] [info] starting work on backupSet sataTank
[Mon Dec 15 20:00:00 2014] [debug] sending snapshots from sataTank to [email protected]:sataTank

Although znapzend should only create/ship snapshots for the sasTank/nfsvmpool01sas dataset, snapshots are popping up on the sasTank dataset as well:

admin@nfsvmpool02:/export/home/admin$ zfs list -r sasTank
NAME USED AVAIL REFER MOUNTPOINT
sasTank 61,5G 1,52T 26K /sasTank
sasTank@2014-12-15-160000 1K - 26K -
sasTank@2014-12-15-200000 0 - 26K -
sasTank/nfsvmpool01sas 61,5G 1,52T 54,0G /sasTank/nfsvmpool01sas
sasTank/nfsvmpool01sas@2014-12-13-120000 434M - 54,3G -
sasTank/nfsvmpool01sas@2014-12-13-160000 250M - 54,3G -
sasTank/nfsvmpool01sas@2014-12-13-200000 255M - 54,3G -
sasTank/nfsvmpool01sas@2014-12-14-000000 336M - 54,2G -
sasTank/nfsvmpool01sas@2014-12-14-040000 304M - 54,0G -
sasTank/nfsvmpool01sas@2014-12-14-080000 246M - 54,1G -
sasTank/nfsvmpool01sas@2014-12-14-120000 255M - 54,1G -
sasTank/nfsvmpool01sas@2014-12-14-160000 274M - 54,1G -
sasTank/nfsvmpool01sas@2014-12-14-200000 272M - 54,0G -
sasTank/nfsvmpool01sas@2014-12-15-000000 360M - 54,0G -
sasTank/nfsvmpool01sas@2014-12-15-040000 311M - 54,0G -
sasTank/nfsvmpool01sas@2014-12-15-080000 283M - 54,1G -
sasTank/nfsvmpool01sas@2014-12-15-120000 354M - 54,1G -
sasTank/nfsvmpool01sas@2014-12-15-160000 0 - 54,0G -
sasTank/nfsvmpool01sas@_Backup 0 - 54,0G -
sasTank/nfsvmpool01sas_Backup 1K 1,52T 54,0G /sasTank/nfsvmpool01sas_Backup

So, as per configuration, no snapshots on sasTank should be created and thus
sasTank@2014-12-15-160000 1K - 26K -
sasTank@2014-12-15-200000 0 - 26K -

shouldn't be there, should they?

-Stephan

znapzend daemon status

Enhancement request

It would be fantastic to have

znapzend status <logtime> <worktime>

command. It would provide information to external monitoring scripts (for example Nagios) about backup statistics; a rough external-check sketch follows the list below:

  1. how many backup sets were processed in the last <logtime>
  2. how many workers have finished in the last <logtime>
  3. how many workers in the last <logtime> have not finished yet and are taking longer than <worktime>, ideally with the names of those backup sets
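A rough sketch of such an external check, assuming only the zfs CLI and perl (hypothetical dataset name and a one-day threshold):

# warn if the newest snapshot on a src_dataset is older than one day
newest=$(zfs list -H -t snapshot -o name -s creation -d 1 tank/data | tail -1)
created=$(zfs get -Hp -o value creation "$newest")
age=$(( $(perl -e 'print time') - created ))
[ "$age" -gt 86400 ] && echo "WARNING: newest snapshot of tank/data is ${age}s old"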
