jimsalterjrs / sanoid

These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)

Home Page: http://www.openoid.net/products/

License: GNU General Public License v3.0

Languages: Perl 83.94%, Shell 15.54%, Makefile 0.52%
Topics: replication, snapshot, zfs-filesystem

sanoid's People

Contributors

0xfate, attie, danielewood, darkbasic, deltik, dlangille, endreszabo, hrast01, jimsalterjrs, jsoref, kd8bny, klemens-u, lopsided98, lordaro, mat813, matveevandrey, mr-vinn, mschout, phreaker0, rbike, redmop, rlaager, rodgerd, rulerof, shodanshok, spheenik, stardude900, thehaven, tiedotguy, tmlapp

sanoid's Issues

Integrate with Samba snapdir?

Has anybody attempted to use sanoid with Samba's vfs_shadow_copy2 module?

It doesn't appear I'd be able to set "shadow:format" to any format which would match sanoid's snapshots. I've also found no obvious way to tell sanoid how to configure the date format.

 NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
 zroot/data/shared@autosnap_2015-03-17_10:34:09_hourly       0      -   817G  -
 zroot/data/shared@autosnap_2015-03-17_10:34:09_monthly      0      -   817G  -
 zroot/data/shared@autosnap_2015-03-17_10:34:09_daily        0      -   817G  -

syncoid error with dataset containing spaces in recursive mode

When trying to sync a child dataset containing spaces, syncoid fails as the space is not being escaped.

Steps to reproduce:

  1. zfs create pool/test
  2. zfs create "pool/test/Hello World"
  3. syncoid -r pool/test remote_user@remotehost:pool/test

Output:
root@hostname:# syncoid -r data5tb/test root@hostname2:data5tb_backup/test
Sending incremental data5tb/test@syncoid_hostname_2016-06-05:10:09:53 ... syncoid_hostname_2016-06-05:10:11:32 (4 KB):
1.52KiB 0:00:00 [5.38KiB/s] [==================================================================> ] 38%
cannot open 'data5tb/test/Hello': dataset does not exist
cannot open 'World': dataset does not exist
usage:
snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@ ...

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
CRITICAL ERROR: /usr/bin/sudo /sbin/zfs snapshot data5tb/test/Hello World@syncoid_hostname_2016-06-05:10:11:41 failed: 512 at /usr/local/bin/syncoid line 697.
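
The failure above is ordinary shell word-splitting: the snapshot command syncoid builds is handed to the shell, so the unquoted space in the dataset name becomes an argument boundary. A minimal illustration at the zfs level (names taken from the report), assuming the fix is to quote or shell-escape the full dataset@snapshot string:

  # unquoted, the shell splits the name and zfs sees two bogus arguments:
  zfs snapshot pool/test/Hello World@syncoid_hostname_2016-06-05:10:11:41

  # quoted, the name survives intact:
  zfs snapshot 'pool/test/Hello World@syncoid_hostname_2016-06-05:10:11:41'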

Can the syncoid SSH port be specified on the command line?

A customer uses SSH port 2223 for the source server I'll be replicating from in syncoid. I have entries in /etc/ssh/ssh_config for the hostname to force port 2223:

Host *ancaste*
    Port 2223

I don't think syncoid honors this, so I'm looking for the cleanest way to change the SSH port on the fly without hardcoding it in the script.
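
Since syncoid ultimately shells out to /usr/bin/ssh, a per-host stanza in the invoking user's ~/.ssh/config (or /etc/ssh/ssh_config, as above) is normally honored; later syncoid releases also grew an explicit port option. A hedged sketch with a placeholder hostname, assuming a syncoid build that supports --sshport (check syncoid --help on your version):

  # ~/.ssh/config on the machine running syncoid
  Host ancaster-source.example.com
      Port 2223

  # or pass the port explicitly
  syncoid --sshport=2223 root@ancaster-source.example.com:tank/data tank/backup/data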

Number of snapshots

I am testing sanoid and it looks promising. I have hourly = 36 in my sanoid.conf, but I now have more than 50 snapshots. Where should I look?

Ability to specify ssh port for syncoid

I will be running synchronizations for backup to a host on the internet that is running a non-standard SSH port. It would be good if syncoid could handle this.

I created #37 to add this feature.

cron says: "could not find any snapshots to destroy; check snapshot names."

Hi,
I'm getting the following message from every second sanoid execution after the full hour (the full-hour sanoid calls work, as do all the others; I'm following the README's suggestion of running it every minute):

could not find any snapshots to destroy; check snapshot names.
could not remove tank/proxmox-local/template@autosnap_2015-05-06_20:00:01_hourly : 256 at /opt/sanoid/sanoid line 226.

On manual checking, the snapshot in question had already been deleted as intended.

Any ideas?

Separate scheduling of snapshot cleanup

Is it possible to schedule snapshot removal for a later time, such as after hours, instead of doing it during the day?

I'm using zfSnap and I have it making snapshots every 15 min, hourly, daily, and weekly, and I have it only removing snapshots at 12:05a daily.
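
Sanoid's --cron run is essentially "take snapshots" plus "prune snapshots", and builds that expose those steps as separate flags let you schedule them independently. A sketch of the crontab, assuming --take-snapshots and --prune-snapshots are available in your sanoid version:

  # snapshot checks every minute, pruning only at 00:05
  * * * * * /usr/local/bin/sanoid --take-snapshots
  5 0 * * * /usr/local/bin/sanoid --prune-snapshots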

FreeBSD and SmartOS support

Could you please add FreeBSD and SmartOS support? Sanoid works for me on FreeBSD 10.2 after I changed the paths for the external commands sanoid/syncoid uses, but it would be great if Sanoid supported these OSes by default.

Syncoid: Backup to file?

Hello there,
first of all: thanks for this great tool. 👍

But I have a problem: my backup destination has no ZFS installed. Is it possible to use syncoid to back up the snapshots to a file?

Best regards

Eun
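
Syncoid itself needs zfs on both ends, but as a workaround a raw zfs send stream can be stored as a file on the non-ZFS box and fed back into zfs receive later; you lose incremental rotation and the receive-time integrity check. A hedged sketch with placeholder names:

  # full snapshot to a compressed file on the non-ZFS destination
  zfs send tank/data@autosnap_2016-01-01_00:00:01_daily | gzip | ssh user@backuphost 'cat > /backups/tank-data.zfs.gz'

  # restore later by reversing the pipe into zfs receive
  ssh user@backuphost 'cat /backups/tank-data.zfs.gz' | gunzip | zfs receive tank/restored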

syncoid occasional errors: could not find any snapshots to destroy; check snapshot names

I am seeing the following happen occasionally when I run syncoid manually to replicate to a remote host; re-running it always fixes it.
I think that because sanoid is running as a cron task, syncoid is seeing an unexpected state change?

Sending incremental disk-pool/opt@autosnap_2016-08-15_03:00:02_hourly ... syncoid_media_2016-08-15:12:34:52 (~ 100.2 MB):
100MiB 0:00:01 [61.3MiB/s] [================================>] 100%
could not find any snapshots to destroy; check snapshot names.
CRITICAL ERROR: /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@backup-1471278891 root@backup " /sbin/zfs destroy disk-pool/opt@syncoid_media_2016-08-15:12:21:39; /sbin/zfs destroy disk-pool/opt@syncoid_media_2016-08-15:03:30:58" failed: 256 at /usr/local/bin/syncoid line 666.

mollyguard in syncoid for new users

New users have a tendency to want to create a target before replicating to it.

ZFS wants to create a target dataset ITSELF for an initial replication, not have the user create it manually first. So zfs create targetpool/targetset ; syncoid sourcepool/sourceset targetpool/targetset will always fail.

Added a mollyguard that explains WHY syncoid refuses to sync when no matching snapshots exist, and that checks for unusually small target datasets (<64M); if it finds one, it adds an additional message about not creating targets yourself.

Current error:

root@banshee:~/git/sanoid# zfs create banshee/test2
root@banshee:~/git/sanoid# syncoid banshee/test banshee/test2
UNEXPECTED ERROR: target exists but has no matching snapshots!

Desired error:

root@banshee:~/git/sanoid# zfs create banshee/test2
root@banshee:~/git/sanoid# ./syncoid banshee/test banshee/test2

CRITICAL ERROR: Target exists but has no matching snapshots!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.

          NOTE: Target dataset is < 64MB used - did you mistakenly run
                `zfs create banshee/test2` on the target? ZFS initial
                replication must be to a NON EXISTENT DATASET, which will
                then be CREATED BY the initial replication process.
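
The size check behind the proposed NOTE can be a single property lookup; a sketch using the dataset name from the example above:

  # 'used' in raw bytes for the target; values well under 64M suggest a freshly created, empty dataset
  zfs get -Hp -o value used banshee/test2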

Syncoid cannot receive incremental stream: most recent snapshot does not match incremental source

I recently configured sanoid and syncoid on Ubuntu 16.04 to take snapshots on host1 and sync them to host2. This appeared to work well at first, but eventually syncoid fails out with this error:

root@host1 # syncoid -r pool/dataset1 root@host2:pool/dataset1
Sending incremental pool/dataset1@syncoid_host1_2016-08-18:11:00:12 ... syncoid_host1_2016-08-18:11:07:58 (~ 4 KB):
Sending incremental pool/dataset1/data@autosnap_2016-08-18_11:00:01_hourly ... syncoid_host1_2016-08-18:11:07:59 (~ 4 KB):
cannot receive incremental stream: most recent snapshot of pool/dataset1/data does not
match incremental source
CRITICAL ERROR:  /sbin/zfs send -I pool/dataset1/data@autosnap_2016-08-18_11:00:01_hourly pool/dataset1/data@syncoid_host1_2016-08-18:11:07:59 | /usr/bin/pv -s 4096 | /usr/bin/lzop  | /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@host2-1471536477 root@host2 ' /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc |  /sbin/zfs receive pool/dataset1/data' failed: 256 at /usr/local/bin/syncoid line 241.

According to the Oracle documentation, this error occurs if the dataset's data has been modified on the destination (host2). However, if I run zfs get mounted pool/dataset1/data on host2, it shows that the dataset is not mounted, so it seems unlikely that the data was modified there. I do see that atime is on, but how would that matter if the dataset isn't mounted? What do I need to do to resolve this error?

Thanks!
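
If it is acceptable to discard whatever changed on the target since the last common snapshot, one hedged recovery path is to roll the target dataset back to that snapshot and re-run the sync (snapshot name taken from the error above; -r also destroys any newer snapshots on the target, so review first):

  # on host2
  zfs rollback -r pool/dataset1/data@autosnap_2016-08-18_11:00:01_hourly

Keeping the receiving dataset readonly=on (which syncoid itself tends to do) helps prevent the target from diverging again between runs.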

syncoid fails recursive replication after initial execution

Suppose you have:

pool0
pool0/container
pool0/container/filesystem1
pool0/container/filesystem2
pool0/container/filesystem3
pool1

Then you run syncoid -r pool0/container pool1/container, and that (after finishing successfully) results in having:

pool0
pool0/container
pool0/container@syncoid_host1_2016-11-01:09:12:31
pool0/container/filesystem1
pool0/container/filesystem1@syncoid_host1_2016-11-01:09:12:44
pool0/container/filesystem2
pool0/container/filesystem2@syncoid_host1_2016-11-01:09:15:11
pool0/container/filesystem3
pool0/container/filesystem3@syncoid_host1_2016-11-01:09:20:32
pool1
pool1/container
pool1/container/filesystem1
pool1/container/filesystem2
pool1/container/filesystem3

Some time passes and then you try to update the copy by running syncoid -r pool0/container pool1/container again, but that fails with:

Sending incremental pool0/container@syncoid_host1_2016-11-01:09:12:31 ... syncoid_host1_2016-11-01:09:32:39 (~ 4 KB):
1,52kB 0:00:00 [5,88kB/s] [==============================================>                                                                             ] 38%            
cannot open 'pool0/container@syncoid_host1_2016-11-01:09:12:31': dataset does not exist
cannot create snapshot 'pool0/container@syncoid_host1_2016-11-01:09:12:31@syncoid_host1_2016-11-01:09:32:40': multiple '@' delimiters in name
CRITICAL ERROR:  /usr/bin/sudo /sbin/zfs snapshot pool0/container@syncoid_host1_2016-11-01:09:12:31@syncoid_tulkas_2016-11-01:09:32:40 failed: 256 at /usr/local/bin/syncoid line 724.

Apparently recursive syncoid picks up its own self-managed snapshots as if they were child filesystems and then tries to replicate them (which it shouldn't), but it fails at that because it has already destroyed those snapshots (as it should have, after replicating the corresponding filesystem).
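
A hedged way to see what the recursion should be iterating over is to ask zfs for child datasets only, which never returns snapshot names:

  # filesystems and volumes under the container, one per line, no snapshots
  zfs list -r -t filesystem,volume -o name -H pool0/container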

Month '-1' out of range 0..11

I'm receiving the error Month '-1' out of range 0..11 at /usr/local/bin/sanoid line 318. This happens with the latest master, using Time::Local version 1.23 on Ubuntu 14.04 and Time::Local version 1.20 on FreeBSD 10.0.

It appears that modifying instances of:
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
to
push @preferredtime,($datestamp{'mon'});

resolves the issue for me. I can submit a pull request, but I'm not sure whether this would break support for older versions.

UNEXPECTED ERROR: target exists but has no matching snapshots

hi there,

I have a problem running syncoid - although I believe that it should work.

running /sbin/zfs get -Hpd 1 creation tank/Backup/syncoid/mail.tld/vmail locally, I get

tank/Backup/syncoid/mail.tld/vmail      creation        1449855984      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_monthly-2015-08-01-0452        creation        1438404721      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_monthly-2015-09-01-0452        creation        1441083121      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_monthly-2015-10-01-0452        creation        1443675121      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-10-18-0448 creation        1445143691      -
tank/Backup/syncoid/mail.tld/vmail@syncoid_sheol_2015-10-19:10:03:51    creation        1445241832      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-10-25-0547 creation        1445752049      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-11-01-0547 creation        1446356851      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_monthly-2015-11-01-0552        creation        1446357121      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-11-08-0547 creation        1446961656      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-11-0530  creation        1447219830      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-12-0600  creation        1447308035      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-13-0600  creation        1447394402      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-14-0620  creation        1447482040      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-11-15-0547 creation        1447566453      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-15-0629  creation        1447568999      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-16-0624  creation        1447655090      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-17-0542  creation        1447738947      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-18-0608  creation        1447826935      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-19-0607  creation        1447913242      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-20-0557  creation        1447999024      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-21-0619  creation        1448086769      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-11-22-0548 creation        1448171300      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-22-0613  creation        1448172800      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-23-0623  creation        1448259814      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-24-0637  creation        1448347064      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-25-0646  creation        1448433965      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-26-0623  creation        1448518995      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-27-0600  creation        1448604054      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-28-0648  creation        1448693295      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-11-29-0547 creation        1448776060      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-29-0619  creation        1448777993      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-11-30-0640  creation        1448865625      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_monthly-2015-12-01-0552        creation        1448949134      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-01-0620  creation        1448950855      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-02-0550  creation        1449035419      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-03-0545  creation        1449121543      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-04-0547  creation        1449208073      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-05-0558  creation        1449295131      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_weekly-2015-12-06-0547 creation        1449380879      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-06-0609  creation        1449382188      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-07-0620  creation        1449469216      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-08-0535  creation        1449552925      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-09-0629  creation        1449642549      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-10-0540  creation        1449726057      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-10-1917 creation        1449775021      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-10-2017 creation        1449778622      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-10-2117 creation        1449782222      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-10-2217 creation        1449785821      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-10-2317 creation        1449789421      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0017 creation        1449793021      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0117 creation        1449796621      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0217 creation        1449800221      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0317 creation        1449803821      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0417 creation        1449807421      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0517 creation        1449811021      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0617 creation        1449814621      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_daily-2015-12-11-0617  creation        1449814650      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0717 creation        1449818221      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0817 creation        1449821822      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-0917 creation        1449825421      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1017 creation        1449829021      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1117 creation        1449832621      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1217 creation        1449836221      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1317 creation        1449839821      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1417 creation        1449843421      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1517 creation        1449847022      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1617 creation        1449850621      -
tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1717 creation        1449854221      -

running it remotely ( /usr/bin/ssh [email protected] /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tank/vmail ), I get

tank/vmail      creation        1437263508      -
tank/vmail@zfs-auto-snap_monthly-2015-08-01-0452        creation        1438404721      -
tank/vmail@zfs-auto-snap_monthly-2015-09-01-0452        creation        1441083121      -
tank/vmail@zfs-auto-snap_monthly-2015-10-01-0452        creation        1443675121      -
tank/vmail@zfs-auto-snap_weekly-2015-10-18-0448 creation        1445143691      -
tank/vmail@syncoid_sheol_2015-10-19:10:03:51    creation        1445241832      -
tank/vmail@zfs-auto-snap_weekly-2015-10-25-0547 creation        1445752049      -
tank/vmail@zfs-auto-snap_weekly-2015-11-01-0547 creation        1446356851      -
tank/vmail@zfs-auto-snap_monthly-2015-11-01-0552        creation        1446357121      -
tank/vmail@zfs-auto-snap_weekly-2015-11-08-0547 creation        1446961656      -
tank/vmail@zfs-auto-snap_daily-2015-11-11-0530  creation        1447219830      -
tank/vmail@zfs-auto-snap_daily-2015-11-12-0600  creation        1447308035      -
tank/vmail@zfs-auto-snap_daily-2015-11-13-0600  creation        1447394402      -
tank/vmail@zfs-auto-snap_daily-2015-11-14-0620  creation        1447482040      -
tank/vmail@zfs-auto-snap_weekly-2015-11-15-0547 creation        1447566453      -
tank/vmail@zfs-auto-snap_daily-2015-11-15-0629  creation        1447568999      -
tank/vmail@zfs-auto-snap_daily-2015-11-16-0624  creation        1447655090      -
tank/vmail@zfs-auto-snap_daily-2015-11-17-0542  creation        1447738947      -
tank/vmail@zfs-auto-snap_daily-2015-11-18-0608  creation        1447826935      -
tank/vmail@zfs-auto-snap_daily-2015-11-19-0607  creation        1447913242      -
tank/vmail@zfs-auto-snap_daily-2015-11-20-0557  creation        1447999024      -
tank/vmail@zfs-auto-snap_daily-2015-11-21-0619  creation        1448086769      -
tank/vmail@zfs-auto-snap_weekly-2015-11-22-0548 creation        1448171300      -
tank/vmail@zfs-auto-snap_daily-2015-11-22-0613  creation        1448172800      -
tank/vmail@zfs-auto-snap_daily-2015-11-23-0623  creation        1448259814      -
tank/vmail@zfs-auto-snap_daily-2015-11-24-0637  creation        1448347064      -
tank/vmail@zfs-auto-snap_daily-2015-11-25-0646  creation        1448433965      -
tank/vmail@zfs-auto-snap_daily-2015-11-26-0623  creation        1448518995      -
tank/vmail@zfs-auto-snap_daily-2015-11-27-0600  creation        1448604054      -
tank/vmail@zfs-auto-snap_daily-2015-11-28-0648  creation        1448693295      -
tank/vmail@zfs-auto-snap_weekly-2015-11-29-0547 creation        1448776060      -
tank/vmail@zfs-auto-snap_daily-2015-11-29-0619  creation        1448777993      -
tank/vmail@zfs-auto-snap_daily-2015-11-30-0640  creation        1448865625      -
tank/vmail@zfs-auto-snap_monthly-2015-12-01-0552        creation        1448949134      -
tank/vmail@zfs-auto-snap_daily-2015-12-01-0620  creation        1448950855      -
tank/vmail@zfs-auto-snap_daily-2015-12-02-0550  creation        1449035419      -
tank/vmail@zfs-auto-snap_daily-2015-12-03-0545  creation        1449121543      -
tank/vmail@zfs-auto-snap_daily-2015-12-04-0547  creation        1449208073      -
tank/vmail@zfs-auto-snap_daily-2015-12-05-0558  creation        1449295131      -
tank/vmail@zfs-auto-snap_weekly-2015-12-06-0547 creation        1449380879      -
tank/vmail@zfs-auto-snap_daily-2015-12-06-0609  creation        1449382188      -
tank/vmail@zfs-auto-snap_daily-2015-12-07-0620  creation        1449469216      -
tank/vmail@zfs-auto-snap_daily-2015-12-08-0535  creation        1449552925      -
tank/vmail@zfs-auto-snap_daily-2015-12-09-0629  creation        1449642549      -
tank/vmail@zfs-auto-snap_daily-2015-12-10-0540  creation        1449726057      -
tank/vmail@zfs-auto-snap_hourly-2015-12-10-2017 creation        1449778622      -
tank/vmail@zfs-auto-snap_hourly-2015-12-10-2117 creation        1449782222      -
tank/vmail@zfs-auto-snap_hourly-2015-12-10-2217 creation        1449785821      -
tank/vmail@zfs-auto-snap_hourly-2015-12-10-2317 creation        1449789421      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0017 creation        1449793021      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0117 creation        1449796621      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0217 creation        1449800221      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0317 creation        1449803821      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0417 creation        1449807421      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0517 creation        1449811021      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0617 creation        1449814621      -
tank/vmail@zfs-auto-snap_daily-2015-12-11-0617  creation        1449814650      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0717 creation        1449818221      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0817 creation        1449821822      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-0917 creation        1449825421      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1017 creation        1449829021      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1117 creation        1449832621      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1217 creation        1449836221      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1317 creation        1449839821      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1417 creation        1449843421      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1517 creation        1449847022      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1617 creation        1449850621      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1717 creation        1449854221      -
tank/vmail@syncoid_vos.lan_2015-12-11:18:46:23  creation        1449855983      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1817 creation        1449857821      -
tank/vmail@zfs-auto-snap_frequent-2015-12-11-1845       creation        1449859502      -
tank/vmail@zfs-auto-snap_frequent-2015-12-11-1900       creation        1449860401      -
tank/vmail@zfs-auto-snap_frequent-2015-12-11-1915       creation        1449861301      -
tank/vmail@zfs-auto-snap_hourly-2015-12-11-1917 creation        1449861422      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:17:41  creation        1449861461      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:18:30  creation        1449861511      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:19:55  creation        1449861595      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:21:54  creation        1449861715      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:23:20  creation        1449861800      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:26:06  creation        1449861966      -
tank/vmail@zfs-auto-snap_frequent-2015-12-11-1930       creation        1449862201      -
tank/vmail@syncoid_vos.lan_2015-12-11:20:30:20  creation        1449862220      -

so the lines

tank/vmail@zfs-auto-snap_hourly-2015-12-11-1717 creation        1449854221      -

and

tank/Backup/syncoid/mail.tld/vmail@zfs-auto-snap_hourly-2015-12-11-1717 creation        1449854221      -

should match.

running syncoid ( sanoid/syncoid -debug [email protected]:tank/vmail tank/Backup/syncoid/mail.tld/vmail ) I get:

DEBUG: checking availability of /usr/bin/lzop on source...
DEBUG: checking availability of /usr/bin/lzop on target...
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/local/bin/mbuffer on source...
WARN: /usr/local/bin/mbuffer not available on source ssh:-S /tmp/[email protected] [email protected] - sync will continue without source buffering.
DEBUG: checking availability of /usr/local/bin/mbuffer on target...
DEBUG: checking availability of /usr/local/bin/pv on local machine...
DEBUG: syncing source tank/vmail to target tank/Backup/syncoid/mail.tld/vmail.
DEBUG: checking to see if tank/Backup/syncoid/mail.tld/vmail on  is already in zfs receive using  /bin/ps axo args= ...
DEBUG: checking to see if target filesystem exists using "  /sbin/zfs get -H name tank/Backup/syncoid/mail.tld/vmail 2>&1 |"...
DEBUG: getting list of snapshots on tank/vmail using /usr/bin/ssh -c arcfour -S /tmp/[email protected] [email protected] /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tank/vmail |...
DEBUG: getting list of snapshots on tank/Backup/syncoid/mail.tld/vmail using  /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tank/Backup/syncoid/mail.tld/vmail |...
DEBUG: getting current value of readonly on tank/Backup/syncoid/mail.tld/vmail...
  /sbin/zfs get -H readonly tank/Backup/syncoid/mail.tld/vmail
DEBUG: setting readonly to on on tank/Backup/syncoid/mail.tld/vmail...
  /sbin/zfs set readonly=on tank/Backup/syncoid/mail.tld/vmail
UNEXPECTED ERROR: target exists but has no matching snapshots!

Is this a bug, or am I doing something wrong?
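
One hedged way to diagnose this is to compare just the snapshot names on both sides and see whether any common name remains (root@sourcehost is a placeholder for the remote login used above):

  ssh root@sourcehost zfs list -H -t snapshot -o name -d 1 tank/vmail | sed 's/.*@//' | sort > /tmp/source-snaps
  zfs list -H -t snapshot -o name -d 1 tank/Backup/syncoid/mail.tld/vmail | sed 's/.*@//' | sort > /tmp/target-snaps
  comm -12 /tmp/source-snaps /tmp/target-snaps   # anything printed here is a snapshot both sides share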

Syncoid fails with default cipher on some systems in 1.4.6a

New OpenSSH builds (Ubuntu Xenial, FreeBSD 10.3) do not support arcfour cipher by default.

Update Syncoid cipherlist to support chacha20 by preference with fallback to arcfour, since some older OpenSSH builds reportedly don't support chacha20. Fixed in 1.4.6b.
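
A rough way to check which of the two ciphers a given client/server pair will actually negotiate (user@remotehost is a placeholder):

  ssh -c chacha20-poly1305@openssh.com user@remotehost true || ssh -c arcfour user@remotehost true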

UNEXPECTED ERROR: target exists but has no matching snapshots

I am having the same problem as closed issue #16 (see below) ... hope I'm not missing something obvious ... there was mention of creating symlinks, but I am just trying to do this locally for now.

root@pve-2:/bpool/backups/openoid/home# /usr/local/bin/syncoid --debug apool/home bpool/backups/openoid/home
DEBUG: checking availability of /usr/bin/lzop on source...
DEBUG: checking availability of /usr/bin/lzop on target...
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/bin/mbuffer on source...
DEBUG: checking availability of /usr/bin/mbuffer on target...
DEBUG: checking availability of /usr/bin/pv on local machine...
DEBUG: syncing source apool/home to target bpool/backups/openoid/home.
DEBUG: checking to see if bpool/backups/openoid/home on is already in zfs receive using /bin/ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using " /sbin/zfs get -H name bpool/backups/openoid/home 2>&1 |"...
DEBUG: getting list of snapshots on apool/home using /usr/bin/sudo /sbin/zfs get -Hpd 1 creation apool/home |...
DEBUG: getting list of snapshots on bpool/backups/openoid/home using /usr/bin/sudo /sbin/zfs get -Hpd 1 creation bpool/backups/openoid/home |...
DEBUG: getting current value of readonly on bpool/backups/openoid/home...
/sbin/zfs get -H readonly bpool/backups/openoid/home
DEBUG: setting readonly to on on bpool/backups/openoid/home...
/sbin/zfs set readonly=on bpool/backups/openoid/home
UNEXPECTED ERROR: target exists but has no matching snapshots!

Syncoid doesn't purge foreign syncoid snapshots

Hello,

We have an issue where old Syncoid snapshots are not being purged. Our environment is set up as follows:

Host1: VM Host
Host2: VM Host

Host1 --> Syncoid to Host2
Host2 --> Syncoid to Host1

This way we have a copy of the VM disk on each host for any fail over needs.

Then we also have a system for saving the VM disks long term.

Backup performs a Syncoid pull from Host1
Backup performs a Syncoid pull from Host2

Correct me if I am wrong, but when Syncoid runs it creates two snapshots: one on the source and one on the destination.

On our backup box we see:
(ZFS Dataset)@syncoid_Host1_2016-10-17:11:00:56

And on the VMHost we see:
(ZFS Dataset)@syncoid_Backup_2016-10-17:11:00:56

But there are snapshots from several weeks ago; it does not appear that Syncoid purges the old snapshots it created when it runs.

Are we missing something?

Thanks!
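
If syncoid only cleans up the sync snapshots it created itself (as the title suggests), the stale foreign ones can be listed and, after review, destroyed by hand. A hedged sketch with placeholder names:

  # syncoid snapshots created by the pulling 'Backup' host, oldest first; everything except the last line is stale
  zfs list -H -t snapshot -o name -d 1 pool/vmdisk | grep '@syncoid_Backup_' | sort | sed '$d'
  # once you have reviewed the output, pipe it through: xargs -n1 zfs destroy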

syncoid feature request: notice clones on source and re-create them as clones on target

First, thank you very much for syncoid! It's making my life much better.

Don't know if this is in scope, but:
Today upon completion of a recursive syncoid to an empty target dataset I noticed that the target was using 9GB, while the source was using only 5GB. This turned out to be because a 4GB dataset on the source was actually a clone of a snapshot. It would be great if syncoid could notice that a source dataset is a clone (with origin property) and instead of doing a full send/receive, instead clone the appropriate snapshot on the target.

please consider simple error checking

When dealing with backups and data replication, it usually pays to fail early and/or at least be loud about it. Acting on non-zero exit codes from system() calls and setting the overall exit code to a non-zero value would be the preferred way to do it.

A simple snapshot-creation failure in syncoid (sub newsyncsnap, L668) results in no clear warning message, a failed sync, and the program still exiting with code 0.

sanoid (sub take_snapshots, L335) also doesn't seem to care whether the operation failed or not.

If session cleanup and die() seem too harsh, please consider issuing a BIG FAT WARNING instead.

Thanks!

Snapshot retention not applying.

I have this set up on Ubuntu 16.04. I am submitting a new issue with my sanoid.conf and my cron job; I believe this is the same as the other "too frequent snapshots" issue, which I am experiencing as well.

Here are the most recent snapshots:

rpool/Win7@autosnap_2016-07-01_08:39:01_daily                 4.91M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_08:39:01_hourly                5.47M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_08:41:01_daily                 8.90M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_08:41:01_hourly                9.13M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_08:46:01_daily                 9.59M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_08:46:01_hourly                9.79M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_09:00:01_hourly                64.2M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_09:02:01_hourly                59.2M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_09:06:01_hourly                70.5M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_10:02:01_hourly                70.2M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_10:05:01_hourly                39.8M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_10:06:02_hourly                37.3M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_10:09:01_hourly                28.7M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_10:10:01_hourly                30.7M      -  35.0G  -
rpool/Win7@autosnap_2016-07-01_11:04:02_hourly                14.6M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_11:06:01_hourly                7.55M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_11:08:01_hourly                5.64M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_11:09:01_hourly                3.66M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_11:11:01_hourly                18.7M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_12:00:01_hourly                8.62M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_12:02:01_hourly                8.04M      -  35.1G  -
rpool/Win7@autosnap_2016-07-01_13:05:02_hourly                4.14M      -  35.0G  -

From the above, it appears to take three daily snapshots per day and three (sometimes four or five) hourly snapshots within each hour.

zfs list -t snapshot | wc -l
912

Snapshots are only for the above Win7 VM with Hourly and Daily.

With my sanoid.conf, this many hourly snapshots should not be possible. I only snapshot rpool/Win7 on my laptop, using template_production (config below).

Crontab line (quoted because GitHub made it display weirdly with the asterisks):

* * * * * /usr/local/bin/sanoid --cron

Sanoid.conf file:

######################################
# This is a sample sanoid.conf file. #
# It should go in /etc/sanoid.       #
######################################

# name your backup modules with the path to their ZFS dataset - no leading slash.
[zpoolname/datasetname]
    # pick one or more templates - they're defined (and editable) below. Comma separated, processed in order.
    # in this example, template_demo's daily value overrides template_production's daily value.
    use_template = production,demo

    # if you want to, you can override settings in the template directly inside module definitions like this.
    # in this example, we override the template to only keep 12 hourly and 1 monthly snapshot for this dataset.
    hourly = 12
    monthly = 1

# you can also handle datasets recursively.
[zpoolname/parent]
    use_template = production
    recursive = yes
    # if you want sanoid to manage the child datasets but leave this one alone, set process_children_only.
    process_children_only = yes

# you can selectively override settings for child datasets which already fall under a recursive definition.
[zpoolname/parent/child]
    # child datasets already initialized won't be wiped out, so if you use a new template, it will
    # only override the values already set by the parent template, not replace it completely.
    use_template = demo
[rpool/Win7]
    use_template = production



#############################
# templates below this line #
#############################

# name your templates template_templatename. you can create your own, and use them in your module definitions above.

[template_demo]
    daily = 60

[template_production]
    hourly = 24
    daily = 7
    monthly = 0
    yearly = 0
    autosnap = yes
    autoprune = yes

[template_backup]
    autoprune = yes
    hourly = 30
    daily = 90
    monthly = 12
    yearly = 0

    ### don't take new snapshots - snapshots on backup 
    ### datasets are replicated in from source, not
    ### generated locally
    autosnap = no

    ### monitor hourlies and dailies, but don't warn or 
    ### crit until they're over 48h old, since replication 
    ### is typically daily only
    hourly_warn = 2880
    hourly_crit = 3600
    daily_warn = 48
    daily_crit = 60

I noticed this started when I built this machine at the beginning of June. My retention is also not being applied; perhaps this is related? I do not know Perl :(

Edit: Fixed typos and made sentence about multiple hourly snapshots clear.

Issues with 'ps' on SmartOS

It seems the assumptions for ps arguments don't work well on SmartOS or perhaps other Solaris-ish / Illumos-based systems.

When the BSD-style options are used (grouped arguments without a preceding '-'), ps falls back to the compatibility /usr/ucb/ps binary, which doesn't support the '-o' argument.

It looks like this can be solved (for SmartOS) by always using the UNIX-style options with a preceding hyphen and the '-Ao' flag instead of 'axo' for checking ZFS activity.
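
A quick illustration of the difference (the second form is what the report suggests, and it matches the /bin/ps -Ao args= invocation visible in syncoid debug output elsewhere on this page):

  # BSD-style grouped options; on SmartOS this resolves to /usr/ucb/ps, which rejects -o
  ps axo args=

  # UNIX-style options; works on Linux, illumos/SmartOS and FreeBSD
  ps -Ao args=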

Syncoid does not check for root properly.

It looks like Syncoid is supposed to check to see if the script is running with root privileges, and if not, use sudo to escalate.

On a distribution that does not have sudo installed, Syncoid fails to check properly and, despite being run as root, fails to send the snapshots to their destination. I was using Proxmox 4.0 when I encountered this. A quick apt-get install sudo fixes the issue. I was ssh'd in as root rather than su'ing to root, if that makes a difference.

Multiple Destinations

Is it possible to have Sanoid send snapshots to different servers/locations on separate schedules? Say, every 15 min to an onsite server, and nightly to an offsite server.
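
Replication itself is syncoid's job rather than sanoid's, so separate schedules usually just mean separate cron entries. A hedged sketch with placeholder hosts and datasets:

  # onsite copy every 15 minutes, offsite copy nightly
  */15 * * * * /usr/local/bin/syncoid tank/data root@onsite:tank/backup/data
  30 1 * * * /usr/local/bin/syncoid tank/data root@offsite:tank/backup/data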

Differing binary locations on hosts

I've been trying to get syncoid working between various hosts (and OS types), but the script seems to assume a standard binary layout across hosts. Unfortunately, some OSes have different conventions, and package managers don't always make the same assumptions.

What about detecting binaries based on $PATH, using which <executable>?

A side benefit would be reducing the need to modify the script itself on "non-standard" systems, which keeps git updates simpler.
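
A hedged sketch of the suggested lookup, in POSIX sh:

  # resolve helpers from $PATH instead of hard-coding /usr/bin or /usr/local/bin
  ZFS=$(command -v zfs) || { echo "zfs not found in PATH" >&2; exit 1; }
  MBUFFER=$(command -v mbuffer) || MBUFFER=""   # optional helper, degrade gracefully if absent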

Syncoid should give usage guide when invoked with no arguments

In the latest (b4bed03) version running on Ubuntu 16.04 server, running Syncoid gives me the following:

Use of uninitialized value $fs in pattern match (m//) at ./syncoid line 752.
Use of uninitialized value $fs in pattern match (m//) at ./syncoid line 752.
Use of uninitialized value $fs in regexp compilation at ./syncoid line 508.
Use of uninitialized value $fs in regexp compilation at ./syncoid line 508.
...

Some quick testing with git bisect indicates that the first bad commit is 2876637.

Reverting that commit on HEAD fixes the error for me.

Skip Hourly Snapshots

I only need to take daily snapshots (skipping hourly), and at a particular time of day, so I have my sanoid.conf as follows:

[mypool/mydataset]
use_template = my_template
recursive = yes


#############################
# templates below this line #
#############################

[my_template]
hourly = 0
daily = 33
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes

I then have sanoid configured to run daily at the specific time that I need it to run at:

0 6 * * * /usr/local/bin/sanoid --cron

However, an hourly snapshot is still being taken every day:

mypool/mydataset@autosnap_2016-10-01_06:00:02_hourly       0      -   736G  -
mypool/mydataset@autosnap_2016-10-02_06:00:02_daily        0      -   735G  -
mypool/mydataset@autosnap_2016-10-02_06:00:02_hourly       0      -   735G  -
mypool/mydataset@autosnap_2016-10-03_06:00:02_hourly       0      -   735G  -
mypool/mydataset@autosnap_2016-10-03_06:00:02_daily        0      -   735G  -

How can I prevent this and only take daily snapshots?

Thanks!

"Target exists but has no matching snapshots!" Incremental sync not working

Hi there,

first of all, thanks for your effort. So far I really like Sanoid :)

But unfortunately syncoid doesn't work for me.

I am trying to pull my VM dataset from my server (n1) to my secondary server (n2).
The first sync with syncoid seems successful, but incremental syncing doesn't seem to work.

I have tried this several times now, removed all snapshots, and made new ones with sanoid.
It's always the same.
The first sync works; subsequent ones stop with:

CRITICAL ERROR: Target exists but has no matching snapshots!

Maybe I'm making some mistakes, but I don't know what I'm doing wrong...

Could you have a look at this?

Thanks in advance!

Here are some infos:

Before syncoid:

Target:
root@n2 ~ 1# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tankbak 349G 550G 23K /tankbak
root@n2 ~ 1#

root@n2 ~ 1# zfs list -t snapshot
no datasets available
root@n2 ~ 1#

Source:
root@n1 ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 2,85T 4,80T 198K none
tank/data 2,39T 4,80T 2,28T /data
tank/vmdata 474G 4,80T 451G /vmdata
root@n1 ~ #

root@n1 ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tank/data@zfs-auto-snap_weekly-2016-05-01-0447 722M - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-08-0447 698K - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-15-0447 767K - 2,32T -
tank/data@zfs-auto-snap_weekly-2016-05-22-0447 558K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-24-0425 239M - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-25-0425 209K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-26-0425 181M - 2,25T -
tank/data@zfs-auto-snap_daily-2016-05-27-0425 366K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-28-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-29-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-05-29-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-30-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-31-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-01-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-02-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-03-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-04-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-05-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-05-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-06-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-07-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-08-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-09-0426 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-10-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-11-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-12-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-12-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-13-0426 208M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-14-0425 673M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-15-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-16-0425 198K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-17-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-18-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-19-0425 0 - 2,27T -
tank/data@zfs-auto-snap_weekly-2016-06-19-0447 0 - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-20-0425 220M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-21-0426 370M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-22-0425 221K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-23-0425 0 - 2,28T -
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_daily 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_monthly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_yearly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_12:00:02_hourly 15,2M - 433G -
tank/vmdata@autosnap_2016-06-22_13:00:01_hourly 23,4M - 433G -
tank/vmdata@autosnap_2016-06-22_14:00:01_hourly 22,1M - 433G -
tank/vmdata@autosnap_2016-06-22_15:00:01_hourly 18,0M - 433G -
tank/vmdata@autosnap_2016-06-22_16:00:01_hourly 19,3M - 433G -
tank/vmdata@autosnap_2016-06-22_17:00:01_hourly 14,7M - 433G -
tank/vmdata@autosnap_2016-06-22_18:00:01_hourly 16,3M - 433G -
tank/vmdata@autosnap_2016-06-22_19:00:01_hourly 19,0M - 433G -
tank/vmdata@autosnap_2016-06-22_20:00:01_hourly 20,0M - 433G -
tank/vmdata@autosnap_2016-06-22_21:00:01_hourly 26,2M - 433G -
tank/vmdata@autosnap_2016-06-22_22:00:01_hourly 25,6M - 433G -
tank/vmdata@autosnap_2016-06-22_23:00:01_hourly 331M - 433G -
tank/vmdata@autosnap_2016-06-22_23:59:01_daily 166M - 436G -
tank/vmdata@autosnap_2016-06-23_00:00:01_hourly 98,2M - 436G -
tank/vmdata@autosnap_2016-06-23_01:00:02_hourly 211M - 451G -
tank/vmdata@autosnap_2016-06-23_02:00:01_hourly 102M - 451G -
tank/vmdata@autosnap_2016-06-23_03:00:01_hourly 24,7M - 451G -
tank/vmdata@autosnap_2016-06-23_04:00:01_hourly 15,1M - 451G -
tank/vmdata@autosnap_2016-06-23_05:00:01_hourly 14,1M - 451G -
tank/vmdata@autosnap_2016-06-23_06:00:01_hourly 26,0M - 451G -
tank/vmdata@autosnap_2016-06-23_07:00:01_hourly 21,8M - 451G -
tank/vmdata@autosnap_2016-06-23_08:00:01_hourly 18,7M - 451G -
tank/vmdata@autosnap_2016-06-23_09:00:02_hourly 25,0M - 451G -
tank/vmdata@autosnap_2016-06-23_10:00:01_hourly 7,52M - 451G -
root@n1 ~ #

First syncoid:

root@n2 ~ 1# /usr/local/bin/sanoid/syncoid -debug root@n1:tank/vmdata tankbak/vmdata
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/bin/mbuffer on source...
DEBUG: checking availability of /usr/bin/mbuffer on target...
DEBUG: checking availability of /usr/bin/pv on local machine...
DEBUG: syncing source tank/vmdata to target tankbak/vmdata.
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive using /bin/ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using " /sbin/zfs get -H name tankbak/vmdata 2>&1 |"...
DEBUG: getting list of snapshots on tank/vmdata using /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466669226 root@n1 /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tank/vmdata |...
DEBUG: target tankbak/vmdata does not exist. Finding oldest available snapshot on source tank/vmdata ...
DEBUG: getting estimated transfer size from source -S /tmp/syncoid-root-root@n1-1466669226 root@n1 using "/usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466669226 root@n1 /sbin/zfs send -nP tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 2>&1 |"...
DEBUG: sendsize = 465363687656
INFO: Sending oldest full snapshot tank/vmdata@autosnap_2016-06-22_11:08:01_hourly (~ 433.4 GB) to new target filesystem:
DEBUG: /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466669226 root@n1 ' /sbin/zfs send tank/vmdata@autosnap_2016-06-22_11:08:01_hourly | /usr/bin/lzop | /usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null' | /usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc | /usr/bin/pv -s 465363687656 | /sbin/zfs receive -F tankbak/vmdata
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive using /bin/ps -Ao args= ...
433GiB 1:56:56 [63,3MiB/s] [=====================================================================>] 100%
DEBUG: getting estimated transfer size from source -S /tmp/syncoid-root-root@n1-1466669226 root@n1 using "/usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466669226 root@n1 /sbin/zfs send -nP -I tank/vmdata@autosnap_2016-06-22_11:08:01_hourly tank/vmdata@syncoid_n2_2016-06-23:10:07:06 2>&1 |"...
DEBUG: sendsize = 43796521336
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive using /bin/ps -Ao args= ...
INFO: Updating new target filesystem with incremental tank/vmdata@autosnap_2016-06-22_11:08:01_hourly ... syncoid_n2_2016-06-23:10:07:06 (~ 40.8 GB):
DEBUG: /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466669226 root@n1 ' /sbin/zfs send -I tank/vmdata@autosnap_2016-06-22_11:08:01_hourly tank/vmdata@syncoid_n2_2016-06-23:10:07:06 | /usr/bin/lzop | /usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null' | /usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc | /usr/bin/pv -s 43796521336 | /sbin/zfs receive -F tankbak/vmdata
40,8GiB 0:11:47 [ 59MiB/s] [=====================================================================> ] 99%
root@n2 ~ 1#

After first sync:

Source:
root@n1 ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tank/data@zfs-auto-snap_weekly-2016-05-01-0447 722M - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-08-0447 698K - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-15-0447 767K - 2,32T -
tank/data@zfs-auto-snap_weekly-2016-05-22-0447 558K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-24-0425 239M - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-25-0425 209K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-26-0425 181M - 2,25T -
tank/data@zfs-auto-snap_daily-2016-05-27-0425 366K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-28-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-29-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-05-29-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-30-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-31-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-01-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-02-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-03-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-04-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-05-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-05-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-06-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-07-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-08-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-09-0426 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-10-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-11-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-12-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-12-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-13-0426 208M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-14-0425 673M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-15-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-16-0425 198K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-17-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-18-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-19-0425 0 - 2,27T -
tank/data@zfs-auto-snap_weekly-2016-06-19-0447 0 - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-20-0425 220M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-21-0426 370M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-22-0425 221K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-23-0425 209K - 2,28T -
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_daily 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_monthly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_yearly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_12:00:02_hourly 15,2M - 433G -
tank/vmdata@autosnap_2016-06-22_13:00:01_hourly 23,4M - 433G -
tank/vmdata@autosnap_2016-06-22_14:00:01_hourly 22,1M - 433G -
tank/vmdata@autosnap_2016-06-22_15:00:01_hourly 18,0M - 433G -
tank/vmdata@autosnap_2016-06-22_16:00:01_hourly 19,3M - 433G -
tank/vmdata@autosnap_2016-06-22_17:00:01_hourly 14,7M - 433G -
tank/vmdata@autosnap_2016-06-22_18:00:01_hourly 16,3M - 433G -
tank/vmdata@autosnap_2016-06-22_19:00:01_hourly 19,0M - 433G -
tank/vmdata@autosnap_2016-06-22_20:00:01_hourly 20,0M - 433G -
tank/vmdata@autosnap_2016-06-22_21:00:01_hourly 26,2M - 433G -
tank/vmdata@autosnap_2016-06-22_22:00:01_hourly 25,6M - 433G -
tank/vmdata@autosnap_2016-06-22_23:00:01_hourly 331M - 433G -
tank/vmdata@autosnap_2016-06-22_23:59:01_daily 166M - 436G -
tank/vmdata@autosnap_2016-06-23_00:00:01_hourly 98,2M - 436G -
tank/vmdata@autosnap_2016-06-23_01:00:02_hourly 211M - 451G -
tank/vmdata@autosnap_2016-06-23_02:00:01_hourly 102M - 451G -
tank/vmdata@autosnap_2016-06-23_03:00:01_hourly 24,7M - 451G -
tank/vmdata@autosnap_2016-06-23_04:00:01_hourly 15,1M - 451G -
tank/vmdata@autosnap_2016-06-23_05:00:01_hourly 14,1M - 451G -
tank/vmdata@autosnap_2016-06-23_06:00:01_hourly 26,0M - 451G -
tank/vmdata@autosnap_2016-06-23_07:00:01_hourly 21,8M - 451G -
tank/vmdata@autosnap_2016-06-23_08:00:01_hourly 18,7M - 451G -
tank/vmdata@autosnap_2016-06-23_09:00:02_hourly 25,0M - 451G -
tank/vmdata@autosnap_2016-06-23_10:00:01_hourly 7,94M - 451G -
tank/vmdata@syncoid_n2_2016-06-23:10:07:06 7,91M - 451G -
tank/vmdata@autosnap_2016-06-23_11:00:01_hourly 18,6M - 451G -
tank/vmdata@autosnap_2016-06-23_12:00:01_hourly 11,2M - 451G -
root@n1 ~ #

Target:
root@n2 ~ 1# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tankbak/vmdata@autosnap_2016-06-22_11:08:01_hourly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_daily 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_monthly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_yearly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_12:00:02_hourly 5,22M - 369G -
tankbak/vmdata@autosnap_2016-06-22_13:00:01_hourly 8,58M - 369G -
tankbak/vmdata@autosnap_2016-06-22_14:00:01_hourly 7,96M - 369G -
tankbak/vmdata@autosnap_2016-06-22_15:00:01_hourly 6,43M - 369G -
tankbak/vmdata@autosnap_2016-06-22_16:00:01_hourly 7,02M - 369G -
tankbak/vmdata@autosnap_2016-06-22_17:00:01_hourly 4,93M - 369G -
tankbak/vmdata@autosnap_2016-06-22_18:00:01_hourly 5,53M - 369G -
tankbak/vmdata@autosnap_2016-06-22_19:00:01_hourly 6,89M - 369G -
tankbak/vmdata@autosnap_2016-06-22_20:00:01_hourly 7,38M - 369G -
tankbak/vmdata@autosnap_2016-06-22_21:00:01_hourly 9,76M - 369G -
tankbak/vmdata@autosnap_2016-06-22_22:00:01_hourly 9,56M - 369G -
tankbak/vmdata@autosnap_2016-06-22_23:00:01_hourly 128M - 370G -
tankbak/vmdata@autosnap_2016-06-22_23:59:01_daily 100M - 374G -
tankbak/vmdata@autosnap_2016-06-23_00:00:01_hourly 56,9M - 374G -
tankbak/vmdata@autosnap_2016-06-23_01:00:02_hourly 63,2M - 399G -
tankbak/vmdata@autosnap_2016-06-23_02:00:01_hourly 33,2M - 399G -
tankbak/vmdata@autosnap_2016-06-23_03:00:01_hourly 9,53M - 399G -
tankbak/vmdata@autosnap_2016-06-23_04:00:01_hourly 5,53M - 399G -
tankbak/vmdata@autosnap_2016-06-23_05:00:01_hourly 5,04M - 399G -
tankbak/vmdata@autosnap_2016-06-23_06:00:01_hourly 9,91M - 399G -
tankbak/vmdata@autosnap_2016-06-23_07:00:01_hourly 8,33M - 400G -
tankbak/vmdata@autosnap_2016-06-23_08:00:01_hourly 6,87M - 400G -
tankbak/vmdata@autosnap_2016-06-23_09:00:02_hourly 9,57M - 400G -
tankbak/vmdata@autosnap_2016-06-23_10:00:01_hourly 3,37M - 400G -
tankbak/vmdata@syncoid_n2_2016-06-23:10:07:06 0 - 400G -

Second unsuccessful sync:

/usr/local/bin/sanoid/syncoid -debug root@n1:tank/vmdata tankbak/vmdata
DEBUG: checking availability of /usr/bin/lzop on source...
DEBUG: checking availability of /usr/bin/lzop on target...
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/bin/mbuffer on source...
DEBUG: checking availability of /usr/bin/mbuffer on target...
DEBUG: checking availability of /usr/bin/pv on local machine...
DEBUG: syncing source tank/vmdata to target tankbak/vmdata.
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive using /bin/ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using " /sbin/zfs get -H name tankbak/vmdata 2>&1 |"...
DEBUG: getting list of snapshots on tank/vmdata using /usr/bin/ssh -c [email protected],arcfour -p 22 -S /tmp/syncoid-root-root@n1-1466677106 root@n1 /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tank/vmdata |...
DEBUG: getting list of snapshots on tankbak/vmdata using /usr/bin/sudo /sbin/zfs get -Hpd 1 creation tankbak/vmdata |...
DEBUG: getting current value of -p used on tankbak/vmdata...
/sbin/zfs get -H -p used tankbak/vmdata

CRITICAL ERROR: Target exists but has no matching snapshots!
Replication to target would require destroying existing
target. Cowardly refusing to destroy your existing target.

root@n2 ~ 1#
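
For anyone debugging a report like this: a hedged check is whether the identically named syncoid snapshot on both sides really shares history, since an incremental zfs receive needs matching snapshot guids, not just matching names. The hostnames and snapshot names below come from the listings above:

# print the guid of the common snapshot on the source and on the target
ssh root@n1 zfs get -H -o value guid tank/vmdata@syncoid_n2_2016-06-23:10:07:06
zfs get -H -o value guid tankbak/vmdata@syncoid_n2_2016-06-23:10:07:06

If the two values differ, the target snapshot was created independently and cannot serve as an incremental base.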

Sanoid - Taking far too frequent snaps

Relevant output of zfs get written:

dpool/data@autosnap_2015-09-26_13:49:03_daily                            written   0        -
dpool/data@autosnap_2015-09-26_13:49:03_monthly                          written   0        -
dpool/data@autosnap_2015-09-26_13:49:03_yearly                           written   0        -
dpool/data@autosnap_2015-09-26_13:49:03_hourly                           written   0        -
dpool/data@autosnap_2015-09-26_13:44:01_daily                            written   0        -
dpool/data@autosnap_2015-09-26_13:44:01_monthly                          written   0        -
dpool/data@autosnap_2015-09-26_13:44:01_yearly                           written   0        -
dpool/data@autosnap_2015-09-26_13:50:01_yearly                           written   0        -
dpool/data@autosnap_2015-09-26_13:44:01_hourly                           written   0        -
dpool/data@autosnap_2015-09-26_13:50:01_hourly                           written   0        -
dpool/data@autosnap_2015-09-26_13:50:01_daily                            written   0        -
dpool/data@autosnap_2015-09-26_13:50:01_monthly                          written   0        -
dpool/data@autosnap_2015-09-26_13:45:01_hourly                           written   0        -
dpool/data@autosnap_2015-09-26_13:45:01_yearly                           written   0        -
dpool/data@autosnap_2015-09-26_13:54:01_hourly                           written   0        -
dpool/data@autosnap_2015-09-26_13:45:01_daily                            written   0        -
dpool/data@autosnap_2015-09-26_13:54:01_yearly                           written   0        -
dpool/data@autosnap_2015-09-26_13:45:01_monthly                          written   0        -
dpool/data@autosnap_2015-09-26_13:54:01_monthly                          written   0        -
dpool/data@autosnap_2015-09-26_13:54:01_daily                            written   0        -

Cron line
* * * * * /usr/local/bin/sanoid --cron

/etc/sanoid/sanoid.conf

######################################
# This is a sample sanoid.conf file. #
# It should go in /etc/sanoid.       #
######################################

[dpool/data]
    use_template = data
    recursive = yes

[dpool/backups]
    use_template = backup
    recursive = yes

[dpool/backup]
    use_template = backup
    recursive = yes

[dpool/root]
    use_template = os
    recursive = yes


#############################
# templates below this line #
#############################

# name your templates template_templatename. you can create your own, and use them in your module definitions above.

[template_os]
    hourly = 48
    daily = 30
    monthly = 3
    yearly = 0
    autosnap = yes
    autoprune = yes
    hourly_warn = 2880
    hourly_crit = 3600
    daily_warn = 48
    daily_crit = 60

[template_data]
    hourly = 48
    daily = 30
    monthly = 12
    yearly = 7
    autosnap = yes
    autoprune = yes
    hourly_warn = 2880
    hourly_crit = 3600
    daily_warn = 48
    daily_crit = 60

[template_backup]
    autoprune = yes
    hourly = 48
    daily = 30
    monthly = 12
    yearly = 7

    ### don't take new snapshots - snapshots on backup 
    ### datasets are replicated in from source, not
    ### generated locally
    autosnap = no

    ### monitor hourlies and dailies, but don't warn or 
    ### crit until they're over 48h old, since replication 
    ### is typically daily only
    hourly_warn = 2880
    hourly_crit = 3600
    daily_warn = 48
    daily_crit = 60


Don't hardcode perl path

Syncoid hardcodes the path to perl:

#!/usr/bin/perl

This doesn't work on some systems, e.g. FreeNAS. Instead, use env:

#!/usr/bin/env perl
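
Until the script itself is changed, one way to patch an installed copy locally is a quick sed; this is only a sketch, and the install path is an assumption:

# rewrite the shebang on line 1; /usr/local/bin/syncoid is an assumed install path
sed -i.bak '1s|^#!/usr/bin/perl|#!/usr/bin/env perl|' /usr/local/bin/syncoid

The -i.bak form leaves a backup and works with both GNU and BSD sed.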

Sanoid and Syncoid - Snapshots taken by these scripts are non-atomic

The problem:

Snapshots are not taken atomically. Each dataset in the config file is snapshotted individually.

Example:

DatasetA has a database
DatasetB has a file repository

A snapshot of DatasetA is taken
Both datasets are updated (database update, file added to repository)
A snapshot of DatasetB is taken
Your datasets are now out of sync. The database update is not saved, so the file added to the repo is orphaned.

Example 2:

DatasetA has a database
DatasetB has a file repository

A snapshot of DatasetB is taken
Both datasets are updated (database update, file added to repository)
A snapshot of DatasetA is taken
Your datasets are now out of sync. The database update is saved, but the file repo update is not, leaving orphaned data in the database. This is worse than the above.

Possible fix:

Take all the snapshots in a single command. Since all the snapshots are already pushed to an array in sanoid, this should need only minor changes near system($zfs, "snapshot", "$snap");. The same fix is needed in syncoid, somewhere around my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fs\@$snapname\n";. This should not affect snapshot expiry.
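
For illustration, the underlying zfs snapshot command already accepts several snapshot names in one invocation and, as I understand it, creates them atomically; the dataset and snapshot names below are placeholders:

# both snapshots are created in the same transaction, so they see the same moment in time
zfs snapshot tank/DatasetA@autosnap_2016-09-26_14:00:00_hourly \
             tank/DatasetB@autosnap_2016-09-26_14:00:00_hourly

So the change would mostly be a matter of collecting the names from the existing array and issuing one command instead of looping.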

syncoid: getting "Too many levels of symbolic links" in snapshot dir on remote host

I don't think this is a problem with syncoid but I wanted to check if there is a problem either with what I'm doing or with zfs.

I'm on Ubuntu 16.04 with the latest updates, and I'm using syncoid from master as of Aug 8. After the first incremental syncoid run I am getting "Too many levels of symbolic links" errors when trying to list the files in the remote snapshot dir.

Here are the commands I've run with robocat as the source host and roboduck as the remote backup host:

root@roboduck:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   757K  74.1G    19K  /tank
root@robocat:~# zfs create tank/testfs
root@robocat:~# echo "test" > /tank/testfs/testfile01.txt 
root@robocat:~# date; /usr/local/bin/syncoid tank/testfs root@roboduck:tank/testfs
Sun Aug 21 13:41:42 EDT 2016
INFO: Sending oldest full snapshot tank/testfs@syncoid_robocat_2016-08-21:13:41:43 (~ 40 KB) to new target filesystem:
41.5KiB 0:00:00 [18.1MiB/s] [================================>] 102%            
root@roboduck:~# date; ls /tank/testfs/
Sun Aug 21 13:42:03 EDT 2016
testfile01.txt
root@roboduck:~# ls /tank/testfs/.zfs/snapshot/
syncoid_robocat_2016-08-21:13:41:43
root@roboduck:~# ls /tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21\:13\:41\:43/
testfile01.txt
root@robocat:~# date; /usr/local/bin/syncoid tank/testfs root@roboduck:tank/testfs
Sun Aug 21 13:43:02 EDT 2016
Sending incremental tank/testfs@syncoid_robocat_2016-08-21:13:41:43 ... syncoid_robocat_2016-08-21:13:43:02 (~ 4 KB):
1.52KiB 0:00:00 [5.33KiB/s] [===========>                      ] 38%            
root@roboduck:~# date; ls /tank/testfs/
Sun Aug 21 13:43:11 EDT 2016
testfile01.txt
root@roboduck:~# ls /tank/testfs/.zfs/snapshot/
syncoid_robocat_2016-08-21:13:43:02
root@roboduck:~# ls /tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21\:13\:43\:02/
ls: cannot open directory '/tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21:13:43:02/': Too many levels of symbolic links
root@roboduck:~# ls /tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21\:13\:43\:02/testfile01.txt 
ls: cannot access '/tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21:13:43:02/testfile01.txt': Too many levels of symbolic links
root@roboduck:~# cat /tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21\:13\:43\:02/testfile01.txt 
test

On some other zfs filesystems I've been testing with, I've seen the files on the remote filesystem disappear after the first incremental sync, but I don't have a simple repro for that yet. The snapshot dir errors are the same in both cases. I'm guessing this is a problem with zfs, but it would be great if someone could have a quick look in case I'm doing something wrong.
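
Not an answer, but a hedged diagnostic that might narrow it down: check whether the snapshot directory is a stale automount left over from before the incremental receive, and try unmounting it (the paths reuse the snapshot name from the output above; this may or may not apply to your kernel/ZFS version):

# look for a lingering automount of the old snapshot
grep /tank/testfs /proc/mounts
# if one is listed, unmounting it should force a fresh automount on the next access
umount /tank/testfs/.zfs/snapshot/syncoid_robocat_2016-08-21:13:43:02

If ls works again afterwards, the error is coming from the stale automount rather than from anything syncoid did.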

crypto detection/fallback in Syncoid

Syncoid defaults to the arcfour cipher (-c arcfour) for its ssh transport, which is beneficial for throughput and is supported out of the box on most Linux distributions. FreeBSD has dropped default support for arcfour in recent releases, though, which underscores the desirability of automatically detecting cipher availability and falling back when the default isn't supported.

The goal is to keep arcfour as the default cipher, but to fall back automatically and still connect when arcfour turns out to be unsupported on the other side.
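
A minimal detection sketch along these lines, assuming OpenSSH on both ends; root@target is a placeholder and aes128-ctr is just an example fallback, not necessarily what syncoid would choose:

# probe whether an arcfour connection to the target succeeds at all
if ssh -o BatchMode=yes -c arcfour root@target true 2>/dev/null; then
    cipher=arcfour
else
    cipher=aes128-ctr    # fall back to a cipher that current OpenSSH still ships
fi
echo "using ssh cipher: $cipher"

The real implementation would presumably probe once per target and then reuse the chosen cipher for the zfs send/receive pipe.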

syncoid syncs all snapshots, not just the ones created by sanoid

I'm not sure if this is a bug or a feature, but I noticed that syncoid syncs all snapshots between the source and destination. I expected it to only sync the ones created by sanoid (those whose names start with autosnap).

The reason why I'm reporting this is that I also have snapshots (not created by sanoid) that don't need to be replicated (e.g. snapshots that I take every 15 minutes).

I guess the simplest solution would be to do some grep-ing in the getsnaps() function, but maybe an option like --only-autosnaps would be more appropriate. Or, even more versatile: add an option to allow the user to select which snapshots to sync (e.g. something like: --include-snap-type=weekly that would only send snapshots named autosnap_*_weekly)
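
As a stop-gap, the kind of filtering being suggested can be approximated on the command line; this is only a sketch, and the dataset name is a placeholder:

# list only sanoid-created snapshots of a single dataset
zfs list -H -t snapshot -o name -d 1 tank/data | grep '@autosnap_'

Something equivalent inside getsnaps(), gated behind an option such as the proposed --only-autosnaps, would keep locally taken 15-minute snapshots out of the replication stream.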

15 minute snapshots?

It's not clear to me if I can do 15 minute snapshots with this or not.

My thinking is this:

I want to be able to provide a "pseudo-HA" environment where two hosts have enough storage for two different sets of VMs to replicate to each other as a "backup + DR" recovery solution.

So to illustrate:

Host A --Replicates VMs to Host B in the event this host goes down
VM1(Home server)
VM2(Home server)
VM3(Cold storage but ready for activation)
VM4(Cold storage but ready for activation)

Host B --Replicates VMs to Host A in the event this host goes down
VM3(Home server)
VM4(Home server)
VM1(Cold storage but ready for activation)
VM2(Cold storage but ready for activation)

In this scenario, I believe hourly snapshots may go too far back in the event of a failover, even with a syncoid cron job running every hour to make sure each host holds the other host's VMs.

Edit: Made example a little bit clearer.
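
If the real concern is how far behind the replica can be rather than snapshot retention itself, a syncoid cron entry can already run more often than hourly, since each run replicates up to a fresh syncoid snapshot. A hedged sketch of such an entry; the path, host and dataset names are placeholders:

# replicate every 15 minutes; adjust path, dataset and host to your setup
*/15 * * * * /usr/local/bin/syncoid tank/vm/VM1 root@hostb:tank/vm/VM1

That keeps the standby copy within roughly 15 minutes of the source without changing sanoid's hourly/daily/monthly retention.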

How do you list existing snapshots on local or remote and restore?

Hi, I'm new to this and just reading up on it; it sounds great, but how do I list snapshots taken/synced by sanoid? How would I restore a specific snapshot?

Or is that out of the scope of sanoid and needs to be done via the native builtin OS commands?
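
As far as I can tell, listing and restoring are outside sanoid's scope and are done with the native ZFS commands; a quick sketch, with dataset and snapshot names as placeholders:

# list all snapshots under a dataset, including sanoid's autosnap_* ones
zfs list -t snapshot -r tank/data
# restore a single file by copying it out of the hidden .zfs snapshot directory
cp -a /tank/data/.zfs/snapshot/autosnap_2016-08-01_00:00:01_daily/somefile /tank/data/
# roll the whole dataset back (plain rollback only works to the most recent snapshot; -r destroys newer ones)
zfs rollback tank/data@autosnap_2016-08-01_00:00:01_daily

The same commands work on the syncoid target for snapshots that were replicated in.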

What exactly is findoid?

syncoid: sync zfs properties source -> target

At least part of this is just me getting my head around (and fighting) how ZFS handles dataset mounting vs. how a "normal" Linux filesystem does it. I'm not a fan of datasets getting auto-mounted, particularly when using ZFS for the root filesystem. So I tend to set canmount=noauto and/or mountpoint=legacy, or even canmount=off for datasets that are simply placeholders in the ZFS name tree.

I notice that syncoid has getzfsvalue() and setzfsvalue() but doesn't currently use them. It would be nice imo if syncoid would set target properties (at least canmount & mountpoint) to match source dataset properties.
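
Until something like that exists in syncoid itself, a hedged workaround is to copy the handful of properties by hand after the initial sync; the dataset names below are placeholders:

src=tank/data
dst=backuppool/tank/data
for prop in canmount mountpoint; do
    # read the current value from the source dataset ...
    val=$(zfs get -H -o value "$prop" "$src")
    # ... and apply the same value on the target dataset
    zfs set "$prop=$val" "$dst"
done

Wiring getzfsvalue()/setzfsvalue() up to do the same thing inside syncoid, behind an option, seems like the natural shape of the feature.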

Override config file locations with argument

On some distros the default config directory is not in /etc. This means the script must be edited directly, which will cause conflicts with future git pulls. It would be cleaner if the individual config files or the base config directory (or both!) could be overridden by a runtime argument, leaving the values in the script as the defaults.
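
For illustration only, the kind of invocation being requested might look like this; the --configdir flag is hypothetical here, it is the proposal rather than a documented option:

# --configdir is the proposed (hypothetical) flag; /etc/sanoid would remain the default
/usr/local/bin/sanoid --cron --configdir=/usr/local/etc/sanoid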

target exists but has no matching snapshots!

Since I did a git pull yesterday, all my datasets run into an error saying there are no matching snapshots. Even a freshly synced dataset (with the current version) fails after the first run if the source or target is remote. I can't sync on a local filesystem either; it fails with "failed to read from stream".
I'm sorry, I did not pull any changes since my last additions, so I just can't tell which one broke it. My latest working syncoid was "1.0.13" with my changes to getsnaps.

My OS:
Linux zfscas02 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux

Question: What happens if syncoid gets interrupted?

I want to transfer terabytes of data with syncoid.
What will happen if the transfer gets interrupted due to a technical problem or a network connection loss?

If I start the sync again, where will it resume?
Thanks for the answer. By the way, I'm using the OpenZFS implementation of ZFS in the latest version.

Question concerning sanoid running on syncoid target system

I want to implement the following setup:

  • No sanoid/syncoid running on the source system, no periodic snapshotting running on the source.
  • Syncoid is periodically pulling in the remote datasets (recursively, for the whole remote pool).
  • Sanoid snapshots these datasets on the target system, creating a hierarchy of hourly, weekly, etc. snapshots.

I got this working to the point where datasets are pulled in by syncoid and sanoid is creating the additional snapshots (based on the backup template from the default config).

However, when running syncoid on the target again, it destroys the snapshots created by sanoid. From reading the debug log, I suppose it does this not explicitly but through the rollback it performs when pulling in the next round of datasets.

Is the desired setup even possible with syncoid/sanoid? Or is it required that snapshots are created on the source systems?

Weekly Snapshots?

I like what you're doing with sanoid. I need to keep a weekly snapshot. Any chance of getting this added?

Similar config to monthly:
weekly_wday = 1   # 0 = Sun ... 6 = Sat, similar to cron
weekly_hour = 0
weekly_min = 0

Thank you for your work.
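
A hedged sketch of how this could look in a sanoid.conf template, following the existing template style; the weekly_* option names mirror the proposal above and the retention count is just an example, so none of this is guaranteed to exist in a current release:

[template_production]
    # weekly_* options below follow the proposal above and may not exist yet
    weekly = 4                 # keep four weekly snapshots
    weekly_wday = 1            # 0 = Sunday ... 6 = Saturday, as in cron
    weekly_hour = 0
    weekly_min = 0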

Specify config section for a complete pool as source

I'm trying to do a recursive backup of a whole pool, hence specifying only the poolname as source.

It works great when calling syncoid manually with -r, but it will not pick up options from the config file where I specified a section called

[mypoolname]
...options...

How do I specify a config section that gets picked up when the source is the whole pool (no /, no dataset)?

Syncoid Dry Run Options

I want to see that my configuration is correct, or see what syncoid will do before I do it. A dry run option would be nice.
