
Comments (7)

jimsalterjrs commented:

Very strange. Can you give me info on the distro and version (i.e. Ubuntu 14.04
or whatever) on both sides, please?


(Sent from my tablet - please blame any weird errors on autocorrect)

On June 23, 2016 06:30:50 FlowSem [email protected] wrote:

Hi there,

First of all, thanks for your effort. So far I really like Sanoid :)

But unfortunately syncoid doesn't work for me.

I'm trying to pull my VM dataset from my primary server (n1) to my secondary server (n2).
The first sync with syncoid seems successful, but incremental syncing doesn't work.

I've tried this several times now, removing all snapshots and making new ones with sanoid.
It's always the same: the first sync works, but subsequent runs stop with:

CRITICAL ERROR: Target exists but has no matching snapshots!

Maybe I'm making a mistake somewhere, but I don't know what I'm doing wrong...

Could you have a look at this?

Thanks in advance!

Here is some info:

Before syncoid:

Target:
root@n2 ~ 1# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tankbak 349G 550G 23K /tankbak
root@n2 ~ 1#

root@n2 ~ 1# zfs list -t snapshot
no datasets available
root@n2 ~ 1#

Source:
root@n1 ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 2,85T 4,80T 198K none
tank/data 2,39T 4,80T 2,28T /data
tank/vmdata 474G 4,80T 451G /vmdata
root@n1 ~ #

root@n1 ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tank/data@zfs-auto-snap_weekly-2016-05-01-0447 722M - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-08-0447 698K - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-15-0447 767K - 2,32T -
tank/data@zfs-auto-snap_weekly-2016-05-22-0447 558K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-24-0425 239M - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-25-0425 209K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-26-0425 181M - 2,25T -
tank/data@zfs-auto-snap_daily-2016-05-27-0425 366K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-28-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-29-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-05-29-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-30-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-31-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-01-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-02-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-03-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-04-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-05-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-05-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-06-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-07-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-08-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-09-0426 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-10-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-11-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-12-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-12-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-13-0426 208M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-14-0425 673M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-15-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-16-0425 198K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-17-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-18-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-19-0425 0 - 2,27T -
tank/data@zfs-auto-snap_weekly-2016-06-19-0447 0 - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-20-0425 220M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-21-0426 370M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-22-0425 221K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-23-0425 0 - 2,28T -
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_daily 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_monthly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_yearly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_12:00:02_hourly 15,2M - 433G -
tank/vmdata@autosnap_2016-06-22_13:00:01_hourly 23,4M - 433G -
tank/vmdata@autosnap_2016-06-22_14:00:01_hourly 22,1M - 433G -
tank/vmdata@autosnap_2016-06-22_15:00:01_hourly 18,0M - 433G -
tank/vmdata@autosnap_2016-06-22_16:00:01_hourly 19,3M - 433G -
tank/vmdata@autosnap_2016-06-22_17:00:01_hourly 14,7M - 433G -
tank/vmdata@autosnap_2016-06-22_18:00:01_hourly 16,3M - 433G -
tank/vmdata@autosnap_2016-06-22_19:00:01_hourly 19,0M - 433G -
tank/vmdata@autosnap_2016-06-22_20:00:01_hourly 20,0M - 433G -
tank/vmdata@autosnap_2016-06-22_21:00:01_hourly 26,2M - 433G -
tank/vmdata@autosnap_2016-06-22_22:00:01_hourly 25,6M - 433G -
tank/vmdata@autosnap_2016-06-22_23:00:01_hourly 331M - 433G -
tank/vmdata@autosnap_2016-06-22_23:59:01_daily 166M - 436G -
tank/vmdata@autosnap_2016-06-23_00:00:01_hourly 98,2M - 436G -
tank/vmdata@autosnap_2016-06-23_01:00:02_hourly 211M - 451G -
tank/vmdata@autosnap_2016-06-23_02:00:01_hourly 102M - 451G -
tank/vmdata@autosnap_2016-06-23_03:00:01_hourly 24,7M - 451G -
tank/vmdata@autosnap_2016-06-23_04:00:01_hourly 15,1M - 451G -
tank/vmdata@autosnap_2016-06-23_05:00:01_hourly 14,1M - 451G -
tank/vmdata@autosnap_2016-06-23_06:00:01_hourly 26,0M - 451G -
tank/vmdata@autosnap_2016-06-23_07:00:01_hourly 21,8M - 451G -
tank/vmdata@autosnap_2016-06-23_08:00:01_hourly 18,7M - 451G -
tank/vmdata@autosnap_2016-06-23_09:00:02_hourly 25,0M - 451G -
tank/vmdata@autosnap_2016-06-23_10:00:01_hourly 7,52M - 451G -
root@n1 ~ #

First syncoid:

root@n2 ~ 1# /usr/local/bin/sanoid/syncoid -debug root@n1:tank/vmdata
tankbak/vmdata
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/bin/mbuffer on source...
DEBUG: checking availability of /usr/bin/mbuffer on target...
DEBUG: checking availability of /usr/bin/pv on local machine...
DEBUG: syncing source tank/vmdata to target tankbak/vmdata.
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive
using /bin/ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using " /sbin/zfs get
-H name tankbak/vmdata 2>&1 |"...
DEBUG: getting list of snapshots on tank/vmdata using /usr/bin/ssh -c
[email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 /usr/bin/sudo /sbin/zfs get
-Hpd 1 creation tank/vmdata |...
DEBUG: target tankbak/vmdata does not exist. Finding oldest available
snapshot on source tank/vmdata ...
DEBUG: getting estimated transfer size from source -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 using "/usr/bin/ssh -c
[email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 /sbin/zfs send -nP
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 2>&1 |"...
DEBUG: sendsize = 465363687656
INFO: Sending oldest full snapshot
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly (~ 433.4 GB) to new target
filesystem:
DEBUG: /usr/bin/ssh -c [email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 ' /sbin/zfs send
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly | /usr/bin/lzop |
/usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null' | /usr/bin/mbuffer -q -s
128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc | /usr/bin/pv -s 465363687656
| /sbin/zfs receive -F tankbak/vmdata
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive
using /bin/ps -Ao args= ...
433GiB 1:56:56 [63,3MiB/s]
[=====================================================================>] 100%
DEBUG: getting estimated transfer size from source -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 using "/usr/bin/ssh -c
[email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 /sbin/zfs send -nP -I
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly
tank/vmdata@syncoid_n2_2016-06-23:10:07:06 2>&1 |"...
DEBUG: sendsize = 43796521336
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive
using /bin/ps -Ao args= ...
INFO: Updating new target filesystem with incremental
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly ...
syncoid_n2_2016-06-23:10:07:06 (~ 40.8 GB):
DEBUG: /usr/bin/ssh -c [email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466669226 root@n1 ' /sbin/zfs send -I
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly
tank/vmdata@syncoid_n2_2016-06-23:10:07:06 | /usr/bin/lzop |
/usr/bin/mbuffer -q -s 128k -m 16M 2>/dev/null' | /usr/bin/mbuffer -q -s
128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc | /usr/bin/pv -s 43796521336 |
/sbin/zfs receive -F tankbak/vmdata
40,8GiB 0:11:47 [ 59MiB/s]
[=====================================================================> ] 99%
root@n2 ~ 1#

After first sync:

Source:
root@n1 ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tank/data@zfs-auto-snap_weekly-2016-05-01-0447 722M - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-08-0447 698K - 2,29T -
tank/data@zfs-auto-snap_weekly-2016-05-15-0447 767K - 2,32T -
tank/data@zfs-auto-snap_weekly-2016-05-22-0447 558K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-24-0425 239M - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-25-0425 209K - 2,23T -
tank/data@zfs-auto-snap_daily-2016-05-26-0425 181M - 2,25T -
tank/data@zfs-auto-snap_daily-2016-05-27-0425 366K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-28-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-29-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-05-29-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-30-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-05-31-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-01-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-02-0425 209K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-03-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-04-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-05-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-05-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-06-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-07-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-08-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-09-0426 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-10-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-11-0425 198K - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-12-0425 0 - 2,26T -
tank/data@zfs-auto-snap_weekly-2016-06-12-0447 0 - 2,26T -
tank/data@zfs-auto-snap_daily-2016-06-13-0426 208M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-14-0425 673M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-15-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-16-0425 198K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-17-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-18-0425 209K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-19-0425 0 - 2,27T -
tank/data@zfs-auto-snap_weekly-2016-06-19-0447 0 - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-20-0425 220M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-21-0426 370M - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-22-0425 221K - 2,27T -
tank/data@zfs-auto-snap_daily-2016-06-23-0425 209K - 2,28T -
tank/vmdata@autosnap_2016-06-22_11:08:01_hourly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_daily 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_monthly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_11:08:01_yearly 0 - 433G -
tank/vmdata@autosnap_2016-06-22_12:00:02_hourly 15,2M - 433G -
tank/vmdata@autosnap_2016-06-22_13:00:01_hourly 23,4M - 433G -
tank/vmdata@autosnap_2016-06-22_14:00:01_hourly 22,1M - 433G -
tank/vmdata@autosnap_2016-06-22_15:00:01_hourly 18,0M - 433G -
tank/vmdata@autosnap_2016-06-22_16:00:01_hourly 19,3M - 433G -
tank/vmdata@autosnap_2016-06-22_17:00:01_hourly 14,7M - 433G -
tank/vmdata@autosnap_2016-06-22_18:00:01_hourly 16,3M - 433G -
tank/vmdata@autosnap_2016-06-22_19:00:01_hourly 19,0M - 433G -
tank/vmdata@autosnap_2016-06-22_20:00:01_hourly 20,0M - 433G -
tank/vmdata@autosnap_2016-06-22_21:00:01_hourly 26,2M - 433G -
tank/vmdata@autosnap_2016-06-22_22:00:01_hourly 25,6M - 433G -
tank/vmdata@autosnap_2016-06-22_23:00:01_hourly 331M - 433G -
tank/vmdata@autosnap_2016-06-22_23:59:01_daily 166M - 436G -
tank/vmdata@autosnap_2016-06-23_00:00:01_hourly 98,2M - 436G -
tank/vmdata@autosnap_2016-06-23_01:00:02_hourly 211M - 451G -
tank/vmdata@autosnap_2016-06-23_02:00:01_hourly 102M - 451G -
tank/vmdata@autosnap_2016-06-23_03:00:01_hourly 24,7M - 451G -
tank/vmdata@autosnap_2016-06-23_04:00:01_hourly 15,1M - 451G -
tank/vmdata@autosnap_2016-06-23_05:00:01_hourly 14,1M - 451G -
tank/vmdata@autosnap_2016-06-23_06:00:01_hourly 26,0M - 451G -
tank/vmdata@autosnap_2016-06-23_07:00:01_hourly 21,8M - 451G -
tank/vmdata@autosnap_2016-06-23_08:00:01_hourly 18,7M - 451G -
tank/vmdata@autosnap_2016-06-23_09:00:02_hourly 25,0M - 451G -
tank/vmdata@autosnap_2016-06-23_10:00:01_hourly 7,94M - 451G -
tank/vmdata@syncoid_n2_2016-06-23:10:07:06 7,91M - 451G -
tank/vmdata@autosnap_2016-06-23_11:00:01_hourly 18,6M - 451G -
tank/vmdata@autosnap_2016-06-23_12:00:01_hourly 11,2M - 451G -
root@n1 ~ #

Target:
root@n2 ~ 1# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tankbak/vmdata@autosnap_2016-06-22_11:08:01_hourly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_daily 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_monthly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_11:08:01_yearly 1K - 369G -
tankbak/vmdata@autosnap_2016-06-22_12:00:02_hourly 5,22M - 369G -
tankbak/vmdata@autosnap_2016-06-22_13:00:01_hourly 8,58M - 369G -
tankbak/vmdata@autosnap_2016-06-22_14:00:01_hourly 7,96M - 369G -
tankbak/vmdata@autosnap_2016-06-22_15:00:01_hourly 6,43M - 369G -
tankbak/vmdata@autosnap_2016-06-22_16:00:01_hourly 7,02M - 369G -
tankbak/vmdata@autosnap_2016-06-22_17:00:01_hourly 4,93M - 369G -
tankbak/vmdata@autosnap_2016-06-22_18:00:01_hourly 5,53M - 369G -
tankbak/vmdata@autosnap_2016-06-22_19:00:01_hourly 6,89M - 369G -
tankbak/vmdata@autosnap_2016-06-22_20:00:01_hourly 7,38M - 369G -
tankbak/vmdata@autosnap_2016-06-22_21:00:01_hourly 9,76M - 369G -
tankbak/vmdata@autosnap_2016-06-22_22:00:01_hourly 9,56M - 369G -
tankbak/vmdata@autosnap_2016-06-22_23:00:01_hourly 128M - 370G -
tankbak/vmdata@autosnap_2016-06-22_23:59:01_daily 100M - 374G -
tankbak/vmdata@autosnap_2016-06-23_00:00:01_hourly 56,9M - 374G -
tankbak/vmdata@autosnap_2016-06-23_01:00:02_hourly 63,2M - 399G -
tankbak/vmdata@autosnap_2016-06-23_02:00:01_hourly 33,2M - 399G -
tankbak/vmdata@autosnap_2016-06-23_03:00:01_hourly 9,53M - 399G -
tankbak/vmdata@autosnap_2016-06-23_04:00:01_hourly 5,53M - 399G -
tankbak/vmdata@autosnap_2016-06-23_05:00:01_hourly 5,04M - 399G -
tankbak/vmdata@autosnap_2016-06-23_06:00:01_hourly 9,91M - 399G -
tankbak/vmdata@autosnap_2016-06-23_07:00:01_hourly 8,33M - 400G -
tankbak/vmdata@autosnap_2016-06-23_08:00:01_hourly 6,87M - 400G -
tankbak/vmdata@autosnap_2016-06-23_09:00:02_hourly 9,57M - 400G -
tankbak/vmdata@autosnap_2016-06-23_10:00:01_hourly 3,37M - 400G -
tankbak/vmdata@syncoid_n2_2016-06-23:10:07:06 0 - 400G -

Second unsuccessful sync:

/usr/local/bin/sanoid/syncoid -debug root@n1:tank/vmdata tankbak/vmdata
DEBUG: checking availability of /usr/bin/lzop on source...
DEBUG: checking availability of /usr/bin/lzop on target...
DEBUG: checking availability of /usr/bin/lzop on local machine...
DEBUG: checking availability of /usr/bin/mbuffer on source...
DEBUG: checking availability of /usr/bin/mbuffer on target...
DEBUG: checking availability of /usr/bin/pv on local machine...
DEBUG: syncing source tank/vmdata to target tankbak/vmdata.
DEBUG: checking to see if tankbak/vmdata on is already in zfs receive
using /bin/ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using " /sbin/zfs get
-H name tankbak/vmdata 2>&1 |"...
DEBUG: getting list of snapshots on tank/vmdata using /usr/bin/ssh -c
[email protected],arcfour -p 22 -S
/tmp/syncoid-root-root@n1-1466677106 root@n1 /usr/bin/sudo /sbin/zfs get
-Hpd 1 creation tank/vmdata |...
DEBUG: getting list of snapshots on tankbak/vmdata using /usr/bin/sudo
/sbin/zfs get -Hpd 1 creation tankbak/vmdata |...
DEBUG: getting current value of -p used on tankbak/vmdata...
/sbin/zfs get -H -p used tankbak/vmdata

CRITICAL ERROR: Target exists but has no matching snapshots!
Replication to target would require destroying existing
target. Cowardly refusing to destroy your existing target.

root@n2 ~ 1#
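
One hedged way to check the condition this error describes (these are not commands from the original report) is to compare snapshot GUIDs on both sides, since incremental replication can only resume when source and target share at least one snapshot:

# on n2: collect snapshot GUIDs from both sides and intersect them
ssh root@n1 "zfs get -Hpd 1 guid tank/vmdata" | awk '{print $3}' | sort > /tmp/n1-guids
zfs get -Hpd 1 guid tankbak/vmdata | awk '{print $3}' | sort > /tmp/n2-guids
comm -12 /tmp/n1-guids /tmp/n2-guids    # no output = no common snapshot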



FlowSem commented:

Sure.

n1:
root@n1 ~ # cat /etc/debian_version
8.4

root@n1 ~ # uname -a
Linux n1 4.2.8-1-pve #1 SMP Fri Feb 26 16:37:36 CET 2016 x86_64 GNU/Linux

root@n1 ~ # zpool status
pool: tank
state: ONLINE
scan: scrub repaired 0 in 14h54m with 0 errors on Sun Jun 19 21:41:44 2016
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        ata-HGST_HDN724030ALE640_PK1234P8JK5J9P     ONLINE       0     0     0
        ata-HGST_HDN724030ALE640_PK1234P8JKAW7P     ONLINE       0     0     0
        ata-HGST_HDN724030ALE640_PK1234P8JLT1VX     ONLINE       0     0     0
        ata-HGST_HDN724030ALE640_PK1234P8JMLN2P     ONLINE       0     0     0
    logs
      scsi-3600508b1001030394134394436306500-part1  ONLINE       0     0     0
    cache
      scsi-3600508b1001030394134394436306500-part3  ONLINE       0     0     0

errors: No known data errors

n2:
root@n2 ~ 1# cat /etc/debian_version
8.5

root@n2 ~ 1# uname -a
Linux n2 4.4.6-1-pve #1 SMP Thu Apr 21 11:25:40 CEST 2016 x86_64 GNU/Linux

root@n2 ~ 1# zpool status
pool: tankbak
state: ONLINE
scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    tankbak     ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sda     ONLINE       0     0     0

errors: No known data errors

Both nodes run Proxmox VE.
n1 has VMs running.
n2 was fully reinstalled yesterday.

Both have the latest sanoid files (downloaded yesterday) in /usr/local/bin/sanoid/.


FlowSem commented:

Just a quick follow-up:
I updated n1 to the latest kernel and Debian version 8.5 and rebooted.
Then I destroyed the dataset on n2 again, removed the syncoid snapshots on n1, and started over with a completely new sync.
The first syncoid run succeeded, but the following sync failed again.

Error message was the same as before.
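
Roughly, such a reset looks like the following sketch (not the exact commands run; dataset and snapshot names are taken from the listings above):

zfs destroy -r tankbak/vmdata                                      # on n2: drop the target dataset and its snapshots
zfs destroy tank/vmdata@syncoid_n2_2016-06-23:10:07:06             # on n1: remove the leftover syncoid snapshot
/usr/local/bin/sanoid/syncoid root@n1:tank/vmdata tankbak/vmdata   # on n2: start a fresh full sync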


FlowSem commented:

I finally resolved the problem.

sudo was missing on my target machine n2.
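
For context, the target-side command syncoid uses to list snapshots (visible in the debug output above) is along these lines; with sudo absent it presumably fails, so syncoid sees no snapshots on the target:

/usr/bin/sudo /sbin/zfs get -Hpd 1 creation tankbak/vmdata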

Maybe you could add the package requirements to the README.
For Debian I also had to install some Perl dependencies.

These are the dependencies I needed on Debian Jessie 8.5:

perl -MCPAN -e 'install Perl::OSType'
perl -MCPAN -e 'install Module::Build';
cpan install Config::IniFiles
apt install mbuffer lzop pv sudo
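
A quick sanity check (just a sketch, not part of the original setup) to confirm the helper binaries syncoid relies on are present on a node:

for bin in sudo mbuffer lzop pv; do
    command -v "$bin" >/dev/null || echo "missing: $bin"
done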


jimsalterjrs commented:

You shouldn't need CPAN - I believe Debian has libconfig-inifiles-perl in its
repos, just like Ubuntu does.

apt install libconfig-inifiles-perl
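
Either way, a quick check (sketch) that the Perl module actually loads:

perl -MConfig::IniFiles -e 'print "Config::IniFiles ok\n"'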


(Sent from my tablet - please blame any weird errors on autocorrect)


FlowSem commented:

Well, maybe I should have checked that first :)

Anyway, thanks. On my next install I will try that.

So far sanoid and syncoid are chugging along happily now.
Keep up the good work :)


redmop commented:

It definitely does. I run it on Proxmox (Debian).
