
zfs_autobackup's Introduction

ZFS autobackup


Introduction

ZFS-autobackup aims to be the most reliable and easiest-to-use tool of its kind, while still having all the features.

You can use it as a backup tool, a replication tool, or a snapshot tool.

You can select what to back up by setting a custom ZFS property. This makes it easy to add/remove specific datasets, or to just back up your whole pool.
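
For example, assuming a backup set named 'offsite1' (a placeholder; the property follows the autobackup:<name> pattern seen in the issue logs below), you could select a whole pool and exclude one child dataset roughly like this:

zfs set autobackup:offsite1=true rpool
zfs set autobackup:offsite1=false rpool/swap

The 'Ignored (disabled)' lines in the logs further down correspond to children that were excluded this way.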

Other settings are specified on the command line: simply set up and test your zfs-autobackup command and fix any issues you encounter. When you're done, you can copy/paste your command into a cron job or script.
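
Purely as an illustration (the backup name 'offsite1', target path, schedule and log file are placeholders), an /etc/cron.d entry could look like:

0 3 * * * root zfs-autobackup --verbose --ssh-target backupserver offsite1 backuppool/backups >>/var/log/zfs-autobackup.log 2>&1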

Since it uses plain ZFS commands, you can see what it's actually doing by specifying --debug. This also helps a lot if you run into strange problems or errors: you can copy/paste the failing command and experiment with it on the command line. (Something I missed in other tools.)

An important feature that's missing from other tools is a reliable --test option: it allows you to see what zfs-autobackup will do and tune your parameters. It does everything except make changes to your system.

Features

  • Works across operating systems: Tested with Linux, FreeBSD/FreeNAS and SmartOS.
  • Low learning curve: no complex daemons or services, no additional software or networking needed.
  • Plays nicely with existing replication systems. (Like Proxmox HA)
  • Automatically selects filesystems to back up by looking at a simple ZFS property.
  • Creates consistent snapshots. (Takes all snapshots at once, atomically.)
  • Multiple backup modes:
    • Backup local data on the same server.
    • "push" local data to a backup-server via SSH.
    • "pull" remote data from a server via SSH and backup it locally.
    • "pull+push": Zero trust between source and target.
  • Can be scheduled via simple cronjob or run directly from commandline.
  • Also supports complex backup geometries.
  • ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer.
  • Supports sending with compression. (Using pigz, zstd, etc.)
  • IO buffering to speed up transfer.
  • Bandwidth rate limiting.
  • Multiple backups from and to the same datasets are no problem.
  • Resilient to errors.
  • Ability to manually 'finish' failed backups to see what's going on.
  • Easy to debug and has a test-mode. Actual unix commands are printed.
  • Uses progressive thinning for older snapshots.
  • Uses zfs-holds on important snapshots to prevent accidental deletion.
  • Automatic resuming of failed transfers.
  • Easy migration from other zfs backup systems to zfs-autobackup.
  • Gracefully handles datasets that no longer exist on source.
  • Complete and clean logging.
  • All code is regression tested against actual ZFS environments.
  • Easy installation:
    • Just install zfs-autobackup via pip.
    • Only needs to be installed on one side.
    • Written in Python and uses plain ZFS commands; no special third-party dependencies or compiled libraries needed.
    • No annoying config files or properties.

Getting started

Please look at our wiki to Get started.

Or read the Full manual


zfs_autobackup's People

Contributors

bagbag, bk, digitalsignalperson, dkew8, noifp, oddlama, p-eb, parke, psy0rz, sbidoul, tuffnatty, wxcafe, xrobau


zfs_autobackup's Issues

Customize ZFS_MAX_UNCHANGED_BYTES

Currently ZFS_MAX_UNCHANGED_BYTES=200000, but in my testing before implementing zfs_autobackup, I found it does not detect my changes unless I make them really big.

e.g. writing some random text to a file:

sudo zfs get -H -ovalue -p written@test-20200310202035 mypool/test
77824

200k seems arbitrary - expose as an argument?

Pattern backup history (e.g. progressive thinning)

It would be nice to be able to define a pattern for the backup history.
For example:

  • past 5 year, 1 backup each quarter year
  • past year, 1 backup every month
  • past half-year, 2 backups every month
  • past quarter-year, one backup every week
  • past month, one backup every day
  • past week, one backup every hour
  • past day, one backup every quarter hour (insane?)
  • past hour, one backup every minute (even more insane?)
  • etc etc... ;)

Any suggestions on how/where to implement this are welcome. When I have the time I'm willing to build it; I just need some confirmation on what is actually wanted. :)
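
For reference, the thinning schedules zfs-autobackup already prints in its logs ("Keep the last 10 snapshots. Keep every 1 day, delete after 1 week." and so on) correspond to a comma-separated rule string such as the one passed with --keep-source 15min6h elsewhere in these issues. A rough, unverified approximation of the wishlist above might then look like (backup name and target are placeholders):

zfs-autobackup --keep-source 10,1h1d,1d1m,1w1y,1m5y offsite1 backuppool/backups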

Allow receiving on encrypted datasets

Encrypted datasets do not support the embedded_data feature. Still, it is used if the sending side is unencrypted. Receiving on an encrypted dataset will then fail.

Commenting

#        if 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
#            cmd.append("-e") # WRITE_EMBEDDED, more compact stream

out in send_pipe solves the problem for me. I can then receive using:
/root/zfsautobackup/zfs-autobackup.py --ssh-source t440 echo tank/t440 --no-snapshot --clear-refreservation --clear-mountpoint --filter-properties encryption --verbose
The dataset then inherits the encryption properties from the parent.

Proposal: whether -e is added to the zfs send should also depend on whether the receiving side supports it.

I will try to submit a pull request, though at the moment I lack the time to do so.
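
A quick manual check for whether the receiving pool has the embedded_data feature at all (the pool name is a placeholder) is:

zpool get -H -o value feature@embedded_data targetpool

which prints enabled, active or disabled.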

Enabling -D for sending a deduped stream doesn't work

Hello,
I want to replicate heavily deduped datasets and I was missing the -D option while sending.
After looking at the code I found a commented-out section about this.
Just removing the "#" in that code section results in a lot of error output.

#NOTE: performance is usually worse with this option, according to manual
#also -D will be depricated in newer ZFS versions
if not resume:
    if "-D" in self.zfs_node.supported_send_options:
        cmd.append("-D") # dedupped stream, sends less duplicate data

Additionally, I use the option "dedup=skein" instead of "dedup=on" for performance reasons (much better!) and this may be the reason why this won't work.
Anyway, I don't know much about Python, so any help would be nice.
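
Independent of the Python code, it may help to first verify that a deduplicated stream works at all on this system, for example with a throwaway manual send (dataset and snapshot names are placeholders; note that -D is deprecated in newer OpenZFS releases):

zfs send -D pool/dataset@somesnapshot > /dev/null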

Send or Recv Error?

I am trying to get zfs-autobackup on FreeBSD 12.1 to create snapshots of zroot and send them to a ZFS pool on a RAIDZ HDD on the same server. I get the following errors on the send portion of the zfs-autobackup command. I have no idea what this error means or how to resolve it.
Any help would be appreciated.

# zfs-autobackup --test --verbose --progress hddstorage /storage/Recovery/STEEL/snapshots/
zfs-autobackup v3.0 - Copyright 2020 E.H.Eefting ([email protected])
TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES

Source settings

[Source] Datasets are local
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.
[Source] Keep every 1 month, delete after 1 year.
[Source] Selects all datasets that have property 'autobackup:hddstorage=true' (or childs of datasets that have 'autobackup:hddstorage=child')

Selecting

[Source] zroot: Selected (direct selection)
[Source] zroot/ROOT: Selected (inherited selection)
[Source] zroot/ROOT/12.1-RELEASE: Selected (inherited selection)
[Source] zroot/ROOT/12.1-RELEASE-p10: Selected (inherited selection)
[Source] zroot/ROOT/current: Selected (inherited selection)
[Source] zroot/ROOT/default: Selected (inherited selection)
[Source] zroot/swap: Ignored (disabled)
[Source] zroot/tmp: Ignored (disabled)
[Source] zroot/usr: Selected (inherited selection)
[Source] zroot/usr/home: Selected (inherited selection)
[Source] zroot/usr/ports: Selected (inherited selection)
[Source] zroot/usr/src: Selected (inherited selection)
[Source] zroot/var: Selected (inherited selection)
[Source] zroot/var/audit: Selected (inherited selection)
[Source] zroot/var/crash: Selected (inherited selection)
[Source] zroot/var/log: Selected (inherited selection)
[Source] zroot/var/mail: Selected (inherited selection)
[Source] zroot/var/tmp: Ignored (disabled)

Snapshotting

[Source] Creating snapshots hddstorage-20201018113559 in pool zroot

Target settings

[Target] Datasets are local
[Target] Keep the last 10 snapshots.
[Target] Keep every 1 day, delete after 1 week.
[Target] Keep every 1 week, delete after 1 month.
[Target] Keep every 1 month, delete after 1 year.
[Target] Receive datasets under: /storage/Recovery/STEEL/snapshots/

Sending and thinning

! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/ROOT: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/ROOT/12.1-RELEASE: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/ROOT/12.1-RELEASE-p10: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/ROOT/current: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/ROOT/default: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/usr: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/usr/home: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/usr/ports: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/usr/src: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/var: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/var/audit: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/var/crash: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/var/log: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot/var/mail: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! 15 failures!

TEST MODE - DID NOT MAKE ANY CHANGES!

When I run with --debug I get the following:

# zfs-autobackup --debug --test --verbose --progress hddstorage /storage/Recovery/STEEL/snapshots
zfs-autobackup v3.0 - Copyright 2020 E.H.Eefting ([email protected])

TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES

Source settings

[Source] Datasets are local
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.
[Source] Keep every 1 month, delete after 1 year.
[Source] Selects all datasets that have property 'autobackup:hddstorage=true' (or childs of datasets that have 'autobackup:hddstorage=child')

Selecting

[Source] Getting selected datasets
[Source] RUN > zfs get -t volume,filesystem -o name,value,source -s local,inherited -H autobackup:hddstorage
[Source] zroot: Selected (direct selection)
[Source] zroot/ROOT: Selected (inherited selection)
[Source] zroot/ROOT/12.1-RELEASE: Selected (inherited selection)
[Source] zroot/ROOT/12.1-RELEASE-p10: Selected (inherited selection)
[Source] zroot/ROOT/current: Selected (inherited selection)
[Source] zroot/ROOT/default: Selected (inherited selection)
[Source] zroot/swap: Ignored (disabled)
[Source] zroot/tmp: Ignored (disabled)
[Source] zroot/usr: Selected (inherited selection)
[Source] zroot/usr/home: Selected (inherited selection)
[Source] zroot/usr/ports: Selected (inherited selection)
[Source] zroot/usr/src: Selected (inherited selection)
[Source] zroot/var: Selected (inherited selection)
[Source] zroot/var/audit: Selected (inherited selection)
[Source] zroot/var/crash: Selected (inherited selection)
[Source] zroot/var/log: Selected (inherited selection)
[Source] zroot/var/mail: Selected (inherited selection)
[Source] zroot/var/tmp: Ignored (disabled)

Snapshotting

[Source] zroot: Getting snapshots
[Source] zroot: Checking if filesystem exists
[Source] RUN > zfs list zroot
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot
[Source] zroot/ROOT: Getting snapshots
[Source] zroot/ROOT: Checking if filesystem exists
[Source] RUN > zfs list zroot/ROOT
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/ROOT
[Source] zroot/ROOT/12.1-RELEASE: Getting snapshots
[Source] zroot/ROOT/12.1-RELEASE: Checking if filesystem exists
[Source] RUN > zfs list zroot/ROOT/12.1-RELEASE
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/ROOT/12.1-RELEASE
[Source] zroot/ROOT/12.1-RELEASE-p10: Getting snapshots
[Source] zroot/ROOT/12.1-RELEASE-p10: Checking if filesystem exists
[Source] RUN > zfs list zroot/ROOT/12.1-RELEASE-p10
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/ROOT/12.1-RELEASE-p10
[Source] zroot/ROOT/current: Getting snapshots
[Source] zroot/ROOT/current: Checking if filesystem exists
[Source] RUN > zfs list zroot/ROOT/current
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/ROOT/current
[Source] zroot/ROOT/default: Getting snapshots
[Source] zroot/ROOT/default: Checking if filesystem exists
[Source] RUN > zfs list zroot/ROOT/default
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/ROOT/default
[Source] zroot/usr: Getting snapshots
[Source] zroot/usr: Checking if filesystem exists
[Source] RUN > zfs list zroot/usr
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/usr
[Source] zroot/usr/home: Getting snapshots
[Source] zroot/usr/home: Checking if filesystem exists
[Source] RUN > zfs list zroot/usr/home
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/usr/home
[Source] zroot/usr/ports: Getting snapshots
[Source] zroot/usr/ports: Checking if filesystem exists
[Source] RUN > zfs list zroot/usr/ports
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/usr/ports
[Source] zroot/usr/src: Getting snapshots
[Source] zroot/usr/src: Checking if filesystem exists
[Source] RUN > zfs list zroot/usr/src
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/usr/src
[Source] zroot/var: Getting snapshots
[Source] zroot/var: Checking if filesystem exists
[Source] RUN > zfs list zroot/var
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/var
[Source] zroot/var/audit: Getting snapshots
[Source] zroot/var/audit: Checking if filesystem exists
[Source] RUN > zfs list zroot/var/audit
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/var/audit
[Source] zroot/var/crash: Getting snapshots
[Source] zroot/var/crash: Checking if filesystem exists
[Source] RUN > zfs list zroot/var/crash
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/var/crash
[Source] zroot/var/log: Getting snapshots
[Source] zroot/var/log: Checking if filesystem exists
[Source] RUN > zfs list zroot/var/log
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/var/log
[Source] zroot/var/mail: Getting snapshots
[Source] zroot/var/mail: Checking if filesystem exists
[Source] RUN > zfs list zroot/var/mail
[Source] RUN > zfs list -d 1 -r -t snapshot -H -o name zroot/var/mail
[Source] Creating snapshots hddstorage-20201018115848 in pool zroot
[Source] SKIP > zfs snapshot zroot@hddstorage-20201018115848 zroot/ROOT@hddstorage-20201018115848 zroot/ROOT/12.1-RELEASE@hddstorage-20201018115848 zroot/ROOT/12.1-RELEASE-p10@hddstorage-20201018115848 zroot/ROOT/current@hddstorage-20201018115848 zroot/ROOT/default@hddstorage-20201018115848 zroot/usr@hddstorage-20201018115848 zroot/usr/home@hddstorage-20201018115848 zroot/usr/ports@hddstorage-20201018115848 zroot/usr/src@hddstorage-20201018115848 zroot/var@hddstorage-20201018115848 zroot/var/audit@hddstorage-20201018115848 zroot/var/crash@hddstorage-20201018115848 zroot/var/log@hddstorage-20201018115848 zroot/var/mail@hddstorage-20201018115848

Target settings

[Target] Datasets are local
[Target] Keep the last 10 snapshots.
[Target] Keep every 1 day, delete after 1 week.
[Target] Keep every 1 week, delete after 1 month.
[Target] Keep every 1 month, delete after 1 year.
[Target] Receive datasets under: /storage/Recovery/STEEL/snapshots

Sending and thinning

[Target] /storage/Recovery/STEEL/snapshots: Checking if filesystem exists
[Target] RUN > zfs list /storage/Recovery/STEEL/snapshots
[Target] /storage/Recovery/STEEL/snapshots: Checking if filesystem exists
[Target] RUN > zfs list /storage/Recovery/STEEL/snapshots
[Source] zpool zroot: Getting zpool properties
[Source] RUN > zpool get -H -p all zroot
[Target] zpool : Getting zpool properties
[Target] RUN > zpool get -H -p all
! [Target] STDERR > cannot open '': name must begin with a letter
! [Source] zroot: FAILED: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
! Exception: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/usr/local/bin/zfs-autobackup", line 1862, in
sys.exit(zfs_autobackup.run())
File "/usr/local/bin/zfs-autobackup", line 1825, in run
fail_count=self.sync_datasets(source_node, source_datasets)
File "/usr/local/bin/zfs-autobackup", line 1703, in sync_datasets
target_features=target_node.get_zfs_pool(target_dataset.split_path()[0]).features
File "/usr/local/bin/zfs-autobackup", line 521, in features
for (key,value) in self.properties.items():
File "/usr/local/bin/zfs-autobackup", line 252, in get
obj._cached_properties[propname]=self.func(obj)
File "/usr/local/bin/zfs-autobackup", line 510, in properties
for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[ 0 ]):
File "/usr/local/bin/zfs-autobackup", line 456, in run
raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '[b'zpool', b'get', b'-H', b'-p', b'all', b'']' returned non-zero exit status 1.
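
Note that the failing command is zpool get -H -p all '' with an empty pool name, which suggests the leading slash in the target argument is being split off as an empty first path component. Passing the target as a plain dataset name (as in the FreeBSD 12.2 issue further down) would avoid that, e.g.:

zfs-autobackup --test --verbose --progress hddstorage storage/Recovery/STEEL/snapshots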

No argument for non-standard SSH port

It would be nice to have an additional argument to handle SSH over a port other than the standard 22. Something like '--ssh-port' that would be utilized by both '--ssh-target' and '--ssh-source' would be perfect.
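
Until such an option exists, a common workaround is an ssh client config entry, since zfs-autobackup simply invokes ssh; the host name and port below are placeholders:

# ~/.ssh/config on the machine running zfs-autobackup
Host backupserver
    HostName backupserver.example.com
    Port 2222

With that in place, --ssh-source backupserver or --ssh-target backupserver will use port 2222 automatically.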

Use source pool names as target pool name

As a user, I want to back up multiple pools to a backup server that has the same pool schema, so that I can create a full replica that is ready to go in a disaster scenario.

Let's imagine a situation where you have 2 servers with identical hardware configuration, the ZFS pools on those servers are divided into fast and slow storage (SSD/fs and HDD/fs), and you want to perform a backup.

Best regards!

support for pre- and post-snapshot scripts

Currently, the script only performs the ZFS-related operations on the target system. Therefore, all the supporting actions, like stopping and restarting the service, must be done outside of it. This can lead to prolonged downtime. As the snapshot itself is almost immediate, it would make sense to move these actions to the point between taking the snapshots and the actual transfer of the data. At this time we're using the --no-send/--no-snapshot combination, which is fine, but support for adding such scripts would be much more convenient.
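
As a stop-gap, the --no-send/--no-snapshot combination mentioned above can be wrapped in a small script so the service is only down for the (near-instant) snapshot step. The service name, backup name, target path and service manager below are placeholders, and the flags should be checked against your version:

#!/bin/sh
# 1. stop the service and take only the snapshots
systemctl stop myservice
zfs-autobackup --no-send mybackup backuppool/backups
systemctl start myservice
# 2. transfer the snapshots that were just taken, without creating new ones
zfs-autobackup --no-snapshot mybackup backuppool/backups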

DeprecationWarning of imp module

This error occurs on version 3.0-rc3 and 3.0-rc4

 # zfs_autobackup
/usr/local/bin/zfs_autobackup:23: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp

local snapshots gone before attempting to replicate with --other-snapshots

I have zfs-auto-snapshot running to handle local frequent/hourly/daily/monthly snapshots. I ran my zfs-autobackup script with your new --other-snapshots flag so that all the zfs-auto-snap snapshots get replicated. However, I ran into an issue where one of my hourly snapshots expired and was deleted in between calling zfs-autobackup and it reaching that particular incremental send, resulting in

! [Target] STDERR|> cannot send mypool/dataset@zfs-auto-snap_hourly-2020-03-12-16h00U: snapshot mypool/dataset@zfs-auto-snap_hourly-2020-03-12-16h00U does not exist

...

and then failing to send the rest of the snapshots for that dataset.

I suppose I could stop the zfs-auto-snapshot service before doing backups, but that could leave the system vulnerable for hours depending how long the replication takes.

Any ideas on what good practice would be? Is it feasible for zfs-autobackup to "refresh" the snapshot list as it goes?

pip install yields two versions "zfs-autobackup" and "zfs_autobackup"

$ sudo -H pip install zfs-autobackup
Requirement already satisfied: zfs-autobackup in /usr/local/lib/python3.6/dist-packages (3.0rc3)
Requirement already satisfied: argparse in /usr/local/lib/python3.6/dist-packages (from zfs-autobackup) (1.4.0)
Requirement already satisfied: colorama in /usr/lib/python3/dist-packages (from zfs-autobackup) (0.3.7)

now to see the tab-complete

$ zfs
zfs             zfs_autobackup  zfs-autobackup  

hm,

$ which zfs_autobackup
/usr/local/bin/zfs_autobackup

$ which zfs-autobackup
/usr/local/bin/zfs-autobackup
$ ll /usr/local/bin | grep zfs
-rwxr-xr-x  1 root root 53903 Feb 19 20:37 zfs_autobackup*
-rwxr-xr-x  1 root root 53903 Feb 19 20:37 zfs-autobackup*

Looks like duplicate executable scripts

$ file /usr/local/bin/zfs_autobackup
/usr/local/bin/zfs_autobackup: Python script, ASCII text executable, with very long lines
$ file /usr/local/bin/zfs-autobackup
/usr/local/bin/zfs-autobackup: Python script, ASCII text executable, with very long lines

Tab & Space mixing

Lines 446-452 of the script have mixed tabs & spaces, causing a Python error on execution.

$/tmp/zfs_autobackup/zfs_autobackup 
  File "/tmp/zfs_autobackup/zfs_autobackup", line 447
    error="Cant find latest target snapshot on source, did you destroy it accidently? "+source_filesystem+"@"+latest_target_snapshot
                                                                                                                                   ^
TabError: inconsistent use of tabs and spaces in indentation

Generate Metadata and Python 3?

Nice script.

For SmartOS setups, it would be wonderful if the script also generated the metadata (JSON) file and sent it together with the backup files, for close to full automation.

Also, can it be updated to Python 3+, since repositories nowadays default to Python 3?

Local only, "in place" snapshots

Would be cool to be able to run without target or source, and have zfs_autobackup do snapshots on the dataset itself. Currently I use a "hack" where I run zfs_autobackup --ssh-target 127.0.0.1 local none --no-send --keep-source 15min6h but it'd be nice to be able to just go zfs_autobackup local --keep 15min6h or something.

Also, currently this setup doesn't work properly (i.e. zfs_autobackup still connects to the "target" and tells me none doesn't exist) :/ (it worked with RC-9)

Feature Request: Use pigz

As CPUs are massively faster than anything else we have, we should be compressing data before moving it around. pigz is a multi-threaded implementation of gzip, which scales pretty much infinitely.

I've been running an older hacked version of zfs_backup which uses bash -c on the remote machine, and I was going to do a proper PR for this, but there seems to be some strange issue that I can't figure out.

This is my send command, which works fine:

ssh storage1 '/bin/bash' '-c' '( zfs send' '-L' '-e' '-c' '-D' '-v' '-P' '-p' 'pool1/mainstore@backup-20200428071553' '|pigz)'

I then use shell=True on the local Popen to allow this, since you want to keep Python out of the data path; it would otherwise just be moving data in and out of memory and slowing things down. (The second line below is self.debug(encoded_cmd) before the Popen, on line 413.)

# [Target] Piping input
# [Target] [b'/usr/bin/pigz', b'-d', b'|', b'zfs', b'recv', b'-u', b'-v', b'bigmirror/storage1/pool1/mainstore']
! DATASET FAILED: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
! Exception: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

This looks like the new version is trying to readline.decode(utf-8) the binary data coming from zfs send (or, in this case, zfs send | pigz), and I can't understand why that would be happening.

Any hints?
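
For comparison, a trimmed-down, fully hand-built version of the pipeline above (with zfs-autobackup and Python out of the data path, reusing the host and dataset names from those commands) would look roughly like:

ssh storage1 'zfs send -L -e -c -v pool1/mainstore@backup-20200428071553 | pigz' | pigz -d | zfs recv -u -v bigmirror/storage1/pool1/mainstore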

Fails to recv with ZoL version 6.5.4 when dataset doesn't already exist.

#zfs_autobackup --ssh-source kvm1 --clear-mountpoint kvm1 data --debug --strip-path 1

# ssh 'kvm1' 'zfs' 'get' '-t' 'volume,filesystem' '-o' 'name,value,source' '-s' 'local,inherited' '-H' 'autobackup:kvm1'
# ssh 'kvm1' 'zfs' 'snapshot' 'datastore/Spice_C@kvm1-20170426151419' 'datastore/Spice_ContentLibrary@kvm1-20170426151419' 'datastore/Spice_ProgSys@kvm1-20170426151419'
# ssh 'kvm1' 'zfs' 'list' '-d' '1' '-r' '-t' 'snapshot' '-H' '-o' 'name' 'datastore/Spice_C' 'datastore/Spice_ContentLibrary' 'datastore/Spice_ProgSys'
Source snapshots: {'datastore/Spice_C': ['kvm1-20170426151419'],
 'datastore/Spice_ContentLibrary': ['kvm1-20170426151419'],
 'datastore/Spice_ProgSys': ['kvm1-20170426151419']}
# zfs 'list' '-d' '1' '-r' '-t' 'snapshot' '-H' '-o' 'name' 'data/Spice_C' 'data/Spice_ContentLibrary' 'data/Spice_ProgSys'
cannot open 'data/Spice_C': dataset does not exist
cannot open 'data/Spice_ContentLibrary': dataset does not exist
cannot open 'data/Spice_ProgSys': dataset does not exist
Target snapshots: {}
# zfs 'create' '-p' 'data'
# ssh 'kvm1' 'zfs' 'send' '-p' '-v' 'datastore/Spice_C@kvm1-20170426151419' | zfs 'recv' '-u' '-v' 'data/Spice_C'
cannot receive: failed to read from stream
Verifying if snapshot exists on target
# zfs 'list' 'data/Spice_C@kvm1-20170426151419'
cannot open 'data/Spice_C@kvm1-20170426151419': dataset does not exist
Traceback (most recent call last):
  File "/opt/bin/zfs_autobackup", line 484, in <module>
    ssh_target=args.ssh_target, target_filesystem=target_filesystem)
  File "/opt/bin/zfs_autobackup", line 285, in zfs_transfer
    run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+"@"+second_snapshot ])
  File "/opt/bin/zfs_autobackup", line 70, in run
    raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '['zfs', 'list', 'data/Spice_C@kvm1-20170426151419']' returned non-zero exit status 1

I'm able to get around the error if I first manually run the same command without the switches "-p -v" & "-u -v"
i.e. #ssh kvm1 zfs send datastore/Spice_C@kvm1-20170426151419 | zfs recv data/Spice_C

Once the datasets marked with autobackup:kvm1=true exist on both zfs hosts the program completes without error.

I don't know if it's something specific with my configuration.

I'm using --strip-path 1 because the zpool names are different and it failed to create data/datastore or data/datastore/Spice_C without that option


#zfs_autobackup --ssh-source kvm1 --clear-mountpoint kvm1 data --verbose
Getting selected source filesystems for backup kvm1 on kvm1
Selected: datastore/Spice_C (direct selection)
Selected: datastore/Spice_ContentLibrary (direct selection)
Selected: datastore/Spice_ProgSys (direct selection)
Creating source snapshot kvm1-20170426141821 on kvm1 
Getting source snapshot-list from kvm1
Getting target snapshot-list from local
cannot open 'data/datastore/Spice_C': dataset does not exist
cannot open 'data/datastore/Spice_ContentLibrary': dataset does not exist
cannot open 'data/datastore/Spice_ProgSys': dataset does not exist
Tranferring datastore/Spice_C initial backup snapshot kvm1-20170426141235
cannot receive: failed to read from stream
cannot open 'data/datastore/Spice_C@kvm1-20170426141235': dataset does not exist
Traceback (most recent call last):
  File "/opt/bin/zfs_autobackup", line 484, in <module>
    ssh_target=args.ssh_target, target_filesystem=target_filesystem)
  File "/opt/bin/zfs_autobackup", line 285, in zfs_transfer
    run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+"@"+second_snapshot ])
  File "/opt/bin/zfs_autobackup", line 70, in run
    raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '['zfs', 'list', 'data/datastore/Spice_C@kvm1-20170426141235']' returned non-zero exit status 1

support non-root users

Currently, the only way to do backups is to allow the root user to log in via ssh. This is not an option in many cases. Therefore, it would be nice to check if we are root at the beginning and escalate the privileges via su/sudo, if we are not.
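
A common approach, independent of zfs-autobackup itself, is ZFS permission delegation so that a dedicated non-root user can run the required zfs commands over ssh. The user and pool names below are placeholders, and on Linux mounting may still require extra privileges:

# on the source host
zfs allow backupuser send,snapshot,hold,release,destroy,mount pool
# on the target host
zfs allow backupuser receive,create,mount,destroy,rollback,hold,release pool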

"multiple snapshots of same fs not allowed" during "Snapshotting" phase

Hello! First time user here, first time attempting to use this awesome tool.

Unfortunately ran into an unexpected problem during snapshot. Can't really figure out what "multiple snapshots of same fs" is referring to. Per the below, I only see ONE snapshot per filesystem.

NOTE: some of those are ZVOL volumes.

Here's my params for entry:

zfs-autobackup --ssh-source fermmy-vms fermmy-server ztank/backup/fermmy-vms \
        --progress \
        --verbose \
        --clear-refreservation \
        --clear-mountpoint

Results in:

  #### Snapshotting
  [Source] Creating snapshot fermmy-server-20200219212008
! [Source] STDERR > cannot create snapshots : multiple snapshots of same fs not allowed
! [Source] STDERR > no snapshots were created
Traceback (most recent call last):
  File "/usr/local/bin/zfs-autobackup", line 1470, in <module>
    sys.exit(zfs_autobackup.run())
  File "/usr/local/bin/zfs-autobackup", line 1413, in run
    source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), allow_empty=self.args.allow_empty)
  File "/usr/local/bin/zfs-autobackup", line 1252, in consistent_snapshot
    self.run(cmd, readonly=False)
  File "/usr/local/bin/zfs-autobackup", line 471, in run
    raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))

I did a quick find-and-replace of the ", " separators with \n for readability:

subprocess.CalledProcessError: Command '[b'ssh'
b'fermmy-vms'
b"'zfs'"
b"'snapshot'"
b"'bpool@fermmy-server-20200219212008'"
b"'bpool/BOOT@fermmy-server-20200219212008'"
b"'bpool/BOOT/ubuntu@fermmy-server-20200219212008'"
b"'rpool@fermmy-server-20200219212008'"
b"'rpool/ROOT@fermmy-server-20200219212008'"
b"'rpool/ROOT/ubuntu@fermmy-server-20200219212008'"
b"'rpool/home@fermmy-server-20200219212008'"
b"'rpool/home/fermulator@fermmy-server-20200219212008'"
b"'rpool/home/root@fermmy-server-20200219212008'"
b"'rpool/tmp@fermmy-server-20200219212008'"
b"'rpool/usr@fermmy-server-20200219212008'"
b"'rpool/usr/local@fermmy-server-20200219212008'"
b"'rpool/var@fermmy-server-20200219212008'"
b"'rpool/var/lib@fermmy-server-20200219212008'"
b"'rpool/var/log@fermmy-server-20200219212008'"
b"'rpool/var/spool@fermmy-server-20200219212008'"
b"'zstorage@fermmy-server-20200219212008'"
b"'zstorage/nfs@fermmy-server-20200219212008'"
b"'zstorage/nfs/fermmy-boinc0@fermmy-server-20200219212008'"
b"'zstorage/nfs/fermmy-boinc0/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms@fermmy-server-20200219212008'"
b"'zstorage/vms/archived@fermmy-server-20200219212008'"
b"'zstorage/vms/git@fermmy-server-20200219212008'"
b"'zstorage/vms/git/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/git/disk1@fermmy-server-20200219212008'"
b"'zstorage/vms/home-assistant@fermmy-server-20200219212008'"
b"'zstorage/vms/home-assistant/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/house@fermmy-server-20200219212008'"
b"'zstorage/vms/house/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/nextcloud@fermmy-server-20200219212008'"
b"'zstorage/vms/nextcloud/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/services@fermmy-server-20200219212008'"
b"'zstorage/vms/services/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/services/disk1@fermmy-server-20200219212008'"
b"'zstorage/vms/foo@fermmy-server-20200219212008'"
b"'zstorage/vms/foo/disk0@fermmy-server-20200219212008'"
b"'zstorage/vms/bar@fermmy-server-20200219212008'"
b"'zstorage/vms/bar/disk0@fermmy-server-20200219212008'"]' returned non-zero exit status 1.

Thin source trigger

Great job!

Do you have any interest in implementing the following?

You have several thin-provisioned source ZFS systems with user data on them. The systems are divided into datasets per user, and users log in / modify data only rarely on any given system. So what if there were some trigger mechanism to start backing up? Maybe the backup server should look for new snapshots on the thin sources and, when one appears, take a backup of it. That would lead to backup times following logout times. Maybe even have a schedule to poll the "unused" systems to keep them up to date.

You can probably guess the use case: Linux home folders synced between multiple computers.

Documentation Doesn't State Supported ZFS Versions

Hi psy0rz,

I would like to point out that the documentation does not state which OSes or ZFS versions are supported. I tried to run this on Solaris and Linux to see if it works; the main issue is that some commands, like 'zfs get -t filesystem,volume...', don't exist on those ZFS versions.

thanks

Destroy old snapshots *after* sending the new ones in

When zfs_autobackup runs, it destroys old snapshots on the source before sending the new snapshots to the target. If for some reason receives fail, after a while the target will not have any snapshots in common with the source (source continues taking new snapshots and destroying old ones, target never gets the new snapshots) and a full send will have to be done.

Old snapshots should be deleted only once the new ones have been received on the target and it has been verified that they were received correctly (i.e. the latest snapshot on the target is the same as on the source).

Feature Request: Direct mbuffer transfer

Currently a dataset can be sent through ssh with mbuffer, but mbuffer can also listen on a TCP port for the data stream. If we send the data directly to this TCP port, without the compression and encryption of ssh, it is much faster than going through ssh. For this solution we need to start mbuffer on the destination before we can start sending data.

mbuffer -s 128k -m 1G -I 9090 | zfs receive -vF lremote-zfs/my-vm
zfs send local-zfs/my-vm | mbuffer -s 128k -m 1G -O remote-server:9090

Invalid option x on FreeBSD 12.2

FreeBSD 12.2
zfs-autobackup 3.0

/usr/local/bin/zfs-autobackup --other-snapshots --no-holds --clear-refreservation --clear-mountpoint hddstorage storage/Recovery/STEEL/snapshots

! [Target] STDERR > invalid option 'x'
! [Target] STDERR > usage:
! [Target] STDERR >
! [Target] STDERR > For the property list, run: zfs set|get
! [Target] STDERR >
! [Target] STDERR > For the delegated permission list, run: zfs allow|unallow
! [Source] zroot: FAILED: Command '[b'zfs', b'recv', b'-u', b'-x', b'refreservation', b'-o', b'canmount=noauto', b'-v', b'-s', b'storage/Recovery/STEEL/snapshots/zroot']' returned non-zero exit status 2.
! [Target] STDERR > invalid option 'x'
! [Target] STDERR > usage:
! [Target] STDERR >
! [Target] STDERR > For the property list, run: zfs set|get
! [Target] STDERR >
! [Target] STDERR > For the delegated permission list, run: zfs allow|unallow
! [Source] zroot/ROOT: FAILED: Command '(pipe)' died with <Signals.SIGPIPE: 13>.
...
! 12 failures!

Using --debug:
[Target] STDERR|> full zroot@hddstorage-20201104202122 12912
[Target] STDERR|> size 12912
[Target] STDERR|> full zroot@hddstorage-20201104202122 12912
! [Target] STDERR > invalid option 'x'
! [Target] STDERR > usage:
[Target] STDERR > receive|recv [-vnsFu] <filesystem|volume|snapshot>
[Target] STDERR > receive|recv [-vnsFu] [-o origin=] [-d | -e]
[Target] STDERR > receive|recv -A <filesystem|volume>
! [Target] STDERR >
! [Target] STDERR > For the property list, run: zfs set|get
! [Target] STDERR >
! [Target] STDERR > For the delegated permission list, run: zfs allow|unallow
! [Source] zroot: FAILED: Command '[b'zfs', b'recv', b'-u', b'-x', b'refreservation', b'-o', b'canmount=noauto', b'-v', b'-s', b'storage/Recovery/STEEL/snapshots/zroot']' returned non-zero exit status 2.
! Exception: Command '[b'zfs', b'recv', b'-u', b'-x', b'refreservation', b'-o', b'canmount=noauto', b'-v', b'-s', b'storage/Recovery/STEEL/snapshots/zroot']' returned non-zero exit status 2.
Traceback (most recent call last):
File "/usr/local/bin/zfs-autobackup", line 1862, in
sys.exit(zfs_autobackup.run())
File "/usr/local/bin/zfs-autobackup", line 1825, in run
fail_count=self.sync_datasets(source_node, source_datasets)
File "/usr/local/bin/zfs-autobackup", line 1707, in sync_datasets
source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, features=common_features, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
File "/usr/local/bin/zfs-autobackup", line 1295, in sync_snapshots
source_snapshot.transfer_snapshot(target_snapshot, features=features, prev_snapshot=prev_source_snapshot, show_progress=show_progress, filter_properties=allowed_filter_properties, set_properties=allowed_set_properties, ignore_recv_exit_code=ignore_recv_exit_code, resume_token=resume_token, raw=raw)
File "/usr/local/bin/zfs-autobackup", line 1060, in transfer_snapshot
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties, set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
File "/usr/local/bin/zfs-autobackup", line 1022, in recv_pipe
self.zfs_node.run(cmd, input=pipe, valid_exitcodes=valid_exitcodes)
File "/usr/local/bin/zfs-autobackup", line 456, in run
raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '[b'zfs', b'recv', b'-u', b'-x', b'refreservation', b'-o', b'canmount=noauto', b'-v', b'-s', b'storage/Recovery/STEEL/snapshots/zroot']' returned non-zero exit status 2.

From the man page in FreeBSD:
zfs receive|recv [-vnsFu] [-o origin=snapshot] filesystem|volume|snapshot
zfs receive|recv [-vnsFu] [-d | -e] [-o origin=snapshot] filesystem

There does not seem to be a -x option.
Any ideas what is causing this?

pip install installs two versions

Installed with:

pip --isolated install zfs-autobackup

When checking with pip list version 3.0rc4 gets reported.

While typing the command I noticed that two executables existed. Checking the diff between the two, I noticed that versions 3.0-rc3 and 3.0-rc4 get installed under different names:

# ls -al /usr/local/bin|grep zfs
-rwxr-xr-x    1 root  wheel    54698 Mar  8 20:14 zfs-autobackup
-rwxr-xr-x    1 root  wheel    53911 Mar  8 20:14 zfs_autobackup

rollback to last common snapshot

Thanks for the tool :) Another idea...

My scenario:
The source system replicates to rotating offsite USB drives and goes through many zfs-autobackup replications. At some point I have to roll back a source dataset with zfs rollback -r, destroying the now-obsolete later snapshots. The next time I replicate to the offsite drive, I would like it to automatically delete the obsolete snapshots so that it exactly mirrors the source dataset.

Currently this happens:

! [Target] targetpool/BACKUPS/mydataset@offsite-20200311153231: Latest common snapshot, roll back to this.
! [Source] sourcepool/mydataset: DATASET FAILED: Cant find latest target snapshot on source.

So would be convenient to have it auto rollback.

The --rollback flag doesn't quite do this. For my purposes I could hack it to not raise the exception and zfs rollback -r to the latest common. What is the use case for --rollback currently?

I think this is similar to #17 but different
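
For reference, the manual equivalent of the "roll back to this" suggestion in the log above is simply (snapshot name taken from that log):

zfs rollback -r targetpool/BACKUPS/mydataset@offsite-20200311153231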

No way to turn off thinning

Destroying snapshots can be I/O intensive. Because of that it is preferable to schedule thinning when the pool is not busy. This is not possible with zfs-autobackup because it does not provide an option to turn off thinning.

Time for when snapshots roll-over for thinning

Hello

Not really a bug/issue but more of a query

I note that your snapshot thinning algorithm bases its blocks on GMT rather than local time. This means that for periods/TTLs of a day or greater, the single (daily/weekly/monthly/yearly) snapshot kept will be timestamped at roughly midnight GMT, which will often be in the middle of the day in other parts of the world.

Is there a specific reason for making the thinning rule revolve around GMT rather than local time?

Thanks

Jason

zfs recv error?

I'm trying to figure out where to go from here. I get the following on completion of backing up a zfs dataset.
root@freenas[~]# zfs-autobackup --ssh-source 10.33.55.70 mail data1/mail --progress --verbose
zfs-autobackup v3.0 - Copyright 2020 E.H.Eefting ([email protected])

Source settings

[Source] Datasets on: 10.33.55.70
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.
[Source] Keep every 1 month, delete after 1 year.
[Source] Selects all datasets that have property 'autobackup:mail=true' (or childs of datasets that have 'autobackup:mail=child')

Selecting

[Source] mail/subvol-104-disk-0: Selected (direct selection)

Snapshotting

[Source] Creating snapshots mail-20200928105136 in pool mail

Target settings

[Target] Datasets are local
[Target] Keep the last 10 snapshots.
[Target] Keep every 1 day, delete after 1 week.
[Target] Keep every 1 week, delete after 1 month.
[Target] Keep every 1 month, delete after 1 year.
[Target] Receive datasets under: data1/mail

Sending and thinning

[Source] mail/subvol-104-disk-0@mail-20200925045627: Destroying
[Target] data1/mail/mail/subvol-104-disk-0@mail-20200925045627: Destroying
[Target] data1/mail/mail/subvol-104-disk-0@mail-20200928105136: receiving incremental
! [Target] STDERR > internal error: Invalid argument
! [Source] mail/subvol-104-disk-0: FAILED: Command '[b'zfs', b'recv', b'-u', b'-v', b'-s', b'data1/mail/mail/subvol-104-disk-0']' died with <Signals.SIGABRT: 6>.
! 1 failures!

Thoughts?

zfs rollback dataset busy issue

Hi,

I cannot roll back snapshots, as the latest snapshot created with zfs-autobackup is always busy and cannot be destroyed!

zfs rollback -r zroot/var/mail@hddstorage-20201027135626
cannot destroy 'zroot/var/mail@hddstorage-20201027172933': dataset is busy

This goes for all snapshots created for the other zroot sub-folders I have 'tagged' with the hddstorage property.

Is this a ZFS feature again? I use bectl to take boot environment snapshots; is zfs-autobackup incompatible with bectl and boot environments? It seems very counterintuitive that I cannot roll back past the last snapshot. What could be causing 'dataset is busy' when zfs tries to destroy the latest snapshot?
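
Since zfs-autobackup places holds on snapshots it still needs (see the feature list above), the 'dataset is busy' error when destroying the latest snapshot is most likely such a hold. You can inspect it and, if you accept breaking the backup chain, release it manually; take the actual tag name from the zfs holds output:

zfs holds zroot/var/mail@hddstorage-20201027135626
zfs release NAME_OF_HOLD zroot/var/mail@hddstorage-20201027135626

There is also a --no-holds option (used in the FreeBSD 12.2 issue above) which appears to skip placing holds entirely.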

Change default value of --min-change to 1

I'm not sure I got the exact meaning of this flag, but let me give an example so that you can see if my concerns are justified :-).

If I have a ZFS dataset for my /etc folder and I change a single byte in a small configuration file, then I assume the number of bytes written to register this change will be a very low number. Consequently the dataset will not be considered changed and no snapshot will be made. This would mean that I have to remember to set this option to ensure all minor (but perhaps crucial) changes are backed up.

Wouldn't the safe default for this option thus be 1 rather than 200000 (as with the methods that handle this internally in the Python code)? Then you would not risk small but important changes going unnoticed. The option is still very valuable of course; depending on the kind of data a dataset holds, it may be great to avoid making snapshots for small changes, but it seems reasonable that users explicitly ask for this behavior rather than the other way around.

I saw the discussion in #33, and since the number 200000 is still rather arbitrary, 1 also seems like a reasonable default due to the clear distinction between 0 (no changes at all) and 1 (an ever so small change).
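
For reference, the behaviour proposed here can already be selected per run with the flag this issue discusses (backup name and target are placeholders; check the exact semantics against your version):

zfs-autobackup --min-change 1 mybackup backuppool/backups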


BTW, I really love this project! I have been looking for similar solutions but none have taken it all the way in the sense that this solution does.

Incremental Backup on usb?

Hi,
I was looking for a way to back up to a USB drive, but in the --help output I could not find an option for it. Can this script do incremental backups onto a USB drive?
Thank you
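
Assuming the USB disk carries its own ZFS pool, the usual approach is to import it and do a plain local run; incremental sends then happen automatically on every run after the first. Pool and backup names below are placeholders:

zpool import usbpool
zfs-autobackup usbbackup usbpool/backups
zpool export usbpool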

Support for zfs clones

Hi,

I am using zfs a lot and one of the typical use-case is to server as the backend for lxc containers.
It is very practical to have a container as a base template and create new one as a clone of this base. E.g. you would have something like:

tank/lxc/debian-base
tank/lxc/nginx

where nginx would be Debian with nginx installed.
The dataset tank/lxc/nginx is a clone of tank/lxc/debian-base.

Now I want to back up all my LXC containers. The problem is that both datasets will be sent and received in full, so they will take much more space on the backup server than on the source!

For now, I have to live with this situation, so I make a full backup of the base dataset and then do a "replication" send of the cloned dataset. That creates the cloned dataset as a clone on the backup server as well. Afterwards I can use zfs-autobackup as usual.

Now I was thinking that zfs-autobackup could recognise such a situation and do the replication itself.
It would probably be necessary to check the "origin" property. If the "origin" property is set, it should use "zfs send -R ..." for replication instead of a full transfer.

What do you think about it?
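
For what it's worth, the origin property mentioned above is easy to inspect; for the example datasets this prints the origin snapshot for a clone and "-" for a normal dataset:

zfs get -H -o value origin tank/lxc/nginx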
