
wyng-backup's Issues

Formal recovery mode after interruption

For v0.2 implement basic recovery mode, especially for delete, send and merge.

This isn't the same as issue #15, which is a full-featured resume for the send operation. Initially, recovering send will merely delete the prior attempt.

Create automated tests

A good testing regime could include the following:

  • Perform an initial backup
  • Verify volume
  • Perform an incremental backup
  • Verify volume
  • Prune first backup
  • Verify volume
  • Diff volume (cross-check with source vol data)

Changing program_name in the code could allow the test data set to be created under a different base folder name without interfering with real-life backup data.
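
A minimal sketch of such a test sequence, driving the CLI through subprocess (the command set and volume name are assumptions for illustration; prune may need an explicit --session in practice):

import subprocess

def run(*args):
    # Run one sparsebak/wyng command; check=True fails the test on
    # nonzero exit status.
    subprocess.run(["sparsebak.py", *args], check=True)

def test_cycle(vol="vm-test-private"):
    run("send", vol)       # initial backup
    run("verify", vol)
    run("send", vol)       # incremental backup
    run("verify", vol)
    run("prune", vol)      # prune the first backup
    run("verify", vol)
    run("diff", vol)       # cross-check against source volume data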

Zstandard compression

  • Add support for the Zstandard/zstd compression scheme.
  • Add arguments to set the compressor and compression level.
  • Detect the compression format during receive.
  • Also add blake2b hashing (faster than sha256)

Also explore whether referencing a zstd dictionary for each volume is appropriate.
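
A minimal sketch of the encode path, assuming the python-zstandard package and hashlib's blake2b; every zstd frame begins with the magic bytes 28 B5 2F FD, which receive can use for format detection:

import hashlib
import zstandard

ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"   # leading bytes of every zstd frame

def encode_chunk(data, level=3):
    # Compress at the user-selected level, then hash the compressed
    # chunk with blake2b (faster than sha256 on 64-bit CPUs).
    comp = zstandard.ZstdCompressor(level=level).compress(data)
    return comp, hashlib.blake2b(comp, digest_size=32).hexdigest()

def is_zstd(chunk):
    # Receive-side detection of the compression format.
    return chunk[:4] == ZSTD_MAGIC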

Todo misc

Scratchpad, for later expansion into separate issues:

  • Option to backup hot volumes (Qubes *-private-snap)
  • Guard against vm snap rotation during receive-save
  • Multiple storage pool configs

Archive locking and read-only modes

Implement handling of conditions when archives should be accessed as read-only:

  • Lock archive (in addition to in_process) so no writes can occur during receive, verify, etc.

  • Identify the home system in archive.ini. This is the only system that can write by default. Provide a function to re-home the archive to a new system (sketched below).
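
A sketch of the home-system check; the home_system key in archive.ini and the hostname comparison are assumptions for illustration:

import configparser
import socket

def assert_writable(ini_path):
    # Hypothetical: only the home system recorded in archive.ini may
    # write to the archive by default; other systems get read-only.
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)
    home = cfg.get("var", "home_system", fallback="")
    if home and home != socket.gethostname():
        raise PermissionError("Archive is homed to '%s'; read-only here." % home)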

Explore De-duplication options

Update: There are now 4 different dedup methods. See #8 (comment) for initial details.


Looking for ideas about a possible deduplication feature.

  1. One simple idea that I've already tried manually with find | uniq | ln is to find the duplicate hashes in the manifests and link their files together on the destination, thus saving disk space. This unsophisticated approach has low CPU & IO overhead, but the prospective space savings are lackluster. (See the sketch after this list.)

  2. It is also possible to retain and correlate more detailed metadata from thin_dump which indicates where source volumes share blocks. However, I've seen complaints about the CPU power needed to process these large xml files (that is not to say they couldn't be cleverly pre-processed to reduce overhead).

  3. More advanced dedup techniques include sliding hash window comparisons. At this point the dedup is actively doing new forms of compression, and it's not clear this is worth the trade-offs for most users. At the very least it appears beyond what Python can do efficiently.

  4. Detecting when a bkchunk is updated only at its start or end may save significant space (at least when lvm chunks are small). This would require special routines using extra bandwidth in merge and receive functions.
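
A minimal sketch of option 1, assuming manifest lines of the form "<hash> x<address>" (the format implied by the transposition-fix issue below) with chunk files named by address beside each manifest:

import os

def hardlink_duplicates(manifest_paths):
    # Link together destination chunk files that share a manifest hash,
    # replicating the manual find | uniq | ln approach.
    first_seen = {}
    for mpath in manifest_paths:
        sessdir = os.path.dirname(mpath)
        with open(mpath) as mf:
            for line in mf:
                chash, addr = line.split()[:2]
                if chash == "0":
                    continue            # zero chunks have no data file
                path = os.path.join(sessdir, addr)
                if chash in first_seen:
                    os.remove(path)     # replace duplicate with a hardlink
                    os.link(first_seen[chash], path)
                else:
                    first_seen[chash] = path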

Future support for very large volumes

Features that enable and improve management of very large volumes:

  • Increase address size
  • Graduated chunk sizing based on address
  • Further optimizations

Current volume address size is limited to 64 bits. Increase perhaps to 80 bits (into the zettabyte range). This is mainly a function of increasing the hexadecimal places in the chunk filenames.

Graduated chunk sizing would set automatic address boundaries for each chunk size, for example:

  • 128kB = 0 through 32GB
  • 512kB = 32GB through 256GB
  • 2048kB = 256GB through 4TB
  • 8192kB = 4TB through maximum

(128kB is the current chunk size for all addresses.)
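
As a sketch, the example boundary table translates directly into an address-to-chunk-size lookup:

GB = 2**30

def chunk_size(addr):
    # Graduated sizing per the example boundaries above.
    if addr < 32 * GB:
        return 128 * 1024
    if addr < 256 * GB:
        return 512 * 1024
    if addr < 4 * 1024 * GB:
        return 2048 * 1024
    return 8192 * 1024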

Optimizing

There are a number of possibilities, including a multiprocess/multithreaded send, a stream transport with less overhead than tar, and addressing Python-specific techniques and/or an alternate language.

Optimization - CPU

Changes that may improve throughput, especially for send:

  • Multithreaded encoding layer for compression and encryption (sketched after this list)
  • Other areas for concurrency: getting deltas, dedup init, send and receive main loops
  • Alternatives to tar streaming, such as direct file IO for internal: destination
  • Static buffers to avoid garbage collection
  • Structs, especially in deduplication code
  • Explore new Python optimization options
  • Tighten the main send loop, use locals
  • Use formats instead of + string concat
  • Quicker compression (issue #23)
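
For the first item, a sketch of a threaded encoding layer (python-zstandard is an assumption; compression libraries that release the GIL are what make threads pay off here):

import concurrent.futures
import zstandard

def encode_stream(chunks, level=3, workers=4):
    # Compress chunks concurrently while preserving order, so the send
    # loop can consume the results as a plain iterator.
    def encode(data):
        # One compressor per task: ZstdCompressor instances are not
        # guaranteed safe for simultaneous calls from multiple threads.
        return zstandard.ZstdCompressor(level=level).compress(data)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(encode, chunks)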

Enhanced structure and error checking of command line options

Currently some command-specific options such as --save-to are not checked for appropriateness at startup. This is partly due to the simplified processing of commands/options, which also limits where options can be placed on the command line.

  • Re-structure option parsing using sub-parsers (sketched after this list)

  • Add more conditional checking of options

  • Confine more option processing to main section
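
A sketch using argparse sub-parsers; the commands and options shown are a subset for illustration, and per-command validity of options like --save-to falls out of the structure automatically:

import argparse

parser = argparse.ArgumentParser(prog="wyng")
parser.add_argument("-u", "--unattended", action="store_true")
sub = parser.add_subparsers(dest="command", required=True)

p_send = sub.add_parser("send")
p_send.add_argument("volumes", nargs="*")

p_recv = sub.add_parser("receive")
p_recv.add_argument("volumes", nargs="*")
p_recv.add_argument("--save-to")    # accepted only where it is defined
p_recv.add_argument("--session")

options = parser.parse_args()       # rejects misplaced options at startup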

Support differential backup of existing snapshots

I have a "base" VM image and a few writable snapshots of it that have been modified. sparsebak is cool, but it would be really good if it could detect that a newly added volume has its origin in a thin volume that is already archived. Is this technically possible? I'm a little hazy on how the underlying thin-pool chunk mapping system works, but it seems like it ought to be.

Wondering if it could ultimately evolve into a system for distributing base VM images and deltas around the place :)

Integrated encryption

Integrate an encryption layer that can also be used to verify metadata and data from the destination archive.

Looking for examples and discussion on applied cryptography techniques from best practices to implementations in various tools including qvm-backup, restic, Time Machine, etc.

Factors

  • Security
  • Efficiency
  • Stream-ability: No intermediate data storage before destination
  • Transport compatibility (ssh, https)
  • Storage compatibility (filesystem, share, "cloud")

Implementation checklist

  • Add encryption for data
  • Add compression and encryption for metadata
  • Full verification chain for metadata, issue #79
  • Change volume dirs to use anonymous IDs
  • Assign safety ranges per key for each cipher and prevent nonce re-use
  • Implement key derivation and storage

Threat model

Wyng's threat model appears to be most similar to an encrypted database: a mass of data that is updated and curated periodically. Attackers gaining access to the entire volume ciphertext, possibly on successive occasions, may be assumed.

Security issues

Encryption scheme should be robust and have low interactivity and complexity as well as high isolation potential.

Isolation would be in the form of a Qubes-like environment where the Admin VM (e.g. Domain 0) running the backup process is blocked from direct network access, and encryption/decryption is performed only there. Wyng should be able to encrypt effectively in such an isolated environment.

Compatibility with Admin isolation also extends to how any guest containers/VMs are handled: Encryption and integrity verification cannot rely on the guest environments or their OS templates.

Encryption strategies

  1. LUKS or VeraCrypt on a loop device (which can be isolated) with backing in a remote/shared image file. For example: cryptsetup -> losetup -> sshfs. This solution is readily available but imposes a performance penalty of ~20% on a VM-isolated configuration. It also requires painstaking user setup in a Linux-specific environment, is difficult to integrate, and is a poor choice for remote/cloud.

  2. Encfs - A FUSE file-encrypting layer may improve performance over a setup based on a loop device. It may also be simpler to set up, or even to integrate. Advantage: automatic filename (but not size or sequence) obfuscation. Drawback: issues with hardlinks in some encryption modes.

  3. CryFS - Another FUSE layer with built-in support for network transports. Complete file metadata obfuscation. Claims superior resistance to attack. Unknowns: Hardlink support, transport isolation potential.

  4. Direct crypto library/AES utilization - Uses no external layers, but requires painstaking attention to detail and review by a cryptographer if possible. This option may be a natural choice, given the simplicity of the archive chunk format; any issues around the implementation security should have direct analogues to a wide field of other implementations and their use cases. See initial comments on AES modes, and the sketch below.

  5. Some encrypted backup tool that can accept a stream of named chunks with very low interactivity between the front end and back end (e.g. a 'push' model).

(After some deliberation and using Wyng with external encryption layers, this issue will be primarily concerned with an integrated solution similar to item 4.)
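
For item 4, a minimal sketch using AES-256-GCM from the 'cryptography' package; the library choice, counter-derived nonces and framing are assumptions, not a vetted design, and key derivation is out of scope here:

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_chunk(key, counter, chunk):
    # A monotonic per-key counter supplies the 96-bit nonce; tracking
    # its range per key is what prevents nonce re-use (see checklist).
    nonce = counter.to_bytes(12, "big")
    return nonce + AESGCM(key).encrypt(nonce, chunk, None)

def decrypt_chunk(key, blob):
    # GCM verifies the auth tag on decrypt, so tampering raises.
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)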

Types of data

Wyng keeps volume data and metadata as separate files, and the metadata validates the volume data.

See Issue #79 for specifics on metadata, which is expected to use separate encryption keys.

On commenting...

Following a core tenet of cryptography that the application must be understood thoroughly before making specific decisions, a substantial familiarity with Wyng is required to make sense of this issue (ye have been warned...).

It's suggested that making some incremental backups with Wyng and looking at the metadata under '/var/lib/wyng.backup' is a good starting point. In the source code, the classes under ArchiveSet() are instructive, in addition to merge_manifests() and the places where it's used.

"chunk size" error after successful initial backup

/sparsebak/sparsebak.ini:

[var]
vgname = qubes_dom0
poolname = pool00
destvm = qubes://work
destmountpoint = /home/user/mntenc-spbak
destdir = backups

[volumes]
vm-cftest-private = enable

Notes:

  • cftest is a standalone VM that I use for booting compact flash images from embedded routers. I don't think I've ever written anything to the private volume - that could very well be the issue here.
  • mntenc-spbak is an ext4 fs on a luks volume on a file from an nfs share

Initial backup of the VM works:

sudo /usr/local/bin/sparsebak.py send:

Configured Volumes:
vm-cftest-private

Starting backup session S_20181211-211105
Preparing snapshots...
Initial snapshot created for vm-cftest-private
Current snapshot created: vm-cftest-private.tock

Scanning volume metadata...

Processing Volume : vm-cftest-private
Backing up to VM work
100% 2047 x000000007ffe0000 DATA
161 bytes sent.
Rotating snapshots for vm-cftest-private

Done.

The backup is then properly shown:

sudo /usr/local/bin/sparsebak.py list vm-cftest-private

Configured Volumes:
vm-cftest-private

Sessions for volume vm-cftest-private :
20181211-161151

Done.

... as are files in the work VM:

ls -1 mntenc-spbak/backups/sparsebak/qubes_dom0%pool00/vm-cftest-private/S_20181211-161151/

000000000
000000001
000000002
000000003
000000004
000000005
000000006
000000007
info
manifest

But then, subsequent backups fail:

sudo /usr/local/bin/sparsebak.py send:

Configured Volumes:
vm-cftest-private

Starting backup session S_20181211-211316
Preparing snapshots...
Delta map not finalized for vm-cftest-private ...recovering.
Current snapshot created: vm-cftest-private.tock

Scanning volume metadata...
Acquiring LVM delta info.

Processing Volume : vm-cftest-private
Updating block change map: file = /tmp/sparsebak/delta.vm-cftest-private
bkchunksize = 131072
dblocksize = 512
bs = 512
Traceback (most recent call last):
  File "/usr//local/bin/sparsebak.py", line 1016, in <module>
    monitor_send(options.volumes, monitor_only=False)
  File "/usr//local/bin/sparsebak.py", line 551, in monitor_send
    = update_delta_digest()
  File "/usr//local/bin/sparsebak.py", line 314, in update_delta_digest
    raise ValueError("Chunk size error")
ValueError: Chunk size error

EDIT: the issue is reproducible after "resetting" everything (rm /sparsebak/... + lvremove the tick/tock volumes in dom0, and rm mntenc-bak/... in the dest VM)

EDIT2: forgot to add that sparsebak.py monitor gives the same error (the only difference is the action: line 1013, monitor_send(monitor_only=True))

Monitor throws error for newly-configured volumes

In 'new4' branch, monitor doesn't work when processing a volume that has been configured but not yet sent:

Preparing snapshots...
Traceback (most recent call last):
  File "./sparsebak.py", line 1465, in <module>
    monitor_send(datavols, monitor_only=True)
  File "./sparsebak.py", line 839, in monitor_send
    = prepare_snapshots(volumes if len(volumes) >0 else datavols)
  File "./sparsebak.py", line 525, in prepare_snapshots
    +snap1vol+" is missing!")
RuntimeError: ERROR: Map and snapshots in inconsistent state,
vm-printserver-private.tick is missing!

Reported by @taradiddles

Prune all pruneable sessions of all backups

This is a feature request. I hacked my way around it with the following to recover 200GB of unneeded backups (the last backup session not being pruneable):

# For each volume, find the second-to-last session and prune everything before it.
sudo ~/sparsebak/sparsebak.py list | sort | tail -n +7 | while read appvm; do
    echo $appvm:
    sudo ~/sparsebak/sparsebak.py list $appvm | tail -n -4 | head -n +1 \
        | awk -F " " '{print $(NF-1)}' | while read last_deleteable_backup; do
        echo $last_deleteable_backup | sudo ~/sparsebak/sparsebak.py -u prune \
            $appvm --all-before --session $last_deleteable_backup
    done
done

Move reference functions into object model

Archive metadata

  • volume exists
  • volume size
  • map exists
  • map sizing
  • paths
  • deduplication

Destination system status & access

  • free space status
  • communication transport
  • rpc

Local storage model supporting different types

  • snapshot status
  • storage functions
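
A rough sketch of how these reference functions might hang off the object model (all class and method names hypothetical):

class Volume:
    # Archive metadata queries, currently free functions.
    def exists(self): ...
    def size(self): ...
    def map_exists(self): ...
    def map_size(self): ...
    def path(self): ...

class Destination:
    # Destination system status & access.
    def free_space(self): ...
    def transport(self): ...
    def rpc(self, command): ...

class LocalStorage:
    # Backend-specific local storage (thin LVM today, other types later).
    def snapshot_status(self, vol): ...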

Name change

Looking for a new, permanent name for what is currently called sparsebak.

Some current candidates I'm tossing around...

  • delta-v
  • parq
  • exidi
  • distar

Receive: Add differencing "no clobber" mode

For restoring volumes, support a mode where an existing volume on the source isn't wiped before data is received. Instead, truncate/lvresize it, then write only those incoming chunks that are different.

This results in less consumption of disk space because more than just zeros are treated sparsely... useful if a user is reverting a volume to an earlier time.
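
A minimal sketch of the compare-before-write loop; chunk iteration and addressing are simplified assumptions:

def receive_no_clobber(volume_path, chunks):
    # chunks: iterable of (address, data) for the session being restored.
    # Existing data is read back and only differing chunks are written,
    # leaving unchanged extents untouched on disk.
    with open(volume_path, "r+b") as vol:
        for addr, data in chunks:
            vol.seek(addr)
            if vol.read(len(data)) != data:
                vol.seek(addr)
                vol.write(data)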

add `--quiet` etc. options to control verbosity

The monitor command outputs text to stdout, so on non-Qubes systems an email will likely be sent by the cron daemon each time the job is run (or /var/spool/... will silently be filled if the MTA is not properly configured).

Maybe it's enough to redirect stdout to /dev/null, but I couldn't find a way to trigger an error with monitor to test whether errors are still shown (i.e. output to stderr).

EDIT: alternatively monitor could not output anything at all (except warnings/errors of course) and there could be a --verbose option

Restoring backup when source host is down

Let's suppose that the source host is down and we want to restore volume data. How can this be done? The host is different.
I have tried this:
sparsebak arch-init --source=dummy/nothing --destination=internal:/path/to/backup

but
sparsebak receive my-precious-volume --save-to=internal:./myfile.img
fails with
Destination not ready to receive commands.
I tried:
sparsebak add my-precious-volume
but that does not help

keep comments in sparsebak.ini when using delete/add commands

If feasible/easy to implement, it'd be nice to keep comments in sparsebak.ini when the file is rewritten (e.g. after using the add/delete commands).
If it isn't, maybe add a static top comment warning that user comments will be deleted when the file gets rewritten?

And/or create a backup .ini file?

Assert bkchunksize

$ sudo /sparsebak/sparsebak.py send --save-to sys-usb:/run/media/user/USBdrive/sparsebak

I was attempting to "send" all my VM's private sections to a drive mounted on sys-usb, and I seem to have run into a problem with one particular AppVM. When it got to my vm-vault-private it printed:

Delta map not finalized for vm-vault-private ...recovering.

and a few seconds later this exception:

Processing Volume : vm-vault-private
Updating block change map: Traceback (most recent call last):
  File "/sparsebak/sparsebak.py", line 1008, in <module>
    monitor_send(options.volumes, monitor_only=False)
  File "/sparsebak/sparsebak.py", line 543, in monitor_send
    = update_delta_digest()
  File "/sparsebak/sparsebak.py", line 307, in update_delta_digest
    assert bkchunksize >= (bs*dblocksize) and bkchunksize % (bs*dblocksize) == 0
AssertionError

All the other AppVMs appeared to be processed/snapshotted OK, but since there was this one fatal exception, the backup obviously did not complete. I'm not sure what the complaint is with the block size, because the AppVM runs fine with no errors that I can see in the log file.

Verify entire sessions using all manifest entries

Implement a variation of verify that receives and verifies every manifest entry and chunk for a session.

Selection:

  • Default to most recent session for all volumes.
  • Accept volume list.
  • --all option could signify verification of all sessions.
  • --session could be used to specify one session or a range (could include --all-before).

Add support for 'cloud' friendly API - which one?

Although ssh with a Linux shell is currently supported, this is not commonly offered by large 'cloud' storage services.

Some protocols that have already been suggested:

  • sftp
  • Amazon S3
  • Swift
  • WebDAV

...or using FUSE to access one of the above or another storage type, such as @cryfs.

Bug in chunk file path names

A bug from a recent update is causing data chunks to be sent with incorrect path names.

Will have a fix soon with instructions for correcting any incorrect paths sent to an archive.

Use manifest only to ref zero-chunks

Remove unnecessary burden on dest filesystem: zero-length files representing zero-filled chunks. The manifest entries for these chunks are sufficient.
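
A sketch of the intended send-side behavior, using the "0 x<address>" entry form shown in the transposition-fix issue below:

def manifest_entry(addr, data, digest_hex):
    # An all-zero chunk is recorded in the manifest only; no zero-length
    # placeholder file is written to the destination.
    addr_hex = "x%016x" % addr
    if not data.strip(b"\0"):            # chunk is entirely zero-filled
        return "0 " + addr_hex
    return digest_hex + " " + addr_hex   # normal chunk: hash + address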

Add delete command

Needs a volume delete command that goes beyond the current purge-metadata option.

A special case exists for when the user wants to remove the most recent backup session (instead of removing the whole volume history). Prune will currently not allow this, as it's not a good fit for the pruning concept, and deletion doesn't require merging while still being a little tricky.

General steps for deleting the last session (step 1 is sketched after the list):

  1. Recover session deltamap and merge it into the live deltamap
  2. Delete session metadata folder
  3. Delete session data folder on destination
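
Step 1 amounts to OR-ing the session's deltamap back into the live map (a sketch, assuming both are byte-aligned bitmaps of equal length):

def merge_deltamap(session_map, live_map):
    # OR each byte of the deleted session's bitmap into the live map so
    # its changed chunks are captured again by the next send.
    merged = bytearray(live_map)
    for i, byte in enumerate(session_map):
        merged[i] |= byte
    return bytes(merged)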

Troubleshoot initialization errors

Hello, it's me again.

After attempting an approximate migration from sparsebak to wyng, my QubesOS installation ended up FUBAR. I was smart enough to clone my disk to another one before messing with it even more, and was able to use the QubesOS native backup to restore daily-used VMs on a fresh installation.

Now, I still have my sparsebak backups of used and unused AppVM volumes on an additional internal encrypted M.2 drive, mounted via the LUKS-encrypted container. All backups are there, including the one that stalled on the prune attempt, under /backups/sparsebak.

I would love to know which steps are needed to move things around, do arch-init properly, and continue monitor and send operations effectively, while trying to salvage data from the backup that stalled, locked the LVM, and caused me to reinstall.

I just cloned my daily-use disk to another disk and am ready to test instructions and report back any issues in a migration procedure away from sparsebak to wyng.

Thanks!

Transposition error in manifest

sparsebak was recording info for zero-length chunk files in the wrong order (the opposite order used for non-zero files). This was fixed in commit 13e6e30.

The manifest files are not yet used by any sparsebak function, so you can ignore this if you won't keep your current archive set for long. But you can fix the problem in existing archives, for compatibility with future verify and restore operations, with the following command:

cd sparsebak/group%pool
find . -name manifest | xargs -I++ sed -E -i 's/^(x[0-9a-f]{16}) 0/0 \1/' ++

Note that manifests are stored on both the admin host (where sparsebak.py is run, such as dom0) and the destination storage system, so it's recommended to run the command in both places.

Support metadata payload for VM config/other

Backup and restore additional metadata that the user or host system associates with each volume. For example, VM or container configurations.

This could require additional packaging / un-packaging steps before or after the send or receive operation runs.

"verify" always verifies the last session despite '--session ...' option

So, I've made two backups for my test VM but sparsebak.py verify verifies only the last one despite using the --session ... option.

So, I have sessions "20181212-151804" and "20181212-152201" for the VM, as shown by sparsebak.py list vm-cftestbkp-private:

Configured Volumes:
vm-cftestbkp-private

Sessions for volume vm-cftestbkp-private :
20181212-151804
20181212-152201

Done.

But trying to verify session "20181212-151804" isn't possible:

sudo /usr//local/bin/sparsebak.py --session 20181212-151804 verify vm-cftestbkp-private (or sudo /usr//local/bin/sparsebak.py verify vm-cftestbkp-private --session 20181212-151804):

Configured Volumes:
vm-cftestbkp-private

Reading manifests

Receiving volume vm-cftestbkp-private S_20181212-152201
x000000007ffe0000 OK
Received bytes : 2147352576

Done.

EDIT: using the --session ... option with 'receive' works.

Better tmp directory handling

Currently wyng may refuse to run on a remote system if that system previously ran wyng as root (in source mode). This is due to different permissions on the /tmp/wyng directory.

Actions:

  • Remove /tmp/wyng when exiting.
  • Use a different dir instead of /tmp/wyng/rpc for the helper program (see the sketch below).
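
A sketch of the second item, replacing the fixed /tmp/wyng/rpc path with a randomized per-run directory:

import atexit
import shutil
import tempfile

# mkdtemp creates a mode-0700 directory owned by the current user, so a
# root-owned leftover from a prior run can no longer block wyng.
rpc_dir = tempfile.mkdtemp(prefix="wyng-rpc-")
atexit.register(shutil.rmtree, rpc_dir, True)   # clean up on exit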

Check for metadata synchronization

  • When sparsebak starts any operations on the destination, it should check that the metadata on the source and destination are in sync.

    There is an untrusted data handling aspect to this, and it may be best to wait until encryption is integrated and can verify the destination manifests before adding this check.

  • When receiving an older session, automatically resync the volume by digesting all the later manifests into the volume's deltamap. This will cause all the chunk addresses backed up since the selected session to be backed up again. (Currently the user receives a notice advising to use --remap diff after receiving the older session, but this step can be saved.) Doing a differential save (always comparing blocks before writing) could be of value here.

  • When the user has recovered archive metadata (e.g. after recovering or reinstalling their OS) using --from arch-init, intelligently re-check the metadata to determine if it's in sync with the LVs, and do a full remap only if absolutely necessary.

Check for interrupted send session, then resume

Check metadata for '-tmp' when starting a send, then find which chunks were sent and continue sending from that point.

Associated changes:

  • class Volume initial detection
  • prepare_snapshots() with special condition for monitor_only mode
  • monitor_send() and send_volume()
  • new function to scan/sanitize the untrusted destination directory... maybe a simple counter on the destination that returns the last chunk
  • offer the user the option to verify the session as a unit when a resumed send is completed

Explore backups for fixed LVM, non-CoW storage

Many systems are installed with fixed (non-thin) LVM volumes, and documentation suggests thin snapshots can be created for fixed volumes. If so, it may be possible to use the thin LVM tools to track deltas for fixed volumes the same way as in an all-thin environment.

The challenges for implementation are:

  • Determining actual thin-tools compatibility with these hybrid snapshots.
  • Tracking the fixed nature of configured volumes during archive ops, if any special steps are needed.
  • Advising users on a straightforward way to configure their fixed-lvm system with a suitable thin pool.

Barring the above, it may be helpful to reference documentation showing the best practices for converting fixed-lvm systems over to thin pools, btrfs, etc.

Access and restore archive metadata

Reinstalled Qubes 4.0.2RC1 from scratch. I had previously done a native QubesOS backup to restore needed VMs. Redeployed sparsebak.

I was expecting to be able to use sparsebak arch-init to re-add all archived volumes present in the backup subdir, and then be able to call send, receive and prune. It doesn't seem so.

Couple of comments:

  • Why isn't there a way to restore configuration from a configured archive? I would have expected to be able to sync archive.ini back and automatically add all volumes present in the backup archive directory, even the ones not present on the host, to proceed with the restoration of volumes when desired (qvm-create first, then receive to /dev/qubes_dom0/vm-volume).
  • After having added back all volumes manually, the existing volumes present on both archive and host are not updating the backup, but creating a new one.
  • After having sent all new backups, doing a "prune --all-before session=now" doesn't prune old backups at all, consuming space that I don't know whether it's safe to delete manually (e.g. vm-whonix-gw-15-root, which got replaced with a new template).
  • It would be lovely to know what is stored under /var/lib/sparsebak and why it is not stored under the backup subdir. Logically, to me, everything should be stored in the backup subdir to permit easy restoration.
  • QubesOS XML parsing should be added, so that a VM name can be specified to be backed up and restored on a fresh install.

Let me know if I need to be clearer :)

Receive to existing lv returns error

In 'new4' branch:

Failure to receive to an existing logical volume because lvresize returns an error if the volume is already the same size:

Traceback (most recent call last):
  File "./sparsebak.py", line 1495, in <module>
    save_path=options.saveto)
  File "./sparsebak.py", line 1210, in receive_volume
    "-f", save_path])
  File "/usr/lib64/python3.5/subprocess.py", line 316, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['lvresize', '-L', '2147483648b', '-f', '/dev/qubes_dom0/vm-xyxyxy']' returned non-zero exit status 5

Reported by @taradiddles
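
One possible fix, sketched below, is to query the current LV size first and skip lvresize when it already matches (the actual call site in sparsebak may differ):

import subprocess

def lvresize_if_needed(lv_path, size_bytes):
    # 'lvs' reports the current size in bytes, e.g. "2147483648B".
    out = subprocess.check_output(
        ["lvs", "--noheadings", "--units", "b", "-o", "lv_size", lv_path])
    current = int(out.strip().rstrip(b"B"))
    if current != size_bytes:
        subprocess.check_output(
            ["lvresize", "-L", str(size_bytes) + "b", "-f", lv_path])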

Renaming volumes

Implement a rename function to change a volume name in the archive.

Handle time zone changes

An anomaly can crop up in the archive if the computer's local time switches to a value that is earlier than the most recent session.

Possible ways to address the situation:

  1. Change session names to use a generation prefix -- this ruins selection by date range.
  2. Switch to UTC values in the same/similar format (sketched below) -- this is good for range selection but challenges users' time perception. It still requires checking the latest session time against the new session time, in case of a non-timezone cause for the discontinuity.
  3. ???
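
A sketch of option 2, keeping the existing name format but sourcing it from UTC and rejecting non-monotonic results (function name hypothetical):

import time

def new_session_name(latest):
    # UTC keeps names monotonic across timezone/DST changes; the
    # comparison still catches other causes of clock discontinuity.
    name = time.strftime("%Y%m%d-%H%M%S", time.gmtime())
    if latest and name <= latest:
        raise RuntimeError("New session %s not after latest %s" % (name, latest))
    return name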
