
Docker S3 Cron Backup

โญ Github

What is it?

A modest little container image that periodically backs up any volume mounted to /data to S3-compatible storage, in the form of a timestamped, gzipped tarball. By default the container is configured for Amazon S3, but it should work with most S3-compatible backends.

Great, but how does it work?

An Alpine Linux instance runs nothing more than crond, with a crontab containing a single entry that triggers the backup script. When the script runs, the volume mounted at /data is tarred, gzipped and uploaded to an S3 bucket; afterwards the archive is deleted from the container. The mounted volume, of course, is left untouched.
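
For illustration, an entrypoint along these lines would reproduce the behaviour described above; the script name and paths are assumptions for the sketch, not taken from this repository:

#!/bin/sh
# Hypothetical entrypoint sketch: register the schedule from CRON_SCHEDULE
# in root's crontab (Alpine keeps it under /etc/crontabs), then run crond
# in the foreground so it stays PID 1 of the container.
echo "$CRON_SCHEDULE /backup.sh" > /etc/crontabs/root
exec crond -f -l 8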

I invite you to check out the source of this image; it's rather simple and should be easy to understand. If that isn't the case, feel free to open an issue on GitHub. Pull requests are welcome!

Now, how do I use it?

The container is configured via a set of required environment variables:

  • AWS_ACCESS_KEY_ID: Get this from Amazon IAM
  • AWS_SECRET_ACCESS_KEY: Get this from Amazon IAM; keep it secret
  • S3_BUCKET_URL: in most cases this should be s3://name-of-your-bucket/
  • AWS_DEFAULT_REGION: The AWS region your bucket resides in
  • CRON_SCHEDULE: When to run the backup, in standard cron syntax; check out crontab.guru for examples
  • BACKUP_NAME: A name to identify your backup among the other files in your bucket
  • BACKUP_NAME_TIMESTAMP: Whether to suffix BACKUP_NAME with the current timestamp (date and time) (Optional, defaults to true)
  • EXCLUDE_FILES: Passed to tar as --exclude=$EXCLUDE_FILES, treating $DATA_PATH as the current directory. To exclude more than one pattern, set it like EXCLUDE_FILES={*/logs,*.tmp}
  • IGNORE_ERRORS: When set, passes --ignore-failed-read --ignore-command-error --warning=no-file-changed to tar

And the following optional environment variables (a sketch showing how all of these fit together follows this list):

  • S3_ENDPOINT: (Optional, defaults to whatever aws-cli provides) configurable S3 endpoint URL for non-Amazon services (e.g. Wasabi or Minio)
  • S3_STORAGE_CLASS: (Optional, defaults to STANDARD) S3 storage class, see aws cli documentation for options
  • TARGET: (Optional, defaults to /data) Specifies the target location to back up. Useful for sidecar containers and for filtering files.
    • Example with multiple targets: TARGET="/var/log/*.log /var/lib/mysql/*.dmp" (Arguments will be passed to tar).
  • WEBHOOK_URL: (Optional) URL to ping after successful backup, e.g. StatusCake push monitoring or healthchecks.io
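
Putting the variables together, the backup step itself amounts to a tar-then-upload pipeline. The following is illustrative only, assuming the variable names from the lists above; the real script in the repository may differ in details:

#!/bin/sh
# Illustrative backup sketch using the documented variables.
set -e

STAMP=$(date +%Y-%m-%dT%H-%M-%S)
ARCHIVE="/tmp/${BACKUP_NAME}-${STAMP}.tar.gz"

# Archive the target directory (multi-target globs omitted for brevity),
# treating it as the current directory so paths in the tarball are relative.
tar -czf "$ARCHIVE" --exclude="$EXCLUDE_FILES" -C "${TARGET:-/data}" .

# Upload to the bucket, then delete only the local archive.
aws s3 cp "$ARCHIVE" "$S3_BUCKET_URL" \
  --storage-class "${S3_STORAGE_CLASS:-STANDARD}" \
  ${S3_ENDPOINT:+--endpoint-url "$S3_ENDPOINT"}
rm -f "$ARCHIVE"

# Ping the monitoring endpoint only on success.
if [ -n "$WEBHOOK_URL" ]; then
  curl -fsS "$WEBHOOK_URL" > /dev/null
fi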

Directly via Docker

docker run \
  -e AWS_ACCESS_KEY_ID=SOME8AWS3ACCESS9KEY \
  -e AWS_SECRET_ACCESS_KEY=sUp3rS3cr3tK3y0fgr34ts3cr3cy \
  -e S3_BUCKET_URL=s3://name-of-your-bucket/ \
  -e AWS_DEFAULT_REGION=your-aws-region \
  -e CRON_SCHEDULE="0 * * * *" \
  -e BACKUP_NAME=make-something-up \
  -e TARGET=/data \
  -e EXCLUDE_FILES='*/logs' \
  -v /your/awesome/data:/data:ro \
  ghcr.io/cinorid/s3-cron-backup

Docker-compose

# docker-compose.yml
version: '3.8'

services:
  my-backup-unit:
    image: ghcr.io/cinorid/s3-cron-backup
    environment:
      - AWS_ACCESS_KEY_ID=SOME8AWS3ACCESS9KEY
      - AWS_SECRET_ACCESS_KEY=sUp3rS3cr3tK3y0fgr34ts3cr3cy
      - S3_BUCKET_URL=s3://name-of-your-bucket/
      - AWS_DEFAULT_REGION=your-aws-region
      - CRON_SCHEDULE=0 * * * * # run every hour
      - BACKUP_NAME=make-something-up
      - TARGET=/data
      - EXCLUDE_FILES=*/logs
    volumes:
      - /your/awesome/data:/data:ro # use :ro to make sure the volume is mounted read-only
    restart: always
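
With the file above saved as docker-compose.yml, starting the backup unit is then simply:

docker compose up -d   # on older installations: docker-compose up -d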

S3 Bucket Policy Example

From a security perspective it is often preferable to create a dedicated IAM user that only has access to the specific bucket used for the archives. The following IAM policy can then be attached to that user to grant the minimum required access.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
            ],
            "Resource": "arn:aws:s3:::docker-s3-cron-backup-test/*"
        }
    ]
}
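
Before wiring the credentials into the container, it can be worth verifying that they allow uploads and nothing more. A quick check with the AWS CLI, using the bucket name from the policy above:

# Should succeed: the policy grants s3:PutObject on this bucket.
echo test | aws s3 cp - s3://docker-s3-cron-backup-test/policy-check.txt

# Should fail with AccessDenied: listing was deliberately not granted.
aws s3 ls s3://docker-s3-cron-backup-test/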

It doesn't do X or Y!

Let this container serve as a starting point and an inspiration! Feel free to modify it and even open a PR if you feel others can benefit from these changes.

Contributors

cinorid, f213, ifolarin, jayesh100, peterrus, stex79, webalexeu
