
autorestic's Introduction

Greetings 🖖

You can read more here, but the TL;DR: I passionately code and build stuff 🤓



Donate / Support

If you like my work and would like to support it, I would of course be honored ❤️
Github / Paypal / Buy me a coffee

autorestic's People

Contributors

11mariom, a-waider, beatbrot, bkrl, chancem, cheradenine, chostakovitch, cupcakearmy, david-boles, dbrennand, dependabot[bot], eliotberriot, fariszr, g-a-c, ironicbadger, jin-park-dev, jjromannet, kencx, major, mikelolasagasti, mpfl, natanel-shitrit, ninjabenji, rdelaage, rwxd, somebox, sumnerboy12, theforcer, themorlan, whysthatso


autorestic's Issues

Flag for disabling 'Skipping %, not due yet' log lines

When defining several cron schedules in .autorestic.yml and running autorestic as a cron job every x minutes, the log gets filled with lines stating that the current location is not due for backing up yet.

In my current use case I'm appending the log output of autorestic cron --ci into a file which is sent by mail on a daily basis.

It would be nice to have some kind of flag that only outputs the actual backup process and/or errors.
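As a stopgap until such a flag exists, the noise could be filtered out before it reaches the mailed log file. A minimal sketch; the message text is an assumption based on the log lines quoted above, so adjust the pattern to your actual output:

```shell
# Workaround sketch: drop the "Skipping …" lines before appending to the
# log file that gets mailed. Pattern and log path are assumptions.
autorestic cron --ci 2>&1 | grep -v "Skipping" >> /var/log/autorestic.log
```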

Restore error: Error: target 'path' is not empty

Describe the bug

Running some tests: I delete a single file in the folder and then attempt a restore. When I run autorestic restore -l LOCATION --from BACKEND --to 'PATH' I get the error Restore error: Error target 'PATH' is not empty (obviously the location/backend/path are just placeholders here).

If I run restic restore -r REPOSITORY latest --target 'PATH' it works just fine.

Expected behavior

It should work just like the restic restore command and ignore that there are files in the directory.

Environment

  • OS: Ubuntu
  • Version: 1.0.3

Additional context

restic not found

First of all, thanks for creating this fantastic wrapper. Every year I keep searching for a better way to set up restic (usually with Ansible), and this is amazing! I hope this is the end of my search.

I followed the instructions but found it couldn't back up to anything, including local.
I was stuck for a few hours with a rather esoteric error, "length null" or something along those lines, and no backup happening.
"autorestic check -a" showed an error.
If I tried a backup via the exec CLI, it returned null null.

After investigating, it appears restic was not installed. After manually installing restic, autorestic works!
My system is Debian Buster with zsh.
I suspect restic either wasn't installed or the path wasn't correct for my distro/shell.

"https://cupcakearmy.github.io/autorestic/cli/update" says "Updates both restic and autorestic automagically.", so I'm guessing that under the right conditions restic will get installed.
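A pre-flight check would have surfaced this immediately. A minimal sketch, with an install hint that is an assumption (use your distro's package manager):

```shell
# Sketch: fail fast if restic is not on PATH before invoking autorestic.
have() { command -v "$1" >/dev/null 2>&1; }

if have restic; then
  autorestic check -a
else
  echo "restic not found on PATH; install it first (e.g. apt install restic)" >&2
fi
```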

Autorestic on arm

I would love to run autorestic on my arm-based Raspberry Pi, however this doesn't seem possible with the binary that install.sh is downloading. Is there any way to cross-compile it for arm or to get it running on my Pi in any other way?

S3/Wasabi backend: repository master key and config already initialized

Hi, I'm currently investigating autorestic to automate some of my backups.

I love that you can describe everything in a yaml file. However, when trying to back up a test directory to a non-AWS S3 storage (here Wasabi), the first backup works, but subsequent ones do not.

First backup output:

eliotberriot@xxx:/etc/autorestic$ sudo autorestic backup -a

Configuring Backends
wasabi-perso : Done ✓

Backing Up
config ▶ wasabi-perso : Done ✓

Finished! 🎉

At this point, .autorestic.yml contains the proper keys.

If I relaunch a backup, I get this error:

eliotberriot@xxx etc/autorestic$ sudo autorestic backup -a

Configuring Backends
wasabi-perso : Configuring... ⏳Could not load the backend "wasabi-perso": Fatal: create key in repository at s3:s3.wasabisys.com/backups failed: repository master key and config already initialized

This is the contents of my .autorestic.yml file:

locations:
  config:
    from: /etc
    to: wasabi-perso
backends:
  wasabi-perso:
    type: s3
    path: s3.wasabisys.com/backups
    key: >-
      <redacted>
    AWS_ACCESS_KEY_ID: <redacted>
    AWS_SECRET_ACCESS_KEY: <redacted>
  local:
    type: local
    path: /var/backups/restic
    key: >-
      <redacted>

Any idea of what could be wrong?

Autorestic on Apple Silicon

It would be awesome to run autorestic on my M1 MacBook Air. Any plans to make autorestic work on armv8?

rclone backend error

Describe the bug
Attempting to back up to an rclone backend returns an error.

> autorestic backup -l main
Using config file: ~/.autorestic.yml
Error: backend type "rclone" is invalid

Expected behavior
I had expected it to work since restic supports rclone.

Environment

  • OS: Linux
  • Version: 1.0.5

Quickstart config fails to validate

Using the config provided in the Quickstart docs, running autorestic check results in:

Using config file: /root/.autorestic.yml
panic: 4 error(s) decoding:

* 'Backends[0][key]' expected a map, got 'string'
* 'Backends[0][name]' expected a map, got 'string'
* 'Backends[0][path]' expected a map, got 'string'
* 'Backends[0][type]' expected a map, got 'string'

goroutine 1 [running]:
github.com/cupcakearmy/autorestic/internal.GetConfig.func1()
        /home/runner/work/autorestic/autorestic/internal/config.go:39 +0x15f
sync.(*Once).doSlow(0xbbc128, 0x8dcf70)
        /opt/hostedtoolcache/go/1.16.3/x64/src/sync/once.go:68 +0xec
sync.(*Once).Do(...)
        /opt/hostedtoolcache/go/1.16.3/x64/src/sync/once.go:59
github.com/cupcakearmy/autorestic/internal.GetConfig(0xc0001426c0)
        /home/runner/work/autorestic/autorestic/internal/config.go:30 +0x65
github.com/cupcakearmy/autorestic/cmd.initConfig()
        /home/runner/work/autorestic/autorestic/cmd/root.go:60 +0xaa
github.com/spf13/cobra.(*Command).preRun(0xb83060)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:882 +0x49
github.com/spf13/cobra.(*Command).execute(0xb83060, 0xbbc088, 0x0, 0x0, 0xb83060, 0xbbc088)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:818 +0x14f
github.com/spf13/cobra.(*Command).ExecuteC(0xb832e0, 0xb44278, 0x948f10, 0x93bbc8)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:960 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
github.com/cupcakearmy/autorestic/cmd.Execute()
        /home/runner/work/autorestic/autorestic/cmd/root.go:33 +0x2d
main.main()
        /home/runner/work/autorestic/autorestic/main.go:41 +0x2a

I'm using the latest autorestic release on a Debian 10 machine.

[Feature request] Backup from stdin

Sometimes, simply copying a directory isn't enough for a backup. This happens for instance when you're backing up a SQL database such as MySQL or PostgreSQL: to ensure the backup's integrity, you need to use pg_dump and back up the output of that command.

Luckily, restic supports backing up data piped via stdin. Do you think it would be possible to support this in autorestic?

What I currently have in mind is:

locations:
  psqldata:
    command: pg_dumpall

Which autorestic would internally translate to pg_dumpall | restic backup --stdin --stdin-filename psqldata.

I'm going to experiment with that, let me know what you think about it :)

Issues with cron scheduled backups

Hi!

First of all, awesome project! Keep it up, this is my go-to solution for backups now!

So, I've configured autorestic with a cron schedule to execute automated backups every day at 00:00, and crontab runs the cron command as recommended in the docs. However, I noticed that new snapshots are not being created: there are only the 2 snapshots which I generated myself, running a manual backup to test whether it was an issue with the configuration (which looks fine to me).

Below are the versions, the output of an autorestic cron execution, and the relevant part of the configuration file.

Versions:

root@matchbox:~# autorestic -V
0.21
root@matchbox:~# restic version
restic 0.11.0 compiled with go1.15.3 on linux/amd64

Output:

root@matchbox:~# autorestic -c /root/.autorestic.yml cron

Configuring Backends
sftp : Done ✓

Running cron jobs
files ▶ Skipping. Scheduled for: Sat Nov 14 2020 00:00:00 GMT+0000

Finished! 🎉

Cron command:

#Ansible: Autorestic cron check
PATH="/usr/local/sbin:/usr/local/bin"
*/5 * * * * autorestic -c /root/.autorestic.yml cron >> /var/log/autorestic.log 2>&1

Config file:

locations:
  files:
    cron: '0 0 * * *'
    from: /src/media
    options:
      forget:
        keep-last: 30 
    to:
      - sftp
backends:
  sftp:
    type: sftp
    path: my-sftp-server
    key: >- this-is-my-key-dont-look-at-it->:(

"already running" check not working since v0.23

I think this change has broken the "already running" check, since the check now happens before init(config) runs (see line 119 above) and therefore the config location isn't known.

This means that readLock and writeLock are using /root/.autorestic.lock but unlock (on line 132 below) is using /etc/.autorestic.lock (assuming running as root with config in /etc/.autorestic.yml).

Therefore every time autorestic runs it thinks it is already running.
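In other words, every code path (read lock, write lock, unlock) has to derive the lock file from the same resolved config location. A sketch of that invariant with stand-in paths:

```shell
# Sketch of the invariant v0.23 broke: the lock file must always be
# derived from the resolved config path, never from the working directory.
config="/etc/.autorestic.yml"                    # example resolved path
lock="$(dirname "$config")/.autorestic.lock"
echo "$lock"
```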

Getting constant message "An instance of autorestic is already running for this config file"

Void Linux 5.8.18_1, restic-0.11.0_1, autorestic 0.26, XFCE

Hello, I'd like to start by saying autorestic is an amazing tool!
But since I've set up my autorestic.yml file I get this message every now and then when I run this command in the terminal:

$ autorestic backup -a
An instance of autorestic is already running for this config file

Running the same command a second time just works!

I've set up an anacron job to run autorestic, but sometimes I get this output in my log file as well (anacron runs roughly 45 min after my PC boots, when no autorestic instance is running), and snapshots are not created on the days I get this message, so it won't run at all.

Here is my autorestic.yml:

locations:
  sampacloud:
    from: /home/user/cloud/
    to: cloud-b2
    hooks:
      before: notify-send "Backup of cloud has started"
      after: notify-send "Backup of cloud has finished"
backends:
  cloud-b2:
    type: b2
    path: cloud-b2
    key: >-
      key
    B2_ACCOUNT_ID: id
    B2_ACCOUNT_KEY: key

Here is the job on my anacrontab:

# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1	5	cron.daily		nice run-parts /etc/cron.daily
7	25	cron.weekly		nice run-parts /etc/cron.weekly
@monthly 45	cron.monthly		nice run-parts /etc/cron.monthly
1	30	b.sc.2.b2	/usr/bin/autorestic -c /home/user/.autorestic.yml --ci backup -a 2>&1 | moreutils_ts >> /home/user/autorestic.log

Also, as you can see, I've tried to set up notifications for the start and end of the backup. I get the notifications when I run the command in my terminal, but not with anacron (when the backup actually works). I know it's not autorestic's fault, but I would gladly accept some light on this issue.

Thank you!

Build documentation

I'm trying to build autorestic from source, and I'm not much of a TS person, so I don't really know my way around it.
I was trying to set up some sort of autobuild CI for this and I'm not able to produce the final binary.
Any help/guidelines?

'before' hook not working as intended

Describe the bug
To back up a database, I added a location to the autorestic config with a 'before' hook, which dumps & compresses the database and then backs up the file, like this:

  db:
    from: /share/backup/dbs/database.sql.gz
    to:
      - hdd
      - home
    cron: '0 4 * * *'
    hooks:
      before:
        - docker exec -t app_db pg_dumpall -c -U app_user | gzip > /share/backup/dbs/database.sql.gz
        - /root/backup/restic_backup_db.sh

However, when starting the backup process, autorestic attempts to run the command but then immediately terminates. No database dump is made and no backup is run. As seen above, putting the command in a bash script also did not help. There are no errors either.

root@server ~/backup # autorestic -v backup -l db
Using config file: /root/backup/.autorestic.yml
> Executing: /usr/bin/restic snapshots
> Executing: /usr/bin/restic snapshots


    Backing up location "db"


Running hooks
> /root/backup/restic_backup_db.sh
> Executing: /bin/bash -c /root/backup/restic_backup_db.sh
root@server ~/backup #

Did I miss anything obvious? Running /bin/bash -c /root/backup/restic_backup_db.sh standalone works fine, is it the pipes maybe?
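For what it's worth, a pipeline does survive `bash -c` as long as it arrives as a single quoted argument, so the pipe by itself shouldn't be the problem. A quick check with stand-in commands in place of the dump-and-compress pipeline:

```shell
# Sanity check: a pipeline passed to `bash -c` as one string runs fine.
# `printf | cat | wc -l` stands in for `pg_dumpall | gzip > file`.
bash -c 'printf "a\nb\n" | cat | wc -l'
```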

Expected behavior
Run database dump, then the backup :)

Environment

  • OS: Ubuntu 18.04
  • Version: autorestic 1.0.4

Additional context
autorestic is running with root privileges

log file doesn't contain backend/location details

Example below. None of the dynamic text displayed in the console is visible in the log file, so you don't know which backend and backup location were processed.

Is it possible to generate more flat-file-log-friendly output via a switch/flag?

Configuring Backends

Backing Up
Files:           0 new,     0 changed,     1 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 0 B

processed 1 files, 0 B in 0:04
snapshot 8a093c2a saved

Finished! 🎉

The `exec` command does not print restic's output.

Describe the bug
Running a command, e.g. autorestic exec -a -- snapshots, does not print out the result:

Using config file: /home/deb/.autorestic.yml


    Executing on "b2"    

Expected behavior
Restic's output should be printed. For example, by adding the verbose flag, autorestic exec -a -v -- snapshots yields:

Using config file: /home/deb/.autorestic.yml
> Executing: /usr/local/bin/restic snapshots


    Executing on "b2"    

> Executing: /usr/local/bin/restic snapshots
ID        Time                 Host        Tags        Paths
--------------------------------------------------------------------------
...information about snapshots printed here...
--------------------------------------------------------------------------
n snapshots

Environment

  • OS: Debian Sid
  • Version: 1.0.6

Additional context
If this is intended behavior, the exec docs should probably have a note added to them.

No help command

The following command should print out the help:

autorestic help

Better logging on failed backups

Describe the bug

Running autorestic backup -a, or even just with the name of the location, only seems to run the first of the "to" backends listed. The second does not run.

Expected behavior

All backends listed under "to" are backed up to.

Environment

  • OS: Ubuntu
  • Version: 1.0.3

Additional context

Here's my config:

backends:
  synology_docker:
    type: sftp
    path: {{main_username}}@synology:/Restic/
    key: {{ secret_restic_repo_password }}
    env: {}
  b2_docker:
    type: b2
    path: '{{ secret_restic_b2_bucket }}:/'
    key: {{ secret_restic_repo_password }}
    env:
      B2_ACCOUNT_ID: {{ secret_restic_b2_account_id }}
      B2_ACCOUNT_KEY: {{ secret_restic_b2_account_key }}
locations:
  docker:
    from: '~/docker'
    to: 
      - synology_docker
      - b2_docker
    options:
      forget:
        keep-last: 30

Install autorestic fails when using pipefail

Is your feature request related to a problem? Please describe.

I have my own Ansible role to install autorestic, and I have an issue when I set "set -o pipefail": the install fails because the last line of the install script runs "autorestic", which exits with code 1 and causes the task to fail.

Describe the solution you'd like

Change autorestic to autorestic help to avoid the exit code of 1 when installing.
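The failure mode can be reproduced with stand-ins for the download-and-run pipeline; `false` stands in for the bare `autorestic` call at the end of the install script:

```shell
# Stand-in reproduction: the install script's last command exits 1, so a
# `wget -qO- … | bash`-style pipeline reports failure even though the
# install itself succeeded.
set -o pipefail
if printf '%s\n' 'echo install ok' 'false' | sh; then
  echo "pipeline succeeded"
else
  echo "pipeline failed - this is what fails the ansible task"
fi
```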

Error: config could not be loaded/found

Describe the bug
The config file can't be found/read by autorestic.

Expected behavior
The config file should be read.

Environment

  • OS: Unraid
  • Version: 6.9.2

Additional context
Changed permissions and checked ownership. I have also tried moving the config file to another location and trying to load it with the -c flag.

I have also checked with an ubuntu 18 install with the same issue.

The autorestic.lock.yml file is being created, so it is finding the file, just not loading it.

I am trying to run the autorestic check command after the first install.

I have removed the rclone lines and tried with just the minio backend but still the same issue.

I have also run the command as root with the same issue.

The config file is below:

locations:
  nextclouddata:
    from: /mnt/user/NextcloudData
    to: 
      - minio
      - onedrive
    options:
      forget:
        keep-last: 5 # always keep at least 5 snapshots
        keep-weekly: 1 # keep 1 last weekly snapshots
        keep-monthly: 12 # keep 12 last monthly snapshots
        keep-yearly: 7 # keep 7 last yearly snapshots
        keep-within: '4w' # keep snapshots from the last 4 weeks      
  nextcloudweb:
    from: /mnt/disks/Docker_SSD/nextcloud
    to: 
      - minio
      - onedrive 
    options:
      forget:
        keep-last: 5 # always keep at least 5 snapshots
        keep-weekly: 1 # keep 1 last weekly snapshots
        keep-monthly: 12 # keep 12 last monthly snapshots
        keep-yearly: 7 # keep 7 last yearly snapshots
        keep-within: '4w' # keep snapshots from the last 4 weeks       
  backup:
    from: /mnt/user/Backup
    to:
      - minio
      - onedrive
    options:
      forget:
        keep-last: 5 # always keep at least 5 snapshots
        keep-weekly: 1 # keep 1 last weekly snapshots
        keep-monthly: 12 # keep 12 last monthly snapshots
        keep-yearly: 7 # keep 7 last yearly snapshots
        keep-within: '4w' # keep snapshots from the last 4 weeks 
  appdata:
    from: /mnt/disks/Docker_SSD
    to: minio
    options:
      forget:
        keep-last: 5 # always keep at least 5 snapshots
        keep-weekly: 1 # keep 1 last weekly snapshots
        keep-monthly: 12 # keep 12 last monthly snapshots
        keep-yearly: 7 # keep 7 last yearly snapshots
        keep-within: '4w' # keep snapshots from the last 4 weeks 




backends:
  name: minio
    type: s3
    path: 'minio.mydomain.com'
    key: <redacted>
    env:
      AWS_ACCESS_KEY_ID: <redacted>
      AWS_SECRET_ACCESS_KEY: <redacted>         
  name: onedrive
    type: rclone
    path: 'onedrive:/'
    key: <redacted>

Add custom tags to locations

Add the native restic feature for adding tags to snapshots.
In restic you can use the --tag option to add information to your snapshots. This comes in handy to identify or group them.
I suggest adding this feature to autorestic. My suggestion is the following:

In restic you tag a snapshot as follows:
restic -r /mnt/hdd backup --tag projectX --tag foo --tag bar /home/abc

and the same in the autorestic.yml file would be:

locations:
  homefolder:
    from: /home/abc
    to:
      - hdd
    tags:
      - projectX
      - foo
      - bar
  
backends:
  hdd:
    type: local
    path: /mnt/hdd

This is just a suggested layout by me. How you implement it is of course your choice.

autorestic check -a fails to run

Describe the bug

Running autorestic check -a results in an error "unknown shorthand flag: 'a' in -a"

Expected behavior

https://autorestic.vercel.app/quick#check

Runs a check as per docs

Environment

  • OS: [e.g. iOS] Ubuntu
  • Version: [e.g. 22] 1.0.6

Additional context

Log:

$ autorestic check -a
Error: unknown shorthand flag: 'a' in -a
Usage:
  autorestic check [flags]

Flags:
  -h, --help   help for check

Global Flags:
      --ci              CI mode disabled interactive mode and colors and enables verbosity
  -c, --config string   config file (default is $HOME/.autorestic.yml or ./.autorestic.yml)
  -v, --verbose         verbose mode

Error: unknown shorthand flag: 'a' in -a

Build process on Ubuntu 20.04 fails

Building with these version:

$ tsc --version
Version 3.8.3
npm --version
6.14.4
$ node --version
v10.19.0

says:

$ npm install .
npm WARN Invalid version: "0.27"
npm WARN autorestic No description
npm WARN autorestic No repository field.
npm WARN autorestic No README data
npm WARN autorestic No license field.

audited 1 package in 0.911s
found 0 vulnerabilities

So as a brute force attack I removed the version from the package file:

npm install
npm WARN deprecated [email protected]: Please update to v 2.2.x
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: this library is no longer supported
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.1.2 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 225 packages from 285 contributors and audited 229 packages in 40.304s

11 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Ok, it installed...

As tsc failed due to the missing version, I re-inserted the version line and it compiled.
Not sure if it would work with the newer packages ...

[Feature request] forget policies

Restic supports pruning previous snapshots using a policy, and I think it could be interesting to integrate this in autorestic to ensure users don't eat storage resources indefinitely.

It could be exposed as follows (this is just a suggestion):

locations:
  etc:
    from: /etc
    to: wasabi-perso
    keep:
      last: 5             # always keep at least 5 snapshots
      hourly: 3           # keep 3 last hourly snapshots
      daily: 4            # keep 4 last daily snapshots
      weekly: 1           # keep 1 last weekly snapshots
      monthly: 12         # keep 12 last monthly snapshots
      yearly: 7           # keep 7 last yearly snapshots
      duration: 2w        # keep snapshots from the last 2 weeks
      tags:
        - important
        - critical

The param names and values match what's advertised in the restic documentation, for consistency and ease of implementation.

To ensure autorestic doesn't prune anything by accident, I suggest the addition of a separate command, mimicking the backup command, but to apply pruning:

autorestic forget -l etc
autorestic forget -a

Internally, this command would call restic forget with the declared policies, and restic prune.
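Mapped onto restic's documented forget flags, the example policy above would translate to something like the following, echoed here rather than executed since it's only a sketch:

```shell
# Sketch: the restic invocation the proposed `autorestic forget -l etc`
# could issue internally. Flag names are restic's documented forget
# options; the command is echoed instead of run against a real repo.
echo restic forget \
  --keep-last 5 --keep-hourly 3 --keep-daily 4 \
  --keep-weekly 1 --keep-monthly 12 --keep-yearly 7 \
  --keep-within 2w --keep-tag important --keep-tag critical \
  --prune
```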

Let me know what you think about it :)

Backup with stale lock will silently fail

I interrupted a prune, which left a stale lock in the repo.
Running a backup afterwards then yields a fast success, even though it actually failed.
After running an unlock, it worked and created a snapshot.

EDIT: the hanging lock was caused by an interrupted backup

Auto update fails on Linux

OS: Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux

Error trace:

ETXTBSY: text file is busy, open '/usr/local/bin/autorestic'

Multiple paths in "from:"?

With the restic CLI, you can specify multiple paths like so:

restic -r ... backup /home/a /home/b /home/c

Is there a way to do this in the autorestic config? I tried the obvious thing of specifying from: as a list:

locations:
  example:
    from:
      - /home/a
      - /home/b
      - /home/c
...

However this gives an error of The "path" argument must be of type string. Received an instance of Array

What I'm trying to do is back up both /home/ and /etc/ (rather than backing up / and excluding all other root-level directories). I could have multiple locations, but the options (e.g. exclude patterns) would need to be duplicated for each, which is cumbersome.

Support for multiple folders

I read the docs but couldn't find any way to back up multiple folders in one snapshot.

This is how you would do it in restic:
restic backup /root /compose /var/lib/docker/volumes

I tried two things in .autorestic.yml:

A:

locations:
  docker-compose:
    from: /root /compose /var/lib/docker/volumes
    to: wasabi-linode1

B:

locations:
  docker-compose:
    from:
      - /root
      - /compose
      - /var/lib/docker/volumes
    to: wasabi-linode1

Any way to do this with autorestic?😊

Implement separate user/password variables for rest-server backend

Hi,

I am currently using restic in combination with a shell script, mostly with local & rest-server backends, on my servers. So far, autorestic seems perfect to replace my shell scripts with a much nicer high-level interface.
A small improvement I would love to see is separate credential variables in the config for the rest-server backend, as I use it with the integrated HTTP auth. Maybe something along these lines:

backends:
  name-of-backend:
    type: rest
    path: https://restserver.example.com/johndoe
    user: johndoe
    password: supersecurepw

Of course I could still put those in the URL, but https://johndoe:supersecurepw@restserver.example.com/johndoe is very unwieldy and prone to typos. What do you think? :)
