canonical / juju-backup-all
Tool for backing up charms, local juju configs, and juju controllers.
Home Page: https://snapcraft.io/juju-backup-all
License: Apache License 2.0
There are a few places in the code that rely on running juju via subprocess.
These need to be replaced with the appropriate python-libjuju calls.
Imported from Launchpad using lp2gh.
date created: 2021-08-27T19:38:34Z
owner: smigiel-dariusz
assignee: None
the launchpad url
Whenever a run-action timeout is hit, run_with_timeout expects a TimeoutError from concurrent.futures to be raised; however, the executing worker comes from asyncio, so the exception unexpectedly falls through.
The example traceback below results from gzip taking too long to compress a MySQL dump, because it runs on a single thread.
Traceback (most recent call last):
File "/var/lib/jujubackupall/auto_backup.py", line 156, in <module>
auto_backup.run()
File "/var/lib/jujubackupall/auto_backup.py", line 130, in run
backup_results = self.perform_backup()
File "/var/lib/jujubackupall/auto_backup.py", line 70, in perform_backup
backup_results = backup_processor.process_backups()
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/process.py", line 101, in process_backups
controller_processor.backup_models()
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/process.py", line 140, in backup_models
self.backup_apps(JujuModel(name=model_name, model=model))
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/process.py", line 148, in backup_apps
self.backup_app(app=app, app_name=app_name, charm_name=charm_name, model_name=model_name)
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/process.py", line 155, in backup_app
charm_backup_instance.backup()
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/backup.py", line 83, in backup
action_output = check_output_unit_action(self.unit, self.backup_action_name)
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/utils.py", line 95, in check_output_unit_action
run_with_timeout(backup_action.wait(), action_name)
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/jujubackupall/utils.py", line 141, in run_with_timeout
return run_async(wait_for(coroutine, timeout))
File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/juju/loop.py", line 38, in run
raise task.exception()
File "/usr/lib/python3.8/asyncio/tasks.py", line 501, in wait_for
raise exceptions.TimeoutError()
asyncio.exceptions.TimeoutError
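A minimal sketch of a possible fix (the JujuTimeoutError class and the function shape are assumptions based on the traceback, not the project's actual code): catch both timeout classes, since on Python 3.8 asyncio's TimeoutError is not a subclass of concurrent.futures.TimeoutError.

```python
import asyncio
import concurrent.futures


class JujuTimeoutError(Exception):
    """Hypothetical project-level timeout error surfaced to callers."""


def run_with_timeout(coroutine, action_name, timeout=300):
    """Run the coroutine, converting either flavour of timeout error.

    On Python 3.8, asyncio.wait_for raises asyncio.exceptions.TimeoutError,
    which is unrelated to concurrent.futures.TimeoutError, so both must be
    caught for the timeout to be handled instead of falling through.
    """
    try:
        return asyncio.run(asyncio.wait_for(coroutine, timeout))
    except (asyncio.TimeoutError, concurrent.futures.TimeoutError):
        raise JujuTimeoutError(f"timed out waiting for action {action_name!r}")
```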
Imported from Launchpad using lp2gh.
date created: 2022-05-09T11:09:45Z
owner: ksdziekonski
assignee: ksdziekonski
the launchpad url
Current master (commit c8ffbea) fails one unit test. Details below:
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/mock.py", line 1325, in patched
return func(*newargs, **newkeywargs)
File "/home/redacted/canonical/repos/bootstack/juju-backup-all/tests/unit/test_utils.py", line 205, in test_ran_with_timeout
run_with_timeout(mock_coroutine, task)
AssertionError: JujuTimeoutError not raised
Ran 54 tests in 0.130s
FAILED (failures=1)
sys:1: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited
Imported from Launchpad using lp2gh.
The current version of libjuju is pinned to commit 3d54f75 (for the snap; it is not pinned for use as a package).
Newer versions of libjuju have several improvements that are especially useful for use as a library:
Imported from Launchpad using lp2gh.
Current master (commit c8ffbea)
The main CLI entrypoint (Cli) delegates the heavy lifting to BackupProcessor.process_backups. However, this method does not return the result of performing the backups; it prints them to stdout instead. This makes it inconvenient to use BackupProcessor from external code (as a library).
It would probably be better for BackupProcessor.process_backups to return the JSON it already produces and let the caller decide what to do with it. That way Cli.run can simply print it, while library users can handle it differently.
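A sketch of the suggested shape (the class bodies and result structure are illustrative stand-ins, not the project's actual API): process_backups returns its results, and the CLI layer decides to print them.

```python
import json


class BackupProcessor:
    """Illustrative stand-in for the real processor."""

    def process_backups(self):
        # ... perform backups, collecting results along the way ...
        results = {"controller_backups": [], "app_backups": []}  # assumed shape
        return results  # return instead of printing


class Cli:
    def run(self):
        results = BackupProcessor().process_backups()
        print(json.dumps(results))  # the CLI keeps printing; library users need not
```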
Imported from Launchpad using lp2gh.
I've found that juju-backup-all seems to fail on some clouds unless the --exclude-controller-backup flag is provided. The cause is MongoDB capped collections.
Here is some example log output demonstrating what I mean:
2022-02-28 22:39:23,882 INFO jujubackupall.backup [foundations-maas] Attempt #1 for controller backup.
2022-02-28 22:39:25,635 WARNING jujubackupall.backup [foundations-maas] Attempt #1 Encountered controller backup error: while creating backup archive: while dumping juju state database: error dumping databases: error executing "/usr/bin/mongodump": 2022-02-28T22:39:24.635+0000 writing admin.system.users to ; 2022-02-28T22:39:24.636+0000 done dumping admin.system.users (4 documents); 2022-02-28T22:39:24.636+0000 writing admin.system.version to ; 2022-02-28T22:39:24.637+0000 done dumping admin.system.version (2 documents); 2022-02-28T22:39:24.639+0000 writing juju.statuseshistory to ; 2022-02-28T22:39:24.639+0000 writing juju.txns.log to ; 2022-02-28T22:39:24.639+0000 writing logs.logs.ef862f25-5b40-498c-84f3-eb1351e3ca41 to ; 2022-02-28T22:39:24.639+0000 writing logs.logs.b77c7406-5dfa-42fc-8968-e69608a00f9c to ; 2022-02-28T22:39:25.084+0000 done dumping juju.txns.log (77363 documents); 2022-02-28T22:39:25.085+0000 writing logs.logs.fdc2418f-364b-4098-80ab-4514d33d842a to ; 2022-02-28T22:39:25.236+0000 done dumping logs.logs.ef862f25-5b40-498c-84f3-eb1351e3ca41 (72964 documents); 2022-02-28T22:39:25.236+0000 writing juju.txns.prune to ; 2022-02-28T22:39:25.287+0000 done dumping juju.txns.prune (9010 documents); 2022-02-28T22:39:25.287+0000 writing juju.settings to ; 2022-02-28T22:39:25.431+0000 done dumping juju.settings (4791 documents); 2022-02-28T22:39:25.431+0000 writing juju.linklayerdevices to ; 2022-02-28T22:39:25.481+0000 done dumping juju.linklayerdevices (4172 documents); 2022-02-28T22:39:25.481+0000 writing juju.linklayerdevicesrefs to ; 2022-02-28T22:39:25.520+0000 done dumping juju.linklayerdevicesrefs (4172 documents); 2022-02-28T22:39:25.520+0000 writing juju.statuses to ; 2022-02-28T22:39:25.554+0000 done dumping juju.statuses (3484 documents); 2022-02-28T22:39:25.554+0000 writing juju.relationscopes to ; 2022-02-28T22:39:25.591+0000 done dumping juju.relationscopes (3432 documents); 2022-02-28T22:39:25.591+0000 writing blobstore.blobstore.chunks to 
; 2022-02-28T22:39:25.627+0000 Failed: error writing data for collection logs.logs.b77c7406-5dfa-42fc-8968-e69608a00f9c to disk: error reading collection: Executor error during find command: CappedPositionLost: CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(2350072969);
Imported from Launchpad using lp2gh.
Hello,
The BootStack team recently ran into an issue with juju-backup-all on some of our clouds. In some instances, the entire /tmp directory is removed.
This may be due to cases where the action_output results are an empty string, which would result in the deletion of the entire /tmp directory:
https://git.launchpad.net/juju-backup-all/tree/jujubackupall/backup.py#n84
https://git.launchpad.net/juju-backup-all/tree/jujubackupall/backup.py#n87
https://git.launchpad.net/juju-backup-all/tree/jujubackupall/backup.py#n74
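A defensive sketch of the kind of guard that would prevent this (function and parameter names are hypothetical): refuse to delete anything when the action output is empty or resolves to the base directory itself.

```python
import shutil
from pathlib import Path


def safe_remove(action_output: str, base_dir: str = "/tmp") -> None:
    """Delete the path named by the action output, with sanity checks."""
    if not action_output.strip():
        # An empty action output must never be allowed to resolve to /tmp.
        raise ValueError("empty action output; refusing to delete anything")
    target = Path(action_output).resolve()
    base = Path(base_dir).resolve()
    if target == base:
        raise ValueError(f"refusing to delete base directory {base}")
    shutil.rmtree(target)
```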
Thanks.
Imported from Launchpad using lp2gh.
date created: 2022-05-19T17:07:27Z
owner: nicholas-malacarne
assignee: thogarre
the launchpad url
While running the functional tests for the snap on serverstack (an OpenStack cloud), the tests take a long time and fail due to timeout errors.
From an initial look, this seems to come from the fact that the tests deploy four different models for the backup of four applications (postgresql, mysql, percona-cluster, etcd).
While deploying each of these applications on its respective model during the execution of the fixtures in test_sanity_deployment, the model is blocked until the unit becomes active before proceeding to the deployment of the next application.
E.g.: await model.block_until(lambda: percona_cluster_app.status == "active")
This takes a lot of time; instead, the units could be deployed in parallel and checked at the end that they are all active.
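A sketch of parallel deployment (a python-libjuju-style model API is assumed): start all deployments concurrently, then block once until every application is active.

```python
import asyncio


async def deploy_all(model, app_specs):
    """Deploy (name, charm) pairs concurrently, then wait for all to be active."""
    apps = await asyncio.gather(
        *(model.deploy(charm, application_name=name) for name, charm in app_specs)
    )
    # One combined wait instead of blocking after each deployment.
    await model.block_until(
        lambda: all(app.status == "active" for app in apps)
    )
    return apps
```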
When purging old backups, a traceback occurred in /var/lib/jujubackupall/auto_backup.py.
The command tries to find everything older than 30 days, including directories, and delete it. I think it should only delete files under '/opt/backups' (excluding directories). Should the command look something like "find /opt/backups -mtime +30 -type f -delete"?
CRITICAL: Detected error when performing backup: 'Traceback (most recent call last):
File "/var/lib/jujubackupall/auto_backup.py", line 133, in run
self.purge_old_backups(args.purge_after_days)
File "/var/lib/jujubackupall/auto_backup.py", line 86, in purge_old_backups
subprocess.check_output(cmd)
File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['find', '/opt/backups', '-mtime', '+30', '-delete']' returned non-zero exit status 1.
'
find: cannot delete ‘/opt/backups/foundations-maas/openstack’: Directory not empty
find: cannot delete ‘/opt/backups’: Permission denied
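A sketch of the suggested fix, matching the command proposed above: restrict find to regular files so directories are left alone.

```python
import subprocess


def purge_old_backups(backup_dir: str, days: int) -> None:
    """Delete regular files older than `days` days, leaving directories intact."""
    cmd = [
        "find", backup_dir,
        "-mtime", f"+{days}",
        "-type", "f",   # only regular files: the fix proposed above
        "-delete",
    ]
    subprocess.check_output(cmd)
```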
Imported from Launchpad using lp2gh.
On some environments, the etcd backup fails due to not being able to find the juju_id_rsa SSH key needed to connect to the target machines/VMs.
Partial juju-backup-all output:
2022-02-28 23:14:14,701 INFO jujubackupall.process [foundations-maas] Processing backups.
2022-02-28 23:14:14,709 INFO jujubackupall.process [foundations-maas] Models to process ['controller', 'default', 'openstack']
2022-02-28 23:14:15,093 INFO jujubackupall.process [foundations-maas controller] Backing up apps.
2022-02-28 23:14:15,201 INFO jujubackupall.process [foundations-maas default] Backing up apps.
2022-02-28 23:14:16,251 INFO jujubackupall.process [foundations-maas openstack] Backing up apps.
2022-02-28 23:14:22,840 INFO jujubackupall.process [foundations-maas openstack etcd] Backing up app.
2022-02-28 23:14:23,645 INFO jujubackupall.process [foundations-maas openstack etcd] Downloading backup.
Warning: Identity file /home/jujumanage/snap/juju-backup-all/1/.local/share/juju/ssh/juju_id_rsa not accessible: No such file or directory.
2022-02-28 23:14:23,667 ERROR jujubackupall.process [foundations-maas openstack etcd] App backup not completed: command failed: ['scp', '-i', '/home/jujumanage/snap/juju-backup-all/1/.local/share/juju/ssh/juju_id_rsa', '-o', 'StrictHostKeyChecking=no', '-q', '-B', 'ubuntu@:/home/ubuntu/etcd-snapshots/etcd-snapshot-2022-02-28-23.14.23.tar.gz', '/home/jujumanage/bootstack-backups/foundations-maas/openstack/etcd'].
Imported from Launchpad using lp2gh.
The module uses Python features that require a newer Python version than the one provided in Ubuntu Bionic (18.04), which is 3.6.9 - in particular dataclasses, which became available in Python 3.7. The stacktrace below is an example of the unit test suite failing due to this issue.
It would be convenient if the jujubackupall module supported the Python version in Ubuntu Bionic, as it is still widely used.
ImportError: Failed to import test module: test_process
Traceback (most recent call last):
File "/home/jguedez/.pyenv/versions/3.6.9/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/home/jguedez/.pyenv/versions/3.6.9/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/home/jguedez/canonical/repos/bootstack/juju-backup-all/tests/unit/test_process.py", line 10, in <module>
from jujubackupall.process import BackupProcessor, ControllerProcessor, JujuModel
File "/home/jguedez/canonical/repos/bootstack/juju-backup-all/jujubackupall/process.py", line 30, in <module>
from jujubackupall.backup import (
File "/home/jguedez/canonical/repos/bootstack/juju-backup-all/jujubackupall/backup.py", line 20, in <module>
import dataclasses
ModuleNotFoundError: No module named 'dataclasses'
Ran 19 tests in 0.011s
FAILED (errors=3)
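One possible fix, if taking on the dependency is acceptable: the dataclasses backport on PyPI provides the module for Python 3.6, and can be pulled in conditionally via a PEP 508 environment marker in requirements.txt:

```
dataclasses; python_version < "3.7"
```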
Imported from Launchpad using lp2gh.
I'm trying to run:
juju-backup-all -l info -A -o <BACKUP_DIR> --exclude-controller-backup
However, the MySQL backup doesn't work:
2022-02-28 22:40:41,026 ERROR jujubackupall.process [foundations-maas openstack mysql] App backup not completed: ['action "set-pxc-strict-mode" not defined on unit "mysql/0"'].
The mysql charm deployed is percona-cluster-286, which lacks that action.
We already have a different script in place that backs up MySQL successfully. It uses this procedure:
However, I can see that the charm here expects newer behavior, relying on the set-pxc-strict-mode and mysqldump actions instead.
This bug may be primarily documenting the issue; I'm not sure if we actually want to provide support for older charms such as this one, or instead encourage users to upgrade to newer versions of charms. Perhaps the fix is a version check on the percona-cluster charm to ensure it's new enough to run the actions this charm expects?
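One possible shape for such a check (a sketch; it assumes python-libjuju's Application.get_actions, and the skip behaviour is illustrative): verify the charm defines set-pxc-strict-mode before attempting the backup.

```python
async def supports_action(app, action_name: str) -> bool:
    """Return True if the deployed charm defines the given action."""
    actions = await app.get_actions()  # maps action name -> description
    return action_name in actions


async def backup_percona(app):
    if not await supports_action(app, "set-pxc-strict-mode"):
        # Older percona-cluster revisions (e.g. percona-cluster-286) lack the
        # action; skip with a clear reason instead of failing mid-run.
        return {"skipped": "charm revision lacks set-pxc-strict-mode"}
    # ... proceed with the set-pxc-strict-mode / mysqldump based backup ...
    return {"skipped": None}
```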
Imported from Launchpad using lp2gh.
Currently, the functional tests do not build the snap as a prerequisite and then use that snap in the tests.
When running a juju-backup-all command as a user who doesn't have valid credentials for a Juju controller, a traceback is printed.
This should be changed so the user is informed about the problem in a friendly way.
dasm@env-infra1:~$ juju-backup-all
2021-08-27 16:06:38,723 INFO jujubackupall.backup [config] juju client config backed up
Traceback (most recent call last):
File "/snap/juju-backup-all/x1/bin/juju-backup-all", line 33, in <module>
sys.exit(load_entry_point('juju-backup-all==0.1.dev63+gde6c21b', 'console_scripts', 'juju-backup-all')())
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/jujubackupall/cli.py", line 77, in main
cli.run()
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/jujubackupall/cli.py", line 37, in run
backup_processor.process_backups()
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/jujubackupall/process.py", line 82, in process_backups
with connect_controller(controller_name) as controller:
File "/snap/juju-backup-all/x1/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/jujubackupall/utils.py", line 44, in connect_controller
run_async(controller.connect())
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/juju/loop.py", line 38, in run
raise task.exception()
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/juju/controller.py", line 114, in connect
await self._connector.connect_controller(controller_name, **kwargs)
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/juju/client/connector.py", line 86, in connect_controller
controller_name = self.jujudata.current_controller()
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/juju/client/jujudata.py", line 137, in current_controller
return self._load_yaml('controllers.yaml', 'current-controller')
File "/snap/juju-backup-all/x1/lib/python3.6/site-packages/juju/client/jujudata.py", line 206, in _load_yaml
with io.open(filepath, 'rt') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/dasm/.local/share/juju/controllers.yaml'
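A sketch of friendlier handling in the CLI entrypoint (names are illustrative, and run_backups here merely simulates the failure mode; the real fix would wrap the existing Cli.run): catch the missing client data and print a clear message instead of a traceback.

```python
import sys


def run_backups():
    # Stand-in for the real backup run; simulates the reported failure.
    raise FileNotFoundError(2, "No such file or directory",
                            "/home/dasm/.local/share/juju/controllers.yaml")


def main() -> int:
    try:
        run_backups()
    except FileNotFoundError as exc:
        print(
            f"Could not read Juju client data ({exc.filename}). "
            "Are you logged in to a Juju controller?",
            file=sys.stderr,
        )
        return 1
    return 0
```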
Imported from Launchpad using lp2gh.
date created: 2021-08-27T19:42:21Z
owner: smigiel-dariusz
assignee: None
the launchpad url
There may be situations where it's not desirable to include certain models in either a one-off or a regular backup run, either because they cause exceptions or simply because backing them up is not required while being taxing to perform.
Imported from Launchpad using lp2gh.
date created: 2022-06-29T09:41:40Z
owner: ksdziekonski
assignee: fandanbango
the launchpad url
Having a Juju action to back up and then restore the Vault keys and certificates between deploys would be very useful.
Specifically around #3: field will deploy the cloud many times to ensure consistency and to resolve issues found along the way. Today, on each new deploy, a new CSR must be created and signed. This can slow down deployments and force the customer to submit ticket after ticket to sign a CSR. Using an auto-generated root CA doesn't emulate the environment or process properly.
If the Vault keys and certs could be backed up and then restored, that would expedite this process.
Imported from Launchpad using lp2gh.
The functional tests currently deploy 3 models at once and run one test per model; however, in #41 we switched to pytest-operator and deploy only a single model with all applications. It would be better to parametrize this to deploy a model, run a test, and destroy the model, e.g.:
# test_mysql_backup.py
import pytest
from pathlib import Path

@pytest.mark.abort_on_fail
@pytest.mark.skip_if_deployed
async def test_build_and_deploy(ops_test):
    """Deploy all applications."""
    await ops_test.model.deploy(
        "ch:mysql-innodb-cluster",
        application_name="mysql",
        series="jammy",
        channel="8.0/stable",
        num_units=3,
    )

def test_mysql_innodb_backup(ops_test, tmp_path: Path):
    ...
This way we could perhaps use GitHub-hosted runners, which are preferred over a self-hosted runner.
Currently this snap cannot be built in a confined way, because it uses subprocess to communicate with juju: LP#1941918
It needs to be modified so the snap can be published with --strict confinement to the Snap Store.
Imported from Launchpad using lp2gh.
date created: 2021-08-27T19:40:24Z
owner: smigiel-dariusz
assignee: zzehring
the launchpad url
For LXD units, backup will fail with:
"error_reason": "command failed: ['scp', '-i', '/var/lib/jujubackupall/ssh/juju_id_rsa', '-o', 'StrictHostKeyChecking=no', '-q', '-B', 'ubuntu@None:/home/ubuntu/etcd-snapshots/etcd-snapshot-2021-12-28-16.13.13.tar.gz', '/mnt/juju-backup/site-manual-juju-controller/k8s-prod/etcd']"
The root issue is machine.dns_name in python-libjuju, which returns None.
However, until upstream is fixed (which would be slow and not in our control), we can add a patch in this repo to work around the issue.
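A sketch of such a workaround (python-libjuju's Unit API is assumed here, including Unit.get_public_address): fall back to the unit's public address when machine.dns_name is None.

```python
async def resolve_address(unit):
    """Best-effort address for scp: prefer machine.dns_name, fall back to unit."""
    machine = unit.machine
    if machine is not None and machine.dns_name:
        return machine.dns_name
    # Workaround for LXD units where machine.dns_name is None: ask the unit
    # for its public address instead.
    return await unit.get_public_address()
```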
Imported from Launchpad using lp2gh.
Voith MODEL(controller) jujumanage@infra-1:~/bootstack-backups$ juju-backup-all -l info -A -o /home/jujumanage/bootstack-backups
2022-01-19 07:36:36,244 INFO jujubackupall.backup [config] juju client config backed up.
2022-01-19 07:36:36,344 INFO jujubackupall.process [foundations-maas] Processing backups.
2022-01-19 07:36:36,361 INFO jujubackupall.process [foundations-maas] Models to process ['controller', 'default', 'openstack']
2022-01-19 07:36:36,533 INFO jujubackupall.process [foundations-maas controller] Backing up apps.
2022-01-19 07:36:36,677 INFO jujubackupall.process [foundations-maas default] Backing up apps.
2022-01-19 07:36:37,476 INFO jujubackupall.process [foundations-maas openstack] Backing up apps.
2022-01-19 07:36:45,183 INFO jujubackupall.process [foundations-maas openstack etcd] Backing up app.
2022-01-19 07:36:47,336 INFO jujubackupall.process [foundations-maas openstack etcd] Downloading backup.
Warning: Identity file /home/jujumanage/snap/juju-backup-all/1/.local/share/juju/ssh/juju_id_rsa not accessible: No such file or directory.
2022-01-19 07:36:48,984 INFO jujubackupall.process [foundations-maas openstack etcd] Backups downloaded to /home/jujumanage/bootstack-backups/foundations-maas/openstack/etcd/etcd-snapshot-2022-01-19-07.36.45.tar.gz
2022-01-19 07:36:56,106 INFO jujubackupall.process [foundations-maas openstack mysql] Backing up app.
2022-01-19 07:37:27,725 INFO jujubackupall.process [foundations-maas openstack mysql] Downloading backup.
Warning: Identity file /home/jujumanage/snap/juju-backup-all/1/.local/share/juju/ssh/juju_id_rsa not accessible: No such file or directory.
2022-01-19 07:37:33,163 INFO jujubackupall.process [foundations-maas openstack mysql] Backups downloaded to /home/jujumanage/bootstack-backups/foundations-maas/openstack/mysql/mysqldump-all-databases-202201190736.gz
2022-01-19 07:37:33,172 INFO jujubackupall.process [foundations-maas] Backing up controller.
2022-01-19 07:37:33,172 INFO jujubackupall.backup [foundations-maas] Attempt #1 for controller backup.
Traceback (most recent call last):
File "/snap/juju-backup-all/1/bin/juju-backup-all", line 8, in <module>
sys.exit(main())
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/cli.py", line 105, in main
cli.run()
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/cli.py", line 42, in run
backup_processor.process_backups()
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/process.py", line 103, in process_backups
controller_processor.backup_controller()
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/process.py", line 124, in backup_controller
resulting_backup_path = controller_backup.backup()
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/backup.py", line 185, in backup
ssh_run_on_machine(machine=controller_machine, command=chown_command)
File "/snap/juju-backup-all/1/lib/python3.8/site-packages/jujubackupall/utils.py", line 110, in ssh_run_on_machine
machine.ssh(command=command, user=user),
AttributeError: 'NoneType' object has no attribute 'ssh'
Imported from Launchpad using lp2gh.
Tried to install the juju-backup-all charm on bionic from the stable channel, versions 27 and 28. Installation failed with:
unit-juju-backup-all-0: 23:21:12 WARNING unit.juju-backup-all/0.install from cryptography.hazmat.primitives.asymmetric import (
unit-juju-backup-all-0: 23:21:12 WARNING unit.juju-backup-all/0.install File "/var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/cryptography/hazmat/primitives/asymmetric/utils.py", line 6, in <module>
unit-juju-backup-all-0: 23:21:12 WARNING unit.juju-backup-all/0.install from cryptography.hazmat.bindings._rust import asn1
unit-juju-backup-all-0: 23:21:12 WARNING unit.juju-backup-all/0.install ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /var/lib/juju/agents/unit-juju-backup-all-0/charm/venv/cryptography/hazmat/bindings/_rust.abi3.so)
unit-juju-backup-all-0: 23:21:13 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
Then tried to install from the edge channel (version 33). Installation succeeded, but when running the 'do-backup' action, it gets stuck (I have waited more than 1 hour). The Juju deployment is small: a single machine and a single application. Nothing is output in the juju log, and no backup files are generated in /opt/backups/.
Imported from Launchpad using lp2gh.
How to replicate
1 - Create a clean juju model
juju add-model --config default-series=bionic openstack
2 - Deploy the charm
juju deploy bootstack-charmers-juju-backup-all --channel edge --series bionic
3 - Watch for the juju status. After the model settled, status:
App Version Status Scale Charm Channel Rev Exposed Message
juju-backup-all error 1 bootstack-charmers-juju-backup-all edge 17 no hook failed: "install"
Unit Workload Agent Machine Public address Ports Message
juju-backup-all/0* error idle 0 10.5.1.69 hook failed: "install"
Machine State Address Inst id Series AZ Message
0 started 10.5.1.69 7dba2434-ebee-433e-8b1a-307ddcf4a92c bionic nova ACTIVE
Imported from Launchpad using lp2gh.
The backup directory is hardcoded (e.g., see here). If there is insufficient space, the backup will fail.
To resolve this, the directory should be parameterized, and the next step is to expose this parameter as a configuration option in the charm. We should provide separate arguments for each application.
The argument should apply to the applications below:
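A minimal sketch of the parameterization (all option names here, such as "backup-dir" and the per-application variants, are hypothetical): resolve the directory from configuration, falling back to the current hardcoded default.

```python
DEFAULT_BACKUP_DIR = "/opt/backups"  # current hardcoded location


def resolve_backup_dir(config: dict, app_name: str) -> str:
    """Pick the backup directory for an application from config.

    Checks a per-application option first (e.g. "etcd-backup-dir"), then a
    global "backup-dir", then falls back to the default.
    """
    return (
        config.get(f"{app_name}-backup-dir")
        or config.get("backup-dir")
        or DEFAULT_BACKUP_DIR
    )
```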
Investigate and implement backup support for k8s clouds. Perhaps it only requires using the get-kubeconfig action?
https://charmhub.io/containers-kubernetes-master/actions
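A speculative sketch of what that could look like (a python-libjuju-style unit API is assumed; the action name comes from the Charmhub page above):

```python
async def backup_kubeconfig(unit, out_path: str) -> str:
    """Fetch the kubeconfig via the charm's get-kubeconfig action and save it."""
    action = await unit.run_action("get-kubeconfig")
    await action.wait()  # block until the action completes
    kubeconfig = action.results.get("kubeconfig", "")
    with open(out_path, "w") as f:
        f.write(kubeconfig)
    return out_path
```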
Imported from Launchpad using lp2gh.
Currently, in requirements.txt, the libjuju version is hardcoded to 2.9.9. We could replace this with:
juju ~= 2.9
which would allow the snap to install the latest 2.9.x-compatible version (currently 2.9.42). This would let the snap receive the latest libjuju improvements and fixes.
Imported from Launchpad using lp2gh.
Adding the option of encrypting backups with juju-backup-all would be useful, to ensure we can store sensitive data appropriately. Backups may or may not be encrypted by the charms that produce them, but encrypting at this level would improve security where desired.
Thanks!
Imported from Launchpad using lp2gh.