virt-lightning / virt-lightning
Starts your VM on libvirt in a couple of seconds!
Hi,
I have a problem setting static IP addresses for VMs.
This is my virt-lightning.yaml:
- name: ansible
  distro: debian-11-genericcloud-amd64
  networks:
    - ipv4: 192.168.150.1
    - network: virt-lightning
- name: debian11
  distro: debian-11-genericcloud-amd64
  networks:
    - ipv4: 192.168.150.10
    - network: virt-lightning
- name: rocky
  distro: Rocky-8-GenericCloud.latest.x86_64
  ipv4: 192.168.150.20
- name: suse
  distro: openSUSE-Leap-15.3.x86_64-1.0.1-NoCloud-Build2.186
  networks:
    - ipv4: 192.168.150.30
- name: ubuntu
  distro: ubuntu-20.04
  ipv4: 192.168.150.40
With this configuration I tried to demonstrate the problems.
This is the console output after vl up:
⚡ ansible
⚡ debian11
⚡ rocky
⚡ suse
⚡ ubuntu
⌛ ok Waiting...
💻 ansible found at 192.168.150.1!
🛃 rocky QEMU agent found
🛃 suse QEMU agent found
💻 ubuntu found at 192.168.150.6!
💻 rocky found at 192.168.150.5!
💻 suse found at 192.168.150.30!
💻 debian11 found at 192.168.150.10!
👍 You are all set
Problem 1:
In this example ansible gets 192.168.150.1, but .1 is normally the IP of the host (same problem with Vagrant, which is the reason I'm trying your nice program here).
Problem 2:
Setting the IP like with ubuntu doesn't work. It would be enough to give an example in the documentation.
In release 2.3.0, it is impossible to create a snapshot of a running VM. It is caused by the cidata cdrom being created as a disk, generating this device XML:
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/var/lib/virt-lightning/pool/vl_vm-cidata.qcow2" index="1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
  <alias name="virtio-disk1"/>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
</disk>
Steps to reproduce:
- name: vl_vm
  distro: ubuntu-20.04
  memory: 512
  root_disk_size: 5

virsh snapshot-create-as --domain vl_vm --name snapshot_1
As a result, we have such error message:
error: unsupported configuration: internal snapshot for disk vdb unsupported for storage type raw
Proposed PR #271 to fix this issue.
Hi, first of all, thanks for your time and for sharing this library. I am giving it a try on macOS, and I already installed libvirt:
brew install qemu gcc libvirt
Testing with:
virsh -c qemu:///system
returns:
error: failed to connect to the hypervisor
error: Failed to connect socket to '/opt/homebrew/var/run/libvirt/virtqemud-sock': No such file or directory
libvirt is started; any ideas?
When destroying the VMs in a specific context, hv.network_obj.destroy() is always called, regardless of whether other VMs are still attached or not.
https://github.com/virt-lightning/virt-lightning/blob/master/virt_lightning/shell.py#L346
Should this only be called if hv.list_domains() is empty after hv.clean_up() has been called on the domains in the provided context?
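A minimal sketch of that guard. The names hv.list_domains(), hv.clean_up() and hv.network_obj come from the shell.py snippet linked above, but the wiring here is illustrative, not the project's actual code:

```python
# Hypothetical sketch of the guard proposed above: tear down the shared
# network only once no domain is left at all. hv is assumed to expose
# list_domains(), clean_up() and network_obj as in shell.py.
def down(hv, context):
    for domain in list(hv.list_domains()):
        if domain.context == context:
            hv.clean_up(domain)
    # Only destroy the network if no VM from any context remains.
    if not hv.list_domains():
        hv.network_obj.destroy()
```

This keeps the current behaviour for the last context standing, while leaving the network alone as long as another context still has VMs attached.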
Is there a way to create and use my own images? Also, can we get Debian trixie added?
The IP address defined in the yaml file isn't saved to the libvirt network config. If the VM is stopped for some reason (e.g. a host reboot), it loses its IP lease after some time. If the address is then requested from the DHCP service, it will be a random one from the network range. An alternative would be some universal way to define a static IP.
I'd like to be able to quickly copy files from and to the virtual machines.
I found it quite convenient to add an scp-like command which translates instance names (as vl ssh does) in a POC: https://github.com/hguemar/virt-lightning/tree/add-scp
It should be possible to add scp capability between two instances.
I'm proposing to change the CLI to the following action set, which I believe will be less confusing for users:
What do you think?
User story: John wants to use virt-lightning, but he still needs to create the storage pool and he doesn't want to read the libvirt documentation.
We should provide a vl init command that prepares the storage pool for the user.
Pip has used the new dependency resolver since 20.3. It fixes some nasty bugs, but it works very slowly with unpinned dependencies. This affects tox testing: building the env takes at least tens of minutes.
Steps to reproduce:
1. pip install -U pip (ensure that pip -V reports a version greater than 20.3)
2. pip install -r test-requirements.txt
A possible solution: pin the exact versions of the flake8 packages in test-requirements.txt.
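A pinned test-requirements.txt along the lines of the proposal might look like this (the version numbers below are purely illustrative, not taken from the project):

```text
# test-requirements.txt -- exact pins so the new pip resolver has
# nothing to solve; versions here are illustrative examples only
flake8==3.8.4
pycodestyle==2.6.0
pyflakes==2.2.0
```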
First, I'd like to thank everybody involved with this project!
I'd like to use this on a hypervisor with multiple users. With use of the --context argument, this is generally possible. However, there are a few gotchas that I'd like to smooth over, if patches for these would be accepted.
1. Allow the context to be more easily set. If I create two project directories with two different virt-lightning.yaml configs in them, they both end up in the same default context unless I explicitly pass --context to all operations. This means that if I'm not careful some day, I'll vl down in the wrong place and nuke the wrong project. I'd like to add support for setting the context as a top-level key in virt-lightning.yaml, and use that to limit the scope of operations more easily.
2. Possibly update the output of vl status to show the context, since it can now be read from a config file.
3. Add a config option to enable operating only on VMs that match our username, via vl:username. This way, even if I'm in the same context as another user, I won't be able to see their VMs. By default this would be disabled, but it could be enabled via a config.ini flag.
Any additions would keep the current behavior as the default - you'd have to opt-in to these modifications via config.ini adjustments. Let me know your thoughts on this.
Ubuntu 18.04 does not allow the qemu:///session URI unless we do intensive system reconfiguration. Since we want virt-lightning to be as simple as possible, we need to try another option.
Hi,
I created new libvirt network using this file:
<network>
  <name>new-network</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr011' stp='on' delay='0'/>
  <ip address='192.168.152.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.152.2' end='192.168.152.254'/>
    </dhcp>
  </ip>
</network>
and I added it to virt-lightning file:
- name: tmp
  distro: centos-8
  networks:
    - network: new-network
      ipv4: 192.168.152.12
  memory: 4028
  vcpus: 4
Then I noticed that I couldn't access the internet inside the VM. After some debugging, I found that my host IP is not set in the /etc/resolv.conf file:
; Created by cloud-init on instance boot automatically, do not edit.
;
# Generated by NetworkManager
nameserver 192.168.122.1
nameserver 192.168.123.1
Moreover, when I added new-network to ~/.config/virt-lightning/config.ini as the default network:
[main]
network_name = new-network
it works fine.
In order to centralize the configuration in one single place, it would be nice to move the Flake8 configuration from the tox.ini file to the pyproject.toml file.
I am not sure how to map a new disk-configuration tree to a CLI argument, but at least a list of sizes should be supported. The API update will be very easy, I think.
I use the following config.ini, which overwrites only one key:
[main]
libvirt_uri=qemu:///system
When I try to run vl, I get the error below:
(venv) goneri@ubuntu1804:~$ vl storage_dir
Traceback (most recent call last):
File "/usr/lib/python3.6/configparser.py", line 789, in get
value = d[option]
File "/usr/lib/python3.6/collections/__init__.py", line 883, in __getitem__
return self.__missing__(key) # support subclasses that define __missing__
File "/usr/lib/python3.6/collections/__init__.py", line 875, in __missing__
raise KeyError(key)
KeyError: 'storage_pool'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/goneri/venv/bin/vl", line 11, in <module>
load_entry_point('virt-lightning', 'console_scripts', 'vl')()
File "/home/goneri/virt-lightning/virt_lightning/shell.py", line 301, in main
storage_dir(configuration)
File "/home/goneri/virt-lightning/virt_lightning/shell.py", line 212, in storage_dir
hv.init_storage_pool(configuration.storage_pool)
File "/home/goneri/virt-lightning/virt_lightning/configuration.py", line 87, in storage_pool
return self.__get("storage_pool")
File "/home/goneri/virt-lightning/virt_lightning/configuration.py", line 58, in __get
return self.data.get("main", key)
File "/usr/lib/python3.6/configparser.py", line 792, in get
raise NoOptionError(option, section)
configparser.NoOptionError: No option 'storage_pool' in section: 'main'
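A possible fix on the virt-lightning side, sketched with the stdlib configparser: fall back to built-in defaults for keys the user's partial config.ini omits. The DEFAULTS content below is illustrative, not the project's real defaults:

```python
import configparser

# Illustrative defaults; the real project keeps its own defaults table.
DEFAULTS = {"storage_pool": "virt-lightning"}

def get_option(config, key):
    # fallback= avoids the NoOptionError seen in the traceback above.
    return config.get("main", key, fallback=DEFAULTS.get(key))

config = configparser.ConfigParser()
config.read_string("[main]\nlibvirt_uri=qemu:///system\n")
print(get_option(config, "storage_pool"))  # falls back to the default
print(get_option(config, "libvirt_uri"))   # read from the user's file
```

With this pattern, a config.ini that overrides a single key no longer breaks unrelated lookups.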
At this point, the image creation is handled by a collection of shell scripts. The idea is to integrate that into the main command:
$ vl image-list
debian-8
debian-9
(etc)
$ vl image-prepare debian8
Downloading debian-8 master image
100%
Preparing the image for virt-lightning
blabla
blabal
Done, image is ready!
Since we can start VMs with vl start, it would be nice to also delete them with vl delete; I'm also interested in an api.delete() function.
Would you like to see a patch? This is what I currently have; it comes from the api.down() function:
from virt_lightning import api as vl

def vm_delete(name):
    conn = vl._connect_libvirt(vlconf.libvirt_uri)
    hv = vl.vl.LibvirtHypervisor(conn)
    hv.init_network(vlconf.network_name, vlconf.network_cidr)
    hv.init_storage_pool(vlconf.storage_pool)
    for domain in hv.list_domains():
        if domain.name == name:
            hv.clean_up(domain)
            return 'deleted'
These days I'm used to GNOME Boxes, which uses qemu:///session by default on Fedora 34, and it provides all the things I need: VMs have volumes, snapshots, network access...
It would be awesome to be able to automate all that with vl, but it seems to be tied to qemu:///system. Is there a way to configure it to use qemu:///session?
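For what it's worth, the URI is already configurable through the libvirt_uri key seen elsewhere in this tracker, so the question is mainly whether the rest of vl works against the session daemon. A config.ini along these lines would at least point it there:

```ini
[main]
libvirt_uri=qemu:///session
```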
I'm running CentOS 9 Stream locally, and I'm unable to run vl up due to an issue with the qxl video model.
❯ vl up
⚡ centos-9-stream
Traceback (most recent call last):
File "/home/ooraini/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/ooraini/.local/lib/python3.9/site-packages/virt_lightning/shell.py", line 362, in main
action_func(configuration=configuration, **vars(args))
File "/home/ooraini/.local/lib/python3.9/site-packages/virt_lightning/api.py", line 206, in up
loop.run_until_complete(deploy())
File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/ooraini/.local/lib/python3.9/site-packages/virt_lightning/api.py", line 198, in deploy
await f
File "/usr/lib64/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ooraini/.local/lib/python3.9/site-packages/virt_lightning/api.py", line 122, in _start_domain
domain = hv.create_domain(name=host["name"], distro=distro)
File "/home/ooraini/.local/lib/python3.9/site-packages/virt_lightning/virt_lightning.py", line 111, in create_domain
dom = self.conn.defineXML(ET.tostring(root).decode())
File "/usr/lib64/python3.9/site-packages/libvirt.py", line 4414, in defineXML
raise libvirtError('virDomainDefineXML() failed')
libvirt.libvirtError: unsupported configuration: domain configuration does not support video model 'qxl'
I think the issue is with the unavailable package qemu-ui-gtk. Is there any specific requirement for qxl?
The following lines are relevant:
My initial thought was that VL could be used as a backend for the ansible-community/molecule-libvirt#13 implementation, but after I realized that VL does not work on platforms like macOS or Windows and that it lacks remoting support, I decided that a pure libvirt implementation would be more appropriate. That does not mean that a molecule-vl driver should not be created.
In fact, you can easily make the library itself expose a molecule driver entry point, avoiding the need to create a new project for the driver. The magic line is https://github.com/ansible-community/molecule-libvirt/blob/master/setup.cfg#L87-L88
In the end it is up to you to decide if you want to do it as part of the same project or not; each approach has pros and cons. If you really want a new project created under https://github.com/ansible-community/ with CI configured, just let me know and I will do it.
Hi,
First off, wonderful work :)
When using $ vl --help, I've noticed that the options up, down, status, and distro_list are showing inaccurate information:
$ vl --help
usage: vl [-h] [--debug] [--config CONFIG]
{up,down,start,stop,status,distro_list,storage_dir,ansible_inventory,ssh_config,ssh,console,fetch}
...
optional arguments:
-h, --help show this help message and exit
--debug Print extra information (default: False)
--config CONFIG path to configuration file
action:
{up,down,start,stop,status,distro_list,storage_dir,ansible_inventory,ssh_config,ssh,console,fetch}
up first
down first
start Start a new VM
stop Stop a VM
status first
distro_list first
storage_dir Print the storage directory
ansible_inventory Print an ansible_inventory of the running environment
ssh_config Print a ssh config of the running environment
ssh SSH to a given host
console Open the console of a given host
fetch Fetch a VM image
It is just showing first, which is not so helpful :)
I assume it should be the information described in the README.md file.
Thank you!
Hi!!
Do we already have a feature for mounts? Maybe we could support this: https://libvirt.org/kbase/virtiofs.html#sharing-a-host-directory-with-a-guest
I suppose the uids will have to match between the host and the guest, but for simple stuff it would be nice!
Issue with resolving hostname
(.venv) SSH|domik:[virt-lightning] %> vl status
[host] [username@IP]
centos7 vg@waiting -
(.venv) SSH|domik:[virt-lightning] %> vl ssh
ssh: Could not resolve hostname none: Temporary failure in name resolution
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
I think that support for public keys extracted from the ssh-add -L output would be nice. It would help a bit when running vl on some cloud VM accessed via an ssh -A user@vm command. Currently, I need to copy id_rsa.pub from localhost to the VM. Also, a somewhat related issue: the ssh_key_file config option is not documented yet.
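The agent lookup itself is a small shell-out; a hedged sketch (how vl would consume the keys, and the fallback policy, are left open):

```python
import subprocess

def agent_public_keys():
    """Collect public keys from the running ssh-agent (`ssh-add -L`).

    Returns an empty list when no agent is reachable or ssh-add is
    missing, so callers can fall back to e.g. ~/.ssh/id_rsa.pub.
    """
    try:
        proc = subprocess.run(["ssh-add", "-L"], capture_output=True,
                              text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return []
    return [line for line in proc.stdout.splitlines() if line.strip()]
```

With agent forwarding (ssh -A), the keys from the local agent would then show up on the cloud VM without copying id_rsa.pub around.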
❯ vl up
Traceback (most recent call last):
File "/home/thalin/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/thalin/.local/lib/python3.10/site-packages/virt_lightning/shell.py", line 378, in main
action_func(configuration=configuration, **vars(args))
File "/home/thalin/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 183, in up
_register_aio_virt_impl(loop)
File "/home/thalin/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 68, in _register_aio_virt_impl
libvirtaio.virEventRegisterAsyncIOImpl(loop=loop)
File "/usr/lib/python3/dist-packages/libvirtaio.py", line 477, in virEventRegisterAsyncIOImpl
_current_impl = virEventAsyncIOImpl(loop=loop).register()
File "/usr/lib/python3/dist-packages/libvirtaio.py", line 285, in __init__
self._finished = asyncio.Event(loop=loop)
File "/usr/lib/python3.10/asyncio/locks.py", line 168, in __init__
super().__init__(loop=loop)
File "/usr/lib/python3.10/asyncio/mixins.py", line 17, in __init__
raise TypeError(
TypeError: As of 3.10, the *loop* parameter was removed from Event() since it is no longer necessary
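The failure can be reproduced without libvirt at all: it is the Python 3.10 removal of the loop= argument from asyncio primitives, which the packaged libvirtaio.py still passes. A minimal check:

```python
import asyncio
import sys

# On Python 3.10+, asyncio.Event() no longer accepts loop=; older
# versions accept it (with a deprecation warning on 3.8/3.9).
loop = asyncio.new_event_loop()
try:
    asyncio.Event(loop=loop)
    loop_param_accepted = True
except TypeError:
    loop_param_accepted = False
finally:
    loop.close()

print("loop= accepted:", loop_param_accepted)
```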
First, let me thank everyone a lot for this handy tool. I am currently using it to run test deployments of a k8s cluster and it really saves me a lot of time.
In my use case I need two network interfaces, to simulate a 'public' network for apps and a 'private' one for the k8s APIs. I also need static IPv4 addresses for the private side of things. My virt-lightning.yml looks like this:
- name: whatever
  distro: debian-9
  networks:
    - network: virt-lightning
    - network: private
      ipv4: 10.1.1.10
[...]
vl up fails with the error "ipv4 already set!", as a network is already defined before the private one. Removing the exit(1) in virt-lightning.py allows the VMs to boot up successfully (although without DHCP by default on the private interface, but that's fine since the VM has connectivity through the first network).
I can/will work on a PR with a better approach than just commenting out the exit call.
/etc/qemu/bridge.conf not found
%> vl up
Starting:centos7
Traceback (most recent call last):
File "/home/vg/Devel/virt-lightning/tmp/.venv/bin/vl", line 11, in <module>
load_entry_point('virt-lightning', 'console_scripts', 'vl')()
File "/home/vg/Devel/virt-lightning/virt_lightning/shell.py", line 331, in main
globals()[args.action](configuration=configuration, **vars(args))
File "/home/vg/Devel/virt-lightning/virt_lightning/shell.py", line 61, in up
hv.start(domain)
File "/home/vg/Devel/virt-lightning/virt_lightning/virt_lightning.py", line 189, in start
domain.dom.create()
File "/home/vg/Devel/virt-lightning/tmp/.venv/lib/python3.6/site-packages/libvirt.py", line 1068, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: /usr/lib/qemu/qemu-bridge-helper --use-vnet --br=virbr0 --fd=25: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to parse default acl file `/etc/qemu/bridge.conf'
System:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
bridge-utils is also installed.
Hi,
I am trying to run the app without packaging it each time. I managed to do it with:
python -c "from shell import main;main()" status
The issue is that it looks for the app's modules in the .../venv/lib/python3.10/site-packages/virt_lightning/ path, which contains the old changes.
Is there a way to achieve this without packaging/installing the app each time?
For idempotency purposes, I tried to run vl stop twice on the same virtual machine.
I expected the second run to do nothing, as the virtual machine had already been taken down by the first run, but I received this error:
+ vl stop fedora-34
No running VM.
Traceback (most recent call last):
File "/home/remote/ffloreth/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/remote/ffloreth/.local/lib/python3.9/site-packages/virt_lightning/shell.py", line 378, in main
action_func(configuration=configuration, **vars(args))
File "/home/remote/ffloreth/.local/lib/python3.9/site-packages/virt_lightning/api.py", line 284, in stop
raise VMNotFound(kwargs["name"])
virt_lightning.api.VMNotFound: fedora-34
Currently, we hardcode the IP range of the network in the code. We should instead retrieve the information from the libvirt bridge configuration or by using the ip r command.
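A sketch of reading the range back from libvirt's own network definition instead. The XML shape matches what virsh net-dumpxml (or net.XMLDesc(0) in libvirt-python) returns; the helper name is hypothetical:

```python
import ipaddress
import xml.etree.ElementTree as ET

def network_cidr(net_xml):
    """Derive the network CIDR from a libvirt network XML document."""
    ip = ET.fromstring(net_xml).find("ip")
    iface = ipaddress.ip_interface(f"{ip.get('address')}/{ip.get('netmask')}")
    return str(iface.network)

# Illustrative sample of the relevant part of `virsh net-dumpxml`.
SAMPLE = """<network>
  <name>virt-lightning</name>
  <ip address='192.168.123.1' netmask='255.255.255.0'/>
</network>"""
print(network_cidr(SAMPLE))  # 192.168.123.0/24
```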
Users need a couple of pre-configuration steps before they can start using virt-lightning.
Currently, these steps are either just documented in the README, or done at run-time. I imagine a new action called vl config that would handle that for the user.
$ vl config --auto
Creating the storage directory in ~/.local/share/virt-lightning/pool/
Creating the libvirt storage pool
Ensure libvirt can use the virbr0 bridge
No ssh keypair found, generating one dedicated to virt-lightning
Generate the configuration file in ~/.config/virt-lightning/config.ini
$ vl config
How do you want to use libvirt?
1) qemu:///session: does not require root privileges, but the network features are limited
2) qemu:///system: requires extra privileges, but provides full access to libvirt, including the creation of new networks
-> 2
Checking the access to qemu:///system:
You need some extra configuration changes to be able to use qemu:///system:
Enable Polkit password-less authentication: **Yes**/No
Add the following groups (libvirt, qemu) to the current user (foobar): **Yes**/No
Where do you want to store your data (default: /var/lib/virt-lightning/pool):
Creating the storage directory in /var/lib/virt-lightning/pool
Do you want to use a dedicated network for your VM: **Yes**/No
Creating the virt-lightning libvirt network
No ssh keypair found, generating one dedicated to virt-lightning
Generate the configuration file in ~/.config/virt-lightning/config.ini
We don't have any logging system in place. As a result, we flood the terminal with print() output.
It would be nice to have a debug system. It should allow us to redirect the logs to a file and filter the messages depending on severity:
vl --debug up
vl --log-file ~/tmp/somewhere.log down
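The behaviour described above could be sketched with the stdlib logging module; the flag names come from the examples, the wiring is hypothetical:

```python
import logging

def setup_logging(debug=False, log_file=None):
    """Sketch of --debug / --log-file: severity filter plus optional file."""
    handlers = [logging.StreamHandler()]
    if log_file:
        handlers.append(logging.FileHandler(log_file))
    logging.basicConfig(
        level=logging.DEBUG if debug else logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
        handlers=handlers,
        force=True)  # replace whatever handlers were installed before

setup_logging(debug=True)
logging.getLogger("virt_lightning").debug("starting VM")
```

Existing print() calls would then migrate to module-level loggers, so the same message can go to the terminal, a file, or nowhere depending on the flags.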
I'm running virt-lightning on Void Linux, which does not package genisoimage. It does package mkisofs, which seems to be fully compatible with the arguments passed to genisoimage. I can just duplicate the solution used to find the KVM binary, unless you'd like this as a configuration option.
Doing some tinkering with a Trusty VM, I found that it has ansible_python_interpreter=/usr/bin/python3 in the inventory file, which is incorrect. Trusty's python3 is 3.4, and current ansible 2.9 wants at least 3.5 on the client host. An alternative is to change it to python2 in the inventory. Could we create the inventory distro-wise and ensure that the correct python version is defined?
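A distro-aware inventory could start from a simple override table; the mapping below is a hypothetical sketch, not the project's data:

```python
# Hypothetical overrides for distros whose python3 is too old for
# ansible; everything else defaults to python3.
INTERPRETER_OVERRIDES = {
    "ubuntu-14.04": "/usr/bin/python2",  # Trusty ships python3 == 3.4
}

def ansible_python_interpreter(distro):
    return INTERPRETER_OVERRIDES.get(distro, "/usr/bin/python3")

print(ansible_python_interpreter("ubuntu-14.04"))  # /usr/bin/python2
```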
First noticed this trying FreeBSD-14 alpha, their images are compressed: https://download.freebsd.org/releases/VM-IMAGES/14.0-BETA5/amd64/Latest/FreeBSD-14.0-BETA5-amd64-zfs.qcow2.xz
I figured so be it; however, Ubuntu feels the need to play with file extensions:
[ ] mantic-server-cloudimg-amd64.img 2023-10-09 15:18 743M QCow2 UEFI/GPT Bootable disk image
It's simple enough to work around:
curl -o /var/lib/virt-lightning/pool/upstream/ubuntu-23-10.qcow2 https://cloud-images.ubuntu.com/mantic/current/mantic-server-cloudimg-amd64.img
Doesn't look like it would be terribly hard to adjust:
virt-lightning/virt_lightning/api.py, line 443 in e6e6ae2
Certainly a niche issue, but any thoughts?
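The decompression step itself is small; a sketch using the stdlib lzma module (the surrounding path handling in api.py would differ, this only shows the unpack):

```python
import lzma
import shutil

def unxz(src, dest):
    """Decompress an .xz-compressed image to dest (streamed, not in-memory)."""
    with lzma.open(src) as fin, open(dest, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```

The same hook would also cover Ubuntu's .img naming if the download step normalized extensions before handing the file to the pool.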
It seems to me the image URLs are wrong?
jimb0@fedora:~$ vl up
downloading image from: https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-20220330.1.x86_64.qcow2
Image not found from url: centos-9-stream
The centos-8-stream image is affected as well.
root@vl-test:~/.local/bin# pip3 install --user --no-deps virt-lightning
error: externally-managed-environment
Should the recommended/documented fix be --break-system-packages or pipx?
❯ vl up
Traceback (most recent call last):
File "/home/jbpratt/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/jbpratt/.local/lib/python3.10/site-packages/virt_lightning/shell.py", line 362, in main
action_func(configuration=configuration, **vars(args))
File "/home/jbpratt/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 168, in up
_register_aio_virt_impl(loop)
File "/home/jbpratt/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 68, in _register_aio_virt_impl
libvirtaio.virEventRegisterAsyncIOImpl(loop=loop)
File "/home/jbpratt/.local/lib/python3.10/site-packages/libvirtaio.py", line 462, in virEventRegisterAsyncIOImpl
_current_impl = virEventAsyncIOImpl(loop=loop).register()
File "/home/jbpratt/.local/lib/python3.10/site-packages/libvirtaio.py", line 277, in __init__
self._finished = asyncio.Event(loop=loop)
File "/usr/lib/python3.10/asyncio/locks.py", line 167, in __init__
super().__init__(loop=loop)
File "/usr/lib/python3.10/asyncio/mixins.py", line 17, in __init__
raise TypeError(
TypeError: As of 3.10, the *loop* parameter was removed from Event() since it is no longer necessary
An exception is thrown in the libvirt-python dependency. An issue has been opened here: https://gitlab.com/libvirt/libvirt-python/-/issues/10
Hi!
Maybe it would be interesting to be able to create our own boxes from VMs, like vCenter does, except we do it via the Python API / CLI.
What do you think?
There hasn't been a new release on PyPI for a while.
I'm reading:
https://packaging.python.org/en/latest/tutorials/packaging-projects/
Perhaps we can publish releases as part of a GitHub workflow? Or upload them to GitHub pages with each successful test run (every commit)?
I have done the following:
vl fetch centos-8
When I check:
vl distro_list
- centos-8
but it is not preceded by - distro: like in the gif.
So when I do:
vl distro_list > virt-lightning.yaml
vl up
I get the following error:
Traceback (most recent call last):
File "/usr/bin/vl", line 33, in <module>
sys.exit(load_entry_point('virt-lightning==2.0.1', 'console_scripts', 'vl')())
File "/usr/lib/python3.9/site-packages/virt_lightning/shell.py", line 328, in main
action_func(configuration=configuration, **vars(args))
File "/usr/lib/python3.9/site-packages/virt_lightning/api.py", line 149, in up
_ensure_image_exists(hv, virt_lightning_yaml)
File "/usr/lib/python3.9/site-packages/virt_lightning/api.py", line 117, in _ensure_image_exists
distro = host.get("distro")
AttributeError: 'str' object has no attribute 'get'
No distro key is found, so it fails.
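The fix on the output side is to emit distro_list in the shape vl up expects: a list of mappings with a distro key, rather than bare strings. A sketch (helper name hypothetical):

```python
def format_distro_list(distros):
    """Render distro names as `- distro: <name>` lines, ready for vl up."""
    return "\n".join(f"- distro: {name}" for name in distros)

print(format_distro_list(["centos-8"]))  # - distro: centos-8
```

With that, vl distro_list > virt-lightning.yaml followed by vl up would parse each entry as a dict, and host.get("distro") no longer blows up on a plain string.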
We don't use any configuration file yet. For now, the configuration structure is hardcoded at the top of shell.py. We should use something like configparser to handle that.
Steps to reproduce:
$ vl up
$ vl up
downloading image from: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
Image ubuntu-18.04 is ready!
Traceback (most recent call last):
File "/home/vmarkov/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/vmarkov/.local/lib/python3.8/site-packages/virt_lightning/shell.py", line 362, in main
action_func(configuration=configuration, **vars(args))
File "/home/vmarkov/.local/lib/python3.8/site-packages/virt_lightning/api.py", line 184, in up
_ensure_image_exists(hv, virt_lightning_yaml)
File "/home/vmarkov/.local/lib/python3.8/site-packages/virt_lightning/api.py", line 150, in _ensure_image_exists
if distro not in hv.distro_available():
File "/home/vmarkov/.local/lib/python3.8/site-packages/virt_lightning/virt_lightning.py", line 639, in distro_available
path = self.get_storage_dir() / "upstream"
File "/home/vmarkov/.local/lib/python3.8/site-packages/virt_lightning/virt_lightning.py", line 195, in get_storage_dir
xml = self.storage_pool_obj.XMLDesc(0)
File "/usr/lib/python3/dist-packages/libvirt.py", line 3490, in XMLDesc
if ret is None: raise libvirtError ('virStoragePoolGetXMLDesc() failed', pool=self)
libvirt.libvirtError: Cannot write data: Broken pipe
If I run $ vl up one more time, it finishes successfully and doesn't mention the image download stage.
When attempting to run virt-lightning (vl) on Ubuntu 22.04 with Python 3.10.6 and vl 2.3.0, users encounter a traceback. The error originates from the virEventRegisterAsyncIOImpl function in the libvirtaio.py file, leading to a TypeError due to the removal of the loop parameter from the asyncio.Event() constructor starting from Python 3.10.
Steps to Reproduce:
1. Install Ubuntu 22.04 and Python 3.10.6 on the system.
2. Install virt-lightning (vl) version 2.3.0 using the command: pip install vl==2.3.0
3. Execute any vl command, such as vl up.
Expected Behavior:
Virtual Lightning (vl) should run without any errors and perform the intended actions without issues.
Actual Behavior:
Upon running any vl command, the following traceback error is encountered:
Traceback (most recent call last):
File "/home/ali/.local/bin/vl", line 8, in <module>
sys.exit(main())
File "/home/ali/.local/lib/python3.10/site-packages/virt_lightning/shell.py", line 382, in main
action_func(configuration=configuration, **vars(args))
File "/home/ali/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 180, in up
_register_aio_virt_impl(loop)
File "/home/ali/.local/lib/python3.10/site-packages/virt_lightning/api.py", line 67, in _register_aio_virt_impl
libvirtaio.virEventRegisterAsyncIOImpl(loop=loop)
File "/usr/lib/python3/dist-packages/libvirtaio.py", line 477, in virEventRegisterAsyncIOImpl
_current_impl = virEventAsyncIOImpl(loop=loop).register()
File "/usr/lib/python3/dist-packages/libvirtaio.py", line 285, in __init__
self._finished = asyncio.Event(loop=loop)
File "/usr/lib/python3.10/asyncio/locks.py", line 168, in __init__
super().__init__(loop=loop)
File "/usr/lib/python3.10/asyncio/mixins.py", line 17, in __init__
raise TypeError(
TypeError: As of 3.10, the *loop* parameter was removed from Event() since it is no longer necessary
Proposed Solution:
To resolve the issue, the virt-lightning (vl) codebase should be updated to accommodate the change made to the asyncio.Event() constructor in Python 3.10: the usage of the loop parameter in the libvirtaio.py call path should be dropped, so the code works with Python 3.10 and later versions.
Workaround:
As a temporary workaround, users can downgrade to a Python version that is still compatible with virt-lightning (vl) until an updated release addresses the issue, e.g. Python 3.9.x or any version that still supports the loop parameter in the asyncio.Event() constructor.
Great software!!!
Can it support openvswitch bridge interfaces?
<interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='ovsbr'/>
  <virtualport type='openvswitch'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
If, for the directory /var/lib/virt-lightning/pool/upstream, the underlying filesystem is BTRFS, then we should make sure the images are created with copy-on-write turned off.
There are several ways to do it:
chattr +C on the empty files
I am not sure which way would be the most suitable for the project.
I am not sure whether filesystems other than BTRFS would be impacted.
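The chattr approach could look like this sketch: the No_COW attribute (chattr +C) only takes effect on files that are still empty, so it has to be set right after creation, and failures (non-BTRFS filesystem, missing chattr binary) are deliberately ignored. Paths and the helper name are illustrative:

```python
import pathlib
import subprocess

def create_nocow_file(path):
    """Create an empty image file and try to disable copy-on-write on it."""
    image = pathlib.Path(path)
    image.touch()
    try:
        # Only effective on BTRFS, and only while the file is empty.
        subprocess.run(["chattr", "+C", str(image)],
                       check=False, capture_output=True)
    except OSError:
        pass  # chattr unavailable; nothing to do off-BTRFS anyway
    return image
```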
References:
This would correct a spelling error/typo.
See #209, which I closed because I did not want to play with GPG for the sake of a trivial drive-by contribution.