Comments (3)
Thank you for the fast reply :)))
My mistake: it wasn't caused by the loop, the virtio interface, or parallelism.
From the provider documentation:
- `initialization` - (Optional) The cloud-init configuration.
- `datastore_id` - (Optional) The identifier for the datastore to create the cloud-init disk in (defaults to `local-lvm`).
I had set `datastore_id` in the `initialization` block, thinking it would place the cloud-init disk into `vm_disks` (a ZFS pool):
```hcl
initialization {
  # ......
  datastore_id = "vm_disks"
  # ....
}
```
That's why I omitted `datastore_id` in the `disk` block, thinking the disk would also be placed into `vm_disks` (the ZFS pool). Instead it defaulted to `local-lvm` (LVM-thin), and the disks were placed there in raw format:
```hcl
disk {
  interface = "scsi0"
  file_id   = "isos:iso/ubuntu-cloud-init-2022.4-minimal-amd64.img"
  discard   = "on"
  size      = 16
}
```
After correcting myself, I tried `datastore_id = "local-lvm"` for both `initialization` and `disk`. Both with and without the loop, multiple VMs were created and resized correctly, just as they should be (disks placed in `local-lvm`, LVM-thin). :)
Next, I tried `datastore_id = "vm_disks"` for both `initialization` and `disk` to see whether the ZFS pool vs. `local-lvm` (LVM-thin) makes a difference. And yes: both with and without the loop, for multiple VMs the disk resize fails with a timeout.
- Creation tasks: the VMs are created almost instantly => OK
- `TASK ERROR: command '/usr/bin/qemu-img resize -f ..... failed: got timeout`
- Terraform exits with no errors.
- I check the UI: the disks are not resized (in the `vm_disks` datastore, ZFS pool).
I apply terraform a second time:

```
  ~ disk {
        ~ size = 2 -> 16
          # (9 unchanged attributes hidden)
    }
    # (8 unchanged blocks hidden)
}

Plan: 0 to add, 4 to change, 0 to destroy.
```
- Almost instantly, the 4 resize tasks went OK.
- I powered up one VM; `df -h` showed 16 GB, exactly as desired.
Unfortunately, I have to apply twice to correct the state and the Proxmox VM hardware disk properties, and then start the VMs manually.
If the VM's state is started (in Terraform), the disks are not resized on the first `terraform apply`; my cloud-init config has to install some packages, and since the disks are not resized, it runs out of space.
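As a workaround sketch (assuming the bpg provider's `started` attribute; the resource name here is an illustrative placeholder, not my real config), keeping the VMs powered off on the first apply gives the resize a chance to finish before cloud-init ever boots:

```hcl
resource "proxmox_virtual_environment_vm" "example" {
  # Hypothetical workaround: keep the VM powered off on the first apply
  # so the qemu-img resize can complete before cloud-init runs out of space.
  started = false

  # disk, initialization, etc. omitted
}
```

Starting the VMs afterwards (manually, or by flipping `started` in a later apply) is still a second step, but cloud-init no longer runs against an undersized disk.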
So you might be right that it has something to do with the ZFS pool:
- with `local-lvm`: IO delay spiked briefly to around 40%
- with ZFS: IO delay spiked briefly to around 60%

I will focus more on your suggestion about ZFS. If I have updates, I will post them.
from terraform-provider-proxmox.
You were right, it had something to do with ZFS, or maybe it was because I stored the VM disks on a "dir" type storage. Anyway, here is how I solved my problem.
I destroyed my ZFS pool and created it again. Here is my PVE storage config, /etc/pve/storage.cfg (this time I use the zfspool type, not dir):
```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: local
        path /var/lib/vz
        content backup,iso,vztmpl,images,rootdir,snippets
        prune-backups keep-all=1

# ANSIBLE MANAGED BLOCK: add mounts
dir: zfs_dir_isos
        path /tank/zfs_dir_isos
        content iso
        shared 0

dir: zfs_dir_vm_disks
        path /tank/zfs_dir_vm_disks
        content images,vztmpl,snippets
        shared 0

dir: zfs_dir_backups
        path /tank/zfs_dir_backups
        content backup
        shared 0

zfspool: zfs_pool_vm_disks
        pool tank
        content images,rootdir
        mountpoint /tank
        sparse 1
# ANSIBLE MANAGED BLOCK: add mounts
```
Terraform:

```hcl
# ....
disk {
  datastore_id = "zfs_pool_vm_disks"
  interface    = "virtio0"
  file_id      = "zfs_dir_isos:iso/jammy-server-cloudimg-amd64.img"
  discard      = "ignore"
  size         = 16
  file_format  = "qcow2"
}

initialization {
  # .....
  datastore_id = "zfs_pool_vm_disks"
  # ......
}
```
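For context, here is how those blocks fit into a complete resource (a minimal sketch; the resource name, node name, and user account are illustrative placeholders, not from my real config):

```hcl
resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  node_name = "pve" # hypothetical node name

  disk {
    datastore_id = "zfs_pool_vm_disks"
    interface    = "virtio0"
    file_id      = "zfs_dir_isos:iso/jammy-server-cloudimg-amd64.img"
    discard      = "ignore"
    size         = 16
    file_format  = "qcow2"
  }

  initialization {
    datastore_id = "zfs_pool_vm_disks"

    user_account {
      username = "ubuntu" # hypothetical cloud-init user
    }
  }
}
```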
`parallelism=2` (i.e. `terraform apply -parallelism=2`) is best, at least for me.
If I set it to 3 or 4, I very frequently get this error:
```
module.heye_vms_s3.proxmox_virtual_environment_vm.ubuntu_cloudinit_vms_s3["703"]: Creation complete after 45s [id=703]
module.heye_vms_s3.proxmox_virtual_environment_vm.ubuntu_cloudinit_vms_s3["700"]: Creation complete after 45s [id=700]
╷
│ Error: trying to acquire lock...
│ can't lock file '/var/lock/pve-manager/pve-storage-zfs_pool_vm_disks' - got timeout
│ 400 Parameter verification failed.
│ virtio0: invalid format - format error
│ virtio0.file: invalid format - unable to parse volume ID 'zfs_pool_vm_disks:'
│
│ qm set <vmid> [OPTIONS]
│
│   with module.heye_vms_s3.proxmox_virtual_environment_vm.ubuntu_cloudinit_vms_s3["702"],
│   on ../_modules/proxmox/vms/s3/compute.tf line 28, in resource "proxmox_virtual_environment_vm" "ubuntu_cloudinit_vms_s3":
│   28: resource "proxmox_virtual_environment_vm" "ubuntu_cloudinit_vms_s3" {
│
╵
╷
│ Error: trying to acquire lock...
│ OK
│ command 'zfs create -s -V 2306048k tank/vm-701-disk-0' failed: got timeout
│ 400 Parameter verification failed.
│ virtio0: invalid format - format error
│ virtio0.file: invalid format - unable to parse volume ID 'zfs_pool_vm_disks:'
│
│ qm set <vmid> [OPTIONS]
│
│   with module.heye_vms_s3.proxmox_virtual_environment_vm.ubuntu_cloudinit_vms_s3["701"],
│   on ../_modules/proxmox/vms/s3/compute.tf line 28, in resource "proxmox_virtual_environment_vm" "ubuntu_cloudinit_vms_s3":
╵
```
But I'm happy to run it with `parallelism=2`: no errors, bulletproof. :)
Thank you very much for the support @bpg
Interesting... there was another similar use case: #831, also on ZFS. Curious if we're hitting some IO limitation there.
Could you also try the virtio interface for your disks? It's supposed to have better performance than scsi.