terraform-provider-proxmox's People

Contributors

abdo-farag, allcontributors[bot], bitchecker, blz-ea, bpg, bpg-autobot[bot], dandaolrian, danielhabenicht, danitso-dp, dependabot[bot], forsakenharmony, kaje783, kugo12, luhahn, michaelze, michalg91, otopetrik, qazbnm456, rafsaf, renovate[bot], rgl, sergelogvinov, simoncaron, svengreb, thedevminertv, thenotary, tseeker, xonvanetta, zharalim, zmingxie


terraform-provider-proxmox's Issues

Typing error - dvResourceVirtualEnvironmentVMAgentEnabled instead of dvResourceVirtualEnvironmentVMAgentTrim

This line:

mkResourceVirtualEnvironmentVMAgentTrim: dvResourceVirtualEnvironmentVMAgentEnabled,

looks, in context:

mkResourceVirtualEnvironmentVMAgentEnabled: dvResourceVirtualEnvironmentVMAgentEnabled,
mkResourceVirtualEnvironmentVMAgentTimeout: dvResourceVirtualEnvironmentVMAgentTimeout,
mkResourceVirtualEnvironmentVMAgentTrim: dvResourceVirtualEnvironmentVMAgentEnabled,
mkResourceVirtualEnvironmentVMAgentType: dvResourceVirtualEnvironmentVMAgentType,

like a typing error: the Trim key is presumably meant to default to dvResourceVirtualEnvironmentVMAgentTrim.

failed to determine the IP address of "<node-name>" when using `proxmox_virtual_environment_file`

Describe the bug
When using proxmox_virtual_environment_file resources, the node_name doesn't seem to resolve. When using proxmox_virtual_environment_vm resources, the same node_name works fine.

To Reproduce
Not sure what precisely about my environment is causing this, but I currently don't have a local DNS server (I'm actually building the VM to host one right now), which might be the cause of this issue.

The following file will reproduce:

terraform {
  required_providers {
    proxmox = {
      source = "bpg/proxmox"
      version = "0.6.1"
    }
    cloudinit = {
      source = "hashicorp/cloudinit"
      version = "2.2.0"
    }
  }
}

provider "proxmox" {
  virtual_environment {
    endpoint = var.proxmox_endpoint
    username = var.proxmox_username
    insecure = true
  }
}


resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "local"
  node_name    = "moneta"

  source_raw {
    data = "blah"
    file_name = "cloud-config.yaml"
  }
}

Expected behavior
The file should be created.

cicustom vendor.yml

I'm trying to add a custom cloud-init config file. The documentation mentions that a custom user_data_file_id will conflict with the cloud-init configs that Proxmox creates (which are user, network and meta). That makes sense: either Proxmox generates them or you supply them.

It's possible to extend the cloud-init configuration without interfering with the user, network and meta configs that Proxmox generates. I propose implementing the vendor config by adding a vendor_data_file_id argument to the Terraform config.

I've manually tested successfully that a vendor config file does not interfere with the cloud-init configs that Proxmox generates. Currently I add this line to the vmid config file:
cicustom: vendor=local:snippets/vendor.yaml
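
A sketch of how the proposed argument might look, mirroring the existing user_data_file_id (vendor_data_file_id is the proposed name, not an existing attribute; resource names are illustrative):

resource "proxmox_virtual_environment_vm" "example" {
  # ...
  initialization {
    user_data_file_id   = proxmox_virtual_environment_file.user_config.id
    vendor_data_file_id = proxmox_virtual_environment_file.vendor_config.id # proposed
  }
}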

Add support for "G", "M" etc. disk size unit spec

Currently the unit of the virtual_environment_vm.disk.size parameter is gigabytes. Add the ability to specify different units using G/GB or M/MB notation.

│ Error: Incorrect attribute value type
│ 
│   on main.tf line 117, in resource "proxmox_virtual_environment_vm" "example_vm":
│  117:     size         = "300G"
│ 
│ Inappropriate value for attribute "size": a number is required.
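
Until unit suffixes are supported, size has to be a bare number of gigabytes; a minimal working sketch of the current schema (datastore name is illustrative):

resource "proxmox_virtual_environment_vm" "example_vm" {
  # ...
  disk {
    datastore_id = "local-lvm"
    interface    = "scsi0"
    size         = 300 # gigabytes; "300G" is rejected as shown above
  }
}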

Unable to create VM using raw, xz compressed cloud image

Describe the bug
When using a raw, xz-compressed cloud image, VM creation fails no matter which disk format is specified.

To Reproduce
With the following resource, run terraform apply:

resource "proxmox_virtual_environment_vm" "reproduce_bug" {
  name        = var.vm_name
  started     = true

  node_name = var.proxmox_node_name
  vm_id     = random_integer.vmid.result

  disk {
    datastore_id = "local"
    file_format  = "raw"
    interface    = "scsi0"
    file_id      = proxmox_virtual_environment_file.debian_cloud_image.id
    size         = "10"
  }
}

Expected behavior
VM created using the specified image (in this case : https://cloud.debian.org/images/cloud/bookworm/daily/latest/debian-12-genericcloud-amd64-daily.tar.xz)

Actual behavior
The creation fails:

  • if file_format is set to raw, there is an extension-handling issue, and the output is:
│ Error: WARNING: Image format was not specified for '/tmp/vm-487-disk-0.raw' and probing guessed raw.
│          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
│          Specify the 'raw' format explicitly to remove the restrictions.
│ Image resized.
│ unable to parse volume filename '487/vm-487-disk-0.qcow2.qcow2'
  • if file_format is set to qcow2, there is an extension-handling issue, and the output is:
│ Error: WARNING: Image format was not specified for '/tmp/vm-487-disk-0.qcow2' and probing guessed raw.
│          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
│          Specify the 'raw' format explicitly to remove the restrictions.
│ Image resized.
│ qemu-img: Could not open '/tmp/vm-487-disk-0.qcow2': Image is not in qcow2 format
│ copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O qcow2 /tmp/vm-487-disk-0.qcow2 zeroinit:/var/lib/vz/images/487/vm-487-disk-0.qcow2' failed: exit code 1
│ unable to parse volume filename '.qcow2'


[BUG] SIGSEGV if cloned VM disk is in a different storage

Describe the bug
The plugin throws a SIGSEGV if the cloned VM's disk is in a different storage.

╷
│ Error: Plugin did not respond
│ 
│   with module.ubuntu_vm.proxmox_virtual_environment_vm.ubuntu_vm,
│   on ../terraform-proxmox-ubuntu2004/main.tf line 1, in resource "proxmox_virtual_environment_vm" "ubuntu_vm":
│    1: resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-proxmox_v0.4.6_x4 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xd87a19]

goroutine 22 [running]:
github.com/danitso/terraform-provider-proxmox/proxmox.(*VirtualEnvironmentClient).MoveVMDisk(0xc000678460, 0xc00082c280, 0x7, 0x67, 0xc0004c6500, 0x708, 0xc000535860, 0x2)
        /home/pasha/code/terraform-provider-proxmox/proxmox/virtual_environment_vm.go:164 +0x79
github.com/danitso/terraform-provider-proxmox/proxmoxtf.resourceVirtualEnvironmentVMCreateClone(0xc0003eb730, 0xf71380, 0xc000678460, 0xee7b40, 0xc000841f38)
        /home/pasha/code/terraform-provider-proxmox/proxmoxtf/resource_virtual_environment_vm.go:1480 +0x2808
github.com/danitso/terraform-provider-proxmox/proxmoxtf.resourceVirtualEnvironmentVMCreate(0xc0003eb730, 0xf71380, 0xc000678460, 0x2, 0x182d760)
        /home/pasha/code/terraform-provider-proxmox/proxmoxtf/resource_virtual_environment_vm.go:1056 +0x88
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc0000be870, 0xc00047e140, 0xc0005f0d40, 0xf71380, 0xc000678460, 0xf43701, 0xc00016bd18, 0xc000390b10)
        /home/pasha/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/resource.go:310 +0x375
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc0003a0280, 0xc00059fa38, 0xc00047e140, 0xc0005f0d40, 0xc00030c088, 0xc00012cbf0, 0xf457c0)
        /home/pasha/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/provider.go:294 +0x99
github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000010f58, 0x1253fd0, 0xc00024c1b0, 0xc00026a000, 0xc000010f58, 0xc00024c1b0, 0xc000646ba0)
        /home/pasha/go/pkg/mod/github.com/hashicorp/[email protected]/internal/helper/plugin/grpc_provider.go:885 +0x8a5
github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1039780, 0xc000010f58, 0x1253fd0, 0xc00024c1b0, 0xc000264060, 0x0, 0x1253fd0, 0xc00024c1b0, 0xc000269000, 0x7a2)
        /home/pasha/go/pkg/mod/github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3305 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000311dc0, 0x125d398, 0xc000582600, 0xc000266000, 0xc0004974d0, 0x17edaa0, 0x0, 0x0, 0x0)
        /home/pasha/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc000311dc0, 0x125d398, 0xc000582600, 0xc000266000, 0x0)
        /home/pasha/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc00003e3a0, 0xc000311dc0, 0x125d398, 0xc000582600, 0xc000266000)
        /home/pasha/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/pasha/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd

Error: The terraform-provider-proxmox_v0.4.6_x4 plugin crashed!
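
A minimal sketch of a configuration that appears to trigger the crash, assuming the clone block targets a datastore different from the one holding the template's disk (all names and IDs are illustrative):

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  name      = "ubuntu-vm"
  node_name = "pve"

  clone {
    vm_id        = 8000  # template whose disk lives on, e.g., local-lvm
    datastore_id = "nfs" # a different target storage triggers the disk move path
  }
}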

Error when creating `proxmox_virtual_environment_file` with `source_file`: Provider produced inconsistent result after apply

Describe the bug
I'm currently playing around with the example and wanted to move the cloud-init user config to a separate file and replace the option source_raw with source_file.

When I do it, during terraform apply, I get the following error:

│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to proxmox_virtual_environment_file.user_config, provider "provider[\"gitlab.com/bpg/proxmox\"]" produced an unexpected new value: Root resource was present, but now absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

To Reproduce
In the example folder, in the file resource_virtual_environment_file.tf, I replaced the source_raw block

source_raw {
    data = <<EOF
#cloud-config
chpasswd:
  list: |
    ubuntu:example
  expire: false
hostname: terraform-provider-proxmox-example
users:
  - default
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    ssh-authorized-keys:
      - ${trimspace(tls_private_key.example.public_key_openssh)}
    sudo: ALL=(ALL) NOPASSWD:ALL
    EOF

    file_name = "terraform-provider-proxmox-example-user-config.yaml"
}

with

source_file {
    path = "files/user_config.yml"
    file_name = "terraform-provider-proxmox-example-user-config.yaml"
  }

And moved those configs to a file files/user_config.yml within the example directory.
(I applied some modifications so it doesn't use that tls private key anymore).

Expected behavior
I expected it to work the same way as before, creating the resource on the Proxmox server.


Additional info
I tried using both the absolute and relative paths, and also a URL, but it didn't help.

Add support for "iothread" disk option for VM

Is your feature request related to a problem? Please describe.
There is an additional flag in Proxmox (also other qemu systems) to allow IOThreads on disk controllers: https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines

Describe the solution you'd like
A boolean parameter on the disk resource allowing iothread to be enabled (see the sketch below).

Additional context
Recommended setting for best performance.

The IO thread option allows each disk image to have its own thread instead of waiting in a queue with everything else. Since disk I/O is no longer waiting due to having its own thread, it does not hold up other tasks or queues related to the VM, which in turn speeds up overall VM performance besides providing increased disk performance.
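
A sketch of the requested boolean, mirroring how other disk flags in this provider look (the iothread name follows the Proxmox option):

disk {
  datastore_id = "local-lvm"
  interface    = "virtio0"
  iothread     = true # requested flag
}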

Enable more golangci linters and fix found issues

This story is a follow-up to #149.

An example golangci configuration can be taken from https://freshman.tech/linting-golang/.
Simply adding that file produces a trove of errors, some more important than others.
At least we should define

issues:
  new-from-rev: 0fad160ed61cf763ce294a76e35b8c0f56cd33e8

to make sure the new code is properly linted. But ultimately the goal is to fix the most important of them, and define a strategy for addressing the rest.

cannot unmarshal string into Go struct field VirtualEnvironmentVMGetResponseData.data.efidisk0

Describe the bug
Error when cloning a template with an efi disk

To Reproduce
Steps to reproduce the behavior:

  1. Create a vm/template with ovmf (uefi) as biostype and add a efi disk
  2. terraform plan:

var.tf

variable "pve" {
	type = object({
		address    = string
		username   = string
		password   = string	
		insecure   = string	
	})
	sensitive = true
}

variable "vms" {
	type = list(object({
	  id          = number
	  name        = string
          node        = optional(string)
	  description = string
          type        = string
          config      = object({
		  template_id = number
	  })
	}))
}

main.tf

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.9.1"
    }
  }
}

provider "proxmox" {
  virtual_environment {
    endpoint = var.pve.address
    username = var.pve.username
    password = var.pve.password
    insecure = var.pve.insecure
  }
}

resource "proxmox_virtual_environment_vm" "vms" {
  for_each = { for vm in var.vms : vm.name => vm }

  name                = each.key
  node_name           = "proxmox01"
  started             = true
  timeout_shutdown_vm = 10

  clone {
    vm_id   = each.value.config.template_id
    retries = 20
  }
}
  3. terraform apply
  4. See error
│ Error: failed to decode HTTP GET response (path: nodes/pve01/qemu/102/config) - Reason: json: cannot unmarshal string into Go struct field VirtualEnvironmentVMGetResponseData.data.efidisk0 of type proxmox.CustomEFIDisk

Expected behavior
Create vm without a problem

Additional context
Unfortunately, not using an efi disk does not solve the problem.
Terraform aborts the apply step because proxmox issues a warning that no efi disk exists and creates a temporary one.

# terraform:
│ Error: task "UPID:pve01:0026321B:0AB4A14F:63BBE15C:qmstart:102:root@pam:" on node "pve01" failed to complete with error: WARNINGS: 1

# proxmox:
WARN: no efidisk configured! Using temporary efivars disk.
TASK WARNINGS: 1

Add support for setting `hostpci` devices to a VM

Is your feature request related to a problem? Please describe.

When I provision VMs via Terraform that require some PCI device from the host to be passed through, I have to configure said device post-provisioning with Ansible or something similar.

Describe the solution you'd like

Just like we have the disk or network_device blocks, we should also have a hostpci block in which we can provide the device IDs to be passed through to the VM (see the sketch below).

Describe alternatives you've considered

N/A

Additional context

N/A
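
A sketch of what the proposed block might look like, by analogy with the disk and network_device blocks (attribute names are illustrative, not an existing schema):

resource "proxmox_virtual_environment_vm" "example" {
  # ...
  hostpci {
    device = "hostpci0"     # illustrative: slot name in the VM config
    id     = "0000:01:00.0" # illustrative: host PCI device ID
    pcie   = true
  }
}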

Powered off VM breaks plan/apply

Describe the bug
If the VM is not started at the time of terraform plan, the outputs ipv4_addresses, ipv6_addresses and network_interface_names are empty, even though a successful terraform apply would start the VM.
That breaks plan/apply for configurations that use those outputs.

To Reproduce
Steps to reproduce the behavior:
  1. Define the following configuration:

resource "proxmox_virtual_environment_vm" "example" {
  name        = "terraform-provider-proxmox-example"
  started = true
  # on_boot defaults to false
  ...
}
# simulating a resource depending on VM's ip addresses
resource "local_file" "foo" {
    content  = "VM IP is ${element(element(proxmox_virtual_environment_vm.example.ipv4_addresses, index(proxmox_virtual_environment_vm.example.network_interface_names, "eth0")), 0)}"
    filename = "${path.module}/vm_ip.txt"
}
  2. terraform apply
  3. Shutdown the VM using the Proxmox web interface (e.g. simulating a Proxmox host reboot or powering down currently unused VMs)
  4. terraform apply
  5. See error:
local_file.foo: Refreshing state... [id=e46a7970f9014e8ab439e6c519da222242f4dd2e]
╷
│ Error: Error in function call
│ 
│   on main.tf line 99, in resource "local_file" "foo":
│   99:     content  = "VM IP is ${element(element(proxmox_virtual_environment_vm.example.ipv4_addresses, index(proxmox_virtual_environment_vm.example.network_interface_names, "eth0")), 0)}"
│     ├────────────────
│     │ proxmox_virtual_environment_vm.example.network_interface_names is empty list of string
│ 
│ Call to function "index" failed: cannot search an empty list.
╵

Expected behavior
terraform plan should not fail, and terraform apply should start the VM.

Supporting token auth

Is your feature request related to a problem? Please describe.
Proxmox has a concept of API Tokens that provide stateless access to the REST API. Compared to password auth for accessing the API, tokens can offer some benefits:

  • A much simpler auth workflow (as it is stateless)
  • More security, as the user credentials are never exposed and tokens can have permissions separate from the user who creates them.

It would be nice to have the option to use token auth in addition to the password-based auth already present; this is not an argument for one over the other. I wanted to get a feeler out to see how you feel about it, and I am open to actually adding support for it. I understand that some of the API may not be accessible using API Tokens, but the Proxmox documentation is vague about it, so I am not sure where the edges are.

Describe the solution you'd like
The provider supports authenticating via API tokens. The declaration may look like:

provider "proxmox" {
  virtual_environment {
    endpoint = "https://10.0.0.2"
    username = "apiuser@pam"
    token_id = "apitokenname"
    token_secret = "337e5340-6485-43d6-a0f6-7c4d3ceda74d" # or sourced from an env var or variable
    insecure = true
  }
}

This will translate into API requests of the form:

curl -k https://10.0.0.2/ -H 'Authorization: PVEAPIToken=apiuser@pam!apitokenname=337e5340-6485-43d6-a0f6-7c4d3ceda74d'

Describe alternatives you've considered
NA

Additional context
NA

Add support for multi network interface in initialization/ip_config/ipv4

Describe the bug
In the part "initialization/ip_config/ipv4" we can only configure 1 network.

The problem now arises when the VM has multiple networks and we want to configure them, this is currently not possible.

For information only:
Proxmox currently sets all other interfaces in the cloud-init part to static and the IP address remains empty.

To Reproduce

  1. terraform plan:

var.tf

variable "pve" {
	type = object({
		address    = string
		username   = string
		password   = string	
		insecure   = string	
	})
	sensitive = true
}

variable "vms" {
	type = list(object({
	  id          = number
	  name        = string
          node        = optional(string)
	  description = string
          type        = string
          config      = object({
		  template_name = string
	  })
          interfaces  = list(object({
		  id        		= number
                 name      		= string
                 network_id 		= number
                 ipv4_address  	= optional(string)
                 mac_address   	= optional(string)
          }))	
	}))
}

main.tf

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.8.0"
    }
  }
}

provider "proxmox" {
  virtual_environment {
    endpoint = var.pve.address
    username = var.pve.username
    password = var.pve.password
    insecure = var.pve.insecure
  }
}

resource "proxmox_virtual_environment_vm" "vms" {
  for_each = { for vm in var.vms : vm.name => vm }

  name                = each.key
  node_name           = "proxmox01"
  started             = true
  timeout_shutdown_vm = 10

  dynamic "network_device" {
    for_each = { for network in each.value.interfaces : network.id => network }
    content {
      model       = "e1000"
      mac_address = network_device.value.mac_address != null ? network_device.value.mac_address : ""
      bridge      = "vmbr0"
    }
  }

  initialization {
    ip_config {
      dynamic "ipv4" {
        for_each = { for network in each.value.interfaces : network.id => network }
        content {
          address = ipv4.value.ipv4_address != null ? ipv4.value.ipv4_address : "dhcp"
        }
      }
    }
  }
}
  2. See error
 Error: Too many ipv4 blocks
│ 
│   on main.tf line 106, in resource "proxmox_virtual_environment_vm" "vms":
│  106:                     content {
│ 
│ No more than 1 "ipv4" blocks are allowed

Expected behavior
Allow configuring as many IP addresses as there are network interfaces (see the sketch below).
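
One possible shape would be to let the ip_config block repeat, matching network_device blocks by position (a sketch of the proposal, not the current schema):

initialization {
  ip_config {
    ipv4 { address = "10.0.0.10/24" } # first network_device
  }
  ip_config {
    ipv4 { address = "dhcp" } # second network_device
  }
}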

Terraform fails to create multiple VMs when done in quick succession

Describe the bug
Terraform fails to create multiple VMs when done in quick succession. When attempting to create and provision >= 4 VMs, I'm running into a bizarre error where Terraform appears to try to resize disks and fails due to VMs being locked. In a batch of 20 VMs, 3 will succeed; trying again may yield +/-1 successful VM. Failed VMs show up in the Proxmox UI but are notably missing a boot disk. The QEMU agent is enabled in Terraform, and I am using cloud-init Ubuntu images. The QEMU agent appears enabled on failed VMs.

To Reproduce

locals {
    vm_mac_address_start_offset = 181124894621696
}

provider "proxmox" {
  virtual_environment {
    endpoint = "https://10.0.2.210:8006/"
    username = <configure for demo>
  password = <configure for demo>
    insecure = true
  }
}


#===============================================================================
# Cloud Config (cloud-init) -- Redis
#===============================================================================

resource "proxmox_virtual_environment_file" "redis_config" {
  content_type = "snippets"
  datastore_id = "local"
  node_name    = "pve"

  source_raw {
    data = <<EOF
#cloud-config
hostname: redis
users:
  - default
  - name: maxwlang
    groups: sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
  - name: terraform
    groups: 
      - sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
    - apt update
    - apt install -y qemu-guest-agent net-tools zsh
    - timedatectl set-timezone America/Chicago
    - systemctl enable qemu-guest-agent
    - systemctl start qemu-guest-agent
    - echo "done" > /tmp/vendor-cloud-init-done
    EOF

    file_name = "terraform--cloud-config--redis-config.yaml"
  }
}


#===============================================================================
# Ubuntu Focal Cloud Img
#===============================================================================

resource "proxmox_virtual_environment_file" "ubuntu_focal_fossa_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve"

  source_file {
    path = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
  }
}


#===============================================================================
# VM -- Redis
#===============================================================================

resource "proxmox_virtual_environment_vm" "virtualmachine" {
  depends_on = [
    proxmox_virtual_environment_file.redis_config,
    proxmox_virtual_environment_file.ubuntu_focal_fossa_cloud_image
  ]
  count = 20
  name        = "test-${count.index}"
  description = "test"
  node_name   = "pve"
  vm_id       = 100 + count.index
  on_boot     = true

  agent {
    enabled = true
  }

  cpu {
    cores = 2
  }

  memory {
    dedicated = 2048
  }

  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_file.ubuntu_focal_fossa_cloud_image.id
    interface    = "virtio0"
    discard      = "on"
    iothread     = true
    size         = 20
  }

  initialization {
    ip_config {
      ipv4 {
        address = "10.0.2.${100 + count.index}/24"
        gateway = "10.0.2.1"
      }
    }
    user_data_file_id = proxmox_virtual_environment_file.redis_config.id
  }

  operating_system {
    type = "l26"
  }

  network_device {
    mac_address = trimsuffix(replace(format("%X", local.vm_mac_address_start_offset + count.index), "/(..)/", "$1:"), ":")
  }
}

Expected behavior
All VMs should be created, contain a boot disk, and start up.

Screenshots
End result of a run, VM Options, and System (screenshots omitted)

Additional context
TF_LOG=trace errors:

" tf_proto_version=5.3 tf_resource_type=proxmox_virtual_environment_vm tf_rpc=ApplyResourceChange timestamp=2022-12-15T20:37:36.176-0600
2022-12-15T20:37:36.176-0600 [TRACE] provider.terraform-provider-proxmox_v0.7.0: Served request: tf_req_id=244236c6-c7c3-44ff-5e3c-3550efdae3bf tf_resource_type=proxmox_virtual_environment_vm @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:831 tf_rpc=ApplyResourceChange @module=sdk.proto tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp=2022-12-15T20:37:36.176-0600
2022-12-15T20:37:36.177-0600 [TRACE] maybeTainted: proxmox_virtual_environment_vm.virtualmachine[3] encountered an error during creation, so it is now marked as tainted
2022-12-15T20:37:36.177-0600 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for proxmox_virtual_environment_vm.virtualmachine[3]
2022-12-15T20:37:36.177-0600 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for proxmox_virtual_environment_vm.virtualmachine[3]
2022-12-15T20:37:36.177-0600 [TRACE] evalApplyProvisioners: proxmox_virtual_environment_vm.virtualmachine[3] is tainted, so skipping provisioning
2022-12-15T20:37:36.177-0600 [TRACE] maybeTainted: proxmox_virtual_environment_vm.virtualmachine[3] was already tainted, so nothing to do
2022-12-15T20:37:36.177-0600 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for proxmox_virtual_environment_vm.virtualmachine[3]
2022-12-15T20:37:36.177-0600 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for proxmox_virtual_environment_vm.virtualmachine[3]
2022-12-15T20:37:36.177-0600 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2022-12-15T20:37:36.178-0600 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 59
2022-12-15T20:37:36.178-0600 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2022-12-15T20:37:36.181-0600 [ERROR] vertex "proxmox_virtual_environment_vm.virtualmachine[3]" error: Image resized.
VM is locked (create)
400 Parameter verification failed.
virtio0: invalid format - format error
virtio0.file: invalid format - unable to parse volume ID 'local-lvm:'


qm set <vmid> [OPTIONS]
2022-12-15T20:37:36.181-0600 [TRACE] vertex "proxmox_virtual_environment_vm.virtualmachine[3]": visit complete, with errors
2022-12-15T20:37:36.193-0600 [ERROR] provider.terraform-provider-proxmox_v0.7.0: Failed to close ssh session: tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_id=da8e8ed0-a4da-954b-c2ee-323fc274e0c1 error=EOF @module=proxmox tf_resource_type=proxmox_virtual_environment_vm tf_rpc=ApplyResourceChange @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_nodes.go:46 timestamp=2022-12-15T20:37:36.193-0600
2022-12-15T20:37:36.193-0600 [TRACE] provider.terraform-provider-proxmox_v0.7.0: Called downstream: tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_id=da8e8ed0-a4da-954b-c2ee-323fc274e0c1 tf_resource_type=proxmox_virtual_environment_vm tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:838 @module=sdk.helper_schema timestamp=2022-12-15T20:37:36.193-0600
2022-12-15T20:37:36.194-0600 [TRACE] provider.terraform-provider-proxmox_v0.7.0: Received downstream response: diagnostic_error_count=1 diagnostic_warning_count=0 tf_req_id=da8e8ed0-a4da-954b-c2ee-323fc274e0c1 tf_resource_type=proxmox_virtual_environment_vm @module=sdk.proto tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_duration_ms=3171 tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 timestamp=2022-12-15T20:37:36.193-0600
2022-12-15T20:37:36.194-0600 [ERROR] provider.terraform-provider-proxmox_v0.7.0: Response contains error diagnostic: diagnostic_severity=ERROR tf_proto_version=5.3 tf_req_id=da8e8ed0-a4da-954b-c2ee-323fc274e0c1 tf_resource_type=proxmox_virtual_environment_vm diagnostic_summary="Image resized.
VM is locked (create)
400 Parameter verification failed.
virtio0: invalid format - format error
virtio0.file: invalid format - unable to parse volume ID 'local-lvm:'


qm set <vmid> [OPTIONS]

versions.tf

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.7.0"
    }
  }
}

CDKTF doesn't extract provider values

Describe the bug
When using CDKTF (v0.12.1) and configuring the provider, I cannot add a "virtual_environment {}" block; the error "Argument of type virtualEnvironment is not assignable to parameter of type 'ProxmoxProviderConfig'" appears. CDKTF only recognizes the "alias" parameter. Without the parameter, deployment seems to fail on this:

│ Error: you must specify the virtual environment details in the provider configuration
│ 
│   with proxmox_virtual_environment_pool.resource-pool,
│   on cdk.tf.json line 37, in resource.proxmox_virtual_environment_pool.resource-pool:
│   37:       }

To Reproduce
Steps to reproduce the behavior:
In a cdktf project, create a proxmox resource and declare the provider:

    // Setup provider
    new ProxmoxProvider(this, 'proxmox', {});

run cdktf deploy

Expected behavior
The provider should be generated correctly

Additional context
None at this point

VM Disks are getting reordered on apply causing VM re-creation

Describe the bug
If a VM has multiple disks defined, applying the same template a second time may cause VM recreation, as the provider thinks that the disks have changed.

To Reproduce
Steps to reproduce the behavior:

  1. Define a VM template with multiple disks, for example
  disk {
    datastore_id = local.datastore_id
    file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
    interface    = "virtio0"
    iothread     = true
  }

  disk {
    datastore_id = local.datastore_id
    interface    = "scsi0"
    discard      = "on"
    ssd          = true
  }

  disk {
    datastore_id = "nfs"
    interface    = "scsi1"
    discard      = "ignore"
    file_format  = "raw"
  }
  2. Apply the template with terraform apply
  3. Without any changes to the template, run apply again
  4. Notice that terraform detected changes in the template:

Screenshot 2022-12-13 at 6 01 22 PM

Expected behavior
A second apply of the template without any changes should cause no-op from terraform.

Additional context
Order of disks in the PVE UI after apply:
Screenshot 2022-12-13 at 6 17 45 PM
This seems to be a reason for the mismatch.

Workaround
Manually re-order the disk blocks in the template to match the order expected by PVE (alphabetically by interface name), as shown below.
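
Applied to the example above, the workaround means listing the disk blocks alphabetically by interface name (a sketch):

disk {
  datastore_id = local.datastore_id
  interface    = "scsi0"
  discard      = "on"
  ssd          = true
}

disk {
  datastore_id = "nfs"
  interface    = "scsi1"
  discard      = "ignore"
  file_format  = "raw"
}

disk {
  datastore_id = local.datastore_id
  file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
  interface    = "virtio0"
  iothread     = true
}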

Running `make example` sporadically fails with `scsi0: invalid format` error

Describe the bug
Running make example sporadically fails with an scsi0: invalid format error while creating the example VM.

To Reproduce
Steps to reproduce the behavior:

  1. Run make example
  2. If you're unlucky you may see:
tls_private_key.proxmox_virtual_environment_certificate: Creating...
tls_private_key.example: Creating...
proxmox_virtual_environment_cluster_alias.example: Creating...
proxmox_virtual_environment_role.example: Creating...
proxmox_virtual_environment_time.example: Creating...
proxmox_virtual_environment_cluster_ipset.example: Creating...
proxmox_virtual_environment_dns.example: Creating...
proxmox_virtual_environment_file.ubuntu_cloud_image: Creating...
proxmox_virtual_environment_file.ubuntu_container_template: Creating...
proxmox_virtual_environment_hosts.example: Creating...
tls_private_key.proxmox_virtual_environment_certificate: Creation complete after 0s [id=85aa38b34c0aea0f69cee4b56f0d042315b3a97b]
proxmox_virtual_environment_pool.example: Creating...
tls_private_key.example: Creation complete after 0s [id=c281cae3a7bef72c6f7e50eaa93ac65636596fa6]
tls_self_signed_cert.proxmox_virtual_environment_certificate: Creating...
tls_self_signed_cert.proxmox_virtual_environment_certificate: Creation complete after 0s [id=304105590674019447920336097746891321877]
local_sensitive_file.example_ssh_public_key: Creating...
local_sensitive_file.example_ssh_public_key: Creation complete after 0s [id=6448807b6ed81c643253b8da16b51f941ee3b938]
proxmox_virtual_environment_cluster_alias.example: Creation complete after 0s [id=example]
proxmox_virtual_environment_dns.example: Creation complete after 0s [id=pve_dns]
local_sensitive_file.example_ssh_private_key: Creating...
local_sensitive_file.example_ssh_private_key: Creation complete after 0s [id=2f8defdba7d09af60146c5d7b3c50b80b310b3b7]
proxmox_virtual_environment_file.cloud_config: Creating...
proxmox_virtual_environment_certificate.example: Creating...
data.proxmox_virtual_environment_cluster_alias.example: Reading...
proxmox_virtual_environment_role.example: Creation complete after 0s [id=terraform-provider-proxmox-example]
data.proxmox_virtual_environment_cluster_aliases.example: Reading...
proxmox_virtual_environment_pool.example: Creation complete after 0s [id=terraform-provider-proxmox-example]
data.proxmox_virtual_environment_cluster_alias.example: Read complete after 0s [id=example]
data.proxmox_virtual_environment_cluster_aliases.example: Read complete after 0s [id=aliases]
proxmox_virtual_environment_hosts.example: Creation complete after 0s [id=pve_hosts]
data.proxmox_virtual_environment_roles.example: Reading...
proxmox_virtual_environment_cluster_ipset.example: Creation complete after 0s [id=local_network]
proxmox_virtual_environment_time.example: Creation complete after 0s [id=pve_time]
data.proxmox_virtual_environment_role.example: Reading...
data.proxmox_virtual_environment_pools.example: Reading...
data.proxmox_virtual_environment_roles.example: Read complete after 0s [id=roles]
data.proxmox_virtual_environment_pool.example: Reading...
data.proxmox_virtual_environment_pools.example: Read complete after 1s [id=pools]
data.proxmox_virtual_environment_role.example: Read complete after 1s [id=terraform-provider-proxmox-example]
data.proxmox_virtual_environment_pool.example: Read complete after 1s [id=terraform-provider-proxmox-example]
proxmox_virtual_environment_file.cloud_config: Creation complete after 1s [id=local:snippets/terraform-provider-proxmox-example-cloud-config.yaml]
proxmox_virtual_environment_certificate.example: Creation complete after 2s [id=pve_certificate]
proxmox_virtual_environment_file.ubuntu_cloud_image: Still creating... [10s elapsed]
proxmox_virtual_environment_file.ubuntu_container_template: Still creating... [10s elapsed]
proxmox_virtual_environment_file.ubuntu_cloud_image: Still creating... [20s elapsed]
proxmox_virtual_environment_file.ubuntu_container_template: Still creating... [20s elapsed]
proxmox_virtual_environment_file.ubuntu_container_template: Creation complete after 26s [id=local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz]
proxmox_virtual_environment_container.example_template: Creating...
proxmox_virtual_environment_file.ubuntu_cloud_image: Still creating... [30s elapsed]
proxmox_virtual_environment_file.ubuntu_cloud_image: Creation complete after 35s [id=local:iso/bionic-server-cloudimg-amd64.img]
proxmox_virtual_environment_vm.example_template: Creating...
proxmox_virtual_environment_container.example_template: Still creating... [10s elapsed]
proxmox_virtual_environment_container.example_template: Creation complete after 10s [id=2042]
proxmox_virtual_environment_container.example: Creating...
proxmox_virtual_environment_container.example: Still creating... [10s elapsed]
proxmox_virtual_environment_container.example: Still creating... [20s elapsed]
proxmox_virtual_environment_container.example: Creation complete after 28s [id=2043]
╷
│ Error: Image resized.
│ VM is locked (create)
│ 400 Parameter verification failed.
│ scsi0: invalid format - format error
│ scsi0.file: invalid format - unable to parse volume ID 'local-lvm:'
│ 
│ 
│ qm set <vmid> [OPTIONS]
│ 
│ 
│   with proxmox_virtual_environment_vm.example_template,
│   on resource_virtual_environment_vm.tf line 1, in resource "proxmox_virtual_environment_vm" "example_template":
│    1: resource "proxmox_virtual_environment_vm" "example_template" {
│ 
╵
make: *** [example-apply] Error 1
  3. Run make example-destroy && make example again -- no errors 🤷🏻

Expected behavior
make example should not randomly throw an "invalid format" error.

Screenshots
N/A

Additional context
Symptoms suggest there is a race condition somewhere in the provider's code. It happens only for the VM, not the LXC container.

Add argument "tags" for proxmox_virtual_environment_vm

Proxmox VMs and containers support "tags", but the provider lacks the corresponding argument.
For the time being I manually add the necessary tags with qm set <vmid> -tags lab1,rke2_master,keepalived (only keys are allowed, key=value is not allowed).
The tags can be used to leverage dynamic inventories with Ansible.

Please consider adding the "tags" argument to make the manual command obsolete.
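
A sketch of the requested argument, mirroring the qm set syntax above (tags is the proposed name, not an existing attribute):

resource "proxmox_virtual_environment_vm" "example" {
  # ...
  tags = ["lab1", "rke2_master", "keepalived"] # keys only, no key=value
}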

Enable boot order to be manually set

I have a use case where I need to set the boot order (or boot devices) for a VM. As far as I can tell, this is not yet possible with this Terraform provider, but it would be a useful addition.
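
A sketch of one possible shape, assuming a top-level list attribute ordered by boot priority (boot_order is illustrative, not an existing attribute):

resource "proxmox_virtual_environment_vm" "example" {
  # ...
  boot_order = ["virtio0", "net0", "ide2"] # illustrative device names
}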

Waiting for proxmox_virtual_environment_vm's ipv4_addresses does not really work

Describe the bug
Waiting for just any IP address is not enough. The fastest address to appear (IPv6 link-local) is usually not usable.
Even in the case of actual IPv4 and IPv6 addresses, a (random) one of them will be acquired first, making it impossible to reliably connect/provision.

To Reproduce
Steps to reproduce the behavior:

  1. Clone VM from a template that supports both IPv4 and IPv6
  2. Fail to connect for provisioning
  3. See error:
proxmox_virtual_environment_vm.example: Still creating... [40s elapsed]
╷
│ Error: Error in function call
│ 
│   on main.tf line 51, in resource "proxmox_virtual_environment_vm" "example":
│   51:     host = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
│     ├────────────────
│     │ self.ipv4_addresses is list of list of string with 2 elements
│     │ self.network_interface_names is list of string with 2 elements
│ 
│ Call to function "element" failed: cannot use element function with an empty list.
  4. Discover that
    if nic.IPAddresses == nil || (nic.IPAddresses != nil && len(*nic.IPAddresses) == 0) {
    waits for any (IPv4 or IPv6) address on a non-loopback interface
  5. Realize that the provisioning step
    host = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
    has to pick either IPv4 or IPv6 and then needs the provider to wait until the agent reports a suitable address.

Expected behavior
The following line

host = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)

should work...

Screenshots
excerpt from terraform.tfstate

           "ipv4_addresses": [
              [
                "127.0.0.1"
              ],
              []
            ],
            "ipv6_addresses": [
              [
                "::1"
              ],
              [
                "fe80::c88a:70ff:fe37:225f"
              ]
            ],

At this time, the machine can be connected to only over IPv6, and only using a link-local address. Chances are that even in an IPv6-only scenario there is a router between the machine running Terraform and the VM.

Additional context
In

func (c *VirtualEnvironmentClient) WaitForNetworkInterfacesFromVMAgent(ctx context.Context, nodeName string, vmID int, timeout int, delay int, waitForIP bool) (*VirtualEnvironmentVMGetQEMUNetworkInterfacesResponseData, error) {
the waitForIP boolean is not enough.
There should be a more complex condition.
Probably adding something like the following to proxmox_virtual_environment_vm's agent block:

agent {
  enabled = true
  expect_networking {
    ipv4 = true # will wait for IPv4 address even though it already has IPv6 address
    ipv6_link = false # will wait for IPv6, and consider fe80::/10 good enough
    ipv6 = false # will wait for IPv6 address not in fe80::/10
  }
}

(defaulting only ipv4 to true seems reasonable)
The provider should wait until all conditions marked true are met.

An even better solution would be to repeat the expect_networking block per interface:

agent {
  enabled = true
  expect_networking {
    interface = "eth0"
    ipv4 = true
  }
  expect_networking {
    interface = "eth2"
    ipv4 = false
    ipv6 = true
  }
}

This example would result in waiting until eth0 has an IPv4 address and eth2 has a non-link-local IPv6 address (even if the eth1 interface already has IPv4 and IPv6 addresses).
This could be useful for VMs that have static addresses on some interfaces (e.g. router VMs, ...).

Omitting the interface field should allow any interface to fulfil the condition imposed by the block.
Then the first example would be a bit stricter: it would mean that there has to be one interface that has both IPv4 and IPv6 addresses.
A weaker condition, requiring IPv4 and IPv6 addresses but optionally on different interfaces, would look like this:

agent {
  enabled = true
  expect_networking {
    ipv4 = true
    ipv6 = false
  }
  expect_networking {
    ipv4 = false
    ipv6 = true
  }
}

(A VM with a single network interface that has both IPv4 and IPv6 would also fulfil this condition.)

Error creating VM with multiple disks on different storages

Describe the bug
When applying a template like

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  ...
  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
    interface    = "scsi0"
  }

  disk {
    datastore_id = "data"
    interface    = "scsi1"
  }
  ...
}

TF throws an error:

╷
│ Error: Image resized.
│ importing disk '/tmp/vm-4321-disk-1.qcow2' to VM 4321 ...
│   Logical volume "vm-4321-disk-0" created.
│ transferred 0.0 B of 8.0 GiB (0.00%)
| ...
│ transferred 8.0 GiB of 8.0 GiB (100.00%)
│ transferred 8.0 GiB of 8.0 GiB (100.00%)
│ Successfully imported disk as 'unused0:local-lvm:vm-4321-disk-0'
│ update VM 4321: -scsi0 local-lvm:vm-4321-disk-1
│ no such logical volume pve/vm-4321-disk-1

In fact, the provider created two disks with the same name vm-4321-disk-0, one in each of the specified storages, and therefore can't find vm-4321-disk-1 🤷🏻
This error does not occur when both disks are in the same storage.

Expected behaviour
Able to provision a VM with 2 or more disks in different storages.

Screenshots
Screen Shot 2022-07-08 at 9 42 27 PM (image omitted)

timeout while waiting for VM "7001" configuration to become unlocked

Problem:

  • I get timeout while waiting for VM "7001" configuration to become unlocked while cloning a template

Error:
proxmox_virtual_environment_vm.vm-test-debian: Still creating... [13m30s elapsed]
proxmox_virtual_environment_vm.vm-test-debian: Still creating... [13m40s elapsed]
proxmox_virtual_environment_vm.vm-test-debian: Still creating... [13m50s elapsed]
2022-10-12T10:01:44.827+0300 [ERROR] provider.terraform-provider-proxmox_v0.6.2: Response contains error diagnostic: tf_provider_addr=registry.terraform.io/example-namespace/example tf_resource_type=proxmox_virtual_environment_vm @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:55 diagnostic_detail= diagnostic_severity=ERROR diagnostic_summary="timeout while waiting for VM "7001" configuration to become unlocked" tf_proto_version=5.3 tf_req_id=46a764bd-ee63-bfa4-d7d7-7f63685c30d4 tf_rpc=ApplyResourceChange @module=sdk.proto timestamp=2022-10-12T10:01:44.826+0300
2022-10-12T10:01:44.854+0300 [ERROR] vertex "proxmox_virtual_environment_vm.vm-test-debian" error: timeout while waiting for VM "7001" configuration to become unlocked
╷
│ Error: timeout while waiting for VM "7001" configuration to become unlocked
│
│   with proxmox_virtual_environment_vm.vm-test-debian,
│   on main.tf line 46, in resource "proxmox_virtual_environment_vm" "vm-test-debian":
│   46: resource "proxmox_virtual_environment_vm" "vm-test-debian" {
│

How to replicate:
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.6.2"
    }
  }
}

provider "proxmox" {
  virtual_environment {
    endpoint = "https://XXXXXXX:8006"
    username = "xxxxx@pve"
    password = "xxxx"
    insecure = true
  }
}

resource "proxmox_virtual_environment_vm" "vm-test-debian" {
  name        = "terraform-vm1"
  description = "Managed by Terraform"

  node_name = "XX-node1"
  vm_id     = 7001

  clone {
    node_name    = "XX-node1"
    vm_id        = 8000
    datastore_id = "slow"
  }
}

Adding disk causes VM to be re-created

Describe the bug
Adding an additional disk block (in addition to the existing virtio0):

disk {
  interface = "virtio1"
  size = 10
}

causes the VM to be re-created.
That also means losing data on the existing disks...

To Reproduce
Steps to reproduce the behavior:

  1. add disk block to existing (testing) VM
  2. see VM get destroyed and created again

Expected behavior
Adding a disk should not destroy the existing VM and/or cause data loss.
Adding a disk should not even stop/reboot the VM.

Screenshots

...
      }

      ~ cpu {
          - flags        = [] -> null
            # (6 unchanged attributes hidden)
        }

      ~ disk { # forces replacement
            # (4 unchanged attributes hidden)
        }
      + disk { # forces replacement
          + datastore_id = "local-zfs"
          + file_format  = "qcow2" # forces replacement
          + interface    = "virtio1"
          + size         = 10
        }
      ~ initialization {

...

Additional context
Looks like this is wrong:

Proxmox web-interface supports adding disks to running VMs when disk hotplug is enabled.

Disk hotplug detection seems simple: does the list veClient.GetVM(ctx, nodeName, vmID).Hotplug contain the string "hotplug"?

The following command:

curl -v --insecure \
--header 'Authorization: PVEAPIToken=root@pam!mycurltoken=...uuid...' \
--header "Content-Type: application/json" \
--request POST --data '{"virtio1": "local-zfs:20"}' \
https://test01.example.com:8006/api2/json/nodes/test01/qemu/1234/config

successfully adds a disk to running VM.

Looks like the current POST call to '/config' implementation cannot be reused because of the types:

func (c *VirtualEnvironmentClient) UpdateVMAsync(ctx context.Context, nodeName string, vmID int, d *VirtualEnvironmentVMUpdateRequestBody) (*string, error) {
	resBody := &VirtualEnvironmentVMUpdateAsyncResponseBody{}
	err := c.DoRequest(ctx, hmPOST, fmt.Sprintf("nodes/%s/qemu/%d/config", url.PathEscape(nodeName), vmID), d, resBody)

	if err != nil {
		return nil, err
	}

	if resBody.Data == nil {
		return nil, errors.New("the server did not include a data object in the response")
	}

	return resBody.Data, nil
}

But something like the following might work:

type NewStorageDevice struct {
	PoolID *string
	Size   *int
}

type VirtualEnvironmentVMCreateVMDiskRequestData struct {
	SATADevice0       *NewStorageDevice `json:"sata0,omitempty"`
	...
	VirtualIODevice0  *NewStorageDevice `json:"virtio0,omitempty"`
	...
	VirtualIODevice15 *NewStorageDevice `json:"virtio15,omitempty"`
}

func (c *VirtualEnvironmentClient) CreateVMDisk(ctx context.Context, nodeName string, vmID int, d *VirtualEnvironmentVMCreateVMDiskRequestData) (*string, error) {
	resBody := &VirtualEnvironmentVMUpdateAsyncResponseBody{}
	err := c.DoRequest(ctx, hmPOST, fmt.Sprintf("nodes/%s/qemu/%d/config", url.PathEscape(nodeName), vmID), d, resBody)

	if err != nil {
		return nil, err
	}

	if resBody.Data == nil {
		return nil, errors.New("the server did not include a data object in the response")
	}

	return resBody.Data, nil
}

func (r NewStorageDevice) EncodeValues(key string, v *url.Values) error {
	v.Add(key, fmt.Sprintf("%s:%d", *r.PoolID, *r.Size))
	return nil
}

(This is just a guess; I'm not used to the Go language.)

veClient.CreateVMDisk could then be called from resourceVirtualEnvironmentVMUpdateDiskLocationAndSize like the existing disk resize and disk move operations.

Container is not created "unmarshal string into Go struct"

Describe the bug
When running terraform apply, the container is created successfully, but the plugin cannot verify this.
I get this error:

2021-10-28T02:51:49.267-0400 [DEBUG] provider.terraform-provider-proxmox_v0.4.6: 2021/10/28 02:51:49 [DEBUG] Performing HTTP GET request (path: nodes/pxe-server/lxc/101/status/current)
2021-10-28T02:51:49.274-0400 [DEBUG] provider.terraform-provider-proxmox_v0.4.6: 2021/10/28 02:51:49 [DEBUG] WARNING: Failed to decode HTTP GET response (path: nodes/pxe-server/lxc/101/status/current) - Reason: json: cannot unmarshal string into Go struct field VirtualEnvironmentContainerGetStatusResponseData.data.vmid of type int

/proxmox$ pveversion
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.143-1-pve)

[INFO] CLI command args: []string{"version"}
Terraform v1.0.9
on linux_amd64

  • provider registry.terraform.io/bpg/proxmox v0.4.6

Add discard option to vm disk creation

One of the great features in Proxmox is the discard option on disk volumes. Enabling the discard option on disks, in combination with an fstrim cron job or a QEMU agent fstrim inside the VM, would free up discarded space automagically.

An extra option discard=<ignore|on> should be added to diskOptions when setting the virtual machine options in func resourceVirtualEnvironmentVMCreateCustomDisks (see the sketch below).
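
A sketch of the requested option on a disk block, reusing the values named above:

disk {
  datastore_id = "local-lvm"
  interface    = "scsi0"
  discard      = "on" # requested: <ignore|on>
}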

VM disk file_format always gets set to "raw"

Describe the bug
I am unable to deploy a VM with a qcow2 or vmdk file format disk.

To Reproduce
Steps to reproduce the behavior:

  1. Define a proxmox_virtual_environment_vm resource in terraform with a disk that has the file_format set to qcow2 or vmdk
  2. terraform apply

Expected behavior
The hard disk attached matches the definition in the config.

Screenshots
The plan and apply output the following:

+ disk {
  + datastore_id = "local"
  + file_format  = "qcow2"
  + interface    = "scsi0"
  + iothread     = false
  + size         = 32
  + ssd          = false
}

But the provisioned VM has a raw-formatted hard disk attached (screenshot omitted).

Additional context
I'm using v0.9.1 of the provider and Proxmox VE 7.3-3

Disk resize causes reboot

Describe the bug
Increasing the disk size causes the VM to restart.

To Reproduce
Steps to reproduce the behavior:

  1. Resize disk using Proxmox web-interface
  2. VM stays online
  3. Resize disk using terraform-proxmox-provider
  4. Notice reboot

Expected behavior
Increasing disk size should not cause reboot.

Additional context
This is wrong:

Not all disk changes should cause a reboot.

If there are only resize operations, and no move operations, the VM should not needlessly stop and start again.

if len(diskMoveBodies) > 0 || len(diskResizeBodies) > 0 {
	if !template {
		forceStop := proxmox.CustomBool(true)
		shutdownTimeout := d.Get(mkResourceVirtualEnvironmentVMTimeoutShutdownVM).(int)

		err = veClient.ShutdownVM(ctx, nodeName, vmID, &proxmox.VirtualEnvironmentVMShutdownRequestBody{
			ForceStop: &forceStop,
			Timeout:   &shutdownTimeout,
		}, shutdownTimeout+30)
		if err != nil {
			return diag.FromErr(err)
		}
	}

	reboot = false
}

for _, reqBody := range diskMoveBodies {
	moveDiskTimeout := d.Get(mkResourceVirtualEnvironmentVMTimeoutMoveDisk).(int)
	err = veClient.MoveVMDisk(ctx, nodeName, vmID, reqBody, moveDiskTimeout)
	if err != nil {
		return diag.FromErr(err)
	}
}

for _, reqBody := range diskResizeBodies {
	err = veClient.ResizeVMDisk(ctx, nodeName, vmID, reqBody)
	if err != nil {
		return diag.FromErr(err)
	}
}

if (len(diskMoveBodies) > 0 || len(diskResizeBodies) > 0) && started && !template {
	startVMTimeout := d.Get(mkResourceVirtualEnvironmentVMTimeoutStartVM).(int)
	err = veClient.StartVM(ctx, nodeName, vmID, startVMTimeout)
	if err != nil {
		return diag.FromErr(err)
	}
}

kvm_arguments field makes it unable to create vm by non-root user

Describe the bug
Upgrading the provider to a version >= 0.10.0, where pull request #205 was merged, makes it impossible to create a new VM instance as a non-root user.

To Reproduce
Consider the following code:

resource "proxmox_virtual_environment_vm" "my-vm" {
  for_each = local.pve_nfs_map
  lifecycle {
    ignore_changes = [
      ipv4_addresses, ipv6_addresses, network_interface_names, disk[0].file_id,
    ]
  }
  name        = "some-name${each.key}"
  description = "Managed by Terraform"

  node_name = "some-node"
  vm_id     = "40${each.key + 30}"

  agent {
    enabled = true
  }

  cpu {
    cores = 4
  }

  memory {
    dedicated = 4096
  }

  disk {
    datastore_id = "vm_storage"
    file_id      = proxmox_virtual_environment_file.debian_cloud_image["some_file"].id
    interface    = "scsi0"
  }

  initialization {
    ip_config {
      ipv4 {
        address = "10.228.200.${each.key + 30}/16"
        gateway = "10.228.0.1"
      }
    }

    user_data_file_id = proxmox_virtual_environment_file.ox-nfs_cloud_config["dev-pve0${each.value}"].id
  }

  network_device {
    bridge = "vmbr1"
  }

  operating_system {
    type = "l26"
  }

  serial_device {}

}

output:

│ Error: received an HTTP 500 response - Reason: only root can set 'args' config
│ 
│   with proxmox_virtual_environment_vm.my-vm["1"],
│   on my-vm.tf line 1, in resource "proxmox_virtual_environment_vm" "my-vm":
│    1: resource "proxmox_virtual_environment_vm" "my-vm" {

Expected behavior
terraform apply should succeed.

Enable terraform import

Hey there!
First of all, nice work, and thanks for keeping this project alive!

I am using Packer to create templates for my VMs, and it would be amazing if it were possible to import these templates into my Terraform state. So I wanted to ask whether the terraform import functionality could be added to this provider.
Another option could be to create a data source for VMs.

Thanks a lot!

unable to parse volume filename `id/vm-id-disk-x.qcow2.qcow2`

Describe the bug
As a note: this is using an upstream wrapper, and I have also filed an issue there; however, I believe the issue likely stems from here.
I'm interacting with this using Pulumi, not Terraform directly.
When attempting to create a VM Template with a Cloud Init drive, I am given the following error:

  proxmoxve:VM:VirtualMachine (Debian Cloud Template Template Virtual Machine):
    error: 1 error occurred:
        * creating urn:pulumi:infra::homelab-infrastructure::HomeLab:Proxmox:CloudImageTemplate$proxmoxve:VM/virtualMachine:VirtualMachine::Debian Cloud Template Template Virtual Machine: 1 error occurred:
        * Image resized.
    unable to parse volume filename '10000000/vm-10000000-disk-0.qcow2.qcow2'

I've put the relevant code here:
https://gist.github.com/0c370t/1fbaecbbbb32d18d7a9207e07c8b1e5b

To Reproduce
Steps to reproduce the behavior:

  1. Download a qcow2 disk (i.e. Debian 10 Generic Cloud AMD64)
  2. Load that into /your/storage/root/qcow2
  3. Create a VM with some drive scsi0:
{
    "interface": "scsi0",
    "datastoreId": "local",
    "fileId": "local:qcow2/debian-genericcloud.qcow2"
}
  4. See error

Expected behavior
I expected the VM to be created without errors.

Additional context
It looks like the disk and the VM are getting created; the disk just isn't getting mounted/attached (screenshot omitted).

To fix this in the UI, it is simply a matter of hitting "Add"

JSON unmarshal error when deploying LXC container

Describe the bug
make example seems to be hanging on the proxmox_virtual_environment_container.example_template
deployment. Running with debug logs enabled showed that this might be due to a JSON unmarshal error:

2021-09-09T13:18:13.492-0400 [DEBUG] provider.terraform-provider-proxmox_v9999.0.0_x4: 2021/09/09 13:18:13 [DEBUG] WARNING: Failed to decode HTTP GET response (path: nodes/<redacted>/lxc/2042/status/current) - Reason: json: cannot unmarshal number into Go struct field VirtualEnvironmentContainerGetStatusResponseData.data.vmid of type string
proxmox_virtual_environment_vm.example_template: Still creating... [10s elapsed]

To Reproduce
Steps to reproduce the behavior:
make example

Expected behavior
make example should pass without errors.

Screenshots
N/A

Additional context
N/A

Firewall support

Is your feature request related to a problem? Please describe.
I know it's a big request, but support for managing the firewall would be very useful.

Describe the solution you'd like
There are proxmox_virtual_environment_cluster_alias and proxmox_virtual_environment_cluster_ipset already. Most needed is the ability to create security groups at the cluster level and firewall options/rules/aliases/ipsets at the VM/container level. With lower priority, firewall rules/options at the cluster and Proxmox host levels would also be useful.

Describe alternatives you've considered
Managing the firewall as files with tools like Ansible, but it would be much better to combine managing firewall rules with creating VMs/containers.
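
To make the request concrete, a security-group resource could mirror the existing cluster-level resources; everything below is a hypothetical sketch of the desired syntax, not an existing provider resource:

resource "proxmox_virtual_environment_cluster_firewall_security_group" "webservers" {
  name    = "webservers"
  comment = "Managed by Terraform"

  rule {
    type    = "in"
    action  = "ACCEPT"
    proto   = "tcp"
    dport   = "443"
    comment = "Allow HTTPS"
  }
}

A matching block or resource scoped to a VM/container would then attach the group and set per-guest firewall options.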

Cloning template with cloud-init drive set fails when using initialization block

Describe the bug
When I tried to provision from a VM template containing a cloud-init drive, the provider tries to create the drive and the Proxmox API responds with a 500 error, because the file was already created by the clone task.
│ Error: received an HTTP 500 response - Reason: rbd create 'vm-100-cloudinit' error: rbd: create error: (17) File exists

To Reproduce
Steps to reproduce the behavior:

  1. create vm template in proxmox with cloud-init drive (ide2) attached and generate config
  2. provision vm with clone method and use initialization block
  3. See error

Expected behavior
The VM clone should finish, and the cloud-init drive should then be recreated, or skipped if it already exists in the template.

Sample VM

resource "proxmox_virtual_environment_vm" "k8s_master" {
    provider = proxmox.pve
    count = var.vm_count.k8s_masters
    node_name = random_shuffle.k8s_masters.result[count.index]
    description = "created by terraform - username ${var.vm_user.username}"


    name = "${var.hostname}-master${count.index+1}"
    clone {
        datastore_id = "vms"
        vm_id = 9202
        node_name = data.proxmox_virtual_environment_nodes.pve.names[0]
    }
    agent {
        enabled = true
        trim = true
    }
    cpu {
        type = "host"
        cores = 4
        sockets = 2
    }

    memory {
        dedicated = 1024*8
    }

    disk {
        datastore_id = "vms"
        interface = "scsi0"
        size = "100"
        discard = "on"
    }

    network_device {
      bridge = "vmbr0"
      model = "virtio"
      vlan_id = 3031
    }

    on_boot = true
    operating_system {
      type = "l26"
    }

    vga {
        type = "qxl"
    }

    serial_device {
      device = "socket"
    }

    initialization {
        vendor_data_file_id = "shared:snippets/one.yaml"
        user_account {
            keys = [trimspace(var.vm_user.ssh_pub_key)]
            password = var.vm_user_password
            username = var.vm_user.username
        }
        datastore_id = "vms"
        ip_config {
            ipv4 {
                address = "172.26.7.${131+count.index}/25"
                gateway = "172.26.7.129"
            }
        }
    }
    lifecycle {
        ignore_changes = [
            node_name,
            network_interface_names,
            initialization[0].user_account,
            initialization[0].user_data_file_id,
            clone
        ]
    }
}

Custom LXC Root Volume Size

Is your feature request related to a problem? Please describe.
Root volumes for LXC containers are created at the default size of the image they are based on.

Describe the solution you'd like
Ability to specify a custom size for the root volume, as sketched below.

Describe alternatives you've considered
Creating volumes after creation and attaching them
Making a custom container template image based on a stock one and cloning

Additional context
Has been requested previously on the pre-fork danitso repo: danitso/terraform-provider-proxmox#86
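
A hypothetical sketch of what this could look like, mirroring the disk block of the VM resource (this disk block on the container resource is an assumption, not currently supported):

resource "proxmox_virtual_environment_container" "example" {
  node_name = "pve-node-1"

  # Hypothetical: root volume size in GB, instead of the template default.
  disk {
    datastore_id = "local-lvm"
    size         = 16
  }

  # ... remaining container configuration ...
}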

proxmox_virtual_environment_file json unmarshalling type issue

Error when refreshing state of proxmox_virtual_environment_file resources, both during creation and when planning with resources created previously.

Error: Failed to decode HTTP GET response (path: nodes/NODE_NAME/storage/local/content) - Reason: json: cannot unmarshal string into Go struct field VirtualEnvironmentDatastoreFileListResponseData.data.size of type int

Example resource:

resource "proxmox_virtual_environment_file" "iso-netboot" {
  content_type = "iso"
  datastore_id = "local"
  node_name = "NODE_NAME"
  source_file {
    path = "https://boot.netboot.xyz/ipxe/netboot.xyz-efi.iso"
  }
}

Running Proxmox 7.1 and version 0.5.0 of this provider. The old danitso 0.4.4 also encounters this bug.

I may work on fixing this myself.

P.S. thank you so much for your work on updating this provider! I switched from Telmate to danitso a year ago, and am hoping to avoid another switch.

disk file_format state is always qcow2

Describe the bug
When creating a new VM with disk.file_format set to something other than "qcow2", the state of that disk will always be stored as qcow2, forcing a re-create on re-apply.

To Reproduce
Steps to reproduce the behavior:

  1. Create a VM with a disk that isn't qcow2 (raw or vmdk), and terraform apply the configuration.
  2. After it is created, terraform apply again.
  3. The plan will say that the VM you just created must be replaced.

Expected behavior
The disk.file_format state should be stored according to the configuration file.

Thanks for this provider! It still misses some features, but it's already very good!
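
Until the state handling is fixed, the forced replacement can be suppressed with a lifecycle block, the same pattern used elsewhere in these reports (a workaround sketch, assuming the affected disk is the first disk block):

resource "proxmox_virtual_environment_vm" "example" {
  # ... VM configuration with a raw/vmdk disk ...

  lifecycle {
    ignore_changes = [
      disk[0].file_format,
    ]
  }
}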

Error creating container if LXC template specifies `datastore_id`

Describe the bug
The issue was discovered by @abdo-farag in #176, see the comment.

To Reproduce
Steps to reproduce the behavior:

  1. Use template from the link above
  2. Run terraform apply
  3. The apply fails with error
│ Error: Invalid address to set: []string{"datastore_id"}

Expected behavior
An LXC template with datastore_id should be applied without errors.

Screenshots
N/A

Additional context
N/A

Creating VM from Clone Does Not Manage New Disks

Describe the bug
When attempting to create a new VM that is cloned from a template VM, adding additional storage to the host does not seem to work in the following situations:

  1. Increasing disk space on the disk that is cloned.
  2. Adding an additional disk to the newly provisioned instance.

To Reproduce

  1. Configure a new instance to be cloned from a template Proxmox image. This image should have one disk attached to it, with some arbitrary amount of disk space provisioned.
    a. (screenshot: template disk configuration)
    b. Note that the template above has 9420M assigned.
  2. Provision a cloned instance with the following configuration:
resource "proxmox_virtual_environment_vm" "instance" {
  for_each = var.hosts

  name = "${each.key}-${var.envs}"
  node_name = random_shuffle.hypervisors[each.key].result[0]
  on_boot = true
  agent {
    enabled = true
  }
  operating_system {
    type = "l26"
  }
  clone {
    vm_id     = var.colo_clone_vm_base
    node_name = var.colo_clone_vm_node
  }
  cpu {
    sockets = var.sockets
    cores   = var.cpus
  }
  memory {
    dedicated = var.memory * 1024
  }
  disk {
    interface = "scsi0"
    datastore_id = var.datastores[0]
    size = "10"
    file_format = "raw"
  }
  disk {
    interface = "scsi0"
    datastore_id = var.datastores[0]
    size = "10"
    file_format = "qcow2"
  }
  network_device {
    model = "virtio"
    bridge = "vmbr0"
    vlan_id = var.network_vlan_id
  }
  vga {
    type = "std"
  }
}

Terraform output:

  # proxmox_virtual_environment_vm.instance["host01"] will be created
  + resource "proxmox_virtual_environment_vm" "instance" {
      + acpi                    = true
      + bios                    = "seabios"
      + description             = "xxxxxxxxxxxxx"
      + id                      = (known after apply)
      + ipv4_addresses          = (known after apply)
      + ipv6_addresses          = (known after apply)
      + keyboard_layout         = "en-us"
      + mac_addresses           = (known after apply)
      + name                    = "host01-dev"
      + network_interface_names = (known after apply)
      + node_name               = (known after apply)
      + on_boot                 = true
      + reboot                  = false
      + started                 = true
      + tablet_device           = true
      + tags                    = [
          + "dev",
          + "xxxxxx",
        ]
      + template                = false
      + timeout_clone           = 1800
      + timeout_move_disk       = 1800
      + timeout_reboot          = 1800
      + timeout_shutdown_vm     = 1800
      + timeout_start_vm        = 1800
      + timeout_stop_vm         = 300
      + vm_id                   = -1

      + agent {
          + enabled = true
          + timeout = "15m"
          + trim    = false
          + type    = "virtio"
        }

      + clone {
          + full      = true
          + node_name = "xxxxxxxxx"
          + retries   = 1
          + vm_id     = 9200
        }

      + cpu {
          + architecture = "x86_64"
          + cores        = 1
          + hotplugged   = 0
          + sockets      = 2
          + type         = "qemu64"
          + units        = 1024
        }

      + disk {
          + datastore_id = "xxxxxxxxxxxxxx"
          + file_format  = "raw"
          + interface    = "scsi0"
          + iothread     = false
          + size         = 10
          + ssd          = false
        }
      + disk {
          + datastore_id = "xxxxxxxxxxxxxx"
          + file_format  = "qcow2"
          + interface    = "scsi0"
          + iothread     = false
          + size         = 10
          + ssd          = false
        }

      + memory {
          + dedicated = 4096
          + floating  = 0
          + shared    = 0
        }

      + network_device {
          + bridge     = "vmbr0"
          + enabled    = true
          + model      = "virtio"
          + mtu        = 0
          + rate_limit = 0
          + vlan_id    = ####
        }

      + operating_system {
          + type = "l26"
        }

      + vga {
          + enabled = true
          + memory  = 16
          + type    = "std"
        }
    }
  3. Receive the following machine configuration.
    a. (screenshot: resulting VM hardware)
    b. You can see the first hard disk on scsi0 is not 10 GB, but the same size as the template image.
    c. There is no second hard disk attached to the new VM. A second hard disk has not been provisioned on the storage shared across the hypervisors.
    (screenshot: storage view without the second disk)

  4. Run terraform plan after the host has been provisioned above.

  # proxmox_virtual_environment_vm.instance["host01"] must be replaced
-/+ resource "proxmox_virtual_environment_vm" "instance" {
      ~ id                      = "105" -> (known after apply)
      ~ ipv4_addresses          = [
          - [
              - "127.0.0.1",
            ],
          - [
              - "255.255.255.XXX",
            ],
        ] -> (known after apply)
      ~ ipv6_addresses          = [
          - [
              - "::1",
            ],
          - [
              - "XXXX::XXXX",
            ],
        ] -> (known after apply)
      ~ mac_addresses           = [
          - "00:00:00:00:00:00",
          - "XX:XX:XX:XX:XX:XX",
        ] -> (known after apply)
        name                    = "host01-dev"
      ~ network_interface_names = [
          - "lo",
          - "eth0",
        ] -> (known after apply)
        tags                    = [
            "dev",
            "xxxxxxxxx",
        ]
        # (17 unchanged attributes hidden)

      ~ clone {
            # (4 unchanged attributes hidden)
        }

      ~ cpu {
          - flags        = [] -> null
            # (6 unchanged attributes hidden)
        }

      ~ disk { # forces replacement
            # (6 unchanged attributes hidden)
        }
      + disk { # forces replacement
          + datastore_id = "xxxxxxxxxxxxx"
          + file_format  = "qcow2" # forces replacement
          + interface    = "scsi0"
          + iothread     = false
          + size         = 10
          + ssd          = false
        }

      ~ network_device {
          - mac_address = "XX:XX:XX:XX:XX:XX" -> null
            # (6 unchanged attributes hidden)
        }

        # (4 unchanged blocks hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Note: Disregard the forces replacement on disks. That's likely related to #103 and/or #185

Expected behavior
A cloned VM with a resized disk that is cloned from the template image, and a new disk of the appropriate size.

Additional context
Thanks for working on this provider!
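
One observation on the configuration above: both disk blocks use interface = "scsi0", so they collide and at most one of them can ever be attached. Each disk needs a unique interface; a sketch of the corrected blocks (this addresses the missing second disk, not the resize behavior):

disk {
  interface    = "scsi0"
  datastore_id = var.datastores[0]
  size         = "10"
  file_format  = "raw"
}

disk {
  interface    = "scsi1"
  datastore_id = var.datastores[0]
  size         = "10"
  file_format  = "qcow2"
}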

Timeout on VMs that have network interfaces without an IP assigned

Describe the bug
Terraform times out on VMs that have network interfaces without an IP address assigned. This can happen when using something like k3s, Podman/Docker, or other containerized applications.

To Reproduce
Steps to reproduce the behavior:

  1. Have a VM with a network interface that is missing an IP address
  2. Have the QEMU guest agent enabled on this host
  3. Try to run terraform apply; it will keep trying until the timeout is hit, which defaults to 15m

Expected behavior
The provider should be able to handle those interfaces and not time out.

Screenshots
(screenshot: terraform apply stuck waiting for agent-reported IP addresses)

Additional context
I tried to find a way to configure the QEMU guest agent to ignore certain networks, but didn't find much; I could have missed a way to configure it to ignore those interfaces.

I can create a PR for this, but the real question is how this should be handled: should one IP address be enough, or are there other ideas?

Workaround
Setting the agent to a lower timeout

agent {
    enabled = true
    timeout = "2s"
}

proxmox_virtual_environment_vm does not use proper path from proxmox_virtual_environment_file

Describe the bug
Following the VM creation example with a .tar.xz image as the source fails: the VM creation does not use the correct path for the file.

To Reproduce
With the following TF resources, run terraform apply:

resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  content_type = "vztmpl"
  datastore_id = "local"
  node_name    = var.proxmox_node_name

  source_file {
    path = "https://cloud.debian.org/images/cloud/bookworm/daily/latest/debian-12-genericcloud-amd64-daily.tar.xz"
  }
}
resource "proxmox_virtual_environment_vm" "reproduce_bug" {
  name        = "reproduce_bug"
  started     = true

  node_name = var.proxmox_node_name
  vm_id     = random_integer.vmid.result

  disk {
    datastore_id = "local"
    file_format  = "qcow2"
    interface    = "scsi0"
    file_id      = proxmox_virtual_environment_file.debian_cloud_image.id
    size         = "10"
  }
}

Expected behavior
The VM is created using the correct image.

Actual behavior
The creation fails with the following error:
Error: cp: cannot stat '/var/lib/vz/vztmpl/debian-12-genericcloud-amd64-daily.tar.xz': No such file or directory

Additional context
Looking at the created resource, it appears that proxmox_virtual_environment_vm is using the id as if it were the path relative to the datastore root. However, inspecting the storage shows that this is not the case:

# pvesm path local:vztmpl/debian-12-genericcloud-amd64-daily.tar.xz 
/var/lib/vz/template/cache/debian-12-genericcloud-amd64-daily.tar.xz
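
As a workaround until the path mapping is fixed, the same image can be consumed as a raw disk image via content_type = "iso", which the disk file_id handling maps correctly — the pattern used with Ubuntu .img files elsewhere in these reports. A sketch, assuming a qcow2 build of the image is published at the same location and that renaming it via file_name is accepted (both assumptions):

resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = var.proxmox_node_name

  source_file {
    path      = "https://cloud.debian.org/images/cloud/bookworm/daily/latest/debian-12-genericcloud-amd64-daily.qcow2"
    file_name = "debian-12-genericcloud-amd64-daily.img"
  }
}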

SSH authentication issues (but almost at the end of the provisioning process)

I'm having issues getting the plugin to work completely. The strange thing is that I can provision pools, groups, roles, etc., but when I want to provision a VM I get an SSH handshake error. Even weirder is that a VM was actually created in Proxmox, so I'm not sure which part of the process is failing. I am using (unfortunately) root credentials everywhere, at least while testing the plugin.

proxmox_virtual_environment_vm.ubuntu_vm: Creating...
2023-03-06T17:13:28.897Z [INFO]  Starting apply for proxmox_virtual_environment_vm.ubuntu_vm
2023-03-06T17:13:28.899Z [DEBUG] proxmox_virtual_environment_vm.ubuntu_vm: applying the planned Create change
2023-03-06T17:13:28.907Z [INFO]  provider.terraform-provider-proxmox_v0.13.0: 2023/03/06 17:13:28 [DEBUG] setting computed for "ipv4_addresses" from ComputedKeys: timestamp=2023-03-06T17:13:28.906Z
2023-03-06T17:13:28.907Z [INFO]  provider.terraform-provider-proxmox_v0.13.0: 2023/03/06 17:13:28 [DEBUG] setting computed for "ipv6_addresses" from ComputedKeys: timestamp=2023-03-06T17:13:28.906Z
2023-03-06T17:13:28.908Z [INFO]  provider.terraform-provider-proxmox_v0.13.0: 2023/03/06 17:13:28 [DEBUG] setting computed for "network_interface_names" from ComputedKeys: timestamp=2023-03-06T17:13:28.907Z
2023-03-06T17:13:28.908Z [INFO]  provider.terraform-provider-proxmox_v0.13.0: 2023/03/06 17:13:28 [DEBUG] setting computed for "mac_addresses" from ComputedKeys: timestamp=2023-03-06T17:13:28.907Z
2023-03-06T17:13:28.912Z [DEBUG] provider.terraform-provider-proxmox_v0.13.0: performing HTTP request: path=cluster/nextid tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_rpc=ApplyResourceChange @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_client.go:87 @module=proxmox tf_resource_type=proxmox_virtual_environment_vm method=GET tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp=2023-03-06T17:13:28.912Z
2023-03-06T17:13:28.917Z [DEBUG] provider.terraform-provider-proxmox_v0.13.0: next VM identifier: @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_vm.go:140 tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_rpc=ApplyResourceChange @module=proxmox id=117 tf_resource_type=proxmox_virtual_environment_vm timestamp=2023-03-06T17:13:28.916Z
2023-03-06T17:13:28.917Z [DEBUG] provider.terraform-provider-proxmox_v0.13.0: performing HTTP request: @module=proxmox tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_rpc=ApplyResourceChange @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_client.go:87 method=POST path=nodes/pve-12core-xeon/qemu tf_resource_type=proxmox_virtual_environment_vm timestamp=2023-03-06T17:13:28.917Z
2023-03-06T17:13:28.918Z [DEBUG] provider.terraform-provider-proxmox_v0.13.0: added request body to HTTP request: tf_rpc=ApplyResourceChange @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_client.go:138 @module=proxmox encodedValues=acpi=1&agent=enabled%3D0%2Cfstrim_cloned_disks%3D0%2Ctype%3Dvirtio&arch=x86_64&balloon=0&bios=seabios&boot=c&bootdisk=scsi0&cores=1&cpu=cputype%3Dqemu64&cpuunits=1024&description=Managed+by+Terraform&keyboard=en-us&memory=512&name=terraform-provider-proxmox-ubuntu-vm&onboot=1&ostype=other&scsihw=virtio-scsi-pci&sockets=1&tablet=1&tags=terraform%3Bubuntu&template=0&vga=memory%3D16%2Ctype%3Dstd&vmid=117 tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_resource_type=proxmox_virtual_environment_vm method=POST path=nodes/pve-12core-xeon/qemu tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp=2023-03-06T17:13:28.918Z
2023-03-06T17:13:28.948Z [WARN]  provider.terraform-provider-proxmox_v0.13.0: unhandled HTTP response body: @module=proxmox data={"data":"UPID:pve-12core-xeon:0025A963:0C50D081:64061F38:qmcreate:117:root@pam:"} tf_provider_addr=registry.terraform.io/bpg/proxmox tf_resource_type=proxmox_virtual_environment_vm @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_client.go:218 tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_rpc=ApplyResourceChange timestamp=2023-03-06T17:13:28.948Z
2023-03-06T17:13:28.950Z [DEBUG] provider.terraform-provider-proxmox_v0.13.0: performing HTTP request: @caller=github.com/bpg/terraform-provider-proxmox/proxmox/virtual_environment_client.go:87 @module=proxmox path=nodes/pve-12core-xeon/network tf_provider_addr=registry.terraform.io/bpg/proxmox tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a tf_resource_type=proxmox_virtual_environment_vm method=GET tf_rpc=ApplyResourceChange timestamp=2023-03-06T17:13:28.949Z
2023-03-06T17:13:29.029Z [ERROR] provider.terraform-provider-proxmox_v0.13.0: Response contains error diagnostic: tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/bpg/proxmox tf_resource_type=proxmox_virtual_environment_vm tf_rpc=ApplyResourceChange diagnostic_detail= diagnostic_severity=ERROR diagnostic_summary="failed to dial 10.105.68.10:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain" @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:55 @module=sdk.proto tf_req_id=9c1fa9fd-0e46-7637-3509-23bb98ca605a timestamp=2023-03-06T17:13:29.028Z
2023-03-06T17:13:29.042Z [ERROR] vertex "proxmox_virtual_environment_vm.ubuntu_vm" error: failed to dial 10.105.68.10:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain
╷
│ Error: failed to dial 10.105.68.10:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain
│ 
│   with proxmox_virtual_environment_vm.ubuntu_vm,
│   on main.tf line 98, in resource "proxmox_virtual_environment_vm" "ubuntu_vm":
│   98: resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
│ 
╵
2023-03-06T17:13:29.072Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-03-06T17:13:29.082Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/bpg/proxmox/0.13.0/linux_amd64/terraform-provider-proxmox_v0.13.0 pid=36
2023-03-06T17:13:29.083Z [DEBUG] provider: plugin exited

I'm running Terraform in a Docker container. I've bind-mounted all relevant SSH credentials and tested manually that I can reach every node in the cluster. See below.

(.venv) Alains-MacBook-Pro:terraform-pve-templates alain$ make shell
docker run -it --rm \
                --platform linux/amd64 \
                --env-file secrets/credentials.env \
                --mount type=bind,source="/Users/alain/Code/Projects/terraform-pve-templates/terraform",target=/root/deployment \
                --mount type=bind,source="/Users/alain/Code/Projects/terraform-pve-templates/secrets/known_hosts",target=/root/.ssh/known_hosts \
                --mount type=bind,source="/Users/alain/.ssh/id_ed25519",target=/root/.ssh/id_ed25519,readonly \
                --mount type=bind,source="/Users/alain/.ssh/id_ed25519.pub",target=/root/.ssh/id_ed25519.pub,readonly \
                --entrypoint /bin/sh \
                registry.gitlab.i.redacted.com/images/all/terraform:latest
~/deployment # ssh 10.105.68.10
Linux pve-12core-xeon 5.15.83-1-pve #1 SMP PVE 5.15.83-1 (2022-12-15T00:00Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Mar  6 18:22:03 2023 from 10.105.68.30
root@pve-12core-xeon:~# exit
logout
Connection to 10.105.68.10 closed.
~/deployment # 

This is the Terraform configuration file that I'm currently testing:

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.13.0"
    }
  }
}

provider "proxmox" {
  virtual_environment {
    insecure = true
  }
}

resource "proxmox_virtual_environment_file" "ubuntu_jammy_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve-12core-xeon"

  source_file {
    path     = "https://cloud-images.ubuntu.com/jammy/20230302/jammy-server-cloudimg-amd64.img"
    checksum = "345fbbb6ec827ca02ec1a1ced90f7d40d3fd345811ba97c5772ac40e951458e1"
  }
}

resource "proxmox_virtual_environment_file" "ubuntu_focal_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve-12core-xeon"

  source_file {
    path     = "https://cloud-images.ubuntu.com/focal/20230215/focal-server-cloudimg-amd64.img"
    checksum = "786a425717f411be89c41c88420a14471e1888569f9193cfb3b7dbb56e6a538f"
  }
}

resource "proxmox_virtual_environment_file" "ubuntu_bionic_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve-12core-xeon"

  source_file {
    path     = "https://cloud-images.ubuntu.com/bionic/20230303/bionic-server-cloudimg-amd64.img"
    checksum = "75deaa0e8fa5f2751c0864251e43fb62e8da2f059c201e60f194eb9a0b43b03f"
  }
}

resource "proxmox_virtual_environment_file" "fedora_cloud_image" {
  content_type = "vztmpl"
  datastore_id = "local"
  node_name    = "pve-12core-xeon"

  source_file {
    path     = "https://fedora.mirror.wearetriple.com/linux/releases/37/Cloud/x86_64/images/Fedora-Cloud-Base-GCP-37-1.7.x86_64.tar.gz"
    checksum = "09c518afa4b5a63c19c704fcefd35e38c3b73075d16cd30a764c8b21c790e1e5"
  }
}

resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  content_type = "vztmpl"
  datastore_id = "local"
  node_name    = "pve-12core-xeon"

  source_file {
    path = "https://cloud.debian.org/images/cloud/bullseye/20230124-1270/debian-11-generic-amd64-20230124-1270.tar.xz"
  }
}

resource "proxmox_virtual_environment_pool" "production_pool" {
  comment = "Managed by Terraform"
  pool_id = "Production"
}

resource "proxmox_virtual_environment_pool" "staging_pool" {
  comment = "Managed by Terraform"
  pool_id = "Staging"
}

resource "proxmox_virtual_environment_pool" "testing_pool" {
  comment = "Managed by Terraform"
  pool_id = "Testing"
}

resource "proxmox_virtual_environment_role" "operations_monitoring" {
  role_id = "operations-monitoring"

  privileges = [
    "VM.Monitor",
  ]
}

resource "proxmox_virtual_environment_group" "operations_team" {
  comment  = "Managed by Terraform"
  group_id = "operations-team"
}

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  name        = "terraform-provider-proxmox-ubuntu-vm"
  description = "Managed by Terraform"
  tags        = ["terraform", "ubuntu"]

  node_name = "pve-12core-xeon"

  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_file.ubuntu_jammy_cloud_image.id
    interface    = "scsi0"
  }
}
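
For context on the failure mode: the provider opens its own SSH session to the node for some operations (such as importing disk images), and it does not use the key pair mounted into the container for that — it authenticates with the credentials configured for the virtual_environment block. A sketch of making those explicit (whether 0.13.0 uses exactly these for SSH is an assumption based on the provider documentation; password authentication, not an API token, is required for the SSH path):

provider "proxmox" {
  virtual_environment {
    endpoint = "https://10.105.68.10:8006/"
    username = "root@pam"
    password = var.proxmox_password
    insecure = true
  }
}

Later releases also add a dedicated ssh block to the provider configuration, including SSH agent support, which would pick up the mounted keys.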
