
community.vmware's Introduction

Ansible Collection: community.vmware

This repo hosts the community.vmware Ansible Collection.

The collection includes the VMware modules and plugins supported by the Ansible VMware community to help manage VMware infrastructure.

Releases and maintenance

Release | Status                      | End of life
--------|-----------------------------|------------
4       | Maintained                  | Nov 2025
3       | Maintained (bug fixes only) | Nov 2024
2       | Unmaintained                | Nov 2023

Ansible version compatibility

This collection has been tested against the following Ansible versions: >=2.15.0.

Plugins and modules within a collection may be tested with only specific Ansible versions. A collection may contain metadata that identifies these versions. PEP440 is the schema used to describe the versions of Ansible.
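For collections, this version metadata typically lives in the collection's meta/runtime.yml as a PEP 440 specifier. A minimal sketch (the requires_ansible field is the standard collection metadata key; the value shown matches the compatibility statement above):

```yaml
# meta/runtime.yml (sketch): requires_ansible takes a PEP 440 version
# specifier describing which ansible-core versions the collection supports.
requires_ansible: '>=2.15.0'
```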

Installation and Usage

Installing the Collection from Ansible Galaxy

Before using the VMware community collection, you need to install the collection with the ansible-galaxy CLI:

ansible-galaxy collection install community.vmware

You can also include it in a requirements.yml file and install it via ansible-galaxy collection install -r requirements.yml using the format:

collections:
- name: community.vmware
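requirements.yml also accepts an optional version field if you want to pin a release stream, for example (the range shown is illustrative; adjust it to the release you target):

```yaml
collections:
  - name: community.vmware
    version: '>=4.0.0,<5.0.0'  # example range, not a recommendation
```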

Required Python libraries

The VMware community collection depends on Python 3.9+ and on the following third-party libraries:

Installing required libraries and SDK

Installing the collection does not install any required third-party Python libraries or SDKs. You need to install the required Python libraries using the following command:

pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt

If you are working on developing and/or testing the VMware community collection, you may want to install additional requirements using the following command:

pip install -r ~/.ansible/collections/ansible_collections/community/vmware/test-requirements.txt

Testing and Development

If you want to develop new content for this collection or improve what is already here, the easiest way to work on the collection is to clone it into one of the configured COLLECTIONS_PATHS, and work on it there.
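The clone target has to sit at the right spot inside a configured collections path so Ansible resolves it as community.vmware. A sketch using the default user collections path (adjust if your COLLECTIONS_PATHS setting differs; the repository URL is the collection's GitHub home):

```shell
# Default user collections path; ansible-core searches here out of the box.
COLLECTIONS_ROOT="$HOME/.ansible/collections/ansible_collections"
mkdir -p "$COLLECTIONS_ROOT/community"

# Network step, shown for reference:
# git clone https://github.com/ansible-collections/community.vmware.git \
#     "$COLLECTIONS_ROOT/community/vmware"
```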

Testing with ansible-test

Refer to testing for more information.

Publishing New Version

Assuming your (local) repository has set origin to your GitHub fork and this repository is added as upstream:

Prepare the release:

  • Make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Run ansible-playbook tools/prepare_release.yml. The playbook tries to generate the next minor release automatically, but you can also set the version explicitly with --extra-vars "version=$VERSION". You will have to set the version explicitly when publishing a new major release.
  • Push the created release branch to your GitHub repo (git push --set-upstream origin prepare_$VERSION_release) and open a PR for review.

Push the release:

  • After the PR has been merged, make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Tag the release: git tag -s $VERSION
  • Push the tag: git push upstream $VERSION

Revert the version in galaxy.yml back to null:

  • Make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Run ansible-playbook tools/unset_version.yml.
  • Push the created branch to your GitHub repo (git push --set-upstream origin unset_version_$VERSION) and open a PR for review.

Communication

You can find other people interested in this collection in the Ansible VMware room on Matrix, or in the #ansible-vmware channel on libera.chat IRC.

For general usage questions, please also consider the Get Help category in the Ansible Community Forum and tag it with vmware.

License

GNU General Public License v3.0 or later

See LICENSE to see the full text.

community.vmware's People

Contributors

abikouo, akasurde, anant99-sys, andersson007, ansible-zuul[bot], carstengrohmann, cfiehe, chrisaholland, darksoul42, exp-hc, felixfontein, goneri, ihumster, jamesmarshall24, jdptechnc, jillr, jiuka, laidbackware, mariolenz, matletix, n3pjk, nina2244, p-fruck, scottd018, sky-joker, tejaswi-bachu, tomorrow9, ultral, valkiriaaquatica, xenlo


community.vmware's Issues

vmware_host_capability_facts: AttributeError: 'vim.host.Capability' object has no attribute 'checkpointFtSupported'

vmware_host_capability_facts fails with error in $subject:

Traceback (most recent call last):
  File "/home/zuul/.ansible/tmp/ansible-tmp-1586881939.6589217-32109-80397752196359/AnsiballZ_vmware_host_capability_facts.py", line 116, in <module>
    _ansiballz_main()
  File "/home/zuul/.ansible/tmp/ansible-tmp-1586881939.6589217-32109-80397752196359/AnsiballZ_vmware_host_capability_facts.py", line 108, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/zuul/.ansible/tmp/ansible-tmp-1586881939.6589217-32109-80397752196359/AnsiballZ_vmware_host_capability_facts.py", line 54, in invoke_module
    runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_host_capability_facts', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_vmware_host_capability_facts_payload_ogf6pqmo/ansible_vmware_host_capability_facts_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_capability_facts.py", line 227, in <module>
  File "/tmp/ansible_vmware_host_capability_facts_payload_ogf6pqmo/ansible_vmware_host_capability_facts_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_capability_facts.py", line 223, in main
  File "/tmp/ansible_vmware_host_capability_facts_payload_ogf6pqmo/ansible_vmware_host_capability_facts_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_capability_facts.py", line 139, in gather_host_capability_facts
AttributeError: 'vim.host.Capability' object has no attribute 'checkpointFtSupported'

See: https://dashboard.zuul.ansible.com/t/ansible/build/9f4d284f116f4ad491cd1288e4af97b5

CI: authentication failure

From time to time, the tests fail because of an auth failure:

"msg": "Unable to log on to vCenter or ESXi API at vcenter.test:443  as [email protected]: Cannot complete login due to an incorrect user name or password."

This seems to be related to the amount of memory available:

Apr 07 15:53:27 vcenter.test systemd[30543]: Failed to fork: Cannot allocate memory
Apr 07 15:53:27 vcenter.test systemd[30543]: Failed to fork: Cannot allocate memory

vmware_category: RecursionError: maximum recursion depth exceeded while calling a Python object

From @goneri on Jan 14, 2020 16:12

SUMMARY

The vmware_category functional test suite fails ( https://github.com/ansible/ansible/blob/devel/test/integration/targets/vmware_category/tasks/associable_obj_types.yml#L5 ) randomly with the following backtrace:

e.g: https://object-storage-ca-ymq-1.vexxhost.net/v1/a0b4156a37f9453eb4ec7db5422272df/ansible_64e/66364/46bc0553c1234315607a295791592fcfa2eb2dd0/third-party-check/ansible-test-cloud-integration-vcenter_only-python36/353933c/

/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Traceback (most recent call last):
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib64/python3.6/http/client.py", line 1331, in getresponse
    response.begin()
  File "/usr/lib64/python3.6/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.6/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/venv/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
    raise value.with_traceback(tb)
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/zuul/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib64/python3.6/http/client.py", line 1331, in getresponse
    response.begin()
  File "/usr/lib64/python3.6/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.6/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/.ansible/tmp/ansible-tmp-1578941246.1537826-20850633902669/AnsiballZ_vmware_category.py", line 116, in <module>
    _ansiballz_main()
  File "/home/zuul/.ansible/tmp/ansible-tmp-1578941246.1537826-20850633902669/AnsiballZ_vmware_category.py", line 108, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/zuul/.ansible/tmp/ansible-tmp-1578941246.1537826-20850633902669/AnsiballZ_vmware_category.py", line 54, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_category', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_vmware_category_payload_ph8rwcme/ansible_vmware_category_payload.zip/ansible/modules/cloud/vmware/vmware_category.py", line 330, in <module>
  File "/tmp/ansible_vmware_category_payload_ph8rwcme/ansible_vmware_category_payload.zip/ansible/modules/cloud/vmware/vmware_category.py", line 325, in main
  File "/tmp/ansible_vmware_category_payload_ph8rwcme/ansible_vmware_category_payload.zip/ansible/modules/cloud/vmware/vmware_category.py", line 169, in __init__
  File "/tmp/ansible_vmware_category_payload_ph8rwcme/ansible_vmware_category_payload.zip/ansible/module_utils/vmware_rest_client.py", line 57, in __init__
  File "/tmp/ansible_vmware_category_payload_ph8rwcme/ansible_vmware_category_payload.zip/ansible/module_utils/vmware_rest_client.py", line 131, in connect_to_vsphere_client
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/vsphere/client.py", line 170, in create_vsphere_client
    hok_token=hok_token, private_key=private_key)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/vsphere/client.py", line 111, in __init__
    session_id = session_svc.create()
  File "/home/zuul/venv/lib/python3.6/site-packages/com/vmware/cis_client.py", line 198, in create
    return self._invoke('create', None)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 345, in _invoke
    return self._api_interface.native_invoke(ctx, _method_name, kwargs)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 266, in native_invoke
    method_result = self.invoke(ctx, method_id, data_val)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 202, in invoke
    ctx)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/security/client/security_context_filter.py", line 102, in invoke
    self, service_id, operation_id, input_value, ctx)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/provider/filter.py", line 76, in invoke
    service_id, operation_id, input_value, ctx)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/protocol/client/msg/json_connector.py", line 79, in invoke
    response = self._do_request(VAPI_INVOKE, ctx, params)
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/protocol/client/msg/json_connector.py", line 122, in _do_request
    headers=request_headers, body=request_body))
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/protocol/client/rpc/requests_provider.py", line 98, in do_request
    cookies=http_request.cookies, timeout=timeout)
  File "/home/zuul/venv/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/zuul/venv/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/home/zuul/venv/lib/python3.6/site-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
Exception ignored in: <bound method VsphereClient.__del__ of <vmware.vapi.vsphere.client.VsphereClient object at 0x7f931ce9fd68>>
Traceback (most recent call last):
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/vsphere/client.py", line 136, in __del__
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 443, in __getattr__
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 443, in __getattr__
  File "/home/zuul/venv/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 443, in __getattr__
  [Previous line repeated 329 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object

Note:

  1. the server may be running out of memory like here (ansible/ansible#66378 (comment)).
  2. this is actually a dup of ansible/ansible#59928
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_category

ANSIBLE VERSION
devel

CONFIGURATION

Copied from original issue: ansible/ansible#66475

Multiple DataCenter Names

Our environment has two datacenters with the same name, in two different locations within vCenter. When trying to deploy to a cluster Cluster-A, I'm faced with the following issue:

TASK [Virtual Machine customization] *******************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Unable to find cluster \"CORP-A\""}

ansible 2.9.4


- name: Running Create Virtual Machine Playbook
  hosts: localhost
  gather_facts: false
  connection: local
  tasks:
  - name: Virtual Machine and customization
    vmware_guest:
      validate_certs: no
      hostname: "{{ vcenter }}"
      username: "{{ username }}"
      password: "{{ password }}"
      cluster: "{{ cluster }}"
      datacenter: "{{ datacenter }}"
      name: "{{ VM_Name }}"
      folder: /
      template: "{{ template }}"
      state: poweredon
      datastore: "{{ datastore }}"
      disk:
      - size_gb: 40
        type: thin
        datastore: "{{ datastore }}"
      networks:
      - name: "{{ network_name }}"
        ip: "{{ ip }}"
        netmask: "{{ netmask }}"
        gateway: "{{ gateway }}"
        dns_servers:
        - 
        type: static
      wait_for_ip_address: yes
      wait_for_customization: yes
      customization:
        hostname: "{{ VM_Name }}"
        dns_servers:
        -
        dns_suffix:
      
        domain: "{{ domain }}"
        autologon: yes
        password: "{{ local_pass }}"
        runonce:
        - powershell.exe -ExecutionPolicy Unrestricted wget https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1 -OutFile C:\Windows\Temp\ConfigureRemotingForAnsible.ps1
        - powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -EnableCredSSP -DisableBasicAuth -Verbose
      hardware:
        memory_mb: "{{ mb }}"
        num_cpus: "{{ cpu }}"
    delegate_to: localhost



VMware: vmware_content_deploy_template only supports vm_template

SUMMARY

vmware_content_deploy_template only supports vm_template and not ovf.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

lib/ansible/modules/cloud/vmware/vmware_content_deploy_template.py

ANSIBLE VERSION
Devel
CONFIGURATION

NA

OS / ENVIRONMENT
STEPS TO REPRODUCE

Try deploying OVF library item from content library. Module fails with

Traceback (most recent call last):
  File \"/root/.ansible/tmp/ansible-tmp-1576825303.26255-45843491851862/AnsiballZ_vmware_content_deploy_template.py\", line 102, in <module>
    _ansiballz_main()
  File \"/root/.ansible/tmp/ansible-tmp-1576825303.26255-45843491851862/AnsiballZ_vmware_content_deploy_template.py\", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File \"/root/.ansible/tmp/ansible-tmp-1576825303.26255-45843491851862/AnsiballZ_vmware_content_deploy_template.py\", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.vmware_content_deploy_template', init_globals=None, run_name='__main__', alter_sys=True)
  File \"/usr/lib/python2.7/runpy.py\", line 188, in run_module
    fname, loader, pkg_name)
  File \"/usr/lib/python2.7/runpy.py\", line 82, in _run_module_code
    mod_name, mod_fname, mod_loader, pkg_name)
  File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code
    exec code in run_globals
  File \"/tmp/ansible_vmware_content_deploy_template_payload_6TWsDf/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/vmware_content_deploy_template.py\", line 271, in <module>
  File \"/tmp/ansible_vmware_content_deploy_template_payload_6TWsDf/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/vmware_content_deploy_template.py\", line 258, in main
  File \"/tmp/ansible_vmware_content_deploy_template_payload_6TWsDf/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/vmware_content_deploy_template.py\", line 209, in deploy_vm_from_template
  File \"/usr/local/lib/python2.7/dist-packages/com/vmware/vcenter/vm_template_client.py\", line 2119, in deploy
    'spec': spec,
  File \"/usr/local/lib/python2.7/dist-packages/vmware/vapi/bindings/stub.py\", line 345, in _invoke
    return self._api_interface.native_invoke(ctx, _method_name, kwargs)
  File \"/usr/local/lib/python2.7/dist-packages/vmware/vapi/bindings/stub.py\", line 298, in native_invoke
    self._rest_converter_mode)
com.vmware.vapi.std.errors_client.InvalidArgument: {error_type : INVALID_ARGUMENT, messages : [LocalizableMessage(default_message=\"The library item 'debian10_ovf' (ID: 3f05271e-57f5-4a06-b3ea-546afde1b472) has type 'ovf', but needs to be of type 'vm-template'.\", args=['debian10_ovf', '3f05271e-57f5-4a06-b3ea-546afde1b472', 'ovf', 'vm-template'], id='com.vmware.vdcs.vmtx-main.invalid_item_type')], data : None}
/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1
EXPECTED RESULTS

The module should deploy OVF from the content library.

ACTUAL RESULTS

Fails with above-mentioned error message.

vmware_host_service_manager test-suite fails in CI

vmware_guest_info: Failed to gather information with vSphere Schema

SUMMARY

While working with the new Ansible module vmware_guest_info (Ansible version below), I was trying to get the MAC address or the IP address from the guest info.

ansible 2.9.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 9.2.1 20191008]
---
- hosts: localhost
  gather_facts: false
  tasks:
  - vmware_guest_info:
      hostname: 
      username: 
      password: 
      validate_certs: no
      datacenter: Test
      name: win201
      schema: "vsphere"
      properties: ["config.hardware.device.deviceInfo.summary"]
    delegate_to: localhost
    register: vminfo
  - debug:
      var: vminfo

JSON File:

"hardware": {
                   "_vimtype": "vim.vm.VirtualHardware",
                   "device": [
                       {
                           "_vimtype": "vim.vm.device.VirtualE1000",
                           "addressType": "assigned",
                           "backing": {
                               "_vimtype": "vim.vm.device.VirtualEthernetCard.NetworkBackingInfo",
                               "deviceName": "-POC",
                               "inPassthroughMode": null,
                               "network": "vim.-7017",
                               "useAutoDetect": false
                           },
                           "controllerKey": 100,
                           "deviceInfo": {
                               "_vimtype": "vim.Description",
                               "label": "Network adapter 1",
                               "summary": "-POC"
                           },
                           "externalId": null,
                           "key": 4000,
                           "macAddress": "00"


ISSUE TYPE

I'm getting the following error:

fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Information gathering failed with exception list indices must be integers, not str"}
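The error happens because config.hardware.device is a list, so a flat property path cannot step into it with the next segment name (a string). A minimal sketch (a hypothetical helper, not the module's code) of the failure mode and a list-aware workaround that maps the rest of the path over each element:

```python
# Simplified stand-in for the vSphere object tree returned for the VM.
config = {
    "hardware": {
        "device": [
            {"deviceInfo": {"summary": "Network adapter 1"}},
            {"deviceInfo": {"summary": "CD/DVD drive 1"}},
        ]
    }
}

def get_path(obj, path):
    """Walk a dotted property path; map over lists instead of indexing
    them with a string, which would raise 'list indices must be integers'."""
    parts = path.split(".")
    for i, part in enumerate(parts):
        if isinstance(obj, list):
            rest = ".".join(parts[i:])
            return [get_path(item, rest) for item in obj]
        obj = obj[part]
    return obj

print(get_path(config, "hardware.device.deviceInfo.summary"))
# → ['Network adapter 1', 'CD/DVD drive 1']
```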

_vcd_login refactoring

SUMMARY

https://github.com/ansible-collections/vmware/blob/master/plugins/module_utils/vca.py#L273 states:

def _vcd_login(vca, password, org):
    # TODO: this function needs to be refactored
    if not vca.login(password=password, org=org):
        raise VcaError("Login Failed: Please check username or password "
                       "or host parameters")

    if not vca.login(password=password, org=org):
        raise VcaError("Failed to get the token",
                       error=vca.response.content)

    if not vca.login(token=vca.token, org=org, org_url=vca.vcloud_session.org_url):
        raise VcaError("Failed to login to org", error=vca.response.content)

The comment is slightly vague, but I would like to tackle refactoring this. Any suggestions on the issue or what the comment is referring to in particular?

ISSUE TYPE

  • Bug Report
COMPONENT NAME

/vmware/blob/master/plugins/module_utils/vca.py

Support HTTP proxy with authentication

SUMMARY

vmware_vmkernel_info and vmware_host_firewall_manager fail when the connection to the vCenter has to be established via a proxy. I suspect other VMware modules have the same issue.
I sniffed the network connection between the host where the playbook runs and the proxy while running the playbook, but there were zero packets between those nodes. A curl from the host where the playbook runs to the vCenter via the proxy works as expected.
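For context: the pyVmomi-based modules already accept proxy_host/proxy_port (they show up in the module_args of the failure output further down in this report); what is missing is any parameter for proxy credentials. A sketch of the existing, unauthenticated form (variable names are placeholders):

```yaml
- name: Gather VMkernel info through an HTTP proxy (unauthenticated only)
  vmware_vmkernel_info:
    hostname: '{{ vcenter_ip }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    cluster_name: '{{ inventory_hostname }}'
    proxy_host: '{{ proxy_ip }}'   # no proxy username/password parameter exists
    proxy_port: 3128
    validate_certs: false
  delegate_to: localhost
```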

ISSUE TYPE

Bug Report

COMPONENT NAME

vmware_vmkernel_info
vmware_host_firewall_manager

ANSIBLE VERSION

ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/kesse01/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION

ansible-config dump --only-changed #is empty

OS / ENVIRONMENT

Playbook is run on Red Hat Enterprise Linux Server release 7.7 (Maipo)
vCenter to establish connection has 6.7.0.15129973
STEPS TO REPRODUCE

info.yml file:

- hosts: clusters
  connection: local
  gather_facts: no
  tasks:
    - name: Gather VMKernel info about all ESXi Host in given Cluster
      vmware_vmkernel_info:
        hostname: '{{ vcenter_ip }}'
        username: '{{ vcenter_username }}'
        password: '{{ vcenter_password }}'
        cluster_name: '{{ inventory_hostname }}'
        validate_certs: False

hosts file:

all:
  children:
    clusters:
      hosts:
        vsphere-cluster:

EXPECTED RESULTS

I would expect that the connection to the vCenter via proxy would work.

ACTUAL RESULTS

WARNING: The below traceback may not be related to the actual failure.
  File "/tmp/ansible_vmware_vmkernel_info_payload_F6VSeK/ansible_vmware_vmkernel_info_payload.zip/ansible/module_utils/vmware.py", line 557, in connect_to_api
    smart_stub = connect.SmartStubAdapter(**connect_args)
  File "/usr/lib/python2.7/site-packages/pyVim/connect.py", line 762, in SmartStubAdapter
    sslContext)
  File "/usr/lib/python2.7/site-packages/pyVim/connect.py", line 718, in __FindSupportedVersion
    sslContext)
  File "/usr/lib/python2.7/site-packages/pyVim/connect.py", line 638, in __GetServiceVersionDescription
    path + "/vimServiceVersions.xml", sslContext)
  File "/usr/lib/python2.7/site-packages/pyVim/connect.py", line 604, in __GetElementTree
    conn.request("GET", path)
  File "/usr/lib64/python2.7/httplib.py", line 1056, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1090, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1052, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 890, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 852, in send
    self.connect()
  File "/usr/lib64/python2.7/httplib.py", line 1266, in connect
    HTTPConnection.connect(self)
  File "/usr/lib64/python2.7/httplib.py", line 833, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err

fatal: [vsphere-cluster]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "cluster_name": "vsphere-cluster",
            "esxi_hostname": null,
            "hostname": "10.20.30.40",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": 443,
            "proxy_host": "50.60.70.80",
            "proxy_port": 3152,
            "username": "VSPHERE.LOCAL\Administrator",
            "validate_certs": false
        }
    },
    "msg": "Unknown error while connecting to vCenter or ESXi API at 10.20.30.40:443 [proxy: 50.60.70.80:3152] : [Errno 111] Connection refused"
}

vmware_rest_client does not support HTTP Proxy

SUMMARY

module_utils/vmware_rest_client.py and the associated modules don't support HTTP Proxy.

The modules:

  • _vmware_category_facts
  • vmware_category_info
  • vmware_category
  • vmware_cluster_info
  • vmware_content_deploy_template
  • vmware_content_library_info
  • vmware_content_library_manager
  • vmware_guest_info
  • vmware_host_facts
  • vmware_tag_info
  • vmware_tag_manager
  • vmware_tag
  • vmware_vm_info
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_rest_client

ANSIBLE VERSION
2.9 and devel

copy parameters in vmware_guest_file_operation module does not work

Running the copy operation of the vmware_guest_file_operation module keeps failing with the "source file not found" message below.

{
    "_ansible_no_log": false,
    "_ansible_delegated_vars": {
        "ansible_host": "localhost"
    },
    "invocation": {
        "module_args": {
            "username": "[email protected]",
            "vm_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "vm_username": "root",
            "proxy_port": null,
            "cluster": null,
            "vm_id_type": "vm_name",
            "copy": {
                "dest": "~/.ssh/authorized_keys",
                "src": "/var/lib/awx/.ssh/id_rsa.pub",
                "overwrite": false
            },
            "vm_id": "practice_linux_vm_created",
            "port": 443,
            "datacenter": "Datacenter",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "proxy_host": null,
            "hostname": "10.9.45.34",
            "directory": null,
            "folder": null,
            "validate_certs": false,
            "fetch": null
        }
    },
    "changed": false,
    "msg": "Source /var/lib/awx/.ssh/id_rsa.pub not found"
}

I am sure that the file exists in the control node, is this a bug or am I missing some configs?
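For reference, a task reconstructed from the module_args above (the redacted credentials are replaced with placeholder variables, which are assumptions). Note that copy.src is resolved on the host actually executing the module; with delegate_to: localhost that is the control node, so the source path must exist there:

```yaml
- name: Copy a file into the guest (sketch; placeholder credentials)
  vmware_guest_file_operation:
    hostname: 10.9.45.34
    username: '{{ vcenter_username }}'   # placeholder for the redacted value
    password: '{{ vcenter_password }}'   # placeholder for the redacted value
    datacenter: Datacenter
    vm_id: practice_linux_vm_created
    vm_id_type: vm_name
    vm_username: root
    vm_password: '{{ guest_password }}'  # placeholder for the redacted value
    validate_certs: false
    copy:
      src: /var/lib/awx/.ssh/id_rsa.pub  # read on the delegated (control) node
      dest: ~/.ssh/authorized_keys
      overwrite: false
  delegate_to: localhost
```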

vmware_guest cannot remove 2 CDROMs, adds 1 CDROM with state: absent

Transferred from ansible/ansible#67952

SUMMARY

The module vmware_guest cannot remove CDROMs.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.9.1
CONFIGURATION
DEFAULT_MODULE_PATH(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = ['/home/comigo/WORK/AnsibleBaker/modules']
DEFAULT_STDOUT_CALLBACK(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = yaml
DEFAULT_STRATEGY(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = free
DEFAULT_VAULT_PASSWORD_FILE(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = /home/comigo/WORK/AnsibleBaker/.ansible_vault
DISPLAY_SKIPPED_HOSTS(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = True
HOST_KEY_CHECKING(/home/comigo/WORK/AnsibleBaker/ansible.cfg) = False
OS / ENVIRONMENT
  • vSphere Client version: 6.5.0.13000
  • Hypervisor: VMware ESXi, 6.5.0
  • VM Compatibility: ESXi 6.5 and later (VM version 13)
STEPS TO REPRODUCE
  • Have a VM with two CDROMs in a 'disconnected' state (no ISO, no datastore link or such, but still present as devices from inside VM). Power the VM off.
  • Execute vmware_guest with cdrom: {type: none, state: absent}
  • Get "OK" status.
  • Observe that both CDROMs are still present.
  • Remove both CDROMs manually.
  • Execute vmware_guest again.
  • Get "Changed" status.
  • Observe that VM now has one CDROM.
  • Further runs return "OK" and do not remove the CDROM.
- name: "{{'Delete a VM' if vm_delete else 'Clone a VM from a Windows template, or query info about it'}}"
  vmware_guest:
    hostname: "{{vSphere_hostname}}"
    username: "{{vcenter_username}}"
    password: "{{vcenter_password}}"
    cluster: "{{vSphere_cluster}}"
    datacenter: "{{vSphere_datacenter}}"

    validate_certs: no

    datastore: "{{vSphere_datastore | default('')}}"
    folder: "{{vSphere_folder}}"
    name: "{{inventory_hostname}}"
    template: "{{vcenter_template | default('')}}"
    networks:
    - name: โ€ฆ
    hardware:
      memory_mb: "{{hardware_memory}}"
    cdrom:
      type: none
      state: absent

    state: "{{'absent' if vm_delete else 'present'}}"
    force: "{{vm_delete|bool}}"
  register: vmData
  delegate_to: localhost
EXPECTED RESULTS

I expected a VM with 0 CDROM devices.

I would actually have expected cdrom: [] to work (it does nothing). Still,

cdrom:
  type: none
  state: absent

seems to be the officially documented way to remove CDROMs. Instead, it either does nothing if there is already at least one CDROM, or adds an unneeded one if none were present before running the task.
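For what it's worth, later versions of vmware_guest moved to a list form for cdrom, in which each device is addressed by its controller and unit number. A sketch of removing two IDE CDROMs that way — the controller and unit numbers are assumptions about the VM's layout, not values from the report:

```yaml
cdrom:
  - controller_type: ide
    controller_number: 0
    unit_number: 0
    state: absent
  - controller_type: ide
    controller_number: 0
    unit_number: 1
    state: absent
```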

ACTUAL RESULTS

Starting with a VM with 2 CDROMs, you get 2 CDROMs and an OK status.
Starting with a VM with 1 CDROM, you get 1 CDROM and OK status.
Starting with a VM with 0 CDROMs, you get 1 CDROM and "Changed" status.

ok: [Win2008-TRM -> localhost] => changed=false 
  instance:
    # (nothing relevant)
  invocation:
    module_args:
      cdrom:
        state: absent
        type: none
      cluster: โ€ฆ
      convert: null
      customization: {}
      customization_spec: null
      customvalues: []
      datacenter: โ€ฆ
      datastore: โ€ฆ
      disk: []
      esxi_hostname: null
      folder: Etalon/FunctionalTemplates
      force: false
      guest_id: null
      hardware:
        memory_mb: 10240
      hostname: โ€ฆ
      is_template: false
      linked_clone: false
      name: Win2008-TRM
      name_match: first
      networks:
      - name: โ€ฆ
        type: dhcp
      password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
      port: 443
      proxy_host: null
      proxy_port: null
      resource_pool: null
      snapshot_src: null
      state: present
      state_change_timeout: 0
      template: Win2008R2_40G
      use_instance_uuid: false
      username: [email protected]
      uuid: null
      validate_certs: false
      vapp_properties: []
      wait_for_customization: false
      wait_for_ip_address: false

`vmware_vmotion` with a VM that does not exist ends fatally with "IndexError: list index out of range"

SUMMARY

When trying a vMotion with the vmware_vmotion module and a vm_name value that does not exist, the task ends with a fatal error instead of a failed "VM not found" result.

I think the module should first check whether the value provided as vm_name is an existing VM on the cluster.
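The fix suggested here amounts to guarding the result of the VM lookup before indexing it. A minimal sketch of that guard — FakeModule and get_vm_checked are hypothetical names for illustration, not the module's actual code:

```python
class FakeModule:
    """Minimal stand-in for AnsibleModule, just for this sketch."""
    def fail_json(self, msg):
        raise SystemExit(msg)

def get_vm_checked(module, found_vms, vm_name, hostname):
    # Guard against an empty search result instead of blindly taking
    # found_vms[0], which raises IndexError when nothing matched.
    if not found_vms:
        module.fail_json(msg="VM %s does not exist on %s" % (vm_name, hostname))
    return found_vms[0]
```

With this guard in place, a missing VM produces a clean `failed` result with the VM name in the message rather than a traceback.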

ISSUE TYPE
  • Bug Report
COMPONENT NAME
  • module: vmware_vmotion
ANSIBLE VERSION
ansible 2.9.6
  config file = /home/xenlo/Projects/107/etc/ansible.cfg
  configured module search path = ['/home/xenlo/Projects/107/library']
  ansible python module location = /home/xenlo/.virtualenvs/107/lib/python3.7/site-packages/ansible
  executable location = /home/xenlo/.virtualenvs/107/bin/ansible
  python version = 3.7.6 (default, Jan 30 2020, 10:29:04) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
CONFIGURATION
ANSIBLE_SSH_ARGS(/home/xenlo/Projects/107/etc/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=60
CACHE_PLUGIN(/home/xenlo/Projects/107/etc/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/xenlo/Projects/107/etc/ansible.cfg) = $HOME/.ansible/facts/
CACHE_PLUGIN_TIMEOUT(/home/xenlo/Projects/107/etc/ansible.cfg) = 3600
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/xenlo/.virtualenvs/107/lib/python3.7/site-packages/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/home/xenlo/.virtualenvs/107/lib/python3.7/site-packages/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/home/xenlo/Projects/107/etc/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/home/xenlo/Projects/107/plugins/filter']
DEFAULT_GATHERING(/home/xenlo/Projects/107/etc/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/xenlo/Projects/107/etc/ansible.cfg) = ['/home/xenlo/Projects/107/inventories/setupenv', '/home/xenlo/Projects/107/inventories/T04']
DEFAULT_LOG_PATH(/home/xenlo/Projects/107/etc/ansible.cfg) = /home/xenlo/.ansible/log/ansible.log
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/home/xenlo/Projects/107/library']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['/home/xenlo/Projects/107/roles.galaxy', '/home/xenlo/Projects/107/roles']
DEFAULT_STDOUT_CALLBACK(/home/xenlo/Projects/107/etc/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON(env: ANSIBLE_GALAXY_ROLE_SKELETON) = /home/xenlo/Projects/107/etc/skel/default
HOST_KEY_CHECKING(/home/xenlo/Projects/107/etc/ansible.cfg) = False
OS / ENVIRONMENT
  • vmware version: 6.7.0
STEPS TO REPRODUCE

Ensure that there is no VM called 'vm_that_does_not_exist', then run a playbook with a simple vmware_vmotion task asking to move that nonexistent VM.

---
- name: Generate an error with `vmware_vmotion`
  hosts: localhost
  gather_facts: no
  vars:
    domain: "my-domain.ltd"
    vcenter_host: "vcenter.{{ domain }}"
    vcenter_pass: "{{ ansible_ssh_pass }}"
    datastore: "datastore1"
    destination_host: "node-02"

  tasks:
    - name: Ensure that VSP is running on the right cnode and its disk image is stored on the SAN
      vmware_vmotion:
        hostname: "{{ vcenter_host }}"
        username: "Administrator@{{ domain }}"
        password: "{{ vcenter_pass }}"
        validate_certs: no
        vm_name: "vm_that_does_not_exist"
        destination_datastore: "{{ datastore }}"
        destination_host: "{{ destination_host }}"
      run_once: true
      delegate_to: localhost
EXPECTED RESULTS

I expect a 'failed' result with a reasonably explicit message telling me that it could not find the VM I asked to vMotion, for example:

failed: [node-02 -> localhost] => changed=false
  msg: 'VM vm_that_does_not_exist does not exist on vcenter.my-domain.ltd: An error occurred during host vmotion.'
ACTUAL RESULTS

Instead of a clean failed message, I get a fatal error with a traceback and an 'IndexError: list index out of range'.

Friday 17 April 2020  14:57:17 +0200 (0:00:08.502)       0:00:24.013 **********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IndexError: list index out of range
fatal: [node-02 -> localhost]: FAILED! => changed=false
  module_stderr: |-
    Traceback (most recent call last):
      File "/home/xenlo/.ansible/tmp/ansible-tmp-1587128237.9026952-43603983298352/AnsiballZ_vmware_vmotion.py", line 102, in <module>
        _ansiballz_main()
      File "/home/xenlo/.ansible/tmp/ansible-tmp-1587128237.9026952-43603983298352/AnsiballZ_vmware_vmotion.py", line 94, in _ansiballz_main
        invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
      File "/home/xenlo/.ansible/tmp/ansible-tmp-1587128237.9026952-43603983298352/AnsiballZ_vmware_vmotion.py", line 40, in invoke_module
        runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)
      File "/usr/lib64/python3.7/runpy.py", line 205, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/usr/lib64/python3.7/runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/tmp/ansible_vmware_vmotion_payload_djwftujh/ansible_vmware_vmotion_payload.zip/ansible/modules/cloud/vmware/vmware_vmotion.py", line 340, in <module>
      File "/tmp/ansible_vmware_vmotion_payload_djwftujh/ansible_vmware_vmotion_payload.zip/ansible/modules/cloud/vmware/vmware_vmotion.py", line 336, in main
      File "/tmp/ansible_vmware_vmotion_payload_djwftujh/ansible_vmware_vmotion_payload.zip/ansible/modules/cloud/vmware/vmware_vmotion.py", line 152, in __init__
      File "/tmp/ansible_vmware_vmotion_payload_djwftujh/ansible_vmware_vmotion_payload.zip/ansible/modules/cloud/vmware/vmware_vmotion.py", line 308, in get_vm
    IndexError: list index out of range
  module_stdout: ''
  msg: |-
    MODULE FAILURE
    See stdout/stderr for the exact error
  rc: 1

Please Allow User Input to Customize Variable Assignments and Filters with the VMware Inventory Plugin

@Akasurde

There are obstacles preventing folks from moving from the old VMware inventory script to the VMware dynamic inventory plugin. These include the lack of customization that was provided via options like alias_pattern, host_pattern, groupby_patterns, and host_filters in the old inventory script (https://github.com/ansible/ansible/blob/devel/contrib/inventory/vmware_inventory.ini).

It appears that, at minimum, line 507 (https://github.com/ansible-collections/vmware/blob/197125094a03bd311a1a74afe9db6fe3c1814718/plugins/inventory/vmware_vm_inventory.py#L507) could be modified to accept user input. This would let users choose what they want the host's name to be, and in particular allow us to choose not to append the UUID. In many environments, we enforce unique names ourselves and don't want UUIDs as part of the Ansible host names in the inventory.

Line 515 (https://github.com/ansible-collections/vmware/blob/197125094a03bd311a1a74afe9db6fe3c1814718/plugins/inventory/vmware_vm_inventory.py#L515) could also be modified to take input from the user and let them choose whether they would rather have something like the hostname's FQDN set as the ansible_host value. This is critical for those of us who manage Windows systems in addition to Linux systems via Ansible: Kerberos authentication against a WinRM HTTPS listener will not work if Ansible targets the IP address.
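The plugin has since grown options along these lines. A sketch of the kind of configuration this request asks for, assuming the current vmware_vm_inventory option names (hostnames, properties, compose) — the vCenter hostname is a placeholder:

```yaml
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com
with_tags: false
hostnames:
  # use the bare VM name, without the appended UUID
  - config.name
properties:
  - config.name
  - guest.hostName
  - guest.ipAddress
compose:
  # target the guest FQDN instead of the IP, e.g. for WinRM/Kerberos
  ansible_host: guest.hostName
```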

vmware_vm_shell does not successfully run nmcli commands

From @kobaltfox on Mar 24, 2020 15:15

SUMMARY

In a playbook that clones a VM guest, the final tasks use vmware_vm_shell to run commands that update the DNS servers, add a search domain, and take the interface down and back up so NetworkManager updates resolv.conf. The commands work fine when run directly on the target VM guest. However, when the tasks run, no failures are returned, but no changes are made on the guest.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_vm_shell

ANSIBLE VERSION
ansible 2.9.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tjwork/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Oct 11 2019, 15:04:54) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
CONFIGURATION
DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/inventory']
OS / ENVIRONMENT

Target guest is RHEL 8

STEPS TO REPRODUCE

Adding the tasks to a playbook role. This is run using Ansible Tower.

# Fix network configuration - DNS
- name: Add DNS to network interface
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter_name }}"
    folder: "/LinuxAdmin/ansible_RHv7/"
    vm_id: "{{ env_name }}"
    vm_username: 'root'
    vm_password: 'supersecretpassword'
    vm_shell: "/bin/bash"
    vm_shell_args: "nmcli con mod 'ens192' ipv4.dns '192.168.10.12 192.168.10.15'"
    vm_shell_env:
      - "PATH=/bin/bash"
    vm_shell_cwd: "/root"
    validate_certs: False
  register: shell_command_dns
  tags:
    - dns_refreash

# Pause for 30 seconds to allow command to execute
- pause:
    seconds: 30
  tags:
    - dns_refreash

  # Fix network configuration - Search Domain
- name: Add search domain to network interface
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter_name }}"
    folder: "/LinuxAdmin/ansible_RHv7/"
    vm_id: "{{ env_name }}"
    vm_username: 'root'
    vm_password: 'supersecretpassword'
    vm_shell: "/bin/bash"
    vm_shell_args: "nmcli con mod 'ens192' ipv4.dns-search 'domain.com'"
    vm_shell_env:
      - "PATH=/bin/bash"
    vm_shell_cwd: "/root"
    validate_certs: False
  register: shell_command_search
  tags:
    - dns_refreash

# Pause for 30 seconds to allow command to execute
- pause:
    seconds: 30
  tags:
    - dns_refreash

  # Fix network configuration - shut down interface
- name: Add search domain to network interface
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter_name }}"
    folder: "/LinuxAdmin/ansible_RHv7/"
    vm_id: "{{ env_name }}"
    vm_username: 'root'
    vm_password: 'supersecretpassword'
    vm_shell: "/bin/bash"
    vm_shell_args: "nmcli con down 'ens192'"
    vm_shell_env:
      - "PATH=/bin/bash"
    vm_shell_cwd: "/root"
    validate_certs: False
  register: shell_command_down
  tags:
    - dns_refreash

# Pause for 30 seconds to allow command to execute
- pause:
    seconds: 30
  tags:
    - dns_refreash

# Fix network configuration - power up interface
- name: Add search domain to network interface
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter_name }}"
    folder: "/LinuxAdmin/ansible_RHv7/"
    vm_id: "{{ env_name }}"
    vm_username: 'root'
    vm_password: 'supersecretpassword'
    vm_shell: "/bin/bash"
    vm_shell_args: "nmcli con up 'ens192'"
    vm_shell_env:
      - "PATH=/bin/bash"
    vm_shell_cwd: "/root"
    validate_certs: False
  register: shell_command_up
  tags:
    - dns_refreash
EXPECTED RESULTS

After running the tasks, resolv.conf should contain the search domain and nameservers.

ACTUAL RESULTS

The resolv.conf is empty:

SSH password: 
Vault password: 
PLAY [all] *********************************************************************
TASK [install_rhelvm : set_fact] ***********************************************
ok: [192.168.22.29]
TASK [install_rhelvm : Get VM "server-an11" uuid] ******************************
ok: [192.168.22.29]
TASK [install_rhelvm : Add DNS to network interface] ***************************
changed: [192.168.22.29]
TASK [install_rhelvm : pause] **************************************************
Pausing for 30 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [192.168.22.29]
TASK [install_rhelvm : Add search domain to network interface] *****************
changed: [192.168.22.29]
TASK [install_rhelvm : pause] **************************************************
Pausing for 30 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [192.168.22.29]
TASK [install_rhelvm : Add search domain to network interface] *****************
changed: [192.168.22.29]
TASK [install_rhelvm : pause] **************************************************
Pausing for 30 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [192.168.22.29]
TASK [install_rhelvm : Add search domain to network interface] *****************
changed: [192.168.22.29]
PLAY RECAP *********************************************************************
192.168.22.29               : ok=9    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
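One thing worth ruling out (a guess, not a confirmed root cause): vm_shell passes vm_shell_args to the program verbatim, and /bin/bash only executes a command string when it is preceded by -c; without it, bash treats the first argument as a script path. A sketch of the first task in that form, with values taken from the report:

```yaml
- name: Add DNS to network interface
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter_name }}"
    folder: "/LinuxAdmin/ansible_RHv7/"
    vm_id: "{{ env_name }}"
    vm_username: 'root'
    vm_password: 'supersecretpassword'
    vm_shell: "/bin/bash"
    vm_shell_args: "-c \"nmcli con mod 'ens192' ipv4.dns '192.168.10.12 192.168.10.15'\""
    validate_certs: False
  register: shell_command_dns
```

Registering the result and inspecting its output (where available) would also help distinguish "command never ran" from "command ran and failed".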

Copied from original issue: ansible/ansible#68437

vmware_guest: virt_based_security switch does not work as expected

From @benjaminrein on Apr 01, 2020 12:34

SUMMARY

virt_based_security: False does not work as expected, because it enables Virtualization Based Security (VBS). Disabling VBS on an already present VM is not possible.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.9.3
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
-
OS / ENVIRONMENT

RHEL 7.7, vCenter 6.7 U1

STEPS TO REPRODUCE

Create a new VM configured as below (first error). Execute the task a second time, when the VM is already present (second error).

  - name: create a virtual machine on given ESXi hostname
    vmware_guest:
      hostname:       '{{ vcenter_hostname }}'
      username:       '{{ vcenter_username }}'
      password:       '{{ vcenter_password }}'
      datacenter:     '{{ vcenter_datacenter }}'
      folder:         '{{ vmware_settings.folder }}'
      resource_pool:  '{{ vmware_settings.resource_pool }}'
      name:           'TestVM'
      guest_id:       'windows9Server64Guest'
      cluster:        '{{ vmware_settings.cluster }}'
      disk:
      - size_gb:                    "1"
        type:                       "thin"
        datastore:                  "Linux-Windows"
      hardware:       
        version:                    "14"
        boot_firmware:              "efi"
        memory_mb:                  "1024"
        num_cpus:                   "1"
        virt_based_security:        "False"      
      state:          'present'
      force:          true
      validate_certs: 'no'
EXPECTED RESULTS

When creating a new VM, VBS should be disabled.
When the VM is already present with VBS enabled, VBS should be switched off.

ACTUAL RESULTS

If the switch virt_based_security is present in the hardware configuration, VBS is enabled regardless of the switch's value.
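The behaviour described is consistent with a presence check rather than a value check somewhere in the option handling. A generic illustration of the difference — hypothetical code, not the module's actual source:

```python
def should_enable_vbs_buggy(hardware):
    # Enables the feature whenever the key exists, even when set to False.
    return 'virt_based_security' in hardware

def should_enable_vbs_fixed(hardware):
    # Respects the actual value of the switch.
    return bool(hardware.get('virt_based_security'))
```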


Copied from original issue: ansible/ansible#68614

vmware_guest_network fails to support multiple distributed virtual switches with identical port group names

SUMMARY

I have a system with two datacenters, each with its own set of DVSes. The DVSes have different names, but the port groups defined in each are identical.

i.e.
Datacenter1 has DVSwitch0 with portgroup: portgroup1
Datacenter2 has Other_DVSwitch0 with portgroup: portgroup1

If I pass vmware_guest_network the network name portgroup1 and the subvar dvswitch_name: Other_DVSwitch0, it attempts to bind the virtual NIC to Datacenter1's DVSwitch0.

This is using the version of vmware_guest_network available here: ansible/ansible#65994
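The fix this report implies is to filter candidate port groups by their parent switch rather than taking the first name match. A sketch with plain dicts standing in for vSphere objects — the function and field names are hypothetical:

```python
def pick_portgroup(portgroups, pg_name, dvswitch_name=None):
    # Keep only port groups whose name matches; if a dvswitch_name was
    # given, also require the parent switch to match, so identically
    # named port groups on other switches are skipped.
    matches = [pg for pg in portgroups if pg["name"] == pg_name]
    if dvswitch_name is not None:
        matches = [pg for pg in matches if pg["dvswitch"] == dvswitch_name]
    return matches[0] if matches else None
```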

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest_network

ANSIBLE VERSION
ansible 2.9.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
ANSIBLE_FORCE_COLOR(/etc/ansible/ansible.cfg) = True
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = [u'profile_tasks']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible.log
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = true
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

Linux CentOS 7 docker container
VMware Version 6.5

STEPS TO REPRODUCE

In VMware:

  • create 2 datacenters
  • create a Distributed Virtual Switch in each named differently
  • create portgroups under each DVS with the exact same portgroup name
  • attempt to use vmware_guest_network to assign a VM a nic connection to the portgroup on the DVS in the second datacenter
EXPECTED RESULTS

The VM would get the nic assigned to the portgroup on the correct DVS

ACTUAL RESULTS

The code attempts to assign the nic to the portgroup on the first datacenter's DVS and fails


vmware_guest_network: fails with a backtrace if static MAC starts with '00:50:56'

SUMMARY

hi @pgbidkar and @Tomorrow9.

If I use the following task:

  - name: add new network adapters to virtual machine
    vmware_guest_network:
      validate_certs: False
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      name: test_vm1
      networks:
        - name: "VM Network"
          state: new
          device_type: e1000e
          manual_mac: "00:50:56:58:59:60"
        - name: "VM Network"
          state: new
          device_type: vmxnet3
          manual_mac: "00:50:56:58:59:61"
    register: add_netadapter

I get the following error: add_nic.txt

If I remove the manual_mac keys, everything works fine.
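For context: 00:50:56 is an OUI VMware reserves, and statically assigned MACs in that prefix are (to my understanding) only accepted up to 00:50:56:3F:FF:FF — both MACs above exceed that range. A small check along those lines, as an illustration rather than the module's actual validation logic:

```python
def is_valid_static_vmware_mac(mac):
    # In the VMware-reserved 00:50:56 prefix, manually assigned MACs
    # must keep the fourth octet at or below 0x3F.
    octets = [int(part, 16) for part in mac.split(":")]
    if octets[:3] == [0x00, 0x50, 0x56]:
        return octets[3] <= 0x3F
    # MACs outside the reserved prefix are not restricted by this rule.
    return True
```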

I create the Fedora 30 VM with:

  - name: Create VMs
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      datacenter: "{{ dc1 }}"
      validate_certs: no
      folder: '/DC0/vm/F0'
      name: test_vm1
      state: poweredon
      guest_id: centos7_64Guest
      disk:
      - size_gb: 1
        type: thin
        datastore: '{{ ds2 }}'
      hardware:
        version: latest
        memory_mb: 1024
        num_cpus: 1
        scsi: paravirtual
      cdrom:
        type: iso
        iso_path: "[{{ ds1 }}] fedora.iso"
      networks:
      - name: VM Network

I've tried several hardware.version / guest_id combinations, without any difference. My lab is set up as described here: https://docs.ansible.com/ansible/devel/dev_guide/platforms/vmware_guidelines.html

My vcenter set-up:

$ govc ls '/**/**/**'
/DC0/vm/F0/test_vm1
/DC0/network/VM Network
/DC0/host/DC0_C0/Resources
/DC0/host/DC0_C0/esxi1.test
/DC0/host/DC0_C0/esxi2.test
/DC0/datastore/LocalDS_0
/DC0/datastore/LocalDS_1
/DC0/datastore/datastore1 (1)
/DC0/datastore/datastore1
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest_network

ANSIBLE VERSION
devel

vmware_tag_manager module fails

From @dexterc on Apr 03, 2020 04:36


SUMMARY

I have Ansible Tower v3.5.4 and vCenter v6.7, and a playbook to create and tag a VM using the vmware_tag_manager module. The vSphere Automation SDK for Python is installed on the Ansible machine, along with other Python libraries. The playbook fails with the error described below.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_tag_manager

ANSIBLE VERSION

ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
[root@cfmdcsvlinf09 a111322]# ansible-config dump --only-changed
OS / ENVIRONMENT

OS version - RHEL 7.7,

Installed the vSphere Automation SDK for Python in the venv /var/lib/awx/ansible.

pip freeze output:
[a111322@cfmdcsvlinf09 ~]$ /var/lib/awx/venv/ansible/bin/pip freeze
adal==1.2.1
ansible==2.9.6
apache-libcloud==2.4.1.dev0
appdirs==1.4.3
applicationinsights==0.11.1
argcomplete==1.9.4
asn1crypto==0.24.0
azure-cli-core==2.0.35
azure-cli-nspkg==3.0.2
azure-common==1.1.11
azure-graphrbac==0.40.0
azure-keyvault==1.0.0a1
azure-mgmt-authorization==0.51.1
azure-mgmt-batch==5.0.1
azure-mgmt-cdn==3.0.0
azure-mgmt-compute==4.4.0
azure-mgmt-containerinstance==1.4.0
azure-mgmt-containerregistry==2.0.0
azure-mgmt-containerservice==4.4.0
azure-mgmt-cosmosdb==0.5.2
azure-mgmt-devtestlabs==3.0.0
azure-mgmt-dns==2.1.0
azure-mgmt-hdinsight==0.1.0
azure-mgmt-keyvault==1.1.0
azure-mgmt-loganalytics==0.2.0
azure-mgmt-marketplaceordering==0.1.0
azure-mgmt-monitor==0.5.2
azure-mgmt-network==2.3.0
azure-mgmt-nspkg==2.0.0
azure-mgmt-rdbms==1.4.1
azure-mgmt-redis==5.0.0
azure-mgmt-resource==2.1.0
azure-mgmt-servicebus==0.5.3
azure-mgmt-sql==0.10.0
azure-mgmt-storage==3.1.0
azure-mgmt-trafficmanager==0.50.0
azure-mgmt-web==0.41.0
azure-nspkg==2.0.0
azure-storage==0.35.1
Babel==0.9.6
backports.ssl-match-hostname==3.5.0.1
bcrypt==3.1.4
boto==2.47.0
boto3==1.6.2
botocore==1.9.3
cachetools==3.0.0
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
colorama==0.3.9
configobj==4.7.2
cryptography==2.6.1
decorator==4.2.1
deprecation==2.0
dnspython==1.12.0
docutils==0.14
dogpile.cache==0.6.5
enum34==1.1.6
ethtool==0.8
futures==3.2.0
google-auth==1.6.2
httplib2==0.9.2
humanfriendly==4.8
idna==2.6
iniparse==0.4
insights-client==3.0.13
ipaddr==2.1.11
ipaddress==1.0.19
iso8601==0.1.12
isodate==0.6.0
Jinja2==2.10.1
jmespath==0.9.3
jsonpatch==1.21
jsonpointer==2.0
keystoneauth1==3.11.2
kitchen==1.1.1
knack==0.3.3
langtable==0.0.31
lxml==4.1.1
M2Crypto==0.21.1
Magic-file-extensions==0.2
MarkupSafe==1.1.1
meld3==0.6.10
mercurial==2.6.2
monotonic==1.4
msrest==0.6.1
msrestazure==0.5.0
munch==2.2.0
ncclient==0.6.3
ndg-httpsclient==0.5.1
netaddr==0.7.19
netifaces==0.10.6
nsx-policy-python-sdk==2.5.1.0.1.15419398
nsx-python-sdk==2.5.1.0.1.15419398
nsx-vmc-aws-integration-python-sdk==2.5.1.0.1.15419398
nsx-vmc-policy-python-sdk==2.5.1.0.1.15419398
ntlm-auth==1.0.6
oauthlib==2.0.6
openstacksdk==0.23.0
os-service-types==1.2.0
ovirt-engine-sdk-python==4.3.0
packaging==17.1
paramiko==2.4.2
pbr==3.1.1
pciutils==1.7.3
perf==0.1
pexpect==4.6.0
ply==3.4
psutil==5.4.3
psycopg2==2.7.5
ptyprocess==0.5.2
pulp-common==2.18.1.2
pyasn1==0.4.2
pyasn1-modules==0.2.3
pycparser==2.18
pycrypto==2.6.1
pycurl==7.19.0
Pygments==2.2.0
pygobject==3.22.0
pygpgme==0.3
pyinotify==0.9.4
PyJWT==1.6.0
pykerberos==1.2.1
pyliblzma==0.5.3
PyNaCl==1.2.1
pyOpenSSL==17.5.0
pyparsing==2.2.0
python-augeas==0.5.0
python-dateutil==2.6.1
python-dmidecode==3.10.13
python-keyczar==0.71rc0
python-linux-procfs==0.4.9
pyudev==0.15
pyvmomi==6.5
pywinrm==0.3.0
pyxattr==0.5.1
PyYAML==5.1
requests==2.20.0
requests-credssp==1.0.2
requests-kerberos==0.12.0
requests-ntlm==1.1.0
requests-oauthlib==0.8.0
requestsexceptions==1.4.0
rhnlib==2.5.65
s3transfer==0.1.13
schedutils==0.4
selectors2==2.0.1
six==1.12.0
slip==0.4.0
slip.dbus==0.4.0
stevedore==1.28.0
subscription-manager==1.24.13
suds==0.4
supervisor==3.1.4
syspurpose==1.24.13
tabulate==0.7.7
typing==3.7.4.1
urlgrabber==3.10
urllib3==1.24.3
vapi-client-bindings==3.2.0
vapi-common-client==2.14.0
vapi-runtime==2.14.0
virtualenv==15.1.0
vmc-client-bindings==1.24.0
vmc-draas-client-bindings==1.4.0
vSphere-Automation-SDK==1.23.0
xmltodict==0.11.0
yum-langpacks==0.4.2
yum-metadata-parser==1.1.4


You are using pip version 9.0.1, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
STEPS TO REPRODUCE

playbook extra variables:

ansible_python_interpreter: /var/lib/awx/venv/ansible/bin/python

environment

/var/lib/awx/venv/ansible

- name: Set up a new VM Playbook
  hosts: localhost

  tasks:
    - name: Set up a new VM
      vmware_guest:
        validate_certs: false
        name: "{{ var_hostname }}"
        datacenter: "{{ vmware_dc }}"
        folder: "{{ vmware_vm_folder }}"
        template: "{{ vmware_template }}"
        cluster: "{{ vmware_cluster }}"
        networks:
          - name: "{{ vmware_portgroup }}"
        # resource_pool: "{{ vmware_resource_pool }}"
        state: poweredoff
      register: vm_facts
      delegate_to: localhost

    # this adds tagging to a virtual machine; example tags are for backup, location, etc.
    - name: Add tags to virtual machine
      vmware_tag_manager:
        hostname: "{{ vmware_vcenter }}"
        username: "{{ vsphere_copy_username }}"
        password: "{{ vsphere_copy_password }}"
        validate_certs: no
        tag_names:
          - Backup:"{{ var_backuppolicy }}"
        object_name: "{{ var_hostname }}"
        object_type: VirtualMachine
        state: add
      delegate_to: localhost
EXPECTED RESULTS

The playbook should create and tag the VM; instead it fails while running the vmware_tag_manager module.

ACTUAL RESULTS

Error:

{
    "exception": "Traceback (most recent call last):\n  File \"/tmp/ansible_vmware_tag_manager_payload_qYW_IW/ansible_vmware_tag_manager_payload.zip/ansible/module_utils/vmware_rest_client.py\", line 30, in <module>\n    from com.vmware.vapi.std_client import DynamicID\nImportError: No module named com.vmware.vapi.std_client\n",
    "_ansible_no_log": false,
    "_ansible_delegated_vars": {
        "ansible_host": "localhost"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "username": "[email protected]",
            "object_type": "VirtualMachine",
            "protocol": "https",
            "hostname": "cfmdcavlinf02",
            "object_name": "cfmdcsvltst01",
            "state": "add",
            "tag_names": [
                "Backup:\"13M-Repl\""
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "validate_certs": false
        }
    },
    "msg": "Failed to import the required Python library (vSphere Automation SDK) on cfmdcsvlinf09.customfleet.org's Python /usr/bin/python2. See https://code.vmware.com/web/sdk/65/vsphere-automation-python for more info. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"
}
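The msg and the "/usr/bin/python2" in the traceback suggest the task ran under the system interpreter rather than the venv where the SDK lives. One way to pin it at the task level (a sketch, with the venv path and parameters taken from the report):

```yaml
- name: Add tags to virtual machine
  vmware_tag_manager:
    hostname: "{{ vmware_vcenter }}"
    username: "{{ vsphere_copy_username }}"
    password: "{{ vsphere_copy_password }}"
    validate_certs: no
    tag_names:
      - Backup:"{{ var_backuppolicy }}"
    object_name: "{{ var_hostname }}"
    object_type: VirtualMachine
    state: add
  delegate_to: localhost
  vars:
    ansible_python_interpreter: /var/lib/awx/venv/ansible/bin/python
```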


ansible-playbook 2.9.6

  config file = /etc/ansible/ansible.cfg

  configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']

  ansible python module location = /usr/lib/python2.7/site-packages/ansible

  executable location = /usr/bin/ansible-playbook

  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Using /etc/ansible/ansible.cfg as config file

SSH password:

setting up inventory plugins

host_list declined parsing /var/lib/awx/job_status/awx_1836_4de7on2m/tmpinahao0l as it did not pass its verify_file() method

Parsed /var/lib/awx/job_status/awx_1836_4de7on2m/tmpinahao0l inventory source with script plugin

[WARNING]: provided hosts list is empty, only localhost is available. Note that

the implicit localhost does not match 'all'

Loading callback plugin awx_display of type stdout, v2.0 from /var/lib/awx/venv/awx/lib/python3.6/site-packages/ansible_runner/callbacks/awx_display.py

[

[

 

PLAYBOOK: provision-stage1-vmware.yml ******************************************

Positional arguments: provision-stage1-vmware.yml

ask_pass: True

remote_user: root

become_method: sudo

inventory: (u'/var/lib/awx/job_status/awx_1836_4de7on2m/tmpinahao0l',)

forks: 5

tags: (u'all',)

extra_vars: (u'@/var/lib/awx/job_status/awx_1836_4de7on2m/tmphp1ooyt7', u'@/var/lib/awx/job_status/awx_1836_4de7on2m/tmp_orong6p', u'@/var/lib/awx/job_status/awx_1836_4de7on2m/tmptrnnzabr', u'@/var/lib/awx/job_status/awx_1836_4de7on2m/env/extravars')

verbosity: 4

connection: smart

timeout: 10

[

1 plays in provision-stage1-vmware.yml

[

 

PLAY [Set up a new VM Playbook] ************************************************

 

TASK [Gathering Facts] *********************************************************

task path: /var/lib/awx/projects/_10__proj_provisioning/provision-stage1-vmware.yml:2

 

<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: awx

<127.0.0.1> EXEC /bin/sh -c 'echo ~awx && sleep 0'

<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936 `" && echo ansible-tmp-1585724109.45-92361891228936="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936 `" ) && sleep 0'

Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py

<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-55622qHt7l_/tmpaSve3D TO /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936/AnsiballZ_setup.py

<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936/ /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936/AnsiballZ_setup.py && sleep 0'

<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936/AnsiballZ_setup.py && sleep 0'

<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1585724109.45-92361891228936/ > /dev/null 2>&1 && sleep 0'

ok: [localhost]

META: ran handlers

 

TASK [Set up a new VM] *********************************************************

task path: /var/lib/awx/projects/_10__proj_provisioning/provision-stage1-vmware.yml:35

<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx

 

<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'

<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381 `" && echo ansible-tmp-1585724111.71-55476320384381="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381 `" ) && sleep 0'

Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest.py

<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-55622qHt7l_/tmpetUHBb TO /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381/AnsiballZ_vmware_guest.py

<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381/ /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381/AnsiballZ_vmware_guest.py && sleep 0'

<localhost> EXEC /bin/sh -c '/usr/bin/python2 /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381/AnsiballZ_vmware_guest.py && sleep 0'

<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1585724111.71-55476320384381/ > /dev/null 2>&1 && sleep 0'

changed: [localhost -> localhost] => {

    "changed": true,

    "instance": {

        "annotation": "",

        "current_snapshot": null,

        "customvalues": {},

        "guest_consolidation_needed": false,

        "guest_question": null,

        "guest_tools_status": "guestToolsNotRunning",

        "guest_tools_version": "0",

        "hw_cluster": "MDC",

        "hw_cores_per_socket": 1,

        "hw_datastores": [

            "MDC_tmp_vSAN_01"

        ],

        "hw_esxi_host": "cfmdcspesx06.customfleet.org",

        "hw_eth0": {

            "addresstype": "assigned",

            "ipaddresses": null,

            "label": "Network adapter 1",

            "macaddress": "00:50:56:b8:a4:b9",

            "macaddress_dash": "00-50-56-b8-a4-b9",

            "portgroup_key": null,

            "portgroup_portkey": null,

            "summary": "MDC-Core-Staging-Presentation-3009"

        },

        "hw_files": [

            "[MDC_tmp_vSAN_01] d13a845e-a851-710a-82be-0cc47ae3e96a/cfmdcsvltst01.vmx",

            "[MDC_tmp_vSAN_01] d13a845e-a851-710a-82be-0cc47ae3e96a/cfmdcsvltst01.nvram",

            "[MDC_tmp_vSAN_01] d13a845e-a851-710a-82be-0cc47ae3e96a/cfmdcsvltst01.vmsd",

            "[MDC_tmp_vSAN_01] d13a845e-a851-710a-82be-0cc47ae3e96a/cfmdcsvltst01.vmdk"

        ],

        "hw_folder": "/CustomFleet/vm",

        "hw_guest_full_name": null,

        "hw_guest_ha_state": null,

        "hw_guest_id": null,

        "hw_interfaces": [

            "eth0"

        ],

        "hw_is_template": false,

        "hw_memtotal_mb": 4096,

        "hw_name": "cfmdcsvltst01",

        "hw_power_status": "poweredOff",

        "hw_processor_count": 2,

        "hw_product_uuid": "4238a54f-1980-bfa1-5327-97f8c0e9652a",

        "hw_version": "vmx-15",

        "instance_uuid": "503893dd-0eee-6dce-5947-8b9220dbac34",

        "ipv4": null,

        "ipv6": null,

        "module_hw": true,

        "moid": "vm-8505",

        "snapshots": [],

        "vimref": "vim.VirtualMachine:vm-8505",

        "vnc": {}

    },

    "invocation": {

        "module_args": {

            "annotation": null,

            "cdrom": [],

            "cluster": "MDC",

            "convert": null,

            "customization": {},

            "customization_spec": null,

            "customvalues": [],

            "datacenter": "CustomFleet",

            "datastore": null,

            "disk": [],

            "esxi_hostname": null,

            "folder": "/",

            "force": false,

            "guest_id": null,

            "hardware": {},

            "hostname": "cfmdcavlinf02",

            "is_template": false,

            "linked_clone": false,

            "name": "cfmdcsvltst01",

            "name_match": "first",

            "networks": [

                {

                    "name": "MDC-Core-Staging-Presentation-3009",

                    "type": "dhcp"

                }

            ],

            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",

            "port": 443,

            "proxy_host": null,

            "proxy_port": null,

            "resource_pool": null,

            "snapshot_src": null,

            "state": "poweredoff",

            "state_change_timeout": 0,

            "template": "Template-RHEL-Automation-MDC-vSAN",

            "use_instance_uuid": false,

            "username": "[email protected]",

            "uuid": null,

            "validate_certs": false,

            "vapp_properties": [],

            "wait_for_customization": false,

            "wait_for_ip_address": false

        }

    }

}

 

TASK [Add tags to virtual machine] *********************************************

task path: /var/lib/awx/projects/_10__proj_provisioning/provision-stage1-vmware.yml:51

 

<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx

<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'

<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988 `" && echo ansible-tmp-1585724117.81-62965011526988="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988 `" ) && sleep 0'

Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_tag_manager.py

<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-55622qHt7l_/tmpjz5VjK TO /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988/AnsiballZ_vmware_tag_manager.py

<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988/ /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988/AnsiballZ_vmware_tag_manager.py && sleep 0'

<localhost> EXEC /bin/sh -c '/usr/bin/python2 /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988/AnsiballZ_vmware_tag_manager.py && sleep 0'

<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1585724117.81-62965011526988/ > /dev/null 2>&1 && sleep 0'

The full traceback is:

Traceback (most recent call last):

  File "/tmp/ansible_vmware_tag_manager_payload_nxwReX/ansible_vmware_tag_manager_payload.zip/ansible/module_utils/vmware_rest_client.py", line 30, in <module>

    from com.vmware.vapi.std_client import DynamicID

ImportError: No module named com.vmware.vapi.std_client

fatal: [localhost -> localhost]: FAILED! => {

    "changed": false,

    "invocation": {

        "module_args": {

            "hostname": "cfmdcavlinf02",

            "object_name": "cfmdcsvltst01",

            "object_type": "VirtualMachine",

            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",

            "protocol": "https",

            "state": "add",

            "tag_names": [

                "Backup:\\"13M-Repl\\""

            ],

            "username": "[email protected]",

            "validate_certs": false

        }

    },

    "msg": "Failed to import the required Python library (vSphere Automation SDK) on cfmdcsvlinf09.customfleet.org's Python /usr/bin/python2. See https://code.vmware.com/web/sdk/65/vsphere-automation-python for more info. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"

}

 

PLAY RECAP *********************************************************************

localhost                  : ok=2    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  

 

 

Copied from original issue: ansible/ansible#68653
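The failure above boils down to the module running under an interpreter (here /usr/bin/python2) that cannot import the vSphere Automation SDK. A minimal diagnostic sketch (a hypothetical helper, not part of the collection) that reproduces the module's import check:

```python
import importlib
import sys


def check_vsphere_sdk(module_name="com.vmware.vapi.std_client"):
    """Report whether the vSphere Automation SDK is importable by the
    interpreter currently running this code -- the same import that
    vmware_rest_client performs before failing with the message above."""
    try:
        importlib.import_module(module_name)
        return "ok"
    except ImportError:
        return "vSphere Automation SDK missing for %s" % sys.executable
```

If this reports the SDK as missing for the interpreter AWX uses, either install the SDK into that interpreter or point `ansible_python_interpreter` at one that has it.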

vmware_content_deploy_template doesn't support creating VM from OVF templates

SUMMARY

vmware_content_deploy_template doesn't support creating VM from OVF templates

ISSUE TYPE
  • Feature Idea
ADDITIONAL INFORMATION
$ ansible --version
ansible 2.9.6.post0
  config file = None
  configured module search path = [u'/home/lgonchar/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/lgonchar/ansible/lib/ansible
  executable location = /home/lgonchar/ansible/bin/ansible
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

$ ansible-playbook ../some_repo/ansible/playbooks/sandbox/create_vm_vcenter.yml

....

id='com.vmware.vdcs.vmtx-main.invalid_item_type',
default_message="The library item 'Packer_centos7' (ID: 4f31f87a-9ebd-4b64-97a3-2ed77ca87fcb) has type 'ovf', but needs to be of type 'vm-template'.",

vmware_guest: Not idempotent when state: absent

SUMMARY

Removes a VM in another datacenter and/or cluster when it is not found in the specified datacenter and cluster.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.8.5
  config file = /home/blacke/.ansible.cfg
  configured module search path = ['/home/blacke/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/blacke/.local/lib/python3.8/site-packages/ansible
  executable location = /home/blacke/.local/bin/ansible
  python version = 3.8.2 (default, Feb 26 2020, 22:21:03) [GCC 9.2.1 20200130]

ansible 2.9.6
  config file = /home/blacke/.ansible.cfg
  configured module search path = ['/home/blacke/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/blacke/python-envs/ansible2.9/lib/python3.8/site-packages/ansible
  executable location = ./python-envs/ansible2.9/bin/ansible
  python version = 3.8.2 (default, Feb 26 2020, 22:21:03) [GCC 9.2.1 20200130]
CONFIGURATION
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/blacke/.local/lib/python3.6/site-packages/ansible_mitogen/plugins/action', '/home/blacke/.ansible/plugins/action', '/usr/share/ansible/plugins/action']
DEFAULT_CALLBACK_WHITELIST(env: ANSIBLE_CALLBACK_WHITELIST) = ['timer', 'profile_tasks']
DEFAULT_CONNECTION_PLUGIN_PATH(env: ANSIBLE_CONNECTION_PLUGINS) = ['/home/blacke/.local/lib/python3.6/site-packages/ansible_mitogen/plugins/connection', '/home/blacke/.ansible/plugins/connection', '/usr/share/ansible/plugins/connection']
DEFAULT_STRATEGY_PLUGIN_PATH(env: ANSIBLE_STRATEGY_PLUGINS) = ['/home/blacke/.local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy', '/home/blacke/.ansible/plugins/strategy', '/usr/share/ansible/plugins/strategy']
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = ['host_list', 'script', 'yaml', 'ini', 'auto', 'vmware_vm_inventory', 'netbox']
OS / ENVIRONMENT
STEPS TO REPRODUCE

Create a VM and then remove it, but specify a different (or even non-existent) datacenter, cluster, and/or folder.

- hosts: test-ubuntu18
  connection: local
  gather_facts: no

  tasks:
    - name: Copy VM
      vmware_guest:
        datacenter: "{{ vcenter_datacenter }}"
        cluster: "{{ vcenter_cluster }}"
        folder: "{{ vcenter_datacenter }}/vm/{{ vcenter_folder }}"
        name: "{{ guest_name }}"
        state: present
        template: "{{ vcenter_template }}"

    - name: Remove from another place
      vmware_guest:
        datacenter: "{{ vcenter_datacenter }}-non"
        cluster: "{{ vcenter_cluster }}-non"
        folder: "{{ vcenter_datacenter }}-non/vm/{{ vcenter_folder }}-non"
        name: "{{ guest_name }}"
        state: absent
EXPECTED RESULTS

The removal task should report OK (or potentially fail because the location is inaccessible), since there is no VM to remove.

ACTUAL RESULTS

The VM in its original location (not the one specified) is removed.

Command output
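Until the module scopes deletion to the given datacenter/cluster, one hedged workaround (a sketch, using the existing vmware_guest_info module and the variable names from the reproduction above; connection parameters omitted as in the reproduction) is to look the VM up first and only remove it when the lookup succeeds in the expected location:

```yaml
- name: Check whether the VM exists in the given datacenter
  vmware_guest_info:
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_datacenter }}/vm/{{ vcenter_folder }}"
    name: "{{ guest_name }}"
  register: vm_lookup
  ignore_errors: yes

- name: Remove the VM only if it was found where expected
  vmware_guest:
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_datacenter }}/vm/{{ vcenter_folder }}"
    name: "{{ guest_name }}"
    state: absent
  when: vm_lookup is not failed
```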

VM ip is not set using ansible vmware_guest

SUMMARY

I created a VM using Ansible and installed RHEL 7.6. I am trying to set the IPv4 address of the VM, but it is not getting set, even though the playbook runs successfully.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest networks

ANSIBLE VERSION

ansible 2.9.0
config file = path/ansible/ansible.cfg
configured module search path = ['path/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = path/ansible/ansible-2.9.0/lib/ansible
executable location = path/bin/ansible
python version = 3.6.5 (default, Mar 31 2018, 19:45:04) [GCC]

CONFIGURATION
OS / ENVIRONMENT

RHEL 7.6 on vm

STEPS TO REPRODUCE
- name: Create a virtual machine on the ESXI host
  vmware_guest:
        hostname: "{{ vcenter_ip }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ datacenter }}"
        folder: /Datacenter/vm/
        #name: "my_vm_{{ item }}"
        name: test_vm_00
        guest_id: rhel7_64Guest
        state: poweredon
        datastore: "{{ datastore }}"
        esxi_hostname: "{{ esxhost }}"
        disk:
        - size_gb: 10
          type: thin
          datastore: "{{ datastore }}"
        hardware:
          memory_mb: 1024
          num_cpus: 2
          scsi: paravirtual
        cdrom:
        - type: iso
          iso_path: "{{ OS_ISO_path }}"
          controller_type: ide
          controller_number: 0
          unit_number: 0
          state: present
        - type: iso
          iso_path: "{{ kickstart_iso_path }}"
          controller_type: ide
          controller_number: 0
          unit_number: 1
          state: present
        networks:
        - name: VM Network
          ip: "{{ ip_address }}"
          netmask: "{{ netmask }}"
          dns_servers:
          - ****
          start_connected: yes
        wait_for_ip_address: yes
        customization:
          dns_servers:
          - ****
          domain: ****
          #wait_for_customization: yes
        wait_for_ip_address: yes
EXPECTED RESULTS

The IP should be set on the VM in vSphere.

ACTUAL RESULTS
changed: [localhost] => {
    "changed": true,
    "instance": {
        "annotation": "",
        "current_snapshot": null,
        "customvalues": {},
        "guest_consolidation_needed": false,
        "guest_question": null,
        "guest_tools_status": "guestToolsNotRunning",
        "guest_tools_version": "0",
        "hw_cluster": null,
        "hw_cores_per_socket": 1,
        "hw_datastores": [
            "datastore1"
        ],
        "hw_esxi_host": "****",
        "hw_eth0": {
            "addresstype": "assigned",
            "ipaddresses": null,
            "label": "Network adapter 1",
            "macaddress": "00:50:56:ad:ce:83",
            "macaddress_dash": "00-50-56-ad-ce-83",
            "portgroup_key": null,
            "portgroup_portkey": null,
            "summary": "VM Network"
        },
        "hw_files": [
            "[datastore1] test_vm_00/test_vm_00.vmx",
            "[datastore1] test_vm_00/test_vm_00.vmsd",
            "[datastore1] test_vm_00/test_vm_00.nvram",
            "[datastore1] test_vm_00/test_vm_00.vmdk"
        ],
        "hw_folder": "/Datacenter/vm",
        "hw_guest_full_name": null,
        "hw_guest_ha_state": null,
        "hw_guest_id": null,
        "hw_interfaces": [
            "eth0"
        ],
        "hw_is_template": false,
        "hw_memtotal_mb": 1024,
        "hw_name": "test_vm_00",
        "hw_power_status": "poweredOn",
        "hw_processor_count": 2,
        "hw_product_uuid": "422dafa8-c0dc-e50e-e07f-90c15c622118",
        "hw_version": "vmx-14",
        "instance_uuid": "502d3587-7f01-4fc7-ee81-87c98707428d",
        "ipv4": null,
        "ipv6": null,
        "module_hw": true,
        "moid": "vm-102",
        "snapshots": [],
        "vimref": "vim.VirtualMachine:vm-102",
        "vnc": {}
    },
    "invocation": {
        "module_args": {
            "annotation": null,
            "cdrom": [
                {
                    "controller_number": 0,
                    "controller_type": "ide",
                    "iso_path": "***",
                    "state": "present",
                    "type": "iso",
                    "unit_number": 0
                },
                {
                    "controller_number": 0,
                    "controller_type": "ide",
                    "iso_path": "***",
                    "state": "present",
                    "type": "iso",
                    "unit_number": 1
                }
            ],
            "cluster": null,
            "convert": null,
            "customization": {
                "dns_servers": [
                    "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                ],
                "domain": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
            },
            "customization_spec": null,
            "customvalues": [],
            "datacenter": "Datacenter",
            "datastore": "datastore1",
            "disk": [
                {
                    "datastore": "datastore1",
                    "size_gb": 10,
                    "type": "thin"
                }
            ],
            "esxi_hostname": "****",
            "folder": "/Datacenter/vm/",
            "force": false,
            "guest_id": "rhel7_64Guest",
            "hardware": {
                "memory_mb": 1024,
                "num_cpus": 2,
                "scsi": "paravirtual"
            },
            "hostname": "***",
            "is_template": false,
            "linked_clone": false,
            "name": "test_vm_00",
            "name_match": "first",
            "networks": [
                {
                    "dns_servers": [
                        "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    ],
                    "ip": "{{ vm ip }}",
                    "name": "VM Network",
                    "netmask": "{{ netmask }}",
                    "start_connected": true,
                    "type": "static"
                }
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": ***,
            "proxy_host": null,
            "proxy_port": null,
            "resource_pool": null,
            "snapshot_src": null,
            "state": "poweredon",
            "state_change_timeout": 0,
            "template": null,
            "use_instance_uuid": false,
            "username": "{{ username }}",
            "uuid": null,
            "validate_certs": false,
            "vapp_properties": [],
            "wait_for_customization": false,
            "wait_for_ip_address": true
        }
    }
}

Now it is setting an IPv6 address instead of IPv4.
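For context (a hedged note, not a confirmed fix for this report): with vmware_guest, a static IP in `networks:` is applied through guest customization, which only runs when cloning from a template that has VMware Tools available, not when booting an installer ISO (there the kickstart configures the network). A sketch of the usual shape, with `rhel7-template`, `ip_address`, `netmask`, and `dns_server` as placeholder names:

```yaml
# Assumption: static IPs from `networks:` are applied via guest
# customization during a template clone, not during an ISO install.
- name: Clone from template with a static IP
  vmware_guest:
    name: test_vm_00
    template: rhel7-template        # hypothetical template name
    state: poweredon
    networks:
      - name: VM Network
        type: static
        ip: "{{ ip_address }}"
        netmask: "{{ netmask }}"
    customization:
      dns_servers:
        - "{{ dns_server }}"
    wait_for_ip_address: yes
```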

VMware Tools sets vm.guest.guestFamily to None for any Windows newer than Windows 7

From @AlexDaciuk on Sep 02, 2019 17:46

SUMMARY

I'm trying to upgrade VMware Tools on our Windows guests using vmware_guest_tools_upgrade and it fails with the error msg: Guest Operating System is other than Linux and Windows.

This only happens with guests running outdated tools, because of the way the Python code is written.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
  • vmware_vm_inventory
ANSIBLE VERSION
ansible 2.8.4
  config file = /home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/ansible.cfg
  configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
CONFIGURATION
DEFAULT_HOST_LIST(/home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/ansible.cfg) = ['/home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/inventory/commit.vmware.yml']
DEFAULT_STDOUT_CALLBACK(/home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/ansible.cfg) = yaml
INVENTORY_ENABLED(/home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/ansible.cfg) = ['vmware_vm_inventory']

inventory file:

plugin: vmware_vm_inventory
strict: False
hostname: 172.30.0.11
username: [email protected]
password: genericPassword
validate_certs: False
with_tags: True
OS / ENVIRONMENT

Ansible control node : Arch Linux with latest updates

Managed node: ESXi 6.7.0 with vCenter 6.7.0 on an IBM x3250 M4

STEPS TO REPRODUCE

Use a dynamic VMware inventory with vmware_vm_inventory as described here

Power on a Windows guest newer than Windows 7 and use vmware_guest_tools_upgrade to upgrade VMware Tools with:

vmware_guest_tools_upgrade:
    hostname: vcsa6.7.commit.local
    username: [email protected]
    password: genericPasswrd
    datacenter: Datacenter
    validate_certs: yes
    folder: "/Datacenter/vm/"
    name: "Windows-2012-R2"
EXPECTED RESULTS

To get VMTools upgraded in case of outdated guest version or nothing in case of updated version already running

ACTUAL RESULTS

If you edit ansible/modules/cloud/vmware/vmware_guest_tools_upgrade.py at line 126, concatenating vm.guest.guestFamily to the result message, you get None

TASK [Upgrade VMTools] ***********************************************************************************************************************************************
task path: /home/alex/Source_Code/CommitIt/Playbooks/vmware-playbooks/tasks/upgrade-vmtools.yml:25
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: alex
<127.0.0.1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065 `" && echo ansible-tmp-1567446047.572839-35641239422065="` echo /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065 `" ) && sleep 0'                                                      
Using module file /usr/lib/python3.7/site-packages/ansible/modules/cloud/vmware/vmware_guest_tools_upgrade.py
<127.0.0.1> PUT /home/alex/.ansible/tmp/ansible-local-22830mm9hnw15/tmpd4ggw0p8 TO /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065/AnsiballZ_vmware_guest_tools_upgrade.py                                                                                                                                             
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065/ /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065/AnsiballZ_vmware_guest_tools_upgrade.py && sleep 0'                                                                                                       
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065/AnsiballZ_vmware_guest_tools_upgrade.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1567446047.572839-35641239422065/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => changed=false 
  invocation:
    module_args:
      datacenter: Datacenter
      folder: /Datacenter/vm
      hostname: vcsa6.7.commit.local
      name: Windows-2012-R2
      name_match: first
      password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
      port: 443
      username: [email protected]
      uuid: null
      validate_certs: true
  msg: Guest Operating System is other than Linux and Windows. None

Copied from original issue: ansible/ansible#61690
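A hedged sketch of the kind of fallback the module could use (a hypothetical helper, not the module's actual code): when vm.guest.guestFamily is None because the tools are outdated, infer the family from the configured guest ID instead.

```python
def infer_guest_family(guest_family, guest_id):
    """Fall back to the guest ID when VMware Tools reports no family.

    `guest_family` is vm.guest.guestFamily (may be None for outdated
    tools); `guest_id` is vm.config.guestId, e.g. 'windows9Server64Guest'.
    """
    if guest_family:
        return guest_family
    gid = (guest_id or "").lower()
    if gid.startswith("win"):
        return "windowsGuest"
    if "linux" in gid or gid.startswith(("rhel", "centos", "ubuntu", "sles", "debian")):
        return "linuxGuest"
    return None  # genuinely unknown; caller should report it as such
```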

vmware_dvswitch_lacp, map vmnic to lag

From @Aglidic on Apr 09, 2020 20:26

SUMMARY

It would be really useful to be able to map a vmnic to a LAG member during creation of a dvSwitch.

ISSUE TYPE

Feature Idea

COMPONENT NAME

vmware_dvswitch_lacp

ADDITIONAL INFORMATION

If we enable enhanced vMotion and create a LAG, there is no way to map our vmnics to this LAG, so we need to do it manually. It would be helpful to add an option in the module for that.

Copied from original issue: ansible/ansible#68824

vmware_host_datastore - option to resignature

SUMMARY

Currently, if the VMFS datastore is cloned from a snapshot, the module automatically formats the datastore when mounting. I'd like to be able to re-signature and mount the datastore so that I can use the module for disaster recovery operations.

ISSUE TYPE

Feature Idea

COMPONENT NAME

vmware_host_datastore

ADDITIONAL INFORMATION

Creating cloned datastores for file recovery or disaster recovery. It would be something along the lines of "resignature: yes" and/or "reformat: no"

- name: Mount VMFS pure-ds0-restore to ChasApps
  vmware_host_datastore:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    datastore_name: pure-ds0-restore
    datastore_type: vmfs
    vmfs_version: 6
    vmfs_device_name: "{{ ds0vmfs }}"
    esxi_hostname: "{{ esxi_host }}"
    state: present
    validate_certs: no

Document `vmware_cluster` parameters that will be deprecated and removed in 2.12

SUMMARY

Since 2.9 (I think), we face a deprecated message when using the vmware_cluster module.

[DEPRECATION WARNING]: Configuring HA using vmware_cluster module is deprecated and will be removed in version 2.12. Please use vmware_cluster_ha module for the new functionality.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Configuring DRS using vmware_cluster module is deprecated and will be removed in version 2.12. Please use vmware_cluster_drs module for the new functionality.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.

As far as I understand the messages, the discussion ansible/ansible#58023, and the associated PR ansible/ansible#58468, it's not the entire vmware_cluster module that will become deprecated, but only the arguments used to configure HA, DRS and vSAN.
As is done for the ip_address param of vmware_vmkernel, I think it would be great to add a comment for each soon-to-be-deprecated parameter. That way it will be easier for everybody to identify whether they need to update their playbooks.

I assume that all parameters starting with [drs_|ha_|vsan] will be deprecated, but it's not clear to me for the enable_[drs|ha|vsan] …

ISSUE TYPE
  • Documentation Report
COMPONENT NAME
  • vmware_cluster
  • vmware_cluster_drs
  • vmware_cluster_ha
  • vmware_cluster_vsan
ANSIBLE VERSION
ansible 2.9.5
  config file = /home/myuser/Projects/313/etc/ansible.cfg
  configured module search path = ['/home/myuser/Projects/313/library']
  ansible python module location = /home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ansible
  executable location = /home/myuser/.virtualenvs/313/bin/ansible
  python version = 3.7.6 (default, Dec 19 2019, 22:50:01) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
CONFIGURATION
CACHE_PLUGIN(/home/myuser/Projects/313/etc/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/myuser/Projects/313/etc/ansible.cfg) = $HOME/.ansible/facts/
CACHE_PLUGIN_TIMEOUT(/home/myuser/Projects/313/etc/ansible.cfg) = 3600
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/home/myuser/Projects/313/etc/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/home/myuser/Projects/313/plugins/filter']
DEFAULT_GATHERING(/home/myuser/Projects/313/etc/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/myuser/Projects/313/etc/ansible.cfg) = ['/home/myuser/Projects/313/inventories/T40', '/home/myuser/Projects/313/inventories/setupenv']
DEFAULT_LOG_PATH(/home/myuser/Projects/313/etc/ansible.cfg) = /home/myuser/.ansible/log/ansible.log
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/home/myuser/Projects/313/library']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['/home/myuser/Projects/313/roles.galaxy', '/home/myuser/Projects/313/roles']
DEFAULT_STDOUT_CALLBACK(/home/myuser/Projects/313/etc/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON(env: ANSIBLE_GALAXY_ROLE_SKELETON) = /home/myuser/Projects/313/etc/skel/default
HOST_KEY_CHECKING(/home/myuser/Projects/313/etc/ansible.cfg) = False
OS / ENVIRONMENT

na

ADDITIONAL INFORMATION

na
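As a migration sketch (hedged; parameter names as of the 2.9-era modules, so check the module docs for your version), the deprecated HA/DRS arguments of vmware_cluster move to the dedicated per-concern modules:

```yaml
# Before (deprecated in 2.12): HA and DRS were configured through
# vmware_cluster. After: one module per concern.
- name: Enable HA on the cluster
  vmware_cluster_ha:
    datacenter: "{{ datacenter }}"
    cluster_name: "{{ cluster }}"
    enable_ha: yes

- name: Enable DRS on the cluster
  vmware_cluster_drs:
    datacenter: "{{ datacenter }}"
    cluster_name: "{{ cluster }}"
    enable_drs: yes
```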

vmware_tag_manager - certificate validation fails

SUMMARY

If validate_certs: yes is set for the module vmware_tag_manager, the task fails.

FAILED! => {"changed": false, "msg": "Failed to login to <vCenter FQDN>: HTTPSConnectionPool(host='<vCenter FQDN>', port=443): Max retries exceeded with url: /api (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)\",),))"}

OpenSSL itself and other modules like vmware_guest_disk do validate the certificate successfully.

If validate_certs: no is set for the module vmware_tag_manager, the task executes successfully.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_tag_manager

/lib/ansible/modules/cloud/vmware/vmware_tag_manager.py

ANSIBLE VERSION
ansible 2.9.5
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Oct 11 2019, 15:04:54) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
CONFIGURATION
-
OS / ENVIRONMENT
  • RHEL 8.1
  • VMware vCenter 6.7 Update 3
STEPS TO REPRODUCE
- name: Add additional Disk to {{ vm_name }}
  vmware_guest_disk:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      datacenter: "{{ datacenter_name }}"
      validate_certs: yes
      uuid: "{{ deploy_vm.instance.hw_product_uuid }}"
      disk:
        - size_gb: 40
          datastore: "{{ deploy_vm.instance.hw_datastores.0 }}"
          scsi_controller: 1
          unit_number: 0
          scsi_type: paravirtual
          state: present
  delegate_to: localhost
- name: Add WTS vSphere Tag to {{ vm_name }}
  vmware_tag_manager:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: yes
      tag_names:
        - VM_App:WTS 
      object_name: "{{ vm_name }}"
      object_type: VirtualMachine
      state: add
  delegate_to: localhost 
EXPECTED RESULTS

Execution of the Task with certificate validation.

ACTUAL RESULTS
FAILED! => {"changed": false, "msg": "Failed to login to <vCenter FQDN>: HTTPSConnectionPool(host='<vCenter FQDN>', port=443): Max retries exceeded with url: /api (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)\",),))"}
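One hedged workaround (an assumption, based on vmware_tag_manager reaching the vAPI endpoint through Python requests rather than pyVmomi, so it may ignore the system trust store the other modules use): point requests at the CA bundle that already validates the vCenter certificate via the REQUESTS_CA_BUNDLE environment variable on the task:

```yaml
- name: Add tag with certificate validation
  vmware_tag_manager:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: yes
    tag_names:
      - VM_App:WTS
    object_name: "{{ vm_name }}"
    object_type: VirtualMachine
    state: add
  environment:
    REQUESTS_CA_BUNDLE: /etc/pki/tls/certs/ca-bundle.crt  # path is an assumption (RHEL default)
  delegate_to: localhost
```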

vmware_host_active_directory: Problems with corner cases

SUMMARY

When there's a problem with the domain membership of a host, the module always reports a change but doesn't do anything (lines 160ff).

We ran into this issue when we accidentally removed the computer account of an existing ESXi host. Our playbook reported changing the AD membership over and over again, which was a bit weird: after the first change, I'd expect an OK on all later runs.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_host_active_directory

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.5 (default, Nov 14 2019, 00:43:46) [GCC 7.3.0]
CONFIGURATION
STEPS TO REPRODUCE

Join an ESXi host to AD, remove the computer account, and run the task to join AD several times. It always reports changed, but nothing happens.

EXPECTED RESULTS

I'm not sure. There are so many different problems that can occur (clientTrustBroken, inconsistentTrust, noServers, serverTrustBroken, or even otherProblem or unknown) that it's probably impossible to deal with them all. Maybe the module should simply fail in these cases with an appropriate error message, something like "The ESXi host seems to be an AD member but reports an unknown problem". This would make it pretty obvious that manual troubleshooting is needed.
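The fail-fast behavior suggested above could be sketched roughly as follows. This is a hypothetical helper, not the module's actual code; the status strings come from the vSphere HostActiveDirectoryInfo.domainMembershipStatus enum quoted in this report.

```python
# Hedged sketch: fail fast on broken domain membership instead of reporting
# "changed" forever. PROBLEM_STATES and check_domain_membership are
# illustrative names, not vmware_host_active_directory's real API.
PROBLEM_STATES = frozenset([
    "clientTrustBroken", "inconsistentTrust", "noServers",
    "serverTrustBroken", "otherProblem", "unknown",
])


def check_domain_membership(status):
    """Return 'ok' for healthy states; raise for states needing manual repair."""
    if status in PROBLEM_STATES:
        raise RuntimeError(
            "The ESXi host seems to be an AD member but reports the problem "
            "'%s'; manual troubleshooting is needed" % status
        )
    return "ok"
```

A module built this way would surface the broken trust on the first run instead of silently looping on a no-op change.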

ACTUAL RESULTS

The module reports a change but doesn't do anything.

Unable to deploy VM, ESX hosts in maintenance

From @fkempers on Mar 24, 2020 11:27

SUMMARY

VMs are being deployed on a datastore that is on a host in maintenance.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION

$ ansible --version
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION

$ ansible-config dump --only-changed
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/admin/etc/ansible/inventory']
DEFAULT_LOCAL_TMP(/etc/ansible/ansible.cfg) = /home/aa101825/.ansible_local/ansible-local-889UrQv3D
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /admin/var/.adm/.vlt
TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = true

OS / ENVIRONMENT

RHEL 7.7, ESX 6.7

STEPS TO REPRODUCE

1 datacenter
2 clusters, all nodes have their own datastore:

  • 1st with 2 active ESX hosts
  • 2nd with 3 hosts in maintenance (even the datastores can be in maintenance)
EXPECTED RESULTS

I would expect the VM to be deployed on a host in the 1st cluster.
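The auto-selection behavior the reporter expects could be sketched as below: skip datastores that are inaccessible or whose backing host is in maintenance mode, then pick the one with the most free space. The dict shape and function name are assumptions for illustration, not vmware_guest's real internal representation.

```python
# Hedged sketch of maintenance-aware datastore auto-selection.
# Each datastore is modeled as a plain dict for illustration only.
def pick_datastore(datastores):
    usable = [
        ds for ds in datastores
        if ds["accessible"] and not ds["host_in_maintenance"]
    ]
    if not usable:
        raise ValueError("No usable datastore found")
    # Prefer the usable datastore with the most free space.
    return max(usable, key=lambda ds: ds["free_space"])
```

With such a filter in place, a datastore like Datastore_1_esx002 on a maintenance-mode host would never be chosen even if it has the most free space.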

ACTUAL RESULTS

Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest.py
Pipelining is enabled.

<localhost> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
FAILED - RETRYING: Create VMs (1 retries left).Result was: {
    "attempts": 3,
    "changed": false,
    "invocation": {
        "module_args": {
            "annotation": "Deployed by username in Ansible 2020-03-24-12-21-26",
            "cdrom": {},
            "cluster": "cluster-001",
            "convert": null,
            "customization": {
                "dns_servers": [
                    "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                    "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                    "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                ],
                "domain": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "hostname": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
            },
            "customization_spec": null,
            "customvalues": [],
            "datacenter": "HIT_DCOS_DCA",
            "datastore": null,
            "disk": [
                {
                    "autoselect_datastore": true,
                    "size_mb": 500,
                    "type": "thick"
                },
                {
                    "autoselect_datastore": true,
                    "size_gb": 32,
                    "type": "thick"
                },
                {
                    "autoselect_datastore": true,
                    "size_gb": 72,
                    "type": "thick"
                },
                {
                    "autoselect_datastore": true,
                    "size_gb": 96,
                    "type": "thick"
                }
            ],
            "esxi_hostname": null,
            "folder": "DCOS-PROD3T-01",
            "force": false,
            "guest_id": null,
            "hardware": {
                "memory_mb": 32768,
                "num_cpus": 6
            },
            "hostname": "vc103",
            "is_template": false,
            "linked_clone": false,
            "name": "vm-name",
            "name_match": "first",
            "networks": [
                {
                    "device_type": "vmxnet3",
                    "gateway": "gw",
                    "ip": "ip",
                    "name": "network",
                    "netmask": "255.255.224.0",
                    "type": "static"
                }
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": 443,
            "resource_pool": null,
            "snapshot_src": null,
            "state": "poweredon",
            "state_change_timeout": 0,
            "template": "Template_RHEL7x_x64_v1.2",
            "use_instance_uuid": false,
            "username": "username",
            "uuid": null,
            "validate_certs": false,
            "vapp_properties": [],
            "wait_for_customization": false,
            "wait_for_ip_address": true
        }
    },
    "retries": 4
}
MSG:

Failed to create a virtual machine : Unable to access the virtual machine configuration: Unable to access file [Datastore_1_esx002]

Copied from original issue: ansible/ansible#68426

vmware_guest_disk: Add support for floating point for disk size

From @deepakramanath on Apr 17, 2020 01:54

SUMMARY

The current module does not support floating point values for the disk size specification, in particular for size_tb, where a value of 1.5 would translate to 1536 GB.

The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_vmware_guest_disk_payload_Kf_wHm/ansible_vmware_guest_disk_payload.zip/ansible/modules/cloud/vmware/vmware_guest_disk.py", line 693, in main
  File "/tmp/ansible_vmware_guest_disk_payload_Kf_wHm/ansible_vmware_guest_disk_payload.zip/ansible/modules/cloud/vmware/vmware_guest_disk.py", line 344, in ensure_disks
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 663, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 1098, in CheckField
    % (info.name, info.type.__name__, valType.__name__))

fatal: [server -> localhost]: FAILED! => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "datacenter": "datacenter", 
            "disk": [
                {
                    "datastore": "datastore", 
                    "scsi_controller": 1, 
                    "size_tb": 1.5, 
                    "state": "present", 
                    "type": "thin", 
                    "unit_number": 1
                }
            ], 
            "folder": null, 
            "hostname": "1.1.1.1", 
            "moid": null, 
            "name": "example.com", 
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "port": 443, 
            "proxy_host": null, 
            "proxy_port": null, 
            "use_instance_uuid": false, 
            "username": "[email protected]", 
            "uuid": null, 
            "validate_certs": false
        }
    }, 
    "msg": "Failed to manage disks for virtual machine 'example.com' with exception : For \"capacityInKB\" expected type long, but got float"
}
ISSUE TYPE
  • Bug Report
COMPONENT NAME

/usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest_disk.py

ANSIBLE VERSION
ansible 2.9.3
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION

OS / ENVIRONMENT
CentOS Linux release 7.7.1908
STEPS TO REPRODUCE
vmware_guest_disk:
  hostname: "{{ vcenter_host }}"
  username: "{{ vcenter_username }}"
  password: "{{ vcenter_password }}"
  datacenter: "{{ vcenter_datacenter }}"
  validate_certs: no
  name: "{{ ansible_host_name }}"
  disk:
    - size_tb: 1.5
      type: thick
      datastore: "{{ vcenter_datastore }}"
      state: present
      unit_number: 1
      scsi_controller: 1
delegate_to: localhost

EXPECTED RESULTS

The disk should be added without a floating point error. When the size is specified in TB, floating point values should be allowed (for example, 1.5 TB or 2.5 TB).

ACTUAL RESULTS

The module expects size_<unit> to always be an integer and throws an error for floating point values such as size_tb: 1.5.

"msg": "Failed to manage disks for virtual machine 'example.com' with exception : For \"capacityInKB\" expected type long, but got float"

Copied from original issue: ansible/ansible#68990

vmware_host: folder parameter is not consistent

SUMMARY

Hi!

I created a Staging folder in my dc1 datacenter with the following task:

  - name: Create host folder
    vcenter_folder:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: no
      datacenter: dc1
      folder_name: "Staging"
      folder_type: host
      state: present


Then I try to create a host in it:

  - name: Add host to folder
    vmware_host:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: no
      esxi_hostname: "{{ hostvars[esxi1].ansible_host }}"
      esxi_username: "{{ hostvars[esxi1].ansible_user }}"
      esxi_password: "{{ hostvars[esxi1].ansible_pass }}"
      datacenter_name: "dc1"
      folder_name: "/dc1/Staging"
      state: present

I get the following error: fatal: [localhost]: FAILED! => {"changed": false, "msg": "Folder '/dc1/Staging' not found"}

I also tried:

  • Staging
  • /dc1/Staging/host

I get the same error all the time. Am I doing something wrong?

note: I started a PR to simplify the access to the folders ( ansible/ansible#54620 ). It allows me to create my host in the folder as expected. But since I'm not sure how to use the vmware_host module, I'm also afraid I may have introduced unexpected behaviour.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_host

ANSIBLE VERSION
ansible 2.8.0.dev0
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/goneri/git_repos/ansible/library']
  ansible python module location = /home/goneri/git_repos/ansible/lib/ansible
  executable location = /home/goneri/.virtualenvs/ansible/bin/ansible
  python version = 3.7.2 (default, Mar 21 2019, 10:09:12) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]

vmware_dvswitch_lacp: not supported between instances of 'str' and 'int'

From @Aglidic on Apr 09, 2020 13:21

SUMMARY

Module vmware_dvswitch_lacp failed

ISSUE TYPE
  • Bug Report

COMPONENT NAME

vmware_dvswitch_lacp

ANSIBLE VERSION
2.9.5

Here is my role:

- name: Enable lacp
  vmware_dvswitch_lacp:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    switch: '{{ item.0.name }}'
    support_mode: enhanced
    validate_certs: 'no'
    link_aggregation_groups:
        - name: '{{ item.1.lag_name }}'
          uplink_number: '{{ item.1.lag_nb | int }}'
          mode: '{{ item.1.lag_mode }}'
          load_balancing_mode: '{{ item.1.lag_lb}}'
  loop: "{{ lookup('subelements', dvswitch, 'lag', wantlist=True) }}"
  register: dvswitchlag_result

Here are my vars:

dvswitch:
  - name: Research_dvswitch
    uplink_name: uplink_
    switch_version: 6.6.0
    mtu: 1500
    nb_uplink: 2
    discovery_protocol: lldp
    discovery_operation: both
    check_mtu: true
    check_mtu_interval: 1
    check_teaming: true
    check_teaming_interval: 1
    state: present
    lag:
      - lag_name: 'lag_research'
        lag_nb: 2
        lag_mode: 'passive'
        lag_lb: 'srcDestIpTcpUdpPortVlan'
        lag_state: present

Here is the error:

The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_dvswitch_lacp', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py", line 404, in <module>
  File "/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py", line 400, in main
  File "/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py", line 194, in ensure
  File "/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py", line 316, in get_lacp_lag_options
TypeError: '>' not supported between instances of 'str' and 'int'
failed: [localhost] (item=[{'name': 'Research_dvswitch', 'uplink_name': 'uplink_', 'switch_version': '6.6.0', 'mtu': 1500, 'nb_uplink': 2, 'discovery_protocol': 'lldp', 'discovery_operation': 'both', 'check_mtu': True, 'check_mtu_interval': 1, 'check_teaming': True, 'check_teaming_interval': 1, 'state': 'present'}, {'lag_name': 'lag_research', 'lag_nb': 2, 'lag_mode': 'passive', 'lag_lb': 'srcDestIpTcpUdpPortVlan', 'lag_state': 'present'}]) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": [
        {
            "check_mtu": true,
            "check_mtu_interval": 1,
            "check_teaming": true,
            "check_teaming_interval": 1,
            "discovery_operation": "both",
            "discovery_protocol": "lldp",
            "mtu": 1500,
            "name": "Research_dvswitch",
            "nb_uplink": 2,
            "state": "present",
            "switch_version": "6.6.0",
            "uplink_name": "uplink_"
        },
        {
            "lag_lb": "srcDestIpTcpUdpPortVlan",
            "lag_mode": "passive",
            "lag_name": "lag_research",
            "lag_nb": 2,
            "lag_state": "present"
        }
    ],
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1586437874.379692-39635382078453/AnsiballZ_vmware_dvswitch_lacp.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_dvswitch_lacp', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py\", line 404, in <module>\n  File \"/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py\", line 400, in main\n  File \"/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py\", line 194, in ensure\n  File \"/tmp/ansible_vmware_dvswitch_lacp_payload_n5fkj63q/ansible_vmware_dvswitch_lacp_payload.zip/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py\", line 316, in get_lacp_lag_options\nTypeError: '>' not supported between instances of 'str' and 'int'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
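The traceback points at a string/integer comparison: values templated with `'{{ item.1.lag_nb | int }}'` can still arrive in the module as strings, so the `>` check in get_lacp_lag_options() compares a str against an int and crashes. A defensive sketch of the coercion the module could do; coerce_uplink_number and the maximum of 30 are illustrative, not the module's actual API:

```python
# Hedged sketch: coerce uplink_number before range-checking it, since
# Jinja2-templated integers often reach the module as strings.
def coerce_uplink_number(value, maximum=30):
    value = int(value)  # "2" and 2 both become the int 2
    if value > maximum:
        raise ValueError("uplink_number must be <= %d" % maximum)
    return value
```

With the coercion in place, the loop above would work whether lag_nb is quoted in the vars file or not.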

Copied from original issue: ansible/ansible#68816

VMware: vmware_content_deploy_template use resourcepool_id for cluster_id

From @ultral on Apr 09, 2020 10:13

SUMMARY

I'm trying to create a VM from a template in the content library, but it fails with the error id='com.vmware.vdcs.core-services-helper.invalid_cluster_id_format', default_message='The provided cluster ID resgroup-478:3919714f-4f54-4fbe-8086-e09eef3cef3a is invalid.'

This happens because of an error in ansible/module_utils/vmware_rest_client.py:

self.cluster_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool)

instead of

self.cluster_id = self.get_cluster_by_name(self.datacenter, self.cluster)
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_content_deploy_template

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Aug  7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
OS / ENVIRONMENT
  • centos 7
  • vSphere Client version 6.7.0.42000
STEPS TO REPRODUCE
---
    - name: Create a VM from a template from content library
      hosts: localhost
      connection: local
      gather_facts: false
      vars:
        vcenter_username: domain\login
        vcenter_vmname: some_vm_name
        vcenter_cluster: Some Cluster 01
        vcenter_datacenter: Some Datacenter
        vcenter_datastore: Shared Storage 01
        vcenter_folder: Sandbox
        vcenter_resource_pool: Sandbox
        vcenter_host: esx09.somehost.local
        vcenter_hostname: vcenter01.somehost.local
      vars_prompt:
        - name: vcenter_password
          prompt: 'Enter remote user password'
          private: true
      tasks:
        - name: Deploy Virtual Machine from template in content library with PowerON State
          vmware_content_deploy_template:
            hostname: "{{ vcenter_hostname }}"
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            cluster: "{{ vcenter_cluster }}"
            datacenter: "{{ vcenter_datacenter }}"
            datastore: "{{ vcenter_datastore }}"
            folder: "{{ vcenter_folder }}"
            resource_pool: "{{ vcenter_resource_pool }}"
            host: "{{ vcenter_host }}"
            name: "{{ vcenter_vmname }}"
            template: Packer_centos7
            state: poweredon
            validate_certs: false
EXPECTED RESULTS

The VM is created.

ACTUAL RESULTS
Traceback (most recent call last):
  File "/home/lgonchar/.ansible/tmp/ansible-tmp-1586424395.102075-278095560089205/AnsiballZ_vmware_content_deploy_template.py", line 102, in <module>
    _ansiballz_main()
  File "/home/lgonchar/.ansible/tmp/ansible-tmp-1586424395.102075-278095560089205/AnsiballZ_vmware_content_deploy_template.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/lgonchar/.ansible/tmp/ansible-tmp-1586424395.102075-278095560089205/AnsiballZ_vmware_content_deploy_template.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_content_deploy_template', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_vmware_content_deploy_template_payload_rwoe36ma/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/cloud/vmware/vmware_content_deploy_template.py", line 271, in <module>
  File "/tmp/ansible_vmware_content_deploy_template_payload_rwoe36ma/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/cloud/vmware/vmware_content_deploy_template.py", line 267, in main
  File "/tmp/ansible_vmware_content_deploy_template_payload_rwoe36ma/ansible_vmware_content_deploy_template_payload.zip/ansible/modules/cloud/vmware/vmware_content_deploy_template.py", line 209, in deploy_vm_from_template
  File "/usr/local/lib/python3.6/site-packages/com/vmware/vcenter/vm_template_client.py", line 2110, in deploy
    'spec': spec,
  File "/usr/local/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 345, in _invoke
    return self._api_interface.native_invoke(ctx, _method_name, kwargs)
  File "/usr/local/lib/python3.6/site-packages/vmware/vapi/bindings/stub.py", line 298, in native_invoke
    self._rest_converter_mode)
com.vmware.vapi.std.errors_client.InvalidArgument: {messages : [LocalizableMessage(id='com.vmware.vdcs.core-services-helper.invalid_cluster_id_format', default_message='The provided cluster ID resgroup-478:3919714f-4f54-4fbe-8086-e09eef3cef3a is invalid.', args=['resgroup-478:3919714f-4f54-4fbe-8086-e09eef3cef3a'], params=None, localized=None)], data : None, error_type : None}
/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter01.somehost.local'

Copied from original issue: ansible/ansible#68808

vmware modules not support similar tag names

SUMMARY

Looks like some of the Python helpers behind the Ansible VMware modules can't handle identical tag names in different categories, and no error is produced.
For example, the vmware_tag_info module returns a list that has tag_name as a key and the category as a subkey. So if I have tags with the same name in different categories, it returns only one of them.

I checked the code. The search_svc_object_by_name() method is too generic: it uses Tag.list(), but the vSphere automation SDK also has Tag.list_tags_for_category(), which could be useful here.
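One way to avoid the collision is to key results on the category and the tag name together rather than the bare name. A minimal sketch, assuming a flat dict per tag for illustration (not the SDK's actual Tag model); index_tags is a hypothetical helper:

```python
# Hedged sketch: composite "category:name" keys keep same-named tags from
# different categories distinct in the module's output.
def index_tags(tags):
    return {"%s:%s" % (t["category"], t["name"]): t for t in tags}
```

With this keying, two tags both named WTS in categories VM_App and VM_Env would both survive in the result instead of one silently overwriting the other.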

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_tag_manager
vmware_tag_info

ANSIBLE VERSION
ansible 2.9.4
  config file = ~/.ansible.cfg
  configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.9.4_1/libexec/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.1 (default, Dec 27 2019, 18:06:00) [Clang 11.0.0 (clang-1100.0.33.16)]
CONFIGURATION
ANSIBLE_PIPELINING(~/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(~/.ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_CALLBACK_WHITELIST(~/.ansible.cfg) = ['profile_tasks']
DEFAULT_GATHERING(~/.ansible.cfg) = smart
DEFAULT_REMOTE_USER(~/.ansible.cfg) = root
DEFAULT_ROLES_PATH(~/.ansible.cfg) = ['/usr/local/Cellar/ansible/2.9.4_1/libexec/lib/python3.8/site-pack
DEFAULT_STDOUT_CALLBACK(~/.ansible.cfg) = yaml
HOST_KEY_CHECKING(~/.ansible.cfg) = False
RETRY_FILES_ENABLED(~/.ansible.cfg) = False
OS / ENVIRONMENT

Darwin home.local 19.3.0 Darwin Kernel Version 19.3.0: Thu Jan 9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64 x86_64

STEPS TO REPRODUCE

Run a vmware_tag_info task and check the output.

EXPECTED RESULTS

The output should contain all tags from vSphere.

ACTUAL RESULTS

The output contains only one tag per name and skips the others.

VMware_guest does not support hardware version 15

From @qadv on Apr 06, 2020 21:04

SUMMARY

The logic in vmware_guest needs to be updated to include the ESXi 6.7 Update 2 Virtual Machine Hardware Version 15.
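The fix boils down to extending the module's notion of the maximum hardware version per ESXi release. A hedged sketch under the published VMware mapping (ESXi 6.0 → vmx-11, 6.5 → vmx-13, 6.7 → vmx-14, 6.7U2 → vmx-15); the table and helper are illustrative, not vmware_guest's actual implementation:

```python
# Hedged sketch: release-to-maximum-hardware-version table that would let the
# module accept vmx-15 on ESXi 6.7 Update 2. Names are hypothetical.
MAX_HW_VERSION = {"6.0": 11, "6.5": 13, "6.7": 14, "6.7U2": 15}


def hw_version_supported(version, esxi_release):
    """True if the requested hardware version runs on the given ESXi release."""
    return version <= MAX_HW_VERSION.get(esxi_release, 0)
```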

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest.py

ANSIBLE VERSION

CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS

Copied from original issue: ansible/ansible#68727

vmware_host_powermgmt_policy: final state not always consistent

SUMMARY

From time to time, vmware_host_powermgmt_policy returns an inconsistent value; this may be a problem with ESXi itself. E.g.: https://dashboard.zuul.ansible.com/t/ansible/build/f7e8dd896d234b50b675e4c924d43562

TASK [vmware_host_powermgmt_policy : Reset all the hosts to balanced] **********
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:12
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: zuul
<testhost> EXEC /bin/sh -c 'echo ~zuul && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810 `" && echo ansible-tmp-1585158855.3779352-249227109670810="` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810 `" ) && sleep 0'
Using module file /home/zuul/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_host_powermgmt_policy.py
<testhost> PUT /home/zuul/.ansible/tmp/ansible-local-13895rq0ejhwa/tmpa559o2lr TO /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810/AnsiballZ_vmware_host_powermgmt_policy.py
<testhost> EXEC /bin/sh -c 'chmod u+x /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810/ /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c '/home/zuul/venv/bin/python3.6 /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /home/zuul/.ansible/tmp/ansible-tmp-1585158855.3779352-249227109670810/ > /dev/null 2>&1 && sleep 0'
ok: [testhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "cluster_name": "DC0_C0",
            "esxi_hostname": null,
            "hostname": "vcenter.test",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "policy": "balanced",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "username": "[email protected]",
            "validate_certs": false
        }
    },
    "result": {
        "esxi1.test": {
            "changed": false,
            "current_state": "balanced",
            "desired_state": "balanced",
            "msg": "Power policy is already configured",
            "previous_state": "balanced"
        }
    }
}

TASK [vmware_host_powermgmt_policy : Set the Power Management Policy for esxi1] ***
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:22
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: zuul
<testhost> EXEC /bin/sh -c 'echo ~zuul && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110 `" && echo ansible-tmp-1585158856.6203048-89465510807110="` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110 `" ) && sleep 0'
Using module file /home/zuul/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_host_powermgmt_policy.py
<testhost> PUT /home/zuul/.ansible/tmp/ansible-local-13895rq0ejhwa/tmp603gc9sy TO /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110/AnsiballZ_vmware_host_powermgmt_policy.py
<testhost> EXEC /bin/sh -c 'chmod u+x /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110/ /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c '/home/zuul/venv/bin/python3.6 /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /home/zuul/.ansible/tmp/ansible-tmp-1585158856.6203048-89465510807110/ > /dev/null 2>&1 && sleep 0'
changed: [testhost] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "cluster_name": null,
            "esxi_hostname": "esxi1.test",
            "hostname": "vcenter.test",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "policy": "high-performance",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "username": "[email protected]",
            "validate_certs": false
        }
    },
    "result": {
        "esxi1.test": {
            "changed": true,
            "current_state": "high-performance",
            "desired_state": "high-performance",
            "msg": "Power policy changed",
            "previous_state": "balanced"
        }
    }
}

TASK [vmware_host_powermgmt_policy : debug] ************************************
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:31
ok: [testhost] => {
    "host_result": {
        "changed": true,
        "failed": false,
        "result": {
            "esxi1.test": {
                "changed": true,
                "current_state": "high-performance",
                "desired_state": "high-performance",
                "msg": "Power policy changed",
                "previous_state": "balanced"
            }
        }
    }
}

TASK [vmware_host_powermgmt_policy : Ensure Power Management Policy for esxi1] ***
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:32
ok: [testhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [vmware_host_powermgmt_policy : Reset all the hosts to balanced] **********
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:37
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: zuul
<testhost> EXEC /bin/sh -c 'echo ~zuul && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071 `" && echo ansible-tmp-1585158858.242634-89395982687071="` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071 `" ) && sleep 0'
Using module file /home/zuul/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_host_powermgmt_policy.py
<testhost> PUT /home/zuul/.ansible/tmp/ansible-local-13895rq0ejhwa/tmpvk36jvxc TO /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071/AnsiballZ_vmware_host_powermgmt_policy.py
<testhost> EXEC /bin/sh -c 'chmod u+x /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071/ /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c '/home/zuul/venv/bin/python3.6 /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /home/zuul/.ansible/tmp/ansible-tmp-1585158858.242634-89395982687071/ > /dev/null 2>&1 && sleep 0'
ok: [testhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "cluster_name": "DC0_C0",
            "esxi_hostname": null,
            "hostname": "vcenter.test",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "policy": "balanced",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "username": "[email protected]",
            "validate_certs": false
        }
    },
    "result": {
        "esxi1.test": {
            "changed": false,
            "current_state": "balanced",
            "desired_state": "balanced",
            "msg": "Power policy is already configured",
            "previous_state": "balanced"
        }
    }
}

TASK [vmware_host_powermgmt_policy : debug] ************************************
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:46
ok: [testhost] => {
    "all_hosts_result": {
        "changed": false,
        "failed": false,
        "result": {
            "esxi1.test": {
                "changed": false,
                "current_state": "balanced",
                "desired_state": "balanced",
                "msg": "Power policy is already configured",
                "previous_state": "balanced"
            }
        }
    }
}

TASK [vmware_host_powermgmt_policy : Ensure Power Management Policy is changed for all hosts of DC0_C0] ***
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:47
fatal: [testhost]: FAILED! => {
    "assertion": "all_hosts_result is changed",
    "changed": false,
    "evaluated_to": false,
    "msg": "Assertion failed"
}

TASK [vmware_host_powermgmt_policy : Reset all the hosts to balanced] **********
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_host_powermgmt_policy/tasks/main.yml:96
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: zuul
<testhost> EXEC /bin/sh -c 'echo ~zuul && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221 `" && echo ansible-tmp-1585158859.2202597-177702964044221="` echo /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221 `" ) && sleep 0'
Using module file /home/zuul/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_host_powermgmt_policy.py
<testhost> PUT /home/zuul/.ansible/tmp/ansible-local-13895rq0ejhwa/tmp09iqiojf TO /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221/AnsiballZ_vmware_host_powermgmt_policy.py
<testhost> EXEC /bin/sh -c 'chmod u+x /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221/ /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c '/home/zuul/venv/bin/python3.6 /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221/AnsiballZ_vmware_host_powermgmt_policy.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /home/zuul/.ansible/tmp/ansible-tmp-1585158859.2202597-177702964044221/ > /dev/null 2>&1 && sleep 0'
ok: [testhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "cluster_name": "DC0_C0",
            "esxi_hostname": null,
            "hostname": "vcenter.test",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "policy": "balanced",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "username": "[email protected]",
            "validate_certs": false
        }
    },
    "result": {
        "esxi1.test": {
            "changed": false,
            "current_state": "balanced",
            "desired_state": "balanced",
            "msg": "Power policy is already configured",
            "previous_state": "balanced"
        }
    }
}

PLAY RECAP *********************************************************************
testhost                   : ok=34   changed=7    unreachable=0    failed=1    skipped=14   rescued=0    ignored=2  

vmware: module without any functional test

SUMMARY

The following modules don't have any test coverage:

  • vmware_category_info
  • vmware_cfg_backup
  • vmware_cluster_vcls
  • vmware_content_deploy_ovf_template
  • vmware_deploy_ovf
  • vmware_dns_config (deprecated anyway)
  • vmware_dvswitch_lacp
  • vmware_guest_boot_manager
  • vmware_guest_file_operation
  • vmware_guest_tools_upgrade
  • vmware_guest_tpm
  • vmware_guest_vgpu
  • vmware_guest_video
  • vmware_guest_vnc (deprecated anyway)
  • vmware_vm_shell
  • vmware_vm_vss_dvs_migrate
  • vmware_vsan_cluster

I use the following script to generate the list:

#!/bin/bash

vmware_modules=$(find ./lib/ansible/modules/cloud/vmware/ -name 'vmware_*.py' -exec basename -s .py {} \;)
for module in ${vmware_modules}; do
    if ! test -d ./test/integration/targets/${module}; then
        echo "- [ ] \`${module}\`"
    fi
done
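
The same idea, adapted to this collection's repo layout (a sketch; the function name and the argument handling are mine, and the paths assume a standalone collection checkout with plugins/modules and tests/integration/targets):

```shell
#!/bin/bash
# Print a checklist of vmware modules that have no integration test target.
# Usage: ./missing_tests.sh /path/to/collection/checkout
list_untested_modules() {
    local root="${1:-.}"
    local module
    # Every vmware_*.py module should have a matching integration target dir.
    for module in $(find "${root}/plugins/modules" -name 'vmware_*.py' -exec basename -s .py {} \; 2>/dev/null); do
        if ! test -d "${root}/tests/integration/targets/${module}"; then
            echo "- [ ] \`${module}\`"
        fi
    done
}

list_untested_modules "$@"
```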
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware

ANSIBLE VERSION
devel

vmware_vm_info should return VMs that are accessible to the user

vmware_vm_info normally returns information about all VMs available in the ESXi datacenter. This is nice; however, in a corporate environment I do not have access to such information. What I would expect is for vmware_vm_info to return information about only those VMs that I have access to and can normally query with other Ansible tasks.

Otherwise one has to resort to something like https://stackoverflow.com/a/57402817 in order to get a single VM's uuid.
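
One such workaround (a minimal sketch; the variable names are illustrative, and it assumes the VM's name and datacenter are known) is to query a single, known VM via vmware_guest_info, which only needs read access to that VM:

```yaml
- name: Look up a single VM the user can access
  community.vmware.vmware_guest_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter_name }}"
    name: "{{ vm_name }}"
  delegate_to: localhost
  register: guest_info

- name: Show the VM's uuid
  debug:
    msg: "{{ guest_info.instance.hw_product_uuid }}"
```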

Moved from ansible/ansible#67816

Thanks.

vmware_guest not setting boot firmware

From @rolling1000ton on Mar 24, 2020 09:03

SUMMARY

The vmware_guest module doesn't set the boot firmware to EFI when the VM is created from a template, or when the boot firmware is changed to EFI after the VM has been created. If the VM is created without a template, the boot firmware is set just fine.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.9.6
  config file = /home/user/ansible-store/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.9 (default, Nov  7 2019, 10:44:02) [GCC 8.3.0]
CONFIGURATION
DEFAULT_ACTION_PLUGIN_PATH(/home/user/ansible-store/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/ara/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/user/ansible-store/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/ara/plugins/callback']
DEFAULT_CALLBACK_WHITELIST(/home/user/ansible-store/ansible.cfg) = ['slack']
DEFAULT_HOST_LIST(/home/user/ansible-store/ansible.cfg) = ['/home/user/ansible-store/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible-store/ansible.cfg) = jinja2.ext.do,jinja2.ext.i18n,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible-store/ansible.cfg) = True
DEFAULT_LOG_PATH(/home/user/ansible-store/ansible.cfg) = /var/log/ansible.log
DEFAULT_ROLES_PATH(/home/user/ansible-store/ansible.cfg) = ['/home/user/ansible-store/roles']
DEFAULT_TIMEOUT(/home/user/ansible-store/ansible.cfg) = 300
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible-store/ansible.cfg) = /etc/ansible/.vault_pass
HOST_KEY_CHECKING(/home/user/ansible-store/ansible.cfg) = False
INTERPRETER_PYTHON(/home/user/ansible-store/ansible.cfg) = auto
INVENTORY_IGNORE_EXTS(/home/user/ansible-store/ansible.cfg) = ['.pyc', '.pyo', '.swp', '.bak', '~', '.rpm', '.md', '.txt', '~', '.orig', '.ini', '.cfg', '.retry', '.py']
RETRY_FILES_ENABLED(/home/user/ansible-store/ansible.cfg) = False
OS / ENVIRONMENT

Ansible Host: Ubuntu 18.04.4 LTS
Client: VMWare ESXi 6.7

STEPS TO REPRODUCE
    - name: create vm
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        folder: /folder/vm
        cluster: cluster
        datacenter: DC
        name: "{{ vmname }}"
        state: poweredoff
        template: vmtemplate
        hardware:
          boot_firmware: efi
        networks:
        - name: NETWORK
          mac: "{{ macaddress[0] }}"

    - name: set boot firmware efi
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        name: "{{ vmname }}"
        hardware:
          boot_firmware: efi
EXPECTED RESULTS

Boot Firmware set to EFI in VMWare

ACTUAL RESULTS

Boot Firmware stays BIOS in VMWare

Copied from original issue: ansible/ansible#68424

Unsupported parameters for (vmware_guest_boot_manager) module: boot_firmware, secure_boot_enabled

From @tanguy-legressus on Mar 23, 2020 11:21

SUMMARY

Trying to add some features for Windows deployment with the vmware_guest_boot_manager module, following the docs at https://docs.ansible.com/ansible/latest/modules/vmware_guest_boot_manager_module.html. I want to boot Windows with EFI and Secure Boot enabled.

Even though the parameters are listed in the documentation, they do not seem to be found/working when the play runs.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest_boot_manager
secure_boot_enabled

ANSIBLE VERSION

Ansible 2.9.2

CONFIGURATION

Jenkins cent 7_X
version 2.204.5

OS / ENVIRONMENT

Target OS versions:
Windows 10_1809
Windows 10_1903

STEPS TO REPRODUCE
- name: create a vm 
  block:

    - name: "vSphere actions | Create VM: {{ machine }} on {{ vm_CRUD_hostname }}"
      vmware_guest:
        hostname: "{{ vm_hostname }}"
        username: "{{ vm_username }}"
        password: "{{ vm_password }}"
        annotation: "{{ vm_initial_notes }}"
        datacenter: "{{ vm_esxi_datacenter }}"
        cluster: "{{ vm_cluster }}"
        resource_pool: "{{ vm_resource_pool }}"
        validate_certs: no
        folder: "{{ vm_extra_config_folder }}"
        name: "{{ machine }}"
        state: "{{ power_state | default('poweredoff') }}"
        template: "{{ vm_template_src | default(omit) }}"
        guest_id: "{{ vm_hardware_osid }}"
        disk: "{{ vm_vmware.disk|list if vm_template_src == '' else omit }}"
        hardware:
          memory_mb: "{{ vm_hardware_memory_mb if vm_template_src == '' else omit }}"
          num_cpus: "{{ vm_hardware_num_cpus if vm_template_src == '' else omit }}"
          num_cpu_cores_per_socket: "{{ vm_hardware_num_cpu_cores_per_socket|int|abs if vm_template_src == '' else omit }}"
          scsi: "{{ vm_hardware_scsi if vm_template_src == '' else omit }}"
          boot_firmware: efi
          nested_virt: yes
        networks:
        - name: "{{ vm_network }}"
          # start_connected: True
          # allow_guest_control: True
          # device_type: "{{ vm_type }}"
          # type: "{{ vm_type }}"
          # ip: "{{ vm_ip | default(omit) }}"
          # netmask: "{{ vm_netmask | default(omit) }}"
          # gateway: "{{ vm_gateway | default(omit) }}"
        # wait for address only when creating from template
        # customization:
        #     domain: "{{ vm_domain }}"
        #     hostname: "{{ machine }}"
        #     dns_servers: "{{ vm_dns_servers }}"
        wait_for_ip_address: no
      delegate_to: localhost
      register: vmware_result
      tags: vmware_host_create
  
    - name: Enable UEFI SecureBoot for {{ machine }}
      vmware_guest_boot_manager:
        hostname: "{{ vm_hostname }}"
        username: "{{ vm_username }}"
        password: "{{ vm_password }}"
        validate_certs: no
        uuid: "{{ vmware_result.instance.hw_product_uuid }}"
        boot_firmware: efi
        secure_boot_enabled: true
      delegate_to: localhost
      register: vm_uefi_boot
      tags: vmware_host_create

  # register: vmware_result
  tags: vmware_host_create

With default values:

vm_secure_boot: true

EXPECTED RESULTS
ACTUAL RESULTS

Jenkins log error:

fatal: [localhost -> localhost]: FAILED! => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "boot_firmware": "efi", 
            "hostname": "hostname", 
            "password": "password", 
            "secure_boot_enabled": true, 
            "username": "username", 
            "uuid": "uuid", 
            "validate_certs": false
        }
    }, 
    "msg": "Unsupported parameters for (vmware_guest_boot_manager) module: boot_firmware, secure_boot_enabled Supported parameters include: boot_delay, boot_order, boot_retry_delay, boot_retry_enabled, enter_bios_setup, hostname, name, name_match, password, port, proxy_host, proxy_port, username, uuid, validate_certs"
}

Copied from original issue: ansible/ansible#68401

vmware_guest_find slow

From @markatdxb on Mar 26, 2020 08:19

SUMMARY

Running vmware_guest_find takes an extremely long time; in our environment with 4000+ VMs it takes around 7 minutes to get the results. Running, for example, an equivalent PowerCLI query takes a second to get the result. Is there any way to improve the performance of this module for large environments?

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

vmware_guest_find

ADDITIONAL INFORMATION

Copied from original issue: ansible/ansible#68484

vmware_drs_group - Add VM to DRS Group

Currently a group can only be created anew or overwritten. If you want to add a single VM to an existing group, a workaround needs to be applied.

- name: Get all DRS VM groups
  vmware_drs_group_facts:
      hostname: "{{ vcenter_hostname }}"
      password: "{{ vcenter_password }}"
      username: "{{ vcenter_username }}"
      datacenter_name: "{{ datacenter_name }}"
  delegate_to: localhost    
  register: drs_groups
- name: Display DRS Group Object
  debug:
    msg: "{{ drs_groups }}"
- name: Filter existing VMs from DRS Group Object
  set_fact: 
    vmnames: "{{ drs_groups | json_query( vmnames_query ) }}"
  vars:
    vmnames_query: 'drs_group_facts.{{ clustername }}[?group_name==`{{ vm_group }}`].vms'
- name: Display DRS Group VM List
  debug:
    msg: "{{ vmnames[0] }}"
  when: vmnames | list | count  > 0
- name: Display new DRS Group VM List
  debug:
    msg: "{{ vmnames[0] + [ vm_name ] }}"
  when: vmnames | list | count  > 0  
- name: Build full VM  List
  set_fact:
    all_vmnames: "{{ vmnames[0] + [ vm_name ] }}"
  when: vmnames | list | count  > 0
- name: Add {{ vm_name }} to existing DRS VM group
  vmware_drs_group:
    hostname: "{{ vcenter_hostname }}"
    password: "{{ vcenter_password }}"
    username: "{{ vcenter_username }}"
    cluster_name: "{{ clustername }}" 
    datacenter_name: "{{ datacenter_name }}"
    group_name: "{{ vm_group }}"
    vms: "{{ all_vmnames }}"
    state: present
  delegate_to: localhost  
  when: vmnames | list | count > 0
- name: Add {{ vm_name }} to new DRS VM group
  vmware_drs_group:
    hostname: "{{ vcenter_hostname }}"
    password: "{{ vcenter_password }}"
    username: "{{ vcenter_username }}"
    cluster_name: "{{ clustername }}"
    datacenter_name: "{{ datacenter_name }}"
    group_name: "{{ vm_group }}"
    vms:
      - "{{ vm_name }}"
    state: present
  delegate_to: localhost  
  when: vmnames | list | count == 0

This is very complex and not particularly stable.

show_tag returns an empty array. Module: vmware_vm_info

From @DavidSanchezGracia on Apr 14, 2020 10:08

SUMMARY

When trying to return all VMs with their corresponding tags, the tags array is returned empty.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

The module used is vmware_vm_info.

ANSIBLE VERSION
ansible 2.9.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 7.4.0]
CONFIGURATION
---
- name: Find folder path of an existing virtual machine
  hosts: localhost
  tasks:
    - name: Find all VM
      vmware_vm_info:
        hostname: "HOST"
        username: "{{ user }}"
        password: "{{ pass }}"
        validate_certs: no
        vm_type: vm
        show_tag: yes
      delegate_to: localhost
      register: vm_info

    - name: Print the information of the vm
      debug:
        var: vm_info

OS / ENVIRONMENT

UBUNTU 18.04

STEPS TO REPRODUCE
EXPECTED RESULTS
 {
                "attributes": {},
                "cluster": "Cluster_SOC",
                "esxi_hostname": "192",
                "guest_fullname": "Other (32-bit)",
                "guest_name": "",
                "ip_address": "",
                "mac_address": [
                    ""
                ],
                "power_state": "poweredOn",
                "tags": [THE TAGS THAT BELONG TO THE VM],
                "uuid": "42",
                "vm_network": {
                    "": {
                        "ipv4": [
                            ""
                        ],
                        "ipv6": [
                            ""
                        ]
                    }
                }
            },
ACTUAL RESULTS
 {
                "attributes": {},
                "cluster": "",
                "esxi_hostname": "",
                "guest_fullname": "Other (32-bit)",
                "guest_name": "NAME",
                "ip_address": "",
                "mac_address": [
                    "00"
                ],
                "power_state": "poweredOn",
                "tags": [],
                "uuid": "4230dccd18814df",
                "vm_network": {
                    "00:50:56:b0:50:f0": {
                        "ipv4": [
                            "128"
                        ],
                        "ipv6": [
                            "f"
                        ]
                    }
                }
            },

Copied from original issue: ansible/ansible#68935

vmware_guest_network: Add network adapter to vm

I want to add a new network adapter to a VM in my vSphere Client. The VM's OS is CentOS 7. When I run my Ansible playbook, the adapter is added successfully, but its state is "not connected", and when I try to connect it manually, it does not allow me to do it. I use Ansible 2.9.6, and I found that the same playbook works with Ansible 2.8. My Python version is 3. Is there any suggestion on how to solve this?

Here is the ansible playbook:

---
- hosts: localhost
  vars: 
    - ansible_python_interpreter: "/usr/bin/python3"
  gather_facts: no
  tasks:
  - name: Add adapter
    vmware_guest_network:
      hostname: "{{ hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      validate_certs: no
      name: "{{ vm_name }}"
      gather_network_info: false
      networks:
        - name: "{{ name_of_adapter }}"
          state: "{{ state }}"
          device_type: vmxnet3
          dvswitch_name: dvSwitch
          connected: true
          start_connected: true 
    delegate_to: localhost
    register: network_info

Here is the part of the output I care most about. Why is "connected": false when I set connected: true in my playbook?

"network_data": {
"0": {
"allow_guest_ctl": true,
"connected": false,
"device_type": "VMXNET3",
"label": "Network adapter 1",
"start_connected": true,
"unit_number": 7,
"wake_onlan": true
}

Check for missing commits vs devel vs migration process

SUMMARY

The "Big Migration" has now taken place.

As this collection already exists, we need to carefully check whether any further commits went into devel since this repo was created, and whether there are any parts of the migration.py rewrite of existing code that need applying to this repo.

Please check the contents of https://github.com/ansible-collection-migration/community.vmware against this repo

In particular:

  • Please do a per-file diff against every file in the ansible-collection-migration repo and this one
  • Pay particular attention to files added and removed.
  • During the last two weeks there have been lots of fixes, especially around tests, dependencies, and new collection features, e.g. meta/action_groups.yml
ISSUE TYPE
  • Bug Report

Add module to configure TCPIP stack

SUMMARY

On ESXi it is possible to have multiple TCP/IP stacks (default, provisioning, vmotion, vxlan, ...). Unfortunately, I don't see any module to configure these settings, such as the default gateway of each stack.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

vmware_host_tcpip_stacks

ADDITIONAL INFORMATION

An example would be :

- vmware_host_tcpip_stacks:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    esxi_hostname: '{{ esxi_hostname }}'
    default:
      vmkernel_gateway: 192.168.1.1
      congestion_algorithm: newreno
      num_connections: 11000
    provisioning:
      congestion_algorithm: newreno
      num_connections: 11000
    vmotion:
      vmkernel_gateway: 192.168.2.1
      congestion_algorithm: newreno
      num_connections: 11000
    vxlan:
      congestion_algorithm: newreno
      num_connections: 11000
  delegate_to: localhost

vmware_tools connection plugin should encode when using Powershell

SUMMARY

When using the vmware_tools connection plugin with Windows (shell method set to PowerShell), some Windows modules don't work properly.

The WinRM plugin does some encoding to always use PowerShell and has no issues with any Windows modules. See this issue for more information on the bugs when working with Windows and vmware_tools: ansible-collections/ansible.windows#36

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

vmware_tools

ADDITIONAL INFORMATION

The same as WinRM, in order to support all Windows modules.

Please see ansible-collections/ansible.windows#36 for any more details on why this is requested.
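
For reference, the encoding in question is straightforward: the WinRM transport Base64-encodes the UTF-16LE bytes of the script and passes the result to powershell.exe -EncodedCommand, which sidesteps shell quoting and escaping entirely. A minimal sketch in Python (the helper name is illustrative, not from either plugin):

```python
import base64


def encode_powershell(script: str) -> str:
    """Encode a PowerShell script for use with -EncodedCommand.

    PowerShell expects Base64 over the UTF-16LE bytes of the script,
    which avoids any shell quoting or escaping problems.
    """
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")


# Build a command line that runs the script verbatim.
cmd = "powershell.exe -NoProfile -EncodedCommand " + encode_powershell("Write-Host 'hello'")
```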
