
ansible-kubevirt-modules's Introduction

Ansible KubeVirt Modules

Ansible KubeVirt modules automate the management of the following Kubernetes cluster object types:

  • Virtual Machines (also VM templates and VM presets),
  • VM Replica Sets,
  • and Persistent Volume Claims (including Containerized Data Importer functionality).

Since the release of Ansible 2.8, the modules, the inventory plugin and relevant unit tests are part of the upstream Ansible git repository, while this repository contains only the integration tests and example playbooks.


Quickstart

For a quick introduction, please see the following kubevirt.io blog posts:

Requirements

Source Code

Testing

There are two targets of tests that can be found here:

To run the full complement of tests for a given target, please use the relevant all.yml playbook.

Automatic testing

Unit tests

The upstream Ansible repository contains unit tests covering the KubeVirt modules.

Integration tests

Module tests (tests/playbooks/all.yml) are run against actual clusters with both KubeVirt and CDI deployed, on top of:

  • TravisCI (Ubuntu VMs supporting only minikube; no KVM acceleration for KubeVirt VMs)
  • oVirt Jenkins (physical servers that run any cluster kubevirtci supports)

Module tests are run using:

  • the most recently released Ansible (whatever one gets with pip install ansible)
  • ansible stable branch(es)
  • ansible devel branch

Role tests (tests/roles/all.yml) are only run on TravisCI using the devel branch.

To detect regressions early, Travis runs all the tests every 24 hours against a fresh clone of ansible.git and emails kubevirt module developers if tests fail.

Manual testing

  1. Clone this repository to a machine where you can oc login to your cluster:

    $ git clone https://github.com/kubevirt/ansible-kubevirt-modules.git
    $ cd ./ansible-kubevirt-modules
  2. (Optional) Configure a virtual environment to isolate dependencies:

    $ python3 -m venv env
    $ source env/bin/activate
  3. Install dependencies:

    $ pip install openshift

    If you skipped the previous step, you might need to prepend that command with sudo.

  4. Install ansible (in one of the many ways):

    • Install the latest released version:

      $ pip install ansible

      Again, sudo might be required here.

    • Build RPM from the devel branch:

      $ git clone https://github.com/ansible/ansible.git
      $ cd ./ansible
      $ make rpm
      $ sudo rpm -Uvh ./rpm-build/ansible-*.noarch.rpm
    • Check out PRs locally

  5. Run the tests:

    $ ansible-playbook tests/playbooks/all.yml

    Note: The playbook examples include cloud-init configuration so that the created VMIs can be accessed.

    1. To use SSH, do the following:

      $ kubectl get all
      NAME                             READY     STATUS    RESTARTS   AGE
      po/virt-launcher-bbecker-jw5kk   1/1       Running   0          22m
      
      $ kubectl expose pod virt-launcher-bbecker-jw5kk --port=27017 --target-port=22 --name=vmservice
      $ kubectl get svc vmservice
      NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
      vmservice   ClusterIP   172.30.133.9   <none>        27017/TCP   19m
      
      $ ssh -i tests/test_rsa -p 27017 [email protected]

      It might take a while for the VM to come up before SSH can be used.

    2. For using virtctl:

      $ virtctl console <vmi_name>

      Or

      $ virtctl vnc <vmi_name>

      Use the username kubevirt and the password kubevirt.

  6. (Optional) Leave the virtual environment and remove it:

    $ deactivate
    $ rm -rf env/

Notes on kubevirt_cdi_upload module

To upload an image from localhost using the kubevirt_cdi_upload module, your system needs to be able to connect to the CDI upload proxy pod. This can be achieved by either:

  1. Exposing the cdi-uploadproxy Service from the cdi namespace, or

  2. Using kubectl port-forward to set up a temporary port forwarding through the Kubernetes API server: kubectl port-forward -n cdi service/cdi-uploadproxy 9443:443
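With the port forwarding from option 2 in place, an upload task might look like the sketch below. The parameter names used here (pvc_name, pvc_namespace, upload_host, upload_host_validate_certs, path) and the image path are assumptions for illustration; check `ansible-doc kubevirt_cdi_upload` for the exact module signature.

```yaml
# Hypothetical sketch: upload a local disk image into a PVC through the
# forwarded CDI upload proxy port. Parameter names are assumptions;
# verify them with `ansible-doc kubevirt_cdi_upload`.
- hosts: localhost
  tasks:
    - name: Upload a local image via the CDI upload proxy
      kubevirt_cdi_upload:
        pvc_name: pvc-demo
        pvc_namespace: default
        upload_host: https://localhost:9443   # from `kubectl port-forward ... 9443:443`
        upload_host_validate_certs: no        # the proxy typically uses a self-signed cert
        path: /tmp/cirros.img
```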

Notes on the k8s_facts module

The following command will collect facts about any existing VM(s) and print out a JSON document based on the KubeVirt VM spec:

$ ansible-playbook examples/playbooks/k8s_facts_vm.yml
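A facts-gathering playbook like the one above can be sketched as follows. The api_version value kubevirt.io/v1alpha3 is an assumption and may differ depending on the deployed KubeVirt release.

```yaml
# Sketch of a k8s_facts playbook for KubeVirt VMs.
# kubevirt.io/v1alpha3 is an assumed API version; adjust for your cluster.
- hosts: localhost
  tasks:
    - name: Gather facts about all VirtualMachines in the default namespace
      k8s_facts:
        api_version: kubevirt.io/v1alpha3
        kind: VirtualMachine
        namespace: default
      register: vm_facts

    - name: Print the collected VM specs as JSON
      debug:
        var: vm_facts.resources
```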

Notes on the KubeVirt inventory plugin

Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, either via the -i /path/to/file and/or -i 'host1, host2' command line parameters or from other configuration sources.

Enabling the KubeVirt inventory plugin

To enable the KubeVirt plugin, add the following section in the tests/ansible.cfg file:

[inventory]
enable_plugins = kubevirt

Configuring the KubeVirt inventory plugin

Define the plugin configuration in tests/playbooks/plugin/kubevirt.yaml as follows:

plugin: kubevirt
connections:
  - namespaces:
      - default
    interface_name: default

In this example, the KubeVirt plugin will list all VMIs from the default namespace and use the default interface name.

Using the KubeVirt inventory plugin

To use the plugin in a playbook, run:

$ ansible-playbook -i kubevirt.yaml <playbook>

Note: The KubeVirt inventory plugin is designed to work with Multus. It can be used only for VMIs that are connected to a bridge and report their IP address in the Status field. For VMIs exposed by Kubernetes services, please use the k8s Ansible module.
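For such service-exposed VMIs, a plain Service object can be created with the generic k8s module, as in the sketch below. The selector label kubevirt.io/domain: testvm is an assumption for illustration; it must match the labels actually present on your VMI's pod.

```yaml
# Sketch: expose a VMI's SSH port through a ClusterIP Service with the k8s module.
# The selector label is an assumption; adjust it to match your VMI.
- hosts: localhost
  tasks:
    - name: Expose a VMI through a Kubernetes Service
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: vmservice
            namespace: default
          spec:
            selector:
              kubevirt.io/domain: testvm
            ports:
              - port: 27017
                targetPort: 22
```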

ansible-kubevirt-modules's People

Contributors

adityaramteke, alexxa, gbenhaim, imjoey, karmab, machacekondra, masayag, mmazur, tareqalayan, tripledes


ansible-kubevirt-modules's Issues

Expose all k8s objects in kubevirt_vm's return json

According to the docs (and in line with what the upstream k8s modules do), kubevirt_vm should return info on both the VM and the VMI it operates on. The latter is currently missing.

(Incidentally, once implemented, it might be a good idea to increase the number of asserts in kubevirt_vm's unit tests to also cover VMI operations, since that status would then be available on module exit.)

ansible-doc can't generate docs for vm and ovm modules

ansible-doc can't generate docs for the vm and ovm modules; vmrs works fine.

ERROR! module kubevirt_vm.py has a documentation error formatting or is missing documentation.
ERROR! module kubevirt_ovm.py has a documentation error formatting or is missing documentation.

pvc and registrydisk design

Please review the usage of the pvc and registrydisk options on all modules: should any of them be required or have a default value? At the moment the docs say none is required, but if none is specified, a Missing disk information error pops up.

show better error msg for ApiException

Currently, if any ApiException occurs in the module, the whole traceback is shown to the user, even when the user has not passed the verbose option -vvv.
It would be better to show only the Reason element of the ApiException object for readability, and show the full trace only when verbose output is requested.

For example,

TASK [kubevirt_facts] ********************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "module_stderr": "/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n  InsecureRequestWarning)\nTraceback (most recent call last):\n  File \"/tmp/ansible_2ggVls/ansible_module_kubevirt_facts.py\", line 186, in <module>\n    main()\n  File \"/tmp/ansible_2ggVls/ansible_module_kubevirt_facts.py\", line 174, in main\n    obj = facts.execute_module()\n  File \"/tmp/ansible_2ggVls/ansible_modlib.zip/ansible/module_utils/k8svirt/facts.py\", line 52, in execute_module\n  File \"/tmp/ansible_2ggVls/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 458, in list\n  File \"/tmp/ansible_2ggVls/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 451, in exists\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py\", line 17941, in read_namespaced_persistent_volume_claim\n    (data) = self.read_namespaced_persistent_volume_claim_with_http_info(name, namespace, **kwargs)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py\", line 18032, in read_namespaced_persistent_volume_claim_with_http_info\n    collection_formats=collection_formats)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 321, in call_api\n    _return_http_data_only, collection_formats, _preload_content, _request_timeout)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 155, in __call_api\n    _request_timeout=_request_timeout)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 342, in request\n    
headers=headers)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 231, in GET\n    query_params=query_params)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 222, in request\n    raise ApiException(http_resp=r)\nkubernetes.client.rest.ApiException: (404)\nReason: Not Found\nHTTP response headers: HTTPHeaderDict({'Date': 'Thu, 13 Sep 2018 09:36:29 GMT', 'Content-Length': '220', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"persistentvolumeclaims \\\"pvc-demo\\\" not found\",\"reason\":\"NotFound\",\"details\":{\"name\":\"pvc-demo\",\"kind\":\"persistentvolumeclaims\"},\"code\":404}\n\n\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
	to retry, use: --limit @/home/vparekh/code/github.com/ansible-kubevirt-modules/tests/playbooks/kubevirt_pvc_facts.retry

PLAY RECAP *******************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1   

unexpected keyword argument 'field_selectors'

Running tests/playbooks/kubevirt_all_vmis_facts.yml gave me

TASK [kubevirt_facts] *********************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: Got an unexpected keyword argument 'field_selectors' to method list_virtual_machine_instance_for_all_namespaces
fatal: [127.0.0.1]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_vefHAi/ansible_module_kubevirt_facts.py\", line 186, in <module>\n    main()\n  File \"/tmp/ansible_vefHAi/ansible_module_kubevirt_facts.py\", line 174, in main\n    obj = facts.execute_module()\n  File \"/tmp/ansible_vefHAi/ansible_modlib.zip/ansible/module_utils/k8svirt/facts.py\", line 52, in execute_module\n  File \"/tmp/ansible_vefHAi/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 183, in list\n  File \"/tmp/ansible_vefHAi/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 195, in list_all\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubevirt/apis/default_api.py\", line 2884, in list_virtual_machine_instance_for_all_namespaces\n    (data) = self.list_virtual_machine_instance_for_all_namespaces_with_http_info(**kwargs)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubevirt/apis/default_api.py\", line 2924, in list_virtual_machine_instance_for_all_namespaces_with_http_info\n    \" to method list_virtual_machine_instance_for_all_namespaces\" % key\nTypeError: Got an unexpected keyword argument 'field_selectors' to method list_virtual_machine_instance_for_all_namespaces\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
	to retry, use: --limit @/home/vparekh/code/github.com/ansible-kubevirt-modules/tests/playbooks/kubevirt_all_vmis_facts.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1   

one or several separate facts modules

At the moment, there are four separate *_facts modules for vm, ovm, vmrs, and vmpreset (in the old naming). It might be better to join them into something like:

- kubevirt_facts:
    name: <name>
    namespace: <namespace>
    kind: vm/ovm/vmrs/vmpreset
- debug:
    var: kubevirt_{vm/ovm/vmrs/vmpreset}      ---> based on kind

'DefaultApi' object has no attribute 'read_namespaced_offline_virtual_machine'

TASK [kubevirt_raw] ***********************************************************************************************************************************************************************************************
task path: /root/ansible-kubevirt-modules/tests/raw_ovm.yml:8
Using module file /root/ansible-kubevirt-modules/lib/ansible/modules/cloud/kubevirt/kubevirt_raw.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935 `" && echo ansible-tmp-1530170650.92-124543604770935="` echo /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpRW0Wgb TO /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935/kubevirt_raw.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935/ /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935/kubevirt_raw.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/env python /root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935/kubevirt_raw.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1530170650.92-124543604770935/" > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_Pnr6AC/ansible_module_kubevirt_raw.py", line 53, in <module>
    main()
  File "/tmp/ansible_Pnr6AC/ansible_module_kubevirt_raw.py", line 49, in main
    KubeVirtRawModule().execute_module()
  File "/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py", line 79, in execute_module
  File "/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py", line 102, in __get_object
  File "/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py", line 160, in exists
AttributeError: 'DefaultApi' object has no attribute 'read_namespaced_offline_virtual_machine'
 
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_Pnr6AC/ansible_module_kubevirt_raw.py\", line 53, in <module>\n    main()\n  File \"/tmp/ansible_Pnr6AC/ansible_module_kubevirt_raw.py\", line 49, in main\n    KubeVirtRawModule().execute_module()\n  File \"/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py\", line 79, in execute_module\n  File \"/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py\", line 102, in __get_object\n  File \"/tmp/ansible_Pnr6AC/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 160, in exists\nAttributeError: 'DefaultApi' object has no attribute 'read_namespaced_offline_virtual_machine'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE",
    "rc": 0
}
        to retry, use: --limit @/root/ansible-kubevirt-modules/tests/raw_ovm.retry

InsecureRequestWarning even when verify_ssl=false was defined

fatal: [127.0.0.1]: FAILED! => {
    "changed": false, 
    "module_stderr": "/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n  InsecureRequestWarning)\n/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n  InsecureRequestWarning)\nTraceback (most recent call last):\n  File \"/tmp/ansible_YBh2kt/ansible_module_kubevirt_raw.py\", line 219, in <module>\n    main()\n  File \"/tmp/ansible_YBh2kt/ansible_module_kubevirt_raw.py\", line 215, in main\n    KubeVirtRawModule().execute_module()\n  File \"/tmp/ansible_YBh2kt/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py\", line 96, in execute_module\n  File \"/tmp/ansible_YBh2kt/ansible_modlib.zip/ansible/module_utils/k8svirt/raw.py\", line 121, in __create\n  File \"/tmp/ansible_YBh2kt/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 415, in create\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py\", line 5950, in create_namespaced_persistent_volume_claim\n    (data) = self.create_namespaced_persistent_volume_claim_with_http_info(namespace, body, **kwargs)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py\", line 6035, in create_namespaced_persistent_volume_claim_with_http_info\n    collection_formats=collection_formats)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 321, in call_api\n    _return_http_data_only, collection_formats, _preload_content, _request_timeout)\n  File 
\"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 155, in __call_api\n    _request_timeout=_request_timeout)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 364, in request\n    body=body)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 266, in POST\n    body=body)\n  File \"/home/vparekh/code/github.com/venv/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 222, in request\n    raise ApiException(http_resp=r)\nkubernetes.client.rest.ApiException: (404)\nReason: Not Found\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 07 Sep 2018 11:00:36 GMT', 'Content-Length': '186', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces \\\"vms\\\" not found\",\"reason\":\"NotFound\",\"details\":{\"name\":\"vms\",\"kind\":\"namespaces\"},\"code\":404}\n\n\n", 
    "module_stdout": "", 
    "msg": "MODULE FAILURE", 
    "rc": 1
}

Even though I had defined verify_ssl = false in ansible.cfg, I got the above error when running tests/playbooks/kubevirt_raw_pvc.yml.
Not sure whether it's an issue with the k8s module or ours.

registrydisk option doesn't work

>> ansible-playbook -i inventory tests/vm.yml -vvv
<...>
fatal: [10.8.249.119]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "name": "smallvm-xyz",
            "namespace": "default",
            "registrydisk": "kubevirt/fedora-cloud-registry-disk-demo",
            "state": "present"
        }
    },
    "msg": "Unsupported parameters for (kubevirt_vm) module: registrydisk Supported parameters include: cdrom,cloudinit,disk,iqn,lun,memory,name,namespace,pvc,src,state,target,timeout,wait"
}

type: ClusterIP in k8s_service module gives a success result but does not create service

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
Creating a service of type ClusterIP with the k8s_service module reports success but does not create any service:

---
- name: Create a service
  hosts: localhost
  tasks:
    - name: create a service type of vm
      k8s_service:
        state: absent
        name: clusterip-service-test
        namespace: test-e2e
        type: ClusterIP
        ports:
          - name: clusterip
            port: 27016
            protocal: TCP
            targetPort: 22
        selector:
          kubevirt.io/vm: test-vm-cirros
        definition:
          metadata:
            labels:
              test: clusterip-test

What you expected to happen:
It should create the service.

How to reproduce it (as minimally and precisely as possible):
100%

Anything else we need to know?:

Environment:

  • KubeVirt version (use virtctl version):
  • Kubernetes version (use kubectl version):
  • VM or VMI specifications:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Allow disabling SSL verification

Currently modules fail when using custom certs with the following error:

fatal: [localhost]: FAILED! => {                                                                                                                                                                                                                 
    "changed": false,                                                                                                                             
    "module_stderr": "2018-04-09 10:10:27,055 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)'),)': /apis/kubev
irt.io/v1alpha1/virtualmachines\n2018-04-09 10:10:27,066 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)'),)': 
/apis/kubevirt.io/v1alpha1/virtualmachines\n2018-04-09 10:10:27,083 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:
579)'),)': /apis/kubevirt.io/v1alpha1/virtualmachines\nTraceback (most recent call last):\n  File \"/tmp/ansible_GOGwNn/ansible_module_kubevirt_vm.py\", line 350, in <module>\n    main()\n  File \"/tmp/ansible_GOGwNn/ansible_module_kubevirt_vm.py\", line 326, in main\n    
found = exists(crds, module.params[\"name\"], module.params[\"namespace\"])\n  File \"/tmp/ansible_GOGwNn/ansible_module_kubevirt_vm.py\", line 262, in exists\n    DOMAIN, VERSION, 'virtualmachines')[\"items\"]\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/a
pis/custom_objects_api.py\", line 823, in list_cluster_custom_object\n    (data) = self.list_cluster_custom_object_with_http_info(group, version, plural, **kwargs)\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/apis/custom_objects_api.py\", line 923, in list_
cluster_custom_object_with_http_info\n    collection_formats=collection_formats)\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 321, in call_api\n    _return_http_data_only, collection_formats, _preload_content, _request_timeout)\n  File
 \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 155, in __call_api\n    _request_timeout=_request_timeout)\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 342, in request\n    headers=headers)\n  File \"/usr/li
b/python2.7/site-packages/kubernetes/client/rest.py\", line 231, in GET\n    query_params=query_params)\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 205, in request\n    headers=headers)\n  File \"/usr/lib/python2.7/site-packages/urllib3/req
uest.py\", line 66, in request\n    **urlopen_kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/request.py\", line 87, in request_encode_url\n    return self.urlopen(method, url, **extra_kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/poolmanager.py\", line 321
, in urlopen\n    response = conn.urlopen(method, u.request_uri, **kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 668, in urlopen\n    **response_kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 668, in urlo
pen\n    **response_kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 668, in urlopen\n    **response_kw)\n  File \"/usr/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 639, in urlopen\n    _stacktrace=sys.exc_info()[2])\n  File
 \"/usr/lib/python2.7/site-packages/urllib3/util/retry.py\", line 388, in increment\n    raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='192.168.200.2', port=8443): Max retries exceeded with url: /
apis/kubevirt.io/v1alpha1/virtualmachines (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)'),))\n",                                       
    "module_stdout": "",                                                                                                                                                                           
    "msg": "MODULE FAILURE",                                                                                                                                                                       
    "rc": 0                                                                                                                                                                                        
}

Scaling module assumes VirtualMachineInstanceReplicaSet values that aren't there

According to https://kubevirt.io/docs/workloads/controllers/virtual-machine-replica-set.html (last paragraph), a return object from a VMIRS invocation should contain "status/ready replicas" values. That's also how Kubernetes' built-in RS does it.

The upstream k8s scaling code assumes object.status.readyReplicas to exist, as does our code (here). The trouble is that VMIRS does not return it, so scaling with wait always fails with a timeout, since the code simply requires readyReplicas to be there.

I don't know what to do about this.

Some smaller issues

These two issues made it hard to follow the readme:

  • Regarding SELinux, it doesn't specify which "inventory" to run against (e.g. let's specify that it's the openshift-ansible inventory)
  • The "KubeVirt parameters documentation" link is broken.

delete object by name and namespace only

If one wants to delete an object, for example a VM, only name and namespace should be required, nothing else.
ATM,

    - name: Delete a VM
      kubevirt_vm:
        name: test-vm
        namespace: default
        state: absent

returns

fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: pvc"}

and the VM is removed only when I specify pvc.

e2e syntax error

e2e.yml has a bug in the VirtualMachine definition: it defines inline.spec.spec instead of inline.spec.template.spec (kubevirt_raw_vm.yml has this done correctly as far as I can tell). Please submit a PR with a fix before further extending that playbook. @vatsalparekh @adityaramteke

Refactor modules

Refactoring for:

  • Following PEP8 as much as possible
  • Making them more modular to ease modifications, tests, ...

Check required status of modules options

As I understand from kubevirt_*.py, there is no single required option. It can't stay that way: at least name should be "required": True.

Then, if I run playbook similar to:

  tasks:
    - kubevirt_vm:
       state: present
       name: my_vm
    - kubevirt_vm:
       state: absent
       name: my_other_vm

It always fails because of 'missing namespace'. And so on.

Errors should be more readable

On a freshly spun-up KubeVirt cluster with nothing on it, running e.g. tests/playbooks/kubevirt_facts.yml should fail, since there are no VMs, VMIs, or PVCs set up. And it does indeed fail. But the verbosity of the errors makes it non-obvious that the error is just a simple "VM not found", etc. Could these errors be made easier to read?

Example:

TASK [Gather facts for {{item.kind}} in a namespaces] ***************************************************************************************************************************************
failed: [localhost] (item={u'kind': u'VirtualMachine', u'ns': u'default', u'name': u'test-working'}) => {"changed": false, "error": "(404)\nReason: Not Found\nHTTP response headers: HTTPHeaderDict({'Date': 'Wed, 19 Sep 2018 14:50:36 GMT', 'Content-Length': '248', 'Content-Type': 'application/json'})\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"virtualmachines.kubevirt.io \\\"test-working\\\" not found\",\"reason\":\"NotFound\",\"details\":{\"name\":\"test-working\",\"group\":\"kubevirt.io\",\"kind\":\"virtualmachines\"},\"code\":404}\n\n", "item": {"kind": "VirtualMachine", "name": "test-working", "ns": "default"}, "msg": "Failed to retrieve requested object"}
failed: [localhost] (item={u'kind': u'VirtualMachineInstance', u'ns': u'default', u'name': u'vmi-ephemeral2'}) => {"changed": false, "error": "(404)\nReason: Not Found\nHTTP response headers: HTTPHeaderDict({'Date': 'Wed, 19 Sep 2018 14:50:37 GMT', 'Content-Length': '268', 'Content-Type': 'application/json'})\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"virtualmachineinstances.kubevirt.io \\\"vmi-ephemeral2\\\" not found\",\"reason\":\"NotFound\",\"details\":{\"name\":\"vmi-ephemeral2\",\"group\":\"kubevirt.io\",\"kind\":\"virtualmachineinstances\"},\"code\":404}\n\n", "item": {"kind": "VirtualMachineInstance", "name": "vmi-ephemeral2", "ns": "default"}, "msg": "Failed to retrieve requested object"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Unknown kind persistant_volume_claim
failed: [localhost] (item={u'kind': u'PersistantVolumeClaim', u'ns': u'default', u'name': u'pvc-demo'}) => {"changed": false, "item": {"kind": "PersistantVolumeClaim", "name": "pvc-demo", "ns": "default"}, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_b52hep/ansible_module_kubevirt_facts.py\", line 186, in <module>\n    main()\n  File \"/tmp/ansible_b52hep/ansible_module_kubevirt_facts.py\", line 174, in main\n    obj = facts.execute_module()\n  File \"/tmp/ansible_b52hep/ansible_modlib.zip/ansible/module_utils/k8svirt/facts.py\", line 46, in execute_module\n  File \"/tmp/ansible_b52hep/ansible_modlib.zip/ansible/module_utils/k8svirt/helper.py\", line 138, in get_helper\nException: Unknown kind persistant_volume_claim\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
        to retry, use: --limit @/home/vagrant/ansible-kubevirt-modules/tests/playbooks/kubevirt_facts.retry

Support multiple KubeVirt API versions

Currently, KubeVirt's API is still pre-v1; therefore, the Python client doesn't support anything else.

The modules reference V1* objects everywhere, but eventually other versions will have to be supported as well.

Ideally, this RFE would be implemented, hiding the Python client implementation details and simplifying the module code. If that RFE does not get implemented, some abstraction will be needed in the modules to handle different API versions.

vmrs module doesn't create vms

Creating a VirtualMachineReplicaSet (VMRS) should create VM replicas; the default replica count is 3. kubectl describe vmrs reports 3 replicas, but the Events section is empty, and no VMs are found.

```
[root@igulina-nm-1 ~]# kubectl describe vmrs
Name:		test-vmrs
Namespace:	default
Labels:		<none>
Annotations:	<none>
API Version:	kubevirt.io/v1alpha1
Kind:		VirtualMachineReplicaSet
Metadata:
  Cluster Name:				
  Creation Timestamp:			2018-04-12T08:28:54Z
  Deletion Grace Period Seconds:	<nil>
  Deletion Timestamp:			<nil>
  Resource Version:			4414858
  Self Link:				/apis/kubevirt.io/v1alpha1/namespaces/default/virtualmachinereplicasets/test-vmrs
  UID:					89896eaa-3e2b-11e8-86fc-fa163e57d230
Spec:
  Replicas:	3
  Template:
    Metadata:
      Name:	test-vmrs
    Spec:
      Domain:
        Devices:
          Disks:
            Disk:
              Dev:		vda
            Name:		registrydisk
            Volume Name:	registryvolume
        Resources:
          Requests:
            Memory:	2
      Volumes:
        Name:	registryvolume
        Registry Disk:
          Image:	kubevirt/cirros-registry-disk-demo:v0.2.0
Events:			<none>

[root@igulina-nm-1 ~]# kubectl get vms --all-namespaces
No resources found.
```

I expected 3 VMs to be created, as in the example here.
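One detail worth checking in the spec above: the template requests `Memory: 2` with no unit, which Kubernetes interprets as 2 bytes, and that may be why no VMs materialize. A replica set manifest with an explicit unit (values here are illustrative, not a confirmed fix) would look like:

```yaml
# Sketch of the same replica set with a sane memory request ("64M" is illustrative)
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachineReplicaSet
metadata:
  name: test-vmrs
spec:
  replicas: 3
  template:
    metadata:
      name: test-vmrs
    spec:
      domain:
        devices:
          disks:
            - disk:
                dev: vda
              name: registrydisk
              volumeName: registryvolume
        resources:
          requests:
            memory: 64M
      volumes:
        - name: registryvolume
          registryDisk:
            image: kubevirt/cirros-registry-disk-demo:v0.2.0
```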

Rename kubevirt_raw module

Following the lead of the kubernetes ansible modules, it would be a good idea to rename the kubevirt_raw module to just kubevirt; the raw part of the name seems to have scared off potential users, who took it as unfinished or not intended for the general public.

Remove deprecated directory

Currently the kubevirt_migrations module is kept in the deprecated directory; since migrations no longer seem to be part of the API, the module could be removed.

parameter: username in k8s module does not work

/kind bug

What happened:

```yaml
---
- name: Create a VM using k8s module
  hosts: localhost
  tasks:
    - name: Create a VM using yaml file
      k8s:
        username: test_user
        password: 12345
        state: present
        src: /home/aramteke/cnv-qe/vm-yaml-examples/vm-cirros.yaml
        namespace: default
```

The VM gets created by the default user you are logged in as, not by the specified test_user.

What you expected to happen:
As specified in the YAML above, the VM should be created by the test_user user.
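Note that in the k8s module, username/password perform HTTP basic authentication against the API server, which many clusters (including OpenShift by default) do not enable, so those parameters are silently ineffective and the local kubeconfig credentials win. Credentials obtained via oc login are represented by a bearer token, which can be passed with api_key instead. A sketch (the host URL and token variable are illustrative):

```yaml
# Assumes test_user_token was obtained with `oc whoami -t` after `oc login` as test_user
- name: Create a VM as a specific user via bearer token
  k8s:
    host: https://cluster.example.com:8443
    api_key: "{{ test_user_token }}"
    validate_certs: no
    state: present
    src: /home/aramteke/cnv-qe/vm-yaml-examples/vm-cirros.yaml
    namespace: default
```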
