
oneview-ansible-collection's Introduction

Ansible Collection for HPE OneView

This collection provides a set of Ansible modules and plugins for interacting with HPE OneView.

Build Status

OV Version 9.00 8.90 8.80 8.70 8.60 8.50 8.40 8.30 8.20 8.10 8.00 7.20 7.10 7.00 6.60 6.50 6.40 6.30 6.20 6.10 6.00 5.60
SDK Version/Tag v9.0.1 v8.9.0 v8.8.0 v8.7.0 v8.6.2 v8.5.1 v8.4.0 v8.3.0 v8.2.0 v8.1.0 v8.0.0 v7.2.0 v7.1.0 v7.0.0 v6.6.0 v6.5.0 v6.4.0 v6.3.0 v6.2.0 v6.1.0 v6.0.0 v1.2.1

Requirements

Installation

To install the HPE OneView collection hosted in Galaxy:

ansible-galaxy collection install hpe.oneview

To upgrade to the latest version of the HPE OneView collection:

ansible-galaxy collection install hpe.oneview --force

To install dependency packages:

pip install -r ~/.ansible/collections/ansible_collections/hpe/oneview/requirements.txt

To install the HPE OneView collection from GitHub:

git clone https://github.com/HewlettPackard/oneview-ansible-collection.git
cd oneview-ansible-collection
ansible-galaxy collection build .

This generates a tar file. Install it:

ansible-galaxy collection install <tar_file>

To install dependency packages:

pip install -r requirements.txt

To install the HPE OneView collection from the Docker image:

docker build -t oneview-ansible-collections .
docker run -it --rm -v "$(pwd)"/:/root/oneview-ansible-collections oneview-ansible-collections

That's it. If you would like to modify a role, simply modify it and re-run the image.

OneViewClient Configuration

Using a JSON Configuration File

To use the HPE OneView collection, you can store the configuration in a JSON file. This file defines the settings used for the OneView appliance connection, such as hostname, authLoginDomain, username, and password. Here's an example:

{
  "ip": "<ip>",
  "credentials": {
    "userName": "<userName>",
    "authLoginDomain": "",
    "password": "<password>"
  },
  "api_version": 6600
}

The api_version specifies the version of the REST API to invoke. When not defined, the OneView appliance version is used as the default.

The authLoginDomain specifies the login domain directory of the appliance. When it is not specified, the appliance's default domain directory is used.

If your environment requires a proxy, define the proxy properties in the JSON file using the following syntax:

  "proxy": "<proxy_host>:<proxy_port>"

🔒 Tip: Check the file permissions, since the password is stored in clear text.
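Putting the pieces together, a complete configuration file with a proxy might look like this (all values are placeholders):

```json
{
  "ip": "<ip>",
  "credentials": {
    "userName": "<userName>",
    "authLoginDomain": "",
    "password": "<password>"
  },
  "api_version": 6600,
  "proxy": "<proxy_host>:<proxy_port>"
}
```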

The configuration file path must be provided through the config argument of every role. For example:

- name: Gather facts about the FCoE Network with name 'FCoE Network Test'
  oneview_fcoe_network_facts:
    config: "/path/to/config.json"
    name: "FCoE Network Test"

Once you have defined the config variables, you can run the roles.

Pass Login SessionID as param

To run any task, we first need to log in to the HPE OneView appliance by passing the credentials as part of the configuration. During login, the Ansible collection SDK obtains a session ID from OneView, and a separate session ID is generated for each task. This can exceed the appliance's session limit when a large number of tasks run.

It is therefore recommended, though optional, to use a single session ID for all tasks. If sessionID is not passed explicitly, the per-task behaviour described above applies.

To reuse a single session ID, pass it as the sessionID parameter inside your task.

Here's an example:

- name: Fetch Session Id
  oneview_get_session_id:
    config: "{{ config }}"
    name: "Test_Session"
  delegate_to: localhost
  register: session

- name: Create a Fibre Channel Network
  oneview_fc_network:
    hostname: <hostname>
    sessionID: "{{ session.ansible_facts.session }}"
    state: present
    data:
      name: "{{ network_name }}"
      fabricType: 'FabricAttach'
      linkStabilityTime: '30'
      autoLoginRedistribution: true
  no_log: true
  delegate_to: localhost

A SessionID remains valid for 24 hours.

Logout from Session

The Ansible SDK handles OneView sessions in two different ways:

  1. By default, the OneView session is logged out after each task run from the Ansible collection SDK. In this case a new session is created whenever any task is invoked from the SDK.

  2. Alternatively, multiple tasks can share the same session. In this case, first use a task to fetch a session ID, then pass that same session ID to all subsequent tasks. At the end, invoke the logout task to delete that specific session.

Scenario 1: In the task below, the session is logged out once the task completes, so even when multiple tasks run, multiple sessions never remain active.

- name: Create a Fibre Channel Network
  oneview_fc_network:
    hostname: <hostname>
    state: present
    data:
      name: "{{ network_name }}"
      fabricType: 'FabricAttach'
      linkStabilityTime: '30'
      autoLoginRedistribution: true
  no_log: true
  delegate_to: localhost

Scenario 2: In this example, a session is fetched and the OneView session ID is then passed as a parameter to the FC network creation task. In this case the SDK does not log out the session; the user can log it out once all tasks are done.

- name: Fetch Session Id
  oneview_get_session_id:
    config: "{{ config }}"
    name: "Test_Session"
  delegate_to: localhost
  register: session

- name: Create a Fibre Channel Network
  oneview_fc_network:
    hostname: <hostname>
    sessionID: "{{ session.ansible_facts.session }}"
    state: present
    data:
      name: "{{ network_name }}"
      fabricType: 'FabricAttach'
      linkStabilityTime: '30'
      autoLoginRedistribution: true
  no_log: true
  delegate_to: localhost

- name: Logout Session
  oneview_logout_session:
    config: "{{ config }}"
    sessionID: "{{ session.ansible_facts.session }}"
  delegate_to: localhost

Parameters in roles

Another way to pass your HPE OneView credentials to your tasks is to specify them explicitly on the task.

This option allows the parameters hostname, auth_login_domain, username, password, and api_version to be passed directly inside your task.

- name: Create a Fibre Channel Network
  oneview_fc_network:
    hostname: <hostname>
    username: <username>
    password: <password>
    auth_login_domain: <domain_directory>
    api_version: 6600
    state: present
    data:
      name: "{{ network_name }}"
      fabricType: 'FabricAttach'
      linkStabilityTime: '30'
      autoLoginRedistribution: true
  no_log: true
  delegate_to: localhost

Setting no_log: true is highly recommended in this case, as the credentials are otherwise returned in the log after task completion.
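A common way to keep these credentials out of plain text is a vault-encrypted vars file. This is standard Ansible practice, not specific to this collection; the variable names below are our own illustrative choices:

```yaml
# Encrypt the vars file once with:
#   ansible-vault encrypt group_vars/oneview.yml
# where group_vars/oneview.yml contains (before encryption):
#   oneview_username: <username>
#   oneview_password: <password>

- name: Create a Fibre Channel Network
  oneview_fc_network:
    hostname: <hostname>
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    api_version: 6600
    state: present
    data:
      name: "{{ network_name }}"
      fabricType: 'FabricAttach'
  no_log: true
  delegate_to: localhost
```

Run the playbook with `--ask-vault-pass` (or a vault password file) so the encrypted variables can be decrypted at runtime.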

Setting your OneView version

The Ansible collection for HPE OneView supports the API endpoints for HPE OneView 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 7.00, 7.10, 7.20, 8.00, 8.10, 8.20, 8.30, 8.40, 8.50, 8.60, 8.70, 8.80, 8.90, and 9.00.

By default, the collection uses the API version of the OneView appliance it connects to.

To use a different API, you must set the API version together with your credentials, either using the JSON configuration:

"api_version": 6600

or using the environment variable:

export ONEVIEWSDK_API_VERSION='6600'

If this property is not specified, it falls back to the default value (the appliance's current API version).
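The underlying hpeOneView Python SDK can also take the rest of its connection settings from environment variables. A minimal sketch; ONEVIEWSDK_API_VERSION appears above, while the other variable names follow the SDK's environment-variable support and should be verified against the hpeOneView SDK documentation (all values are placeholders):

```shell
# Connection settings read by the hpeOneView SDK from the environment
export ONEVIEWSDK_IP='192.168.1.1'
export ONEVIEWSDK_USERNAME='administrator'
export ONEVIEWSDK_PASSWORD='secret'
export ONEVIEWSDK_API_VERSION='6600'
```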

The API list is as follows:

  • HPE OneView 5.60 API version: 2400
  • HPE OneView 6.00 API version: 2600
  • HPE OneView 6.10 API version: 2800
  • HPE OneView 6.20 API version: 3000
  • HPE OneView 6.30 API version: 3200
  • HPE OneView 6.40 API version: 3400
  • HPE OneView 6.50 API version: 3600
  • HPE OneView 6.60 API version: 3800
  • HPE OneView 7.00 API version: 4000
  • HPE OneView 7.10 API version: 4200
  • HPE OneView 7.20 API version: 4400
  • HPE OneView 8.00 API version: 4600
  • HPE OneView 8.10 API version: 4800
  • HPE OneView 8.20 API version: 5000
  • HPE OneView 8.30 API version: 5200
  • HPE OneView 8.40 API version: 5400
  • HPE OneView 8.50 API version: 5600
  • HPE OneView 8.60 API version: 5800
  • HPE OneView 8.70 API version: 6000
  • HPE OneView 8.80 API version: 6200
  • HPE OneView 8.90 API version: 6400
  • HPE OneView 9.00 API version: 6600

HPE Synergy Image Streamer

From release 8.1, Image Streamer is no longer supported.

Usage

Playbooks

To use a module from the HPE OneView collection, reference the full namespace, collection name, and module name:

---
- name: Using HPE OneView collection
  hosts: all
  collections:
    - hpe.oneview
  roles:
    - hpe.oneview.oneview_fc_network
    - hpe.oneview.oneview_fc_network_facts

Run the playbook created above as shown below:

ansible-playbook example_collection.yml
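Modules can also be referenced by their fully qualified collection name directly in tasks, without the collections keyword. A minimal sketch; the config path is a placeholder:

```yaml
---
- name: Using FQCN module references
  hosts: localhost
  tasks:
    - name: Gather facts about all FCoE networks
      hpe.oneview.oneview_fcoe_network_facts:
        config: "/path/to/config.json"
      delegate_to: localhost
```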

License

This project is licensed under the Apache 2.0 license. Please see the LICENSE for more information.

Contributing and feature requests

Contributing: We welcome your contributions to the Ansible Modules for HPE OneView. See CONTRIBUTING.md for more details.

Feature Requests: If you have a need that is not met by the current implementation, please let us know (via a new issue). This feedback is crucial for us to deliver a useful product. Do not assume that we have already thought of everything, because we assure you that is not the case.

Features

The hpe.oneview collection includes roles, modules, sample playbooks, and module_utils.

Copyright

© Copyright 2024 Hewlett Packard Enterprise Development LP

oneview-ansible-collection's People

Contributors

akshith-gunasheelan, alisha-k-kalladassery, arthur7777xd, asisbagga, asisbagga-dev, chebroluharika, chrislynchhpe, harikachebrolu, nabhajit-ray, shandcruz, spapinwar12, srijapapinwar, venkateshravula, yuvirani

oneview-ansible-collection's Issues

Adding Multiple Rack-mounted Servers

I am trying to create server hardware for multiple servers using the "add multiple rack-mounted servers" task in the oneview_server_hardware role.
I am not sure how and where to specify multiple iLO IPs, or a range of IPs, so that the playbook can create multiple server hardware instances in OneView at one time. I have to work with iLO IPs because my servers are DLs, not blades.
I see that mpHostsAndRanges takes the multiple_hosts variable from the /defaults/main.yml file.

Any suggestions will be appreciated!

Thanks, Peter
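For readers with the same question, here is a hedged sketch of adding multiple rack servers by iLO address. The multiple_servers_added state and the mpHostsAndRanges list follow the oneview_server_hardware module's documented usage, but verify the exact field names against the role's defaults/main.yml before relying on them:

```yaml
- name: Add multiple rack-mount servers by their iLO addresses
  oneview_server_hardware:
    config: "{{ config }}"
    state: multiple_servers_added
    data:
      mpHostsAndRanges:
        - "192.168.2.10"                  # a single iLO IP
        - "192.168.2.20-192.168.2.30"     # a range of iLO IPs
      username: "{{ ilo_username }}"
      password: "{{ ilo_password }}"
      licensingIntent: "OneView"
      configurationState: "Managed"
  delegate_to: localhost
```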

Question - External firmware repository

Hello,

Concerning the addition of an external firmware repository in OneView, does the current plugin (oneview_repositories) support the use of an HTTPS web server as a repository?

What parameters are needed? (e.g., a certificate?)

In the REST API:

(screenshot of the REST API call omitted)

oneview_server_profile cannot create more than one server profile in parallel

Re-opening an issue we had with the Ansible OneView module (see HewlettPackard/oneview-ansible#313), as I face the same error with the OneView Ansible collection.

TASK [Creating Server Profile "ESX-1-deploy" from Server Profile Template "ESXi7 BFS"] ***********************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {"name":"Frame4, bay 4", "uri":"/rest/server-hardware/39313738-3034-5A43-3231-32343036474B"}.
fatal: [ESX-1-deploy -> localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 693, in <module>\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 689, in main\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py\", line 633, in run\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 284, in execute_module\n  File 
\"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 324, in __present\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 461, in __create_profile\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/servers/server_profiles.py\", line 74, in create\n    resource_data = self._helper.create(data, timeout=timeout, force=force)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 464, in create\n    return self.do_post(uri, data, timeout, custom_headers)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 816, in do_post\n    return self._task_monitor.wait_for_task(task, timeout)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 82, in wait_for_task\n    task_response = self.__get_task_response(task)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 142, in __get_task_response\n    raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {\"name\":\"Frame4, bay 4\", \"uri\":\"/rest/server-hardware/39313738-3034-5A43-3231-32343036474B\"}.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
changed: [ESX-2-deploy -> localhost]

Inventory file has the following content:

[ESX]
ESX-1-deploy host_management_ip=192.168.3.171 
ESX-2-deploy host_management_ip=192.168.3.175 

Playbook is :

# Creating a Server Profile in HPE OneView from a boot from SAN Server Profile Template:

    - name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
      oneview_server_profile:
        config: "{{ config }}"
        data:
          serverProfileTemplateName: "{{ server_template }}"
          name: "{{ inventory_hostname }}"
        # serverHardwareUri: "/rest/server-hardware/39313738-3234-584D-5138-323830343848"
        # server_hardware: Encl1, bay 12
        # If any hardware is provided, it tries to get one available
      delegate_to: localhost
      register: result

Environment Details

  • Ansible 2.9.25 - Python 3.6.8 - python-hpOneView 6.30
  • Ansible Collection for HPE OneView 6.30
  • HPE OneView 6.30
  • Synergy 480 Gen10
  • SSP 2021-05.03
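A generic workaround for this kind of race (plain Ansible, not specific to this collection) is to serialize the creation task across hosts with the throttle keyword, available since Ansible 2.9:

```yaml
- name: Creating Server Profile from Server Profile Template, one host at a time
  oneview_server_profile:
    config: "{{ config }}"
    data:
      serverProfileTemplateName: "{{ server_template }}"
      name: "{{ inventory_hostname }}"
  throttle: 1        # at most one host runs this task at any moment
  delegate_to: localhost
  register: result
```

This trades parallelism for correctness: each profile creation sees the server-hardware pool only after the previous one has claimed its bay.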

The SDK doesn't manage workload profiles

Scenario/Intent

The OneView Ansible SDK should handle the workload profile and add the additional BIOS settings related to the selected workload profile.

When we create a server profile or a server profile template through the OneView GUI, additional parameters are included with the workload profile, and we should have the same behaviour using the SDK.

The OneView SDK should provide a simple solution to manage the workload profile and handle the additional BIOS settings itself.

See the HPE documentation page 8:
https://support.hpe.com/hpesc/public/docDisplay?docId=a00016408en_us

Environment Details

  • Docker Image : hewlettpackardenterprise/hpe-oneview-sdk-for-ansible:v5.10.0-OV5.6
  • Python Version: 3.6.12
  • ansible: 2.7.7
  • OneView Synergy 5.20

Steps to Reproduce

# cat example.yaml
---
- hosts: localhost    
  vars:
    ansible_python_interpreter: '/usr/local/bin/python3.6'
    config: 'oneview_dev.json'
    EnclosureGroupName: "ad23vec001"
    sptName: "test-spt"

  tasks:

  # We want to clone the Server Profile Template
  # and the SDK doesn't accept the enclosureGroupUri and serverHardwareTypeUri parameters
  - name: "Create a Server Profile Template"
    oneview_server_profile_template:
      config: "{{ config }}"
      state: present
      data:
        name: "{{ sptName }}"
        serverHardwareTypeName: 'SY 480 Gen10 1'
        serverProfileDescription: "my description"
        enclosureGroupName: 'vec1'
        bios:
          complianceControl: 'checked'
          manageBios: True
          overriddenSettings: [{'id': 'WorkloadProfile', 'value': 'Virtualization-MaxPerformance' }]
		  
# ansible-playbook -v example.yaml

Expected Result

The SDK should add the BIOS settings of the workload profile : As example for the Virtualization-MaxPerformance Profile :

"server_profile_templates": [
        {
            "affinity": "Bay",
            "bios": {
                "complianceControl": "Checked",
                "manageBios": true,
                "overriddenSettings": [
                    {
                        "id": "WorkloadProfile",
                        "value": "Virtualization-MaxPerformance"
                    },
                    {
                        "id": "MinProcIdlePower",
                        "value": "NoCStates"
                    },
                    {
                        "id": "IntelUpiPowerManagement",
                        "value": "Disabled"
                    },
                    {
                        "id": "MinProcIdlePkgState",
                        "value": "NoState"
                    },
                    {
                        "id": "EnergyPerfBias",
                        "value": "MaxPerf"
                    },
                    {
                        "id": "UncoreFreqScaling",
                        "value": "Maximum"
                    },
                    {
                        "id": "PowerRegulator",
                        "value": "StaticHighPerf"
                    },
                    {
                        "id": "SubNumaClustering",
                        "value": "Enabled"
                    },
                    {
                        "id": "CollabPowerControl",
                        "value": "Disabled"
                    },
                    {
                       "id": "EnergyEfficientTurbo",
                        "value": "Disabled"
                    },
                    {
                        "id": "NumaGroupSizeOpt",
                        "value": "Clustered"
                    }
                ]
            },
[ .... ]
        }
    ]
}

Actual Result

"server_profile_templates": [
        {
            "affinity": "Bay",
            "bios": {
                "complianceControl": "Checked",
                "manageBios": true,
                "overriddenSettings": [
                    {
                        "id": "WorkloadProfile",
                        "value": "Virtualization-MaxPerformance"
                    }
                ]
            },
[ .... ]
        }
    ]
}

Thank you for improving the SDK to handle the workload profile.

The OneView Ansible Collection does not allow updating a user password

Hi,

We are using OneView 6.1 and the latest OneView SDK collection, version 6.4.

We have an issue updating a user password using the SDK: no modification is performed.

We have tested with other SDKs: it works as expected using the POSH-HPOneView SDK and also using the REST API.

Versions used:

  • OneView 6.10.00-0436703
  • OneView Ansible SDK Collection version 6.4

You will find below an example code:

- hosts: localhost
  vars:
    config: 'oneview.json'
    ansible_python_interpreter: '/usr/bin/python3.6'
    userName: "test-user"
    roleName: "Infrastructure administrator"
    password1: "LONG1234@passwd1234"
    password2: "password67654#AZEDF"

  collections:
    - hpe.oneview

  tasks:

    - name: "Create a user"
      hpe.oneview.oneview_user:
        config: "{{ config }}"
        state: present
        data:
          userName: "{{ userName }}"
          password: "{{ password1 }}"
          enabled: True
          fullName: "User Name"
          mobilePhone: "+33600000000"
          officePhone: "+33600000000"
          permissions:
            - roleName: "{{ roleName }}"

    - name: "Update User password"
      hpe.oneview.oneview_user:
        config: "{{ config }}"
        state: present
        data:
          userName: "{{ userName }}"
          password: "{{ password2 }}"

The update of the password returns a "User is already present" message.

PLAY [localhost] ********************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************
ok: [localhost]

TASK [Create a user] ****************************************************************************************************************************************
changed: [localhost] => {"ansible_facts": {"user": {"category": "users", "created": "2021-11-22T15:01:23.226Z", "description": null, "eTag": "994312728", "emailAddress": "", "enabled": true, "fullName": "User Name", "mobilePhone": "+33600000000", "modified": "2021-11-22T15:01:23.226Z", "name": null, "officePhone": "+33600000000", "permissions": [{"roleName": "Infrastructure administrator", "scopeUri": null}], "state": null, "status": null, "type": "UserAndPermissions", "uri": "/rest/users/test-user", "userName": "test-user"}}, "changed": true, "msg": "User created successfully."}

TASK [Update User password] *********************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"user": {"category": "users", "created": "2021-11-22T15:01:23.226Z", "description": null, "eTag": "994312728", "emailAddress": "", "enabled": true, "fullName": "User Name", "mobilePhone": "+33600000000", "modified": "2021-11-22T15:01:23.226Z", "name": null, "officePhone": "+33600000000", "permissions": [{"roleName": "Infrastructure administrator", "scopeUri": null}], "state": null, "status": null, "type": "UserAndPermissions", "uri": "/rest/users/test-user", "userName": "test-user"}}, "changed": false, "msg": "User is already present."}

PLAY RECAP **************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

As I can see into the source code:
https://github.com/HewlettPackard/oneview-ansible-collection/blob/master/plugins/modules/oneview_user.py

The password field is removed at line 330:

            # remove password, it cannot be used in comparison
            if 'password' in merged_data:
                del merged_data['password']

The Ansible Collection SDK should be able to update the user password.

Best Regards,
Nicolas Portais

SNMPv3 users - Idempotency issue

Scenario/Intent

Hello,

I am creating SNMPv3 users via an Ansible task; however, running the same code twice with the same user input leads to an error.

main.yml

- name: Create snmpv3 users
  oneview_appliance_device_snmp_v3_users:
    config: "{{ config }}"
    state: present
    data:
        userName: "{{ username }}"
        securityLevel: "{{ security_level }}"
        authenticationProtocol: "{{ authentication_protocol }}"
        authenticationPassphrase: "{{ authentication_passphrase }}"
        privacyProtocol: "{{ privacy_protocol }}"
        privacyPassphrase: "{{ privacy_passphrase }}"
  delegate_to: localhost

Environment Details

  • OneView Ansible Collection: v6.6.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.8.10

Expected Result

We have idempotency when we run the same playbook with the same SNMPv3 user input data multiple times.

Actual Result

TASK [snmpv3 : Create snmpv3 users] ******************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewException: ('The supplied user name already exists.', {'errorCode': 'USER_ALREADY_EXIST', 'message': 'The supplied user name already exists.', 'details': '', 'messageParameters': [None], 'recommendedActions': ['Provide a different SNMPv3 user name and retry the operation.'], 'errorSource': None, 'nestedErrors': [], 'data': {}})

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  

oneview_server_profile TypeError: sequence item 0: expected str instance, NoneType found

Getting the following error when trying to create a profile:

"/tmp/ansible_hpe.oneview.oneview_server_profile_payload___kih52j/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 337, in __present\n File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload___kih52j/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 467, in __create_profile\n File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload___kih52j/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 702, in _auto_assign_server_profile\n File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload___kih52j/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 556, in __get_available_server_hardware_uri\nTypeError: sequence item 0: expected str instance, NoneType found\n",

It's in a test environment so the user is Infrastructure admin.

I followed the example as described here: https://github.com/HewlettPackard/oneview-ansible-collection/blob/master/roles/oneview_server_profile/tasks/main.yml

Set permissions for iLO accounts on Synergy blades

Hello,

I am trying to set up permissions for iLO accounts (local accounts or directory group accounts) in a server profile template.
I defined this in an Ansible task as follows:

- name: Create the server profile template
  hpe.oneview.oneview_server_profile_template:
    config: "{{ config }}"
    state: present
    data:
      name: "{{ server_profile.template.name }}"
      description: "{{ server_profile.template.description }}"      
      serverProfileDescription: "{{ server_profile.description }}" 
      serverHardwareTypeName: "{{ hardware_type_name }}"
      enclosureGroupName: "{{ enclosure_group_name }}"
      affinity: "Bay"
      managementProcessor:
        manageMp: true
        complianceControl: "Checked"
        mpSettings:
          - settingType: "LocalAccounts"
            args:
              localAccounts:
                - userName: "user"
                  displayName: "local account"
                  password:  "Password"
                  userConfigPriv: true
                  remoteConsolePriv: true
                  virtualMediaPriv: true
                  virtualPowerAndResetPriv: true
                  iLOConfigPriv: true
                  loginPriv: true
                  hostBIOSPriv: true
                  hostNICPriv: true
                  hostStoragePriv: true
  delegate_to: localhost

I retrieved some of the keywords from the OneView REST API and guessed the remaining ones, but it doesn't work. Creating the account without any permission parameters works fine.
I tested this with OneView 6.0 and Ansible 2.9.27 on a Synergy SY480 Gen10 blade.
Any idea how this can be solved?
Many thanks,
V

Allow renaming of OneView Server Hardware (or advise how to rename server hardware)

I have a server hardware item with the incorrect name and need to rename it using Ansible. Essentially wanting to rename the iLO hostname as well as the Server Hardware in OneView.

Attempting to add the Server Hardware to OneView with the new name (by using the below code) causes the error found at the bottom of this issue.

Attempt:

  - name: (HPE) Add Server Hardware to OneView
    hpe.oneview.oneview_server_hardware:
      hostname: "{{ oneview_address }}"
      username: "{{ oneview_username }}"
      password: "{{ oneview_password }}"
      api_version: "{{ oneview_api_version }}"
      auth_login_domain: "{{ oneview_auth_login_domain }}"
      state: present
      data:
        hostname: "{{ management_card_hostname }}"
        username: "{{ management_card_username }}"
        password: "{{ management_card_password }}"
        licensingIntent: "OneView"
        force: false
        configurationState: "Managed"
    delegate_to: localhost

Issue:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewTaskError: The server hardware has already been added as {"name":"obscured","uri":"/rest/server-hardware/obscured"}.

Manual Process

  • Login to iLO
  • Click on Network
  • Click on iLO Dedicated Network Port
  • Click on General
  • Change iLO Subsystem Name
  • Reset iLO

Force param support is missing in the server profile update operation

Server profile creation and update throw a warning related to logical drives. The exact error message is
that there are not enough physical drives to apply the logical drive configuration as specified in the server profile.

In the GUI, the error can be overridden by performing the create twice. The same is not possible with the Ansible modules.

Suspected bug with oneview_server_profile when using state: compliant

I am trying to idempotently ensure a OneView server profile is compliant with its server profile template.

I have tried various key/values in the data dict with the module, like so:

The issue occurs specifically when using the compliant state.

- name: "{{ esx_hostname }}: Ensure server profile is compliant"
  hpe.oneview.oneview_server_profile:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    state: compliant
    data:
      name: "{{ management_card_hostname_no_fqdn }}"
      # server_template_name: "{{ server_profile_template }}"
      # serverHardwareUri: "{{ server_hardware_uri }}"
  delegate_to: localhost
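The AttributeError in the traceback below (`'NoneType' object has no attribute 'data'` in __make_compliant) suggests the module could not look up the profile before making it compliant. Before anything else, it may be worth confirming the name actually resolves; a small diagnostic sketch using the collection's facts module (the `server_profiles` fact name is per the module docs):

```yaml
- name: Check that the profile name used above resolves
  hpe.oneview.oneview_server_profile_facts:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    name: "{{ management_card_hostname_no_fqdn }}"
  delegate_to: localhost

- name: An empty list here means the compliant state has nothing to act on
  debug:
    var: server_profiles
```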

OneView version

Firmware 6.20.00-0443754

Ansible version

[root@0ca7edca0a5b ansible]# ansible --version
ansible [core 2.11.5]
  config file = /mnt/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
  ansible collection location = /mnt/ansible/collections/ansible_collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
  jinja version = 3.0.1
  libyaml = True

Collection version

[root@0ca7edca0a5b ansible]# ansible-galaxy collection list
# /mnt/ansible/collections/ansible_collections
Collection              Version
----------------------- -------
community.general       3.1.0
hpe.oneview             6.3.0
...

Traceback:

The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py", line 100, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py", line 92, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=dict(_module_fqn='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', _modlib_path=modlib_path),
  File "/usr/lib64/python3.8/runpy.py", line 207, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib64/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 693, in <module>
  File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 689, in main
  File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py", line 633, in run
  File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 296, in execute_module
  File "/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 605, in __make_compliant
AttributeError: 'NoneType' object has no attribute 'data'
fatal: [hostname_here -> localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py\", line 100, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py\", line 92, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1632415592.882123-29070-46239097238132/AnsiballZ_oneview_server_profile.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=dict(_module_fqn='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', _modlib_path=modlib_path),\n  File \"/usr/lib64/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib64/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 693, in <module>\n  File \"/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 689, in main\n  File \"/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py\", line 633, in run\n  File 
\"/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 296, in execute_module\n  File \"/tmp/ansible_hpe.oneview.oneview_server_profile_payload_kc_ozrot/ansible_hpe.oneview.oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 605, in __make_compliant\nAttributeError: 'NoneType' object has no attribute 'data'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

oneview_server_hardware module is not idempotent

The oneview_server_hardware module is not idempotent. It works the first time it is run, but running it a second time with the same vars fails in red, stating the server already exists. This breaks workflows. It should report that the server already exists in green so the play completes as successful.
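Until the module reports ok for an already-managed server, one playbook-level workaround is to treat that specific failure as success. This is a sketch, not an official fix; it assumes the error text surfaces in the registered result's msg field, which may not hold when the module dies with a raw traceback:

```yaml
- name: Add Server Hardware to OneView (tolerate already-added)
  hpe.oneview.oneview_server_hardware:
    config: "{{ config }}"
    state: present
    data:
      hostname: "{{ management_card_hostname }}"
      username: "{{ management_card_username }}"
      password: "{{ management_card_password }}"
      licensingIntent: "OneView"
      configurationState: "Managed"
  delegate_to: localhost
  register: add_result
  failed_when:
    - add_result.failed | default(false)
    - "'already been added' not in (add_result.msg | default(''))"
```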

Creation of external repository - Idempotency fails

Scenario/Intent

Hello,

I am using the oneview_repositories module to create an external repository on OneView; however, running the playbook a second time does not guarantee idempotency.

main.yml

- name: Create an external repository
  hpe.oneview.oneview_repositories:
    state: present
    config: "{{ config }}"
    data:
      repositoryName: "{{ repository_name }}"
      userName: "{{ repository_username }}"
      password: "{{ repository_password }}"
      repositoryURI: "{{ repository_uri }}"
      repositoryType: "{{ repository_type }}"
  delegate_to: localhost

Environment Details

  • OneView Ansible Collection: v6.5.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.6.10

Expected Result

Idempotency is respected when running the playbook a second time. I understand that only one external repository can be present on a given OneView, but the second execution should not throw an error; it should simply report the task as ok, since there are no changes.
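Until the module itself handles this, a playbook-level guard can skip the create when a repository already exists. A hedged sketch: it assumes a companion facts module named oneview_repositories_facts that registers a `repositories` fact (verify both names against the collection docs):

```yaml
- name: Gather existing firmware repositories
  hpe.oneview.oneview_repositories_facts:   # module name assumed; verify in the collection docs
    config: "{{ config }}"
  delegate_to: localhost

- name: Create an external repository only if none exists yet
  hpe.oneview.oneview_repositories:
    state: present
    config: "{{ config }}"
    data:
      repositoryName: "{{ repository_name }}"
      userName: "{{ repository_username }}"
      password: "{{ repository_password }}"
      repositoryURI: "{{ repository_uri }}"
      repositoryType: "{{ repository_type }}"
  delegate_to: localhost
  when: repositories | default([]) | length == 0
```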

Actual Result

Result of the first ansible-playbook command:

TASK [external_fw_repo : Create an external repository] *********************************************************************************************

changed: [localhost -> localhost] => [...]
    "msg": "Repository created successfully.",
    "param": {
        "repository_name": "extRepo",
        "repository_password": "Apassword",
        "repository_type": "FirmwareExternalRepo",
        "repository_uri": "<external_repo_uri>",
        "repository_username": "user"
    }
}
META: ran handlers
META: ran handlers

PLAY RECAP ******************************************************************************************************************************************
localhost                  : ok=5    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Result of the second ansible-playbook command:

raise HPEOneViewException(body)\nhpeOneView.exceptions.HPEOneViewException: ('Adding multiple external repository is currently not supported.', {'errorCode': 'MULTIPLE_REPOSITORY_NOT_SUPPORTED_ERROR', 'message': 'Adding multiple external repository is currently not supported.', 'details': '', 'messageParameters': [], 'recommendedActions': ['Remove the existing external repository and retry the operation.'], 'errorSource': None, 'nestedErrors': [], 'data': {}})\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "param": {
        "repository_name": "extRepo",
        "repository_password": "Apassword",
        "repository_type": "FirmwareExternalRepo",
        "repository_uri": "<external_repo_uri>",
        "repository_username": "user"
    },
    "rc": 1
}

PLAY RECAP ******************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Proxy role - proxy setting with no authentication requires password

Scenario/Intent

Hello,

I am using the oneview_appliance_proxy_configuration plugin to set up proxy settings on OneView with no authentication, however the task fails because it wants me to set the password parameter.

main.yml

- name: Setup HTTP/HTTPS Proxy configuration (without Authentication)
  hpe.oneview.oneview_appliance_proxy_configuration:
    state: present
    config: "{{ config }}"
    data:
      server: "{{ proxy_server }}"
      port: "{{ port }}"
      communicationProtocol: "{{ protocol }}"
  delegate_to: localhost

Environment Details

  • OneView Ansible Collection: v6.6.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.8.10

Expected Result

We should not have to add the password parameter to the data block in the task when we want to set up the proxy settings with no authentication.

Actual Result

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'password'
failed: [localhost -> localhost] (item={'proxy_server': '<proxy_server_address>', 'port': '8080', 'protocol': 'HTTP'})
PLAY RECAP ***********************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0   

I have gotten an error when I try SSH

Hello!
I get this error when I try to run an ad-hoc command against my iLO:

192.168.60.30 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Unable to negotiate with 192.168.60.30 port 22: no matching key exchange method found. Their offer: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1",
    "unreachable": true
}

I have added these lines to ssh_config:

    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes
    HostKeyAlgorithms ssh-rsa,ssh-dss
    KexAlgorithms diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
    Ciphers aes128-cbc,3des-cbc
    MACs hmac-md5,hmac-sha1

Also, my -vvv ad-hoc output is:

ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.10 (default, Jun  2 2021, 10:49:15) [GCC 9.4.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
META: ran handlers
<192.168.60.30> ESTABLISH SSH CONNECTION FOR USER: administrator
<192.168.60.30> SSH: EXEC sshpass -d11 ssh -o KexAlgorithms=+ecdh-sha2-nistp256 -o StrictHostKeyChecking=no -o 'User="administrator"' -o ConnectTimeout=30 -o HostKeyAlgorithms=ssh-rsa,ssh-dss -o KexAlgorithms=diffie-hellman-group1-sha1,diffie-hellman-group14-sha1 -o Ciphers=aes128-cbc,3des-cbc -o MACs=hmac-md5,hmac-sha1 192.168.60.30 '/bin/sh -c '"'"'echo ~administrator && sleep 0'"'"''
<192.168.60.30> (255, b'', b'Unable to negotiate with 192.168.60.30 port 22: no matching key exchange method found. Their offer: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1\r\n')
192.168.60.30 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Unable to negotiate with 192.168.60.30 port 22: no matching key exchange method found. Their offer: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1",
    "unreachable": true
}

Also, a direct ssh login works fine!
Any ideas?
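For what it's worth: OpenSSH honours only the first occurrence of a repeated option, and the -vvv output above shows Ansible prepending `-o KexAlgorithms=+ecdh-sha2-nistp256` before your KexAlgorithms entry, so the legacy list likely never took effect. Scoping the legacy algorithms to the iLO via inventory vars avoids weakening ssh_config globally; a sketch (the address is taken from the output above, the algorithms from the server's offer; remove any conflicting `-o KexAlgorithms=...` from ssh_args in ansible.cfg first):

```yaml
# host_vars/192.168.60.30.yml — enable only the legacy algorithms this iLO offers
ansible_user: administrator
ansible_ssh_common_args: >-
  -o KexAlgorithms=+diffie-hellman-group14-sha1
  -o HostKeyAlgorithms=+ssh-rsa
  -o Ciphers=+aes128-cbc,3des-cbc
```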

General documentation issues

Hi,

None of the examples show how to use the new and improved roles.
I want to rename and reassign a SP and notice that the role has been revamped to take the newname property. This is not true for the modules, which seem to have been left behind.

So the first request is for the README to show how to perform authentication when using the import_role construct.

Tried:

    - name: (server_profile_template)
      include_role:
        name: hpe.oneview.oneview_server_profile_template
      vars:
        config: "{{ config }}"
      register: msg

Results in:
Error was a <class 'ansible.errors.AnsibleError'>, original message: recursive loop detected in template string: {{ config }}"}

It would also be MUCH better to allow us to pass the config JSON as a variable rather than passing a filename. The filename approach is not a vault/secret-friendly construction.

assign static IP addresses during LE creation

Is there any way to assign static IP addresses to the enclosure devices (network modules and iLOs) in a task that creates the LE? This feature is in the GUI in the "IPv4 Addresses" section during LE creation and is available starting from OV 5.3 or 5.4.
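Not an answer from the maintainers, but for reference: in the REST API the device-bay IP pool is configured on the enclosure group (the ipAddressingMode and ipRangeUris fields), which the logical enclosure then consumes. A hedged sketch of what that might look like with the enclosure group module; the range URI is hypothetical and other required EG fields are omitted:

```yaml
- name: Create an enclosure group that assigns addresses from an IPv4 pool
  hpe.oneview.oneview_enclosure_group:
    config: "{{ config }}"
    state: present
    data:
      name: "EG-with-ip-pool"
      ipAddressingMode: "IpPool"      # alternatives: "DHCP", "External"
      ipRangeUris:
        - "/rest/id-pools/ipv4/ranges/<range-id>"   # hypothetical URI
      # interconnect bay mappings and other required fields omitted for brevity
  delegate_to: localhost
```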

Roles not found

Roles are not found for:

  • firmware drivers
  • sas interconnect
  • sas logical jbods
  • appliance time
  • subnet pool details

Appliance network settings - updating DNS fails

Scenario/Intent

Hello,

I am trying to update the DNS settings on the OneView network appliance; however, I am getting an error when running the playbook. Am I missing something necessary for the task?

main.yml

- name: Update dns servers of the network interface
  hpe.oneview.oneview_appliance_network_interfaces:
    state: present
    hostname: "{{ oneview_hostname }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    api_version: "{{ oneview_api_version }}"
    data:
      macAddress: "{{ macAddress }}"
      ipv4NameServers: "{{ network_settings }}"
  delegate_to: localhost

Example of input DNS settings: network_settings: ["firstIpAddress", "secondIpAddress"]

Environment Details

  • OneView Ansible Collection: v6.6.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.8.10

Expected Result

The DNS (preferred or alternate) ips are updated with the correct input.

Actual Result

raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: Supplied IP address <ip @ of the OneView> is duplicate for one or more fields.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *******************************************************************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0 

SAS Logical Interconnect Group

In applying a logical interconnect group playbook, a LIG that is SAS (Synergy 12Gb SAS Connection Module) does not appear to work

ansible_collections.hpe.oneview.plugins.module_utils.oneview.OneViewModuleResourceNotFound: Interconnect Type was not found.
fatal: [localhost -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_version": null,
            "auth_login_domain": null,
            "config": "/home/tmorgan/deployment/connection.json",
            "data": {
                "enclosureIndexes": [
                    1
                ],
                "interconnectBaySet": 3,
                "interconnectMapTemplate": {
                    "interconnectMapEntryTemplates": [
                        {
                            "enclosureIndex": 1,
                            "logicalLocation": {
                                "locationEntries": [
                                    {
                                        "relativeValue": 1,
                                        "type": "Enclosure"
                                    },
                                    {
                                        "relativeValue": 1,
                                        "type": "Bay"
                                    }
                                ]
                            }
                        },
                        {
                            "enclosureIndex": 1,
                            "logicalLocation": {
                                "locationEntries": [
                                    {
                                        "relativeValue": 1,
                                        "type": "Enclosure"
                                    },
                                    {
                                        "relativeValue": 4,
                                        "type": "Bay"
                                    }
                                ]
                            },
                            "permittedInterconnectTypeName": "Synergy 12Gb SAS Connection Module"
                        }
                    ]
                },
                "name": "LIG_SAS",
                "redundancyType": "Redundant"
            },
            "hostname": null,
            "image_streamer_hostname": null,
            "password": null,
            "state": "present",
            "username": null,
            "validate_etag": true
        }
    },
    "msg": "Interconnect Type was not found."
}

I had previously attempted to implement this in Terraform. There was an open issue in the oneview-terraform GitHub project regarding this. The ticket was closed stating there was no support for SAS LIGs in the Terraform provider and that it would be put on the backlog.

The HPE OneView REST API does have support for a SAS logical interconnect group, but it is worth noting that it is a separate REST endpoint. For the time being I am working around this issue by using the REST API directly.

Can we see support for this implemented in the future?

module_utils _merge_connections_boot fails with TypeError

in _merge_connections_boot
    existing_connection_map.update(dict(x[SPKeys.ID], x.copy()))
TypeError: dict expected at most 1 argument, got 2

dict() takes at most one positional argument (on any Python version, 3.8.3 included); the call should presumably be `existing_connection_map.update({x[SPKeys.ID]: x.copy()})`.
This is surprising because the original module_utils (from the HPE oneview-ansible repository) works just fine. The issue is specific to the Galaxy build (here, 1.2.1). Couldn't a single repository be used for the collection build? How do you make sure changes are ported from one repository to the other? Are both repos following the same quality process?

Labels - Get the resources associated with label

Scenario/Intent

Hello,

I am trying to get a list of resources associated with a label, looking it up by name. However, the returned facts do not include anything related to resources.

Environment Details

  • OneView Ansible Collection: v6.3.0
  • OneView Appliance Version: 6.30
  • Ansible version: 2.11.6
  • Python version: 3.6.8

Expected Result

When getting facts for a label by name, the resources associated with the label should also be returned.
Can this be done with the Ansible collection? In the HPE OneView SDK for Python we seem to be able to do it.

Actual Result

"ansible_facts": {
            "labels": [
                {
                    "category": "labels",
                    "created": "2021-10-13T14:53:13.626Z",
                    "eTag": "1",
                    "modified": "2021-10-13T14:53:13.626Z",
                    "name": "test",
                    "type": "Label",
                    "uri": "/rest/labels/1"
                }
            ]
        }

Playbook header

Because the playbook is running locally, should you change the examples by adding local connection headers to the playbook, like this?

  • hosts: localhost
    connection: local
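Agreed; a minimal header along those lines would look like:

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Tasks that call the OneView modules run on the controller
      debug:
        msg: "connection: local avoids an unnecessary SSH hop to localhost"
```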

OneView Session limit exceeded issue

OneView creates a new session for every task in ansible to delete and create a server profile from a server profile template. This led to OneView reaching its session limit, making it impossible to access OneView through ansible and also any of the OneView APIs. Is there a way we can use a session id to re-login for each task so that we do not exceed the limit?
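One possible mitigation (unverified here): the underlying hpeOneView Python SDK documents a sessionID credential in the JSON configuration, which should let every task reuse one login instead of creating a new session. A hedged sketch of such a config file; the placeholders are illustrative:

```json
{
  "ip": "<oneview_ip>",
  "api_version": 2600,
  "credentials": {
    "sessionID": "<token obtained from a prior /rest/login-sessions login>"
  }
}
```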

Firmware baseline update causes OS Volume to be deleted.

When I use the oneview_server_profile to create a new Server Profile, then attempt to update the firmware baseline, instead of staging the firmware, OneView will warn that:

"Changing the OS deployment settings for the profile will cause the OS volume . . . to be deleted."

As far as I know this is not intended?

I was directed here by HPE Support. I would like to contact you directly for more info? Thank you!


Roles are really examples

Some roles need to be edited to work properly; they are more examples than working roles. For instance, the oneview_hypervisor_manager role will create, update, power off, power on, delete, and re-create the server. This makes it unusable as a called role from a requirements.yml file.
You will need to edit the tasks to make them work properly, and then when you update the project in Ansible Tower, it will re-download the roles from Galaxy or Automation Hub and overwrite any changes that were made.
These "roles" should be moved into a directory called "examples" and the roles should be made usable.

volume creation is not derived from volume template

Hi team,

When I try to create a new volume from an existing template, it always asks me to specify the size and storage pool, even though my volume template has all the information about the volume properties and storage pool.
The requirement is to create a volume using an existing volume template without defining size and storage pool.
Is this expected?

versions:
ansible 2.10.9
Python 3.6.12
hpe-oneview-6.1.0
X-Api-Version: 2600

When using the content below through the HPE API it works; the same request fails with the error below in an Ansible playbook.
Code (copy-paste is not working as expected; the code below has no syntax errors):

---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    python_interpreter: ~/hpe-ansible/bin/python
  collections:
    - hpe.oneview
  tasks:
    - name: Create a basic connection-less server profile template (using names)
      hpe.oneview.oneview_volume:
        hostname: localhost
        username: Admin
        password: *********
        api_version: 2600
        state: present
        data:
          properties:
            name: 'disk_00000'
            description: 'Test volume with common creation: Storage Pool'
            size: 37580963840
            storagePool: "/rest/storage-pools/pool-id"
          templateUri: "/rest/storage-volume-templates/template_uri"
          #initialScopeUris:
          #  - /rest/scopes/resources/rest/storage-volumes/init_uri
          isPermanent: false
      register: datacenters
    - debug:
        msg: "{{ datacenters }}"

error:
task_response = self.__get_task_response(task)\n File "/home/user/hpe-ansible/lib64/python3.6/site-packages/hpeOneView/resources/task_monitor.py", line 142, in __get_task_response\n raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: Unable to create or update the volume.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

//vinoth

Idempotency for Server Hardware iLO firmware version (Always returns changed instead of OK)

Scenario/Intent

Idempotent deployment of servers with accurate OK and Changed results from Ansible. Using the ilo_firmware_version_updated task always returns a result of changed.

Environment Details

  • Module Version: 6.0.0
  • Ansible Version: ansible 2.10.7
  • OneView Appliance Version: 5.50.00-0426657
  • OneView Client API Version: 2200
  • Python Version: python version = 3.8.5
  • python-hpOneView SDK Version: 6.0.0
  • Platform: ubuntu docker container running Ansible

Steps to Reproduce

(screenshot omitted)

Expected Result

Status OK returned from Ansible, not Changed.

Actual Result

(screenshots omitted)

oneview_server_profile TypeError: sequence item 0: expected str instance, NoneType found

Hello,
I tried to use the sample

Create a Server Profile with connections

but receive the error:

The full traceback is:
Traceback (most recent call last):
File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 102, in
_ansiballz_main()
File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=None, run_name='main', alter_sys=True)
File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 693, in
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 689, in main
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py", line 633, in run
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 284, in execute_module
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 324, in __present
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 451, in __create_profile
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 683, in _auto_assign_server_profile
File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 540, in __get_available_server_hardware_uri
TypeError: sequence item 0: expected str instance, NoneType found
fatal: [localhost -> localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 102, in \n _ansiballz_main()\n File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/home/[email protected]/.ansible/tmp/ansible-tmp-1632998751.61551-67691-19266064946261/AnsiballZ_oneview_server_profile.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib64/python3.6/runpy.py", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 693, in \n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 689, in main\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py", line 633, in run\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 284, in execute_module\n File 
"/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 324, in __present\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 451, in __create_profile\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 683, in _auto_assign_server_profile\n File "/tmp/ansible_oneview_server_profile_payload_sypcl5im/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py", line 540, in __get_available_server_hardware_uri\nTypeError: sequence item 0: expected str instance, NoneType found\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}

- name: Create a Server Profile with connections
  oneview_server_profile:
    config: "config.json"
    #state: "present"
    data:
      name: "demo"
      description: Server Profile with connections created from a selected Server Profile Templa
      serverHardwareTypeName: "SY 480 Gen10 1"
      connectionSettings:
        connections:
          - id: 1
            name: connection1
            functionType: Ethernet
            portId: Auto
            requestedMbps: 2500
            networkName: "vmwab01041-mgmt"
  delegate_to: localhost

ansible 2.9.26
config file = None
configured module search path = ['/home/[email protected]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

SNMPv3 traps - creation fails

Scenario/Intent

Hello,

I am trying to create multiple SNMPv3 traps associated with an SNMPv3 user; however, this ends with an error. I do not know whether the problem could come from the username (userId?) used during the creation of the trap.

main.yml

- name: Create snmpv3 trap
  oneview_appliance_device_snmp_v3_trap_destinations:
    config: "{{ config }}"
    state: present
    name: "{{ destination }}"
    data:
      destinationAddress: "{{ destination }}"
      port: "{{ port }}"
      userName: "{{ username }}"
  delegate_to: localhost

Environment Details

  • OneView Ansible Collection: v6.6.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.8.10

Expected Result

Creation of a SNMPv3 trap is successful.

Actual Result

TASK [snmpv3 : Create snmpv3 trap] *******************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'ApplianceDeviceSNMPv3Users' object is not subscriptable

line 140, in execute_module\n  File \"/tmp/ansible_hpe.oneview.oneview_appliance_device_snmp_v3_trap_destinations_payload_dqrs94qy/ansible_hpe.oneview.oneview_appliance_device_snmp_v3_trap_destinations_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_appliance_device_snmp_v3_trap_destinations.py\", line 153, in __replace_snmpv3_username_by_userid\nTypeError: 'ApplianceDeviceSNMPv3Users' object is not subscriptable\n",

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  

oneview_logical_interconnect_group.py does not handle empty uplinksets

Created an empty uplinkset named 'Test'

Then I tried to add some new networks to that LIG and to the new uplink set.

      oneview_logical_interconnect_group:
        config: "{{ config }}"
        state: present
        data:
          name: 'LIG1'
          enclosureType: 'SY12000'
          uplinkSets:
            - name: 'Test'
              mode: 'Auto'
              networkType: 'Ethernet'
              networkNames:
                - 'VLAN_1100'
                - 'VLAN_1101'

It errors out on:

/ansible_collections/hpe/oneview/plugins/modules/oneview_logical_interconnect_group.py\", line 323, in __replace_uplinkset_port_values\nTypeError: 'NoneType' object is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"

Looking at the source code

for uplinkSet in self.data['uplinkSets']:
    existingLogicalPortConfigInfos = uplinkSet.get('logicalPortConfigInfos')
    for item in existingLogicalPortConfigInfos:

It looks like uplinkSet.get('logicalPortConfigInfos') returns None for an empty uplink set. Maybe check for this and handle it accordingly?
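A defensive guard along these lines would avoid the TypeError. This is a sketch of the fix idea, not the collection's actual patch; the function name is hypothetical and the key names are taken from the snippet above:

```python
def iter_uplink_port_configs(data):
    """Yield each logicalPortConfigInfos entry, tolerating uplink
    sets that define no port configuration (the failing case)."""
    for uplink_set in data.get('uplinkSets') or []:
        # .get() returns None for an empty uplink set; fall back to []
        for item in uplink_set.get('logicalPortConfigInfos') or []:
            yield item
```

With this shape, an uplink set created empty (like 'Test' above) simply contributes no items instead of raising.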

contents.api_version

Hi, I tried running the oneview_server_profile_facts role and it failed because the 'contents' variable is undefined. I can't figure out where it comes from. Can you please help? Here is the Ansible error message:

(myenv) [root@centos8vm .ansible]# ansible-playbook test_sever_profile_facts.yaml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *******************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************

ok: [localhost]

TASK [hpe.oneview.oneview_server_profile_facts : Get currentVersion from URL] ******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'contents.api_version == \"\"' failed. The error was: error while evaluating conditional (contents.api_version == \"\"): 'contents' is undefined\n\nThe error appears to be in '/root/.ansible/collections/ansible_collections/hpe/oneview/roles/oneview_server_profile_facts/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Get currentVersion from URL\n  ^ here\n"}

PLAY RECAP *************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Enhancement Request: SPP "name" too restrictive

Team, when working with oneview_firmware_driver_facts to extract the SPP URI for use in subsequent tasks, I can't figure out a way to easily pull the URI when querying OneView. Because all SPP bundles are now named the same (for example, Service Pack for ProLiant or Service Pack for Synergy), could we add an additional "version" attribute so that we can more easily obtain the URI of the desired SPP?
I would love to see something like this:

- name: Get the baseline URI of the desired SPP
  oneview_firmware_driver_facts:
    config: "{{ config }}"
    version: SY-2021.01.01
    name: Service Pack for Synergy

- set_fact:
    spp_uri: "{{ firmware_drivers[0].uri }}"

Thanks!
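Until a version parameter exists, one workaround is to fetch the drivers by name and filter on the version client-side. A sketch of that filter (assuming the returned facts expose name, version and uri keys, which should be verified against the actual firmware_drivers output):

```python
def find_spp_uri(firmware_drivers, name, version):
    """Return the uri of the first firmware driver matching both
    the display name and the version string, or None if absent."""
    for driver in firmware_drivers:
        if driver.get('name') == name and driver.get('version') == version:
            return driver.get('uri')
    return None
```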

add storage to list of tags in galaxy.yml

In the galaxy.yml file there is a tags field. Automation Hub builds a few use-specific views; adding storage to this list will help display this collection when users select it from the tag drop-down.

oneview_server_profile cannot create more than one server profile in parallel

Re-opening #152 because the problem appears again in the latest version.

TASK [Creating Server Profile "ESX-1 from Server Profile Template ESXi_BFS_Frame4"] ******************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {"name":"Frame4, bay 2", "uri":"/rest/server-hardware/39313738-3034-5A43-3231-32343036474D"}.
fatal: [ESX-1 -> localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1650447337.6171565-3193028-109933316272242/AnsiballZ_oneview_server_profile.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1650447337.6171565-3193028-109933316272242/AnsiballZ_oneview_server_profile.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1650447337.6171565-3193028-109933316272242/AnsiballZ_oneview_server_profile.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 702, in <module>\n  File \"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 698, in main\n  File \"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py\", line 671, in run\n  File \"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 287, in execute_module\n  File 
\"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 327, in __present\n  File \"/tmp/ansible_oneview_server_profile_payload_kcruweip/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 467, in __create_profile\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/servers/server_profiles.py\", line 74, in create\n    resource_data = self._helper.create(data, timeout=timeout, force=force)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 464, in create\n    return self.do_post(uri, data, timeout, custom_headers)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 816, in do_post\n    return self._task_monitor.wait_for_task(task, timeout)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 82, in wait_for_task\n    task_response = self.__get_task_response(task)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 142, in __get_task_response\n    raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {\"name\":\"Frame4, bay 2\", \"uri\":\"/rest/server-hardware/39313738-3034-5A43-3231-32343036474D\"}.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
^C [ERROR]: User interrupted execution

Certificates - Idempotency issue

Scenario/Intent

Hello,

I am adding multiple server certificates in OneView:
(screenshot)
and it works fine. However, when running the tasks a second time with the same certificates, changes are still reported.

main.yml

- name: Create a Server Certificate
  oneview_certificates_server:
    config: "{{ config }}"
    state: present
    name: "{{ item.aliasName }}"
    data:
      certificateDetails:
        - aliasName: "{{ item.aliasName }}"
          base64Data: "{{ item.base64Data }}"
  loop: "{{ certificates }}"

Example of input certificates:

    certificates:
        - aliasName: "test-certificate-2"
          base64Data: "-----BEGIN CERTIFICATE-----\n....\n-----END CERTIFICATE-----"
        - aliasName: "test-certificate"
          base64Data: "-----BEGIN CERTIFICATE-----\n....\n-----END CERTIFICATE-----"

Environment Details

  • OneView Ansible Collection: v6.6.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.8.10

Expected Result

We have idempotency when we run the same playbook with the same certificate input data multiple times.

Actual Result

TASK [certificate : Create a Server Certificate] *******************************
changed: [localhost -> localhost] => (item={'aliasName': 'test-certificate', 'base64Data':

[...]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
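One way a module can achieve idempotency here is to normalize and compare the submitted certificateDetails against what the appliance already stores, reporting changed only on a real difference. The following is a hypothetical helper, not the module's code; normalizing PEM line endings is one guess at why repeated runs report changes:

```python
def cert_needs_update(existing, desired):
    """Return True only if the desired certificate details differ from
    the stored ones, ignoring line-ending differences in the PEM body."""
    def norm(details):
        return [
            (d.get('aliasName'),
             (d.get('base64Data') or '').replace('\r\n', '\n').strip())
            for d in details or []
        ]
    return norm(existing.get('certificateDetails')) != norm(desired.get('certificateDetails'))
```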

REST request to change Oneview HTTP session default timeout

Hello,
Oneview provides a REST API to change the default timeout of HTTP sessions. This is described in
https://support.hpe.com/hpesc/public/docDisplay?docId=a00104888en_us - Page 136/137.
You can use POST /rest/sessions/idle-timeout to set a value to 24 hours or less.
Does the oneview-ansible module allow sending such a POST request to the OneView appliance?
Thanks

This is a duplicate of the oneview-ansible issue:
HewlettPackard/oneview-ansible#629
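As a workaround until the collection exposes this endpoint, it can be called directly with any HTTP client once a session token is in hand. The sketch below only builds the request pieces; the idleTimeout body key and millisecond unit are assumptions and should be verified against the OneView REST API reference for your appliance version:

```python
def build_idle_timeout_request(appliance, session_token, hours, api_version=2400):
    """Assemble the parts of a POST /rest/sessions/idle-timeout call.

    'idleTimeout' in milliseconds is an assumed payload shape;
    check the OneView REST API reference before relying on it.
    """
    return {
        'url': 'https://{}/rest/sessions/idle-timeout'.format(appliance),
        'headers': {
            'Auth': session_token,            # token from POST /rest/login-sessions
            'X-API-Version': str(api_version),
            'Content-Type': 'application/json',
        },
        'body': {'idleTimeout': hours * 60 * 60 * 1000},
    }
```

The resulting dict can be fed to any HTTP library (requests, urllib) as a POST.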

oneview_server_profile: Server profile template uri is not updated

Situation:

We want to update a server profile to use a different server profile template by specifying the server profile template uri as part of the oneview_server_profile module.

Code used to repeat:

- name: "{{ esx_hostname }}: Gather facts about Server Profile by uri"
  hpe.oneview.oneview_server_profile_facts:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    uri: "{{ server_profile_uri }}"
  delegate_to: localhost
  register: server_profile_facts

- name: "{{ esx_hostname }}: Gather facts about specified Server Profile Template by name"
  hpe.oneview.oneview_server_profile_template_facts:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    name: "{{ server_profile_template }}"
  register: server_profile_template_facts

- name: "{{ esx_hostname }}: Ensure server profile is compliant"
  hpe.oneview.oneview_server_profile:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    state: compliant
    data:
      name: "{{ esx_hostname }}"
      serverProfileTemplateUri: "{{ server_profile_template_uri }}" # This line here is not changing the URI of the server profile template for the server profile
      serverHardwareUri: "{{ server_hardware_uri }}"
      description: "Configured by Ansible."
  delegate_to: localhost
  notify: "refresh server hardware"

Actual result, the server profile template uri is not updated:
(screenshot, 2021-12-07 16:52)

Failure to create a Server profile template

I want to create a Server profile template with boot from SAN storage volumes created upon profile creation from this template
but whatever I try, I always get an exception: HPEOneViewTaskError: One or more required fields are missing. Volume "ESXi_Boot" with attachment ID 1.

I have provided all the required parameters as per the REST API reference, but the failure persists.

What am I missing?

Playbook :

---

- name: Server profile template for VMware ESXi 7 with boot from SAN storage volumes created upon profile creation from this template
  hosts: localhost
  collections:
      - hpe.oneview
  gather_facts: no

  vars:
    config: "{{ playbook_dir }}/oneview_config.json"
    server_profile_template_name: "ESX BFS"
    server_hardware_type_name: "SY 480 Gen10 1"
    enclosure_group_name: "EG"
    connection_network_management_name: "Management-Nexus"
    connection_fabric_A_name: "FC-A"
    connection_fabric_B_name: "FC-B"
    connection_network_set_name: "Production_network_set"
    
    StorageSystemUri: "/rest/storage-systems/MXN6380203"
    Storage_pool_uri: "/rest/storage-pools/F8AA36B9-CE0B-46B7-8CC8-AD8000E62B87"
    firmwareBaselineUri: "/rest/firmware-drivers/Synergy_Service_Pack_SSP_2021_05_03_Z7550-97224"


  tasks:
    - name: Create a server profile template for VMware ESXi
      oneview_server_profile_template:
        config: "{{ config }}"
        state: present
        data:
          serverProfileDescription: "Server profile template for VMware ESXi 7 with boot from SAN storage volumes created upon profile creation from this template" 
          boot: 
            complianceControl: Checked
            manageBoot: true
            order: 
              - HardDisk       
          bootMode:
            complianceControl: Checked
            manageMode: true
            mode: UEFIOptimized
            pxeBootPolicy: Auto         
          name: "{{ server_profile_template_name }}"
          bios:
            complianceControl: Checked
            manageBios: true
            overriddenSettings: 
              - id: WorkloadProfile
                value: Virtualization-MaxPerformance
              - id: MinProcIdlePower
                value: NoCStates
              - id: IntelUpiPowerManagement
                value: Disabled
              - id: MinProcIdlePkgState
                value: NoState
              - id: EnergyPerfBias
                value: MaxPerf
              - id: UncoreFreqScaling
                value: Maximum
              - id: PowerRegulator
                value: StaticHighPerf
              - id: SubNumaClustering
                value: Enabled
              - id: CollabPowerControl
                value: Disabled
              - id: EnergyEfficientTurbo
                value: Disabled
              - id: NumaGroupSizeOpt
                value: Clustered
          serverHardwareTypeName: "{{ server_hardware_type_name }}"
          enclosureGroupName: "{{ enclosure_group_name }}"
          firmware: 
                complianceControl: Checked
                firmwareActivationType: Immediate
                firmwareBaselineUri: "{{ firmwareBaselineUri }}"
                firmwareInstallType: FirmwareOnlyOfflineMode
                forceInstallFirmware: false
                manageFirmware: true
          connectionSettings:
            complianceControl: Checked
            manageConnections: true
            connections:
              - id: 1
                name: Mgmt1
                functionType: Ethernet
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_network_management_name }}"
              - id: 2
                name: Mgmt2
                functionType: Ethernet
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_network_management_name }}"  
              - id: 3
                name: FC-A
                functionType: FibreChannel
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_fabric_A_name }}"  
                boot:
                  priority: LoadBalanced
                  bootVolumeSource:  ManagedVolume  
                  #bootVlanId: ""
                  #targets: []          
              - id: 4
                name: FC-B
                functionType: FibreChannel
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_fabric_B_name }}"    
                boot:
                  priority: LoadBalanced
                  bootVolumeSource:  ManagedVolume
                  #bootVlanId: ""
                  #targets: []
              - id: 5
                name: NetworkSet1
                functionType: Ethernet
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_network_set_name }}"  
              - id: 6
                name: NetworkSet2
                functionType: Ethernet
                portId: Auto
                requestedMbps: 2500
                networkName: "{{ connection_network_set_name }}"  
          sanStorage:
            complianceControl: CheckedMinimum
            hostOSType: "VMware (ESXi)"
            manageSanStorage: true
            sanSystemCredentials: []
            volumeAttachments:
              - bootVolumePriority: Primary
                id: 1
                isPermanent: True
                lun: ""
                lunType: Auto                
                storagePaths: 
                  - connectionId: 3  
                    isEnabled: true
                    targetSelector: Auto
                    #networkUri: "/rest/fc-networks/be06e274-d7c9-48a9-a3e9-92576677ca1f"
                    targets: []
                  - connectionId: 4  
                    isEnabled: true
                    targetSelector: Auto
                    #networkUri: "/rest/fc-networks/2471664d-70f6-4ba5-9836-817a5dcb7d8e"
                    targets: []                    
                volume:
                  #initialScopeUris: []
                  properties:
                    description: "OS Boot volume"
                    isDeduplicated: true
                    isShareable: false
                    name: "ESXi_Boot"
                    provisioningType: Thin
                    size: 20000
                    snapshotPool: "{{ Storage_pool_uri }}"
                    storagePool: "{{ Storage_pool_uri }}"
                    templateVersion: 1.1
                  templateUri: "" 
                volumeStorageSystemUri: "{{ StorageSystemUri }}"
                volumeUri: ""              
        # params:
        #   force: True # Supported only for API version >= 600
      delegate_to: localhost

Output

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewTaskError: One or more required fields are missing. Volume "ESXi_Boot" with attachment ID 1.
fatal: [localhost->localhost]: FAILED! => {
	"changed": false,
	"module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1635877105.8914585-83516-63498257904227/AnsiballZ_oneview_server_profile_template.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1635877105.8914585-83516-63498257904227/AnsiballZ_oneview_server_profile_template.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1635877105.8914585-83516-63498257904227/AnsiballZ_oneview_server_profile_template.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile_template', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile_template.py\", line 255, in <module>\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile_template.py\", line 251, in main\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py\", line 633, in run\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile_template.py\", 
line 165, in execute_module\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile_template.py\", line 181, in __present\n  File \"/tmp/ansible_oneview_server_profile_template_payload_bxhgspj5/ansible_oneview_server_profile_template_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile_template.py\", line 217, in __update\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 67, in wrap\n    return obj(*args, **kwargs)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/servers/server_profile_templates.py\", line 137, in update\n    self.data = self._helper.update(resource, uri, force, timeout)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 513, in update\n    return self.do_put(uri, resource, timeout, custom_headers)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 836, in do_put\n    return self._task_monitor.wait_for_task(task, timeout)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 82, in wait_for_task\n    task_response = self.__get_task_response(task)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 142, in __get_task_response\n    raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: One or more required fields are missing. Volume \"ESXi_Boot\" with attachment ID 1.\n",
	"module_stdout": "",
	"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
	"rc": 1
}

Logical interconnect group with uplink sets

I've run into an inconsistency in the HPE Oneview API definitions.

When creating an uplink set by itself you can make reference to the Q1-Q8 ports by that name.

Example:

    "portConfigInfos" : [
        {
            "desiredSpeed" : "Auto",
            "location" : {
                "locationEntries" : [
                    {
                        "type" : "Port",
                        "value" : "Q1"
                    },
                    {
                        "type" : "Bay",
                        "value" : 3
                    },
                    {
                        "type" : "Enclosure",
                        "value" : "/rest/enclosures/0000000000A66101"
                    }
                ]
            }
        }
    ],

However, if I create a logical interconnect group and add uplink sets at the same time using the REST API or the Ansible collection, the Q naming is replaced with a seemingly arbitrary "relativeValue" reference.

Example:

                    "logicalPortConfigInfos": [
                        {
                            "logicalLocation": {
                                "locationEntries": [
                                    {
                                        "type": "Enclosure",
                                        "relativeValue": 1
                                    },
                                    {
                                        "type": "Port",
                                        "relativeValue": 66
                                    },
                                    {
                                        "type": "Bay",
                                        "relativeValue": 3
                                    }
                                ]
                            },
                            "desiredSpeed": "Auto",
                            "desiredFecMode": "Auto"
                        },
                        {
                            "logicalLocation": {
                                "locationEntries": [
                                    {
                                        "type": "Enclosure",
                                        "relativeValue": 1
                                    },
                                    {
                                        "type": "Bay",
                                        "relativeValue": 6
                                    },
                                    {
                                        "type": "Port",
                                        "relativeValue": 66
                                    }
                                ]
                            },
                            "desiredSpeed": "Auto",
                            "desiredFecMode": "Auto"
                        }
                    ],

From trial and error of creating LIGs and uplink sets we've figured out that 61=Q1, 66=Q2 and 71=Q3.

Without having to maintain a mapping of these values in our Ansible automation, is there a means of using these Q-based names with the logical interconnect group creation API? They are labeled Q1-8 in the web GUI and Q1-8 on the physical chassis. These relative values are quite inconvenient for creating Ansible automation or API calls against the logical-interconnect-groups endpoint.

Thank you.
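From the three data points above (61=Q1, 66=Q2, 71=Q3) the relativeValue appears to step by 5 per Q port, i.e. relativeValue = 56 + 5n. Until an official mapping exists, a small helper built on that observed pattern could bridge the gap; note this is an extrapolation, not a documented formula, and Q4-Q8 should be verified against your own appliance:

```python
def q_port_to_relative_value(q_name):
    """Map a 'Qn' port label to the relativeValue observed when a LIG
    is created with uplink sets (pattern extrapolated from Q1-Q3)."""
    n = int(q_name.lstrip('Qq'))
    return 56 + 5 * n

def relative_value_to_q_port(value):
    """Inverse mapping, for reading LIG output back into Q labels."""
    n, rem = divmod(value - 56, 5)
    if rem != 0 or n < 1:
        raise ValueError('not a recognized Q-port relativeValue: %s' % value)
    return 'Q{}'.format(n)
```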

assign static IP addresses during LE creation

Hello,
I am creating this new issue since I can't re-open #86, which doesn't seem to be solved.
The code fix proposed in issue #619 returns the following error with oneview-ansible-collection:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewException: ('The JSON cannot be mapped to array or collection.', {'errorCode': 'INVALID_NESTED_JSON_ELEMENT', 'message': 'The JSON cannot be mapped to array or collection.', 'details': 'The nested JSON "enclosureBaySettings" sent in the request is not a valid array or collection.', 'messageParameters': [], 'recommendedActions': ['Correct the JSON as appropriate array or collection and retry the request.'], 'errorSource': None, 'nestedErrors': [], 'data': {}})

RHEL 8.6
Python 3.6.8
Ansible 2.9.27
Oneview-ansible-collection 7.2.0

Could you please have a look?

Thanks,
V

Originally posted by @vinicole in #86 (comment)

oneview_server_hardware: hostname is not a valid field

Scenario/Intent

Ensure Server Hardware Compliance

Environment Details

[root@375c9d41528c ansible]# ansible --version
ansible 2.9.22
  config file = /mnt/ansible/ansible.cfg
  configured module search path = ['/mnt/ansible/library']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
  • Module Version: 6.1.0
  • OneView Appliance Version: 5.50.00-0426657
  • OneView Client API Version: 2200
  • python-hpOneView SDK Version: 0.15.2
  • Platform: RHEL container

Steps to Reproduce

- name: (HPE) Ensure Server Hardware Compliance
  hpe.oneview.oneview_server_hardware:
    hostname: "{{ oneview_address }}"
    username: "{{ oneview_username }}"
    password: "{{ oneview_password }}"
    auth_login_domain: "{{ oneview_auth_login_domain }}"
    api_version: "{{ oneview_api_version }}"
    state: "{{ oneview_server_hardware_compliance_state }}"
    data:
      name: "{{ management_card_hostname }}"
  delegate_to: localhost

Expected Result

hostname to be a valid field. In a previous version of the module the ansible field was name.

Actual Result

(screenshot)

If using name instead of hostname:
(screenshot)

Atomic assign server hardware

Using Ansible we deploy VMware ESX hosts on HPE Synergy infrastructure.

We like to run tasks in parallel to reduce the total runtime.
Creating a single server profile takes about 10 minutes.

- name: Creating Server Profile from Server Profile Template "{{ server_template }}"
  throttle: 1       # select server hardware once
  oneview_server_profile:
    config: "{{ config }}"
    data:
      serverProfileTemplateName: "{{ server_template }}"
      name: "{{ inventory_hostname }}"

When assigning hardware, the tasks for multiple hosts will select the very same compute module.
Thus only one task succeeds and the others fail; therefore 'throttle: 1' is included to single-thread this task.

Assigning hardware should be atomic, i.e. a server blade should be locked and a subsequent
'assign hardware' call should return the next available compute module.
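Until assignment is atomic server-side, the usual client-side mitigation is to treat "profile already assigned" as a transient conflict and retry, letting OneView pick the next free blade. A sketch of that pattern; the error-text check and the create_profile callable are assumptions for illustration, not the collection's API:

```python
import random
import time

def create_profile_with_retry(create_profile, retries=5, sleep=time.sleep):
    """Retry profile creation when parallel tasks race for the same
    blade. 'create_profile' is any callable that raises on the
    'already assigned' conflict and returns the profile otherwise."""
    for attempt in range(retries):
        try:
            return create_profile()
        except Exception as exc:
            if 'already assigned' not in str(exc) or attempt == retries - 1:
                raise
            # back off with jitter so retrying tasks do not collide again
            sleep(random.uniform(1, 5) * (attempt + 1))
```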

Firmware-bundle - Upload of compsig file fails, hotfix not present

Scenario/Intent

Hello,

I am trying to upload SPP, hotfix and compsig files to OneView using the oneview_firmware_bundle plugin. I am testing the upload of a hotfix and its two associated compsig files; however, the upload fails for the compsig files.

Input data

- spp_list: ["<absolute_path>/firmware-ilo5-2.60-1.1.x86_64.rpm", "<absolute_path>/firmware-ilo5-2.60-1.1.x86_64_part1.compsig", "<absolute_path>/firmware-ilo5-2.60-1.1.x86_64_part2.compsig"]

upload.yml

- name: Create fact to get the extension of the input var file
  set_fact:
    __file_extension: ""

- name: Set fact with the extension of the input var file
  set_fact:
    __file_extension: [...]

#if file (hotfix, spp) doesn't have a compsig extension, do this task
- name: Upload the firmware bundle
  oneview_firmware_bundle:
    config: "{{ __config }}"
    state: present
    file_path: "{{ __spp_file_path }}"
  when: "'compsig' not in __file_extension" #here __file_extension = "rpm" for the first file upload

#if file has a compsig extension (signature), do this task
- name: Upload the firmware bundle
  oneview_firmware_bundle:
    config: "{{ __config }}"
    state: add_signature
    file_path: "{{ __spp_file_path }}"
  when: "'compsig' in __file_extension" #here __file_extension = "compsig" for the 2nd and 3rd file upload
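The second set_fact is elided as '[...]' above. One common way to derive the extension is the ansible.builtin.splitext filter; this is an assumption about how the reporter computed it, not taken from the original playbook:

```yaml
# Sketch: derive the extension of the current file, e.g. ".rpm" or
# ".compsig", using the splitext filter (assumed implementation).
- name: Set fact with the extension of the input var file
  set_fact:
    __file_extension: "{{ __spp_file_path | splitext | last }}"
```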

Environment Details

  • OneView Ansible Collection: v6.5.0
  • OneView Appliance Version: 6.10
  • Ansible version: 2.9.6
  • Python version: 3.6.10

Expected Result

The consecutive upload of the hotfix and its compsig files on OneView.

Actual Result

Result of the ansible-playbook command

#First loop iteration to upload the hotfix

TASK [Upload the firmware bundle] **********************************************************************************
changed: [localhost] => {
    "ansible_facts": {
        "firmware_bundle": {
            "baselineShortName": "Not_Applicable",
            "bundleSize": 33932254,
            "bundleType": "Hotfix",
            "category": "firmware-drivers",
            "created": "2022-02-17T10:01:45.113Z",
            "description": "This package contains HPE Integrated Lights-Out 5 firmware",
            "eTag": "2022-02-17T10:04:33.117Z",
            "esxiOsDriverMetaData": [
                "Not_Applicable"
            ],
            "fwComponents": [
                {
                    "componentVersion": "2.60",
                    "fileName": "firmware-ilo5-2.60-1.1.x86_64.rpm",
                    "name": "HPE Integrated Lights-Out 5 firmware",
                    "swKeyNameList": [
                        "RI11"
                    ]
                }
            ],
            "hotfixes": [],
            "hpsumVersion": "Not_Applicable",
            "isoFileName": "Not_Applicable",
            "lastTaskUri": "/rest/tasks/7e520ed5-df90-4132-8cdd-61c35253c395",
            "locations": {
                "/rest/repositories/internal": "Internal"
            },
   [...]
    },
    "msg": "Firmware Bundle or Hotfix added successfully."
}

# Second loop iteration to upload the compsig files

TASK [Upload the firmware bundle] **********************************************************************************
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_version": null,
            "auth_login_domain": null,
            "config": "config.json",
            "file_path": "<absolute_path>/firmware-ilo5-2.60-1.1.x86_64_part1.compsig",
            "hostname": null,
            "image_streamer_hostname": null,
            "password": null,
            "state": "add_signature",
            "username": null,
            "validate_etag": true
        }
    },
    "msg": "Hotfix is not present."
}
