
hpe3par_ansible_module's Introduction

HPE Alletra 9000 and HPE Primera and HPE 3PAR Modules for Ansible

The HPE Alletra 9000, HPE Primera, and HPE 3PAR modules for Ansible enable automation of storage provisioning for HPE Alletra 9000, Primera, and 3PAR arrays. The modules use the HPE Alletra 9000, Primera, and 3PAR SDK for Python to communicate with the storage array over the WSAPI REST interface.

Requirements

  • Ansible ver. 2.5, 2.6, 2.7, 2.8, 2.9
  • hpe3par_sdk
  • 3PAR OS
    • 3.3.1 MU1, MU2, MU3, T05
    • 3.2.2 MU4, MU6
  • Primera OS
    • 4.3.1
  • Alletra 9000 OS
    • 9.3.0
  • The WSAPI service should be enabled on the HPE Alletra 9000, Primera, or 3PAR storage array.

Configuration

  • Install Ansible and hpe3par_sdk
  • Modify ansible.cfg file to point the library to the Modules folder
library=/home/user/workspace/hpe3par_ansible/Modules
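
For example, assuming the repository is cloned to /home/user/workspace/hpe3par_ansible as in the path above, the entry belongs in the [defaults] section of ansible.cfg:

[defaults]
library = /home/user/workspace/hpe3par_ansible/Modules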

Modules

This project is developed as a set of modules and example playbooks to provision storage resources such as CPGs, volumes, volume sets, hosts, host sets, VLUNs, snapshots and snapshot schedules, clones, and remote copy groups.

Examples

- name: Create CPG "{{ cpg_name }}"
  hpe3par_cpg:
    storage_system_ip: "{{ storage_system_ip }}"
    storage_system_username: "{{ storage_system_username }}"
    storage_system_password: "{{ storage_system_password }}"
    state: present
    cpg_name: "{{ cpg_name }}"
    domain: "{{ domain }}"
    growth_increment: "{{ growth_increment }}"
    growth_increment_unit: "{{ growth_increment_unit }}"
    growth_limit: "{{ growth_limit }}"
    growth_limit_unit: "{{ growth_limit_unit }}"
    growth_warning: "{{ growth_warning }}"
    growth_warning_unit: "{{ growth_warning_unit }}"
    raid_type: "{{ raid_type }}"
    set_size: "{{ set_size }}"
    high_availability: "{{ high_availability }}"
    disk_type: "{{ disk_type }}"

- name: Delete CPG "{{ cpg_name }}"
  hpe3par_cpg:
    storage_system_ip: "{{ storage_system_ip }}"
    storage_system_username: "{{ storage_system_username }}"
    storage_system_password: "{{ storage_system_password }}"
    state: absent
    cpg_name: "{{ cpg_name }}"
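
The variables referenced in these examples typically come from a properties file loaded with include_vars or vars_files. A minimal sketch of such a file is shown below; all values are illustrative only, and the accepted choices for units, RAID type, HA level and disk type depend on the module version and the array:

# Illustrative values only; adjust to your environment
storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
cpg_name: "ansible_cpg_01"
domain: ""
growth_increment: 32
growth_increment_unit: "GiB"
growth_limit: 1024
growth_limit_unit: "GiB"
growth_warning: 800
growth_warning_unit: "GiB"
raid_type: "R6"
set_size: 8
high_availability: "MAG"
disk_type: "FC"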

Non Idempotent Actions

Actions are Idempotent when they can be run multiple times on the same system and the results will always be identical, without producing unintended side effects.

The following actions are non-idempotent:

  • Clone: resync, create_offline
  • Snapshot: restore online, restore offline
  • Virtual Volume: grow (grow_to_size is idempotent; see the sketch after this list)
  • VLUN: All actions become non-idempotent when autolun is set to true
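
For instance, grow adds the requested amount of capacity on every run, while grow_to_size converges on an absolute size, so re-running the task changes nothing once the target is reached. Below is a minimal sketch, assuming the hpe3par_volume module's grow_to_size state and reusing the size/size_unit parameters that appear elsewhere in this document (values illustrative):

- name: Grow volume to a fixed target size (idempotent)
  hpe3par_volume:
    storage_system_ip: "{{ storage_system_ip }}"
    storage_system_username: "{{ storage_system_username }}"
    storage_system_password: "{{ storage_system_password }}"
    state: grow_to_size
    volume_name: "{{ volume_name }}"
    size: 100          # absolute target size, not a delta
    size_unit: GiB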


hpe3par_ansible_module's Issues

Admit target in RCG with wrong RCG name gives error message in green and passes TC

While adding a target to an RCG with an incorrect RCG name, the error message is shown in green and the TC passes as successful, whereas a failure is expected.

Steps to reproduce:

Property file to add RCG:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: present
remote_copy_group_name: "ans_CreateRCG_t-001"
domain:
#target_name: "CSSOS-SSA05"
remote_copy_targets:

  • target_name: "CSSOS-SSA05"

- target_name: "CSSOS-SSA06"

target_mode: "periodic"

- userCPG:

- snapCPG:

keep_snap: "false"
local_user_cpg:
local_snap_cpg:

Property file to add target:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
remote_copy_group_name: "ansible_CreateRCG_target-001"
state: admit_target
target_mode: "sync"
target_name: "CSSOS-SSA04"
local_remote_volume_pair_list:

Please find attached test log for the same.
Admit_RCG_target_issue.txt

User should not be forced to set both task_freq and task_freq_custom

When the task_freq property is not set and task_freq_custom is set, the following error is seen:

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"base_volume_name": "An_vol_SS_01",
"expiration_hours": 0,
"expiration_time": 2,
"expiration_unit": "Hours",
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "create_schedule",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "0 8-72 * * *"
}
},
"msg": "value of task_freq must be one of: yearly, monthly, weekly, daily, hourly, got: None"

Expiration time input for snapshot volume in schedule is not getting set

In the playbook properties, set the expiration time for the snapshot volume to 2 hours.
From the creation time of the snapshot volume, wait for two hours and check whether the volume is set for expiration. It is observed that the expiration time setting is not taking effect; the volume is not getting marked for expiry.

CSSOS-SSA06 cli% showvv -expired
no vv listed

On trying to resume a schedule which is not in 'suspended' state, an invalid error msg is displayed

On trying to resume a schedule which is not in the 'suspended' state, an invalid error msg is displayed.

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": null,
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"new_schedule_name": null,
"priority": null,
"read_only": null,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": null,
"state": "resume_schedule",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null
}
},
"msg": "Schedule resumed failed | An unknown exception occurred."

Snapshot name in schedule incorrect

Currently the scheduled snapshot name is formed by prepending a fixed string to the base_vol_name:
Example pasted below:
snap-Ansible_volume_SS_01.@y@@m@@d@@h@@m@@s@ Ansible_volume_SS_01

The snapshot name should instead use the snapshot name from the properties file, with the timestamp pattern specified in the schedule appended.

Admit volume to remote copy group fails when the 'differentSecondaryWWN' field value is set to true.

Properties file

storage_system_ip: "192.168.67.7"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
remote_copy_group_name: "ansible_CreateRCG-002"
state: add_volume
volume_name: "Ansible_volume_01"
admit_volume_targets:

  • target_name: "CSSOS-SSA05"
    sec_volume_name: "Sec_vol_1"
    snapshot_name:
    volume_auto_creation: true
    skip_initial_sync: true
    different_secondary_wwn: true

Error

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"admit_volume_targets": [
{
"sec_volume_name": "Sec_vol_1",
"target_name": "CSSOS-SSA05"
}
],
"different_secondary_wwn": true,
"discard_new_data": false,
"domain": null,
"full_sync": false,
"keep_snap": false,
"local_groups_direction": false,
"local_remote_volume_pair_list": null,
"local_snap_cpg": null,
"local_user_cpg": null,
"modify_targets": null,
"no_resync_snapshot": false,
"no_snapshot": false,
"recovery_action": null,
"remote_copy_group_name": "ansible_CreateRCG-002",
"remote_copy_targets": null,
"remove_secondary_volume": false,
"skip_initial_sync": true,
"skip_promote": false,
"skip_start": false,
"skip_sync": false,
"snapshot_name": null,
"source_port": null,
"starting_snapshots": null,
"state": "add_volume",
"stop_groups": false,
"storage_system_ip": "192.168.67.7",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"target_mode": null,
"target_name": null,
"target_port_wwn_or_ip": null,
"unset_snap_cpg": false,
"unset_user_cpg": false,
"volume_auto_creation": true,
"volume_name": "Ansible_volume_01"
}
},
"msg": "Add volume to Remote Copy Group failed. skipInitialSync cannot be true if snapshot name is not given"
}

Create snapshot schedule fails as schedule name exceeds 32 characters

If the schedule name exceeds 20 characters, the scheduled task fails to trigger with the error below:

CSSOS-SSA06 cli% showtask -d 22409
Id Type Name Status Phase Step -------StartTime------- ------FinishTime------- -Priority- -User--
22409 scheduled_task Ansible_schedule_01 failed --- --- 2018-10-10 15:20:00 IST 2018-10-10 15:20:01 IST n/a 3paradm

Detailed status:
2018-10-10 15:20:00 IST Created task.
2018-10-10 15:20:00 IST Updated Executing "Ansible_schedule_01" as 0:14172
2018-10-10 15:20:01 IST Error Name snap-Ansible_volume_SS_01.181010152001 is too long, should be less than 32 characters
2018-10-10 15:20:01 IST Error Task exited with status 1
2018-10-10 15:20:01 IST Failed Could not complete task.

Playbook task output:
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": 2,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "20 * * * *"
}
},
"msg": "Created Schedule Ansible_schedule_01 successfully."

Unable to perform create RCG link operations

Getting error as "unsupported operation for the resource" while creating RCG link on array 3.2.2(MU6).

Steps to reproduce:

property file:
storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: present
remote_copy_group_name: "ansible_CreateRCG-001"
#target_name: "CSSOS-SSA05"
#target_mode: "periodic"
domain:
remote_copy_targets:
  - target_name: "CSSOS-SSA05"
  - target_name: "CSSOS-SSA04"
    target_mode: "periodic"
  - userCPG:
  - snapCPG:
keep_snap: "1"
local_user_cpg:
local_snap_cpg:

Task:

- name: Create RCG
  hpe3par_remote_copy:
    storage_system_ip="{{ storage_system_ip }}"
    storage_system_username="{{ storage_system_username }}"
    storage_system_password="{{ storage_system_password }}"
    state=present
    remote_copy_group_name="{{ remote_copy_group_name }}"
    domain="{{ domain }}"
    remote_copy_targets="{{ remote_copy_targets }}"
    local_user_cpg="{{ local_user_cpg }}"
    local_snap_cpg="{{ local_snap_cpg }}"

Please find attached testing log for reference.
create_RCG_link_issue.txt

[Feature Request] Module to get list of Volumes, VolumesSet, CPG, VLUN, Hosts and HostsSet

Summary
It would be helpful to have a module to get the list of 'objects'* with their attributes as facts.

[*] By 'objects' I mean Volumes, VolumesSet, CPG, VLUN, Hosts and HostsSet.

UseCase
One use case could be to wipe a 3par of all existing Volumes before reprovision it.

Current Implementation
The current implementation I found is done via the REST API (Based on the following page as input).

Here is a sample:

---
- name: Delete all Volumes defined on the HPe 3PAR
  hosts: localhost
  gather_facts: no

  vars:
    hpe3par_user: "user"
    hpe3par_ip: "10.0.0.1"
    hpe3par_api_url: "https://{{ hpe3par_ip }}:8080/api/v1"

  vars_prompt:
    - name: "hpe3par_password"
      prompt: "Enter the HPe 3PAR Password"
      private: yes

  tasks:
    - name: Open the REST API session
      uri:
        url: "{{ hpe3par_api_url }}/credentials"
        method: POST
        headers:
          Content-Type: "application/json"
        body_format: json
        body: "{ 'user': '{{ hpe3par_user }}', 'password': '{{ hpe3par_password }}' }"
        status_code: 201
        return_content: yes
        validate_certs: no
      register: output

    - name: Get list of volumes
      uri:
        url: "{{ hpe3par_api_url }}/volumes"
        method: GET
        headers:
          Content-Type: "application/json"
          X-HP3PAR-WSAPI-SessionKey: "{{ output.json.key }}"
          Accept: "application/json"
        status_code: 200
        return_content: yes
        validate_certs: no
      register: volumes

    - name: Show the volumes
      debug:
        var: volumes.json.members | map(attribute='name') | list

    - name: Release authentication key
      uri:
        url: "{{ hpe3par_api_url }}/credentials/{{ output.json.key }}"
        method: DELETE
        headers:
          Content-Type: "application/json"
        validate_certs: no

    - name: Remove Virtual Volume Set
      hpe3par_volumeset:
        storage_system_ip: "{{ hpe3par_ip }}"
        storage_system_username: "{{ hpe3par_user }}"
        storage_system_password: "{{ hpe3par_password }}"
        state: absent
        volumeset_name: "{{ item }}"
      loop: "{{ volumes.json.members | map(attribute='name') | list }}"

Delete remote copy group which does not exist gives wrong error message and passes TC.

While deleting an RCG which does not exist, the error message "Remote Copy Group is already present" is returned and the TC passes as successful.

Steps to Reproduce:

Property file:-

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: present
remote_copy_group_name: "ansible_CreateRCG-001"
#target_name: "CSSOS-SSA05"
#target_mode: "periodic"
domain:
remote_copy_targets:
  - target_name: "CSSOS-SSA05"
  - target_name: "CSSOS-SSA04"
    target_mode: "periodic"
  - userCPG:
  - snapCPG:
keep_snap: "1"
local_user_cpg:
local_snap_cpg:

Task:


- name: Delete remote copy group "{{ remote_copy_group_name }}"
  hpe3par_remote_copy:
    storage_system_ip="{{ storage_system_ip }}"
    storage_system_username="{{ storage_system_username }}"
    storage_system_password="{{ storage_system_password }}"
    state=absent
    remote_copy_group_name="{{ remote_copy_group_name }}"
    keep_snap="{{ keep_snap }}"
  tags:
   - createremotecopy

Please find attached test result logs.
Delete_RCG_which_not_exist_issue.txt

Syncing a remote copy group with skip initial sync set to true and full sync set to true fails the playbook with an incorrect error msg

While adding a volume to a remote copy group, set the skip initial sync value to true.
Then, while syncing the remote copy group, set the full sync value to true. The playbook fails with a generic error message.

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"admit_volume_targets": null,
"different_secondary_wwn": false,
"discard_new_data": false,
"domain": null,
"full_sync": false,
"keep_snap": false,
"local_groups_direction": false,
"local_remote_volume_pair_list": [],
"local_snap_cpg": null,
"local_user_cpg": null,
"modify_targets": null,
"no_resync_snapshot": false,
"no_snapshot": false,
"recovery_action": null,
"remote_copy_group_name": "ansible_CreateRCG-003",
"remote_copy_targets": null,
"remove_secondary_volume": false,
"skip_initial_sync": true,
"skip_promote": false,
"skip_start": false,
"skip_sync": false,
"snapshot_name": null,
"source_port": null,
"starting_snapshots": [
{
"snapshotName": "Ansible_volume_SS_snap_01",
"volumeName": "Ansible_volume_01"
}
],
"state": "start",
"stop_groups": false,
"storage_system_ip": "192.168.67.7",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"target_mode": null,
"target_name": "CSSOS-SSA05",
"target_port_wwn_or_ip": null,
"unset_snap_cpg": false,
"unset_user_cpg": false,
"volume_auto_creation": false,
"volume_name": null
}
},
"msg": "Start Remote Copy Group failed | Bad request (HTTP 400) 44 - invalid input: parameters cannot be present at the same time"

Admit remote copy target shows a successful message although the user provides an invalid value of "targetVolumeName"

Steps to reproduce the issue:
Created volume "Ansible_volume_01" on source 192.168.67.7
Created volume "Ansible_volume_01" on first target CSSOS-SSA05
Created volume "Ansible_volume_01" on second target CSSOS-SSA06
Created RCG "ansible_CreateRCG-002" with target CSSOS-SSA05, with periodic mode.
Added volume "Ansible_volume_01" to RCG "ansible_CreateRCG-002"
Admit second target CSSOS-SSA06 with invalid targetVolumeName value.

Observation:
It is observed that the original issue is not reproduced, but another issue was found.
The new issue: after providing an invalid targetVolumeName, the admit target operation was successful and printed the message given below, but on 192.168.67.7 the new target CSSOS-SSA06 was not added:
"msg": "Admit remote copy target CSSOS-SSA06 successful in remote copy group ansible_CreateRCG-002."

Playbook used for reproducing the issue:

- name: Create volume on source
  hpe3par_volume:
    storage_system_ip: 192.168.67.7
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: present
    volume_name: Ansible_volume_01
    size: 1024
    size_unit: MiB
    cpg: FC_r1
    snap_cpg: FC_r1

- name: Create volume on target1
  hpe3par_volume:
    storage_system_ip: 192.168.67.5
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: present
    volume_name: Ansible_volume_01
    size: 1024
    size_unit: MiB
    cpg: FC_r1
    snap_cpg: FC_r1

- name: Create volume on target2
  hpe3par_volume:
    storage_system_ip: 192.168.67.6
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: present
    volume_name: Ansible_volume_01
    size: 1024
    size_unit: MiB
    cpg: FC_r1
    snap_cpg: FC_r1

- name: Create Remote Copy Group ansible_CreateRCG-002
  hpe3par_remote_copy:
    storage_system_ip: 192.168.67.7
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: present
    remote_copy_group_name: ansible_CreateRCG-002
    remote_copy_targets:
      - target_name: CSSOS-SSA05
        target_mode: periodic

- name: Add volume to remote copy group
  hpe3par_remote_copy:
    storage_system_ip: 192.168.67.7
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: add_volume
    remote_copy_group_name: ansible_CreateRCG-002
    volume_name: Ansible_volume_01
    admit_volume_targets:
      - target_name: CSSOS-SSA05
        sec_volume_name: Ansible_volume_01

- name: admit Remote Copy target
  hpe3par_remote_copy:
    storage_system_ip: 192.168.67.7
    storage_system_password: 3pardata
    storage_system_username: 3paradm
    state: admit_target
    remote_copy_group_name: ansible_CreateRCG-002
    target_name: CSSOS-SSA06
    target_mode: sync
    local_remote_volume_pair_list:
      - sourceVolumeName: Ansible_volume_01
        targetVolumeName: demo_volume

Ansible message:
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"admit_volume_targets": null,
"different_secondary_wwn": false,
"discard_new_data": false,
"domain": null,
"full_sync": false,
"keep_snap": false,
"local_groups_direction": false,
"local_remote_volume_pair_list": [
{
"sourceVolumeName": "Ansible_volume_01",
"targetVolumeName": "demo_volume"
}
],
"local_snap_cpg": null,
"local_user_cpg": null,
"modify_targets": null,
"no_resync_snapshot": false,
"no_snapshot": false,
"recovery_action": null,
"remote_copy_group_name": "ansible_CreateRCG-002",
"remote_copy_targets": null,
"remove_secondary_volume": false,
"skip_initial_sync": false,
"skip_promote": false,
"skip_start": false,
"skip_sync": false,
"snapshot_name": null,
"source_port": null,
"starting_snapshots": null,
"state": "admit_target",
"stop_groups": false,
"storage_system_ip": "192.168.67.7",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"target_mode": "sync",
"target_name": "CSSOS-SSA06",
"target_port_wwn_or_ip": null,
"unset_snap_cpg": false,
"unset_user_cpg": false,
"volume_auto_creation": false,
"volume_name": null
}
},
"msg": "Admit remote copy target CSSOS-SSA06 successful in remote copy group ansible_CreateRCG-002."
}

Remote copy group verification on source 192.168.67.7:
CSSOS-SSA04 cli% showrcopy groups ansible_CreateRCG-002

Remote Copy System Information
Status: Started, Normal

Group Information

Name Target Status Role Mode Options
ansible_CreateRCG-002 CSSOS-SSA05 New Primary Periodic over_per_alert
LocalVV ID RemoteVV ID SyncStatus LastSyncTime
Ansible_volume_01 55199 Ansible_volume_01 46405 New NA

CSSOS-SSA04 cli%

Create schedule task displays as successful when the base volume is not present on the array

  1. In the create schedule playbook, enter a volume name (snap and base volume) that does not exist on the array. The "create schedule" task displays as successful.

changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": null,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.5",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "0 * * * *"
}
},
"msg": "Created Schedule Ansible_schedule_01 successfully."

On the array the "showsched" cmd does not display the schedule
CSSOS-SSA06 cli% showsched
No scheduled tasks listed

[Feature Request] Module to enable the WebService Rest API

Summary
Up to now, the hpe3par_* modules require the WebServices REST API to be started, which is not the case by default. It would be great to have a module that enables the service for us.

UseCase
Provisioning of HPe 3PAR from bare-metal.

Current Implementation
With a rather ugly workaround, I'm able to ensure that the REST API will be accessible before running my playbooks with the hpe3par_* modules. But it's a pain because the password can't be managed properly and, in addition, it relies on a task using the command module, which should be avoided.

---
- name: Prepare the SAN HPe 3PAR
  hosts: localhost
  gather_facts: no

  vars:
    hpe3par_user: "user"
    hpe3par_ip: "10.0.0.1"
	
  tasks:
    - name: Ensure that the WebService API is enabled
      command: >
        ssh -oStrictHostKeyChecking=no {{ hpe3par_user }}@{{ hpe3par_ip }} startwsapi

    - name: Wait for the WebService API to be started
      wait_for:
        host: "{{ hpe3par_ip }}"
        port: 8080
        state: started

Create schedule fails when an invalid range is passed in the "task_freq_custom" field

Create a playbook with the hours range exceeding 24 hrs.

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: schedule_create
schedule_name: "Ansible_schedule_01"
snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "Ansible_volume_SS_01"
read_only: true
expiration_time: 2
expiration_unit: "Hours"
*task_freq_custom: "0 8-72 * * "

The create schedule task displays as successful:
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "0 8-72 * * *"
}
},
"msg": "Created Schedule Ansible_schedule_01 successfully."

The 3par "showsched" output list no schedules
CSSOS-SSA06 cli% showsched
No scheduled tasks listed

Create schedule fails when an invalid number of task schedule parameters (task_freq_custom) is set in the properties file

Create a property file with an invalid number of "Min, Hour, DOM, Month and DOW" fields in the task_freq_custom value:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: schedule_create
schedule_name: "Ansible_schedule_01"
snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "Ansible_volume_SS_01"
read_only: true
expiration_time: 2
expiration_unit: "Hours"
*task_freq_custom: "1 * * "

The playbook displays successful completion of the create schedule task:

changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "1 * * *"
}
},
"msg": "Created Schedule Ansible_schedule_01 successfully."
}

The array shows no schedules created

CSSOS-SSA06 cli% showsched
No scheduled tasks listed

Support for Application sets

Hi,

There are modules for VV creation and host creation, but there is no module for "Application sets".

Do these modules support Application set creation as well on HPE Primera?

Thanks
Gajanan

While admitting a target to an RCG, if the target volume name value in the 'local_remote_volume_pair_list' list is incorrect, an incorrect error msg is displayed

While admitting a target to an RCG, if the secondary volume name value in the 'local_remote_volume_pair_list' list is incorrect, an incorrect error msg is displayed.

Playbook:
storage_system_ip: "192.168.67.7"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
remote_copy_group_name: "ansible_CreateRCG-002"
state: admit_target
target_mode: "sync"
target_name: "CSSOS-SSA06"
local_remote_volume_pair_list:

  • sourceVolumeName: "Ansible_volume_01"
    targetVolumeName: "invalid_sec_vol"

Error msg:

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"admit_volume_targets": null,
"different_secondary_wwn": false,
"discard_new_data": false,
"domain": null,
"full_sync": false,
"keep_snap": false,
"local_groups_direction": false,
"local_remote_volume_pair_list": [
{
"sourceVolumeName": "Ansible_volume_01",
"targetVolumeName": "invalid_sec_vol"
}
],
"local_snap_cpg": null,
"local_user_cpg": null,
"modify_targets": null,
"no_resync_snapshot": false,
"no_snapshot": false,
"recovery_action": null,
"remote_copy_group_name": "ansible_CreateRCG-002",
"remote_copy_targets": null,
"remove_secondary_volume": false,
"skip_initial_sync": false,
"skip_promote": false,
"skip_start": false,
"skip_sync": false,
"snapshot_name": null,
"source_port": null,
"starting_snapshots": null,
"state": "admit_target",
"stop_groups": false,
"storage_system_ip": "192.168.67.7",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"target_mode": "sync",
"target_name": "CSSOS-SSA06",
"target_port_wwn_or_ip": null,
"unset_snap_cpg": false,
"unset_user_cpg": false,
"volume_auto_creation": false,
"volume_name": null
}
},
"msg": "Admit remote copy target failed| Admit remote copy target failed Error is 'e_volume_01:invalid_sec_vol\r' "
}

Create schedule task should not fail if user does not set retention time and unit parameters

In the properties file, do not set the retention time and unit. The retention time defaults to 0, but the create schedule task fails with the error below. Since the user has to set either the expiration or retention time (or both), the task should not fail if either one is not set.

Playbook
storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: schedule_create
schedule_name: "Ansible_schedule_01"
snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "Ansible_volume_SS_01"
read_only: true
expiration_time: 2
expiration_unit: "days"
retention_time:
retention_unit:
task_freq_custom: "0 8-17 * * *"

Task error
"invocation": {
"module_args": {
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": 2,
"expiration_unit": "days",
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq_custom": "0 8-17 * * *"
}
},
"msg": "value of retention_unit must be one of: Hours, Days, got: None"

Create schedule fails when "readonly" value is not specified in properties file

In the create schedule properties file, the readonly value is blank.

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: schedule_create
schedule_name: "Ansible_schedule_01"
snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "Ansible_volume_SS_01"
read_only:
expiration_time: 2
expiration_unit: "Hours"
task_freq_custom: "0 * * * *"

The create schedule task displays as successful:
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": null,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "0 * * * *"
}
},
"msg": "Created Schedule Ansible_schedule_01 successfully."
}

The array does not list the schedule
CSSOS-SSA06 cli% showsched
No scheduled tasks listed

"createsv" command help mentions that if the "readonly" property value is not specified, the default value should be set to "read/write"

OPTIONS
-ro
Specifies that the copied volume is read-only. If not specified, the
volume is read/write.

Modify schedule gives an incorrect error when an invalid input value is provided in task freq field

Provide an incorrect 'task_freq' value, e.g. a minutes field value > 60, and run the modify schedule playbook. The error below is displayed:

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": null,
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"new_schedule_name": "New_Ansible_schedule_01",
"priority": null,
"read_only": null,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": null,
"state": "modify_schedule",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": "88 * * * *"
}
},
"msg": "Error: An invalid schedule was given: 88 is not a valid value"

[Feature Request] File Persona support and CPG, VV provisioning options in 3PAR Modules

We have the following requirement to manage and configure 3PAR/File Persona using REST API's & Ansible:

  1. Looking for File Persona Modules in 3PAR Ansible modules provided in HewlettPackard/hpe3par_ansible_module. Currently, I don't see any such modules available.
  2. Also, I have observed that not all of the options available in the 3PAR CLI for CPG and VV creation are available in the 3PAR Ansible modules provided; for example, step size (-ss <size_KB>), growth increment (-sdgs), etc.

Thank you.

Admit RCG target worked successfully without an RCG link between the source & target array.

I tried to admit an RCG target to an array which was not linked to the source array. The expected result was a failure with a proper message about the missing link, but the TC passed successfully with the message "Admit target successful".

Dismiss link:-
CSSOS-SSA06 cli% dismissrcopylink CSSOS-SSA04 1:3:1:10.100.3.26

Property file to add RCG & target:


- hosts: localhost
  tasks:
    - name: 'Create rcg and admit remote copy target with valid RCG name'
      include_vars: 'properties/admit_rcg_target_ts02_properties.yml'

    - import_tasks: 'tasks/create_remote_copy_group_playbook.yml'

    - name: 'Admit target'
      include_vars: 'properties/admit_rcg_target_ts04_properties.yml'

    - import_tasks: 'tasks/admit_remote_copy_target_playbook.yml'

properties/admit_rcg_target_ts02_properties.yml:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: present
remote_copy_group_name: "ans_CreateRCG_t-001"
domain:
#target_name: "CSSOS-SSA05"
remote_copy_targets:

  • target_name: "CSSOS-SSA05"

- target_name: "CSSOS-SSA06"

target_mode: "periodic"

- userCPG:

- snapCPG:

keep_snap: false
local_user_cpg:
local_snap_cpg:

properties/admit_rcg_target_ts04_properties.yml:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
remote_copy_group_name: "ans_CreateRCG_t-001"
state: admit_target
target_mode: "sync"
target_name: "CSSOS-SSA04"
local_remote_volume_pair_list:

Please find attached log of testing.
admit_target_without_link_issue.txt

Adding volume pair to a remote copy group

While trying to add a volume pair to a remote copy group with the state "add_volume", the request seems to be unresponsive.

Playbook:

- name: Add volume to remote copy group
  hpe3par_remote_copy:
    storage_system_ip: "{{ storage_system_ip }}"
    storage_system_username: "{{ storage_system_username }}"
    storage_system_password: "{{ storage_system_password }}"
    state: add_volume
    remote_copy_group_name: "{{ rcg_name }}"
    volume_name: "{{ volume_name }}"
    admit_volume_targets:
      - target_name: "{{ 3par_target }}"
        sec_volume_name: "{{ secondary_volume_name }}"

This issue occurs whether the RCG was newly created earlier in this playbook or already existed, and whether its state is stopped or started.

Trying to run the API request from Postman gives me the same result; nothing happens.
https://{{ storage_system_ip }}:{{ port }}/api/v1/remotecopygroups/{{ rcg_name }}/volumes
with body: {"volumeName": "{{ volume_name }}", "targets": [{"targetName": "{{ 3par_target }}", "secVolumeName": "{{ secondary_volume_name }}"}]}

Error message changes for deco operations on Primera

When type is full:
"msg": "Volume creation failed | Bad request (HTTP 400) - invalid input: Either tpvv must be true OR for compressed and deduplicated volumes both 'compression' and 'tdvv' must be specified as true"

"msg": "Volume creation failed | Bad request (HTTP 400) - invalid input: On Primera for tpvv volume, 'type' must be set to thin, and for compressed and deduplicated volumes 'compression' must be set to true and 'type' must be specified as thin_dedupe"

type:thin_dedupe compression:false

"msg": "Volume creation failed | Bad request (HTTP 400) - invalid input: For compressed and deduplicated volumes both 'compression' and 'tdvv' must be specified as true"

}

"msg": "Volume creation failed | Bad request (HTTP 400) - invalid input: For compressed and deduplicated volumes 'compression' must be specified as true" and type value should be thin_dedupe

}

convert from thin and compression=false to thin and compression set to true

"msg": "Provisioning type change failed | Bad request (HTTP 400) - invalid input: On primera supported array along with compression set to true 'conversionOperation' must be 3(TDVV) or for deco operation user can set 'conversionOperation' to 4(CONVERT_TO_DECO)"

"msg": "Provisioning type change failed | Bad request (HTTP 400) - invalid input: On primera array, with compression set to true 'conversionOperation' must be have type set to "thin_dedupe"

playbook hpe3par_host offers no options for Host Operating System

Strictly this is not an issue, merely a question...I have been unable to contact the creator of the playbook....

The playbook hpe3par_host offers no options for Host Operating System, and I cannot get information on its relevance. One can specify the persona and assume that the associated operating system is in line with the persona.

Please advise.
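
For what it's worth, the persona is the closest control the module exposes for the host operating system. Below is a minimal sketch, assuming the hpe3par_host module's host_persona parameter and the VMWARE persona value (taken from the module documentation rather than verified here), with an illustrative host name:

- name: Create host with a persona matching its operating system (sketch)
  hpe3par_host:
    storage_system_ip: "{{ storage_system_ip }}"
    storage_system_username: "{{ storage_system_username }}"
    storage_system_password: "{{ storage_system_password }}"
    state: present
    host_name: "esx-host-01"       # illustrative host name
    host_persona: VMWARE           # the persona implies the host OS / multipathing behaviour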

Error message not justifying exact cause of failure, in case of invalid target during RCG creation.

While creating an RCG with an invalid target name, the error states "Specified target is not a target of remote copy group", which does not specify the exact problem.

3PAR error states: "Could not find target CSSOS-SSA"

Please find the log below for the same.

Property file:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: present
remote_copy_group_name: "ansible_CreateRCG-003"
domain: "test_domain"
remote_copy_targets:

  • target_name: "InvalidTargetName"
    target_mode: "sync"
    user_cpg:
    snap_cpg:
    local_user_cpg:
    local_snap_cpg:
    keep_snap: false

Please find the attached log for the same.
invalid_rcg_error_issue.txt

Error message displayed on failure should not be generic

While adding a volume to a remote copy group, if the secondary volume name is invalid (i.e., greater than 32 characters), the error displayed is generic.

fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"admit_volume_targets": [
{
"sec_volume_name": "Sec_Ansible_volume_01ccccccccccccccccccc",
"target_name": "CSSOS-SSA05"
}
],
"different_secondary_wwn": null,
"discard_new_data": false,
"domain": null,
"full_sync": false,
"keep_snap": false,
"local_groups_direction": false,
"local_remote_volume_pair_list": null,
"local_snap_cpg": null,
"local_user_cpg": null,
"modify_targets": null,
"no_resync_snapshot": false,
"no_snapshot": false,
"recovery_action": null,
"remote_copy_group_name": "ansible_CreateRCG-002",
"remote_copy_targets": null,
"remove_secondary_volume": false,
"skip_initial_sync": true,
"skip_promote": false,
"skip_start": false,
"skip_sync": false,
"snapshot_name": null,
"source_port": null,
"starting_snapshots": null,
"state": "add_volume",
"stop_groups": false,
"storage_system_ip": "192.168.67.7",
"storage_system_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"target_mode": null,
"target_name": null,
"target_port_wwn_or_ip": null,
"unset_snap_cpg": false,
"unset_user_cpg": false,
"volume_auto_creation": true,
"volume_name": "Ansible_volume_01"
}
},
"msg": "Remote Copy Group modify failed | Bad request (HTTP 400) 57 - invalid input: string length exceeds limit"

The error message should clearly indicate the field causing the issue

type object 'HPE3ParClient' has no attribute 'getPortNumber'

Ansible version: 2.8.2
Python version: 3.6.3
hpe3par-sdk: 1.2.1
python-3parclient: 4.2.11

When trying to create an online clone against Primera, I receive the following error message using the latest modules and SDK. This also fails against a 3PAR in the same manner. Older versions work with 3PAR, but not Primera.

The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 114, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 106, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 49, in invoke_module
    imp.load_module('__main__', mod, module, MOD_DESC)
  File "/tmp/ansible_hpe3par_online_clone_payload_FkwD1j/__main__.py", line 400, in <module>
  File "/tmp/ansible_hpe3par_online_clone_payload_FkwD1j/__main__.py", line 370, in main
AttributeError: type object 'HPE3ParClient' has no attribute 'getPortNumber'

fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 114, in \n _ansiballz_main()\n File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/root/.ansible/tmp/ansible-tmp-1582743385.0328465-6801247984569/AnsiballZ_hpe3par_online_clone.py", line 49, in invoke_module\n imp.load_module('main', mod, module, MOD_DESC)\n File "/tmp/ansible_hpe3par_online_clone_payload_FkwD1j/main.py", line 400, in \n File "/tmp/ansible_hpe3par_online_clone_payload_FkwD1j/main.py", line 370, in main\nAttributeError: type object 'HPE3ParClient' has no attribute 'getPortNumber'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}

Here is the task I am running in the playbook:

- name: Create Clone clone_volume_ansible
  hpe3par_online_clone:
    storage_system_ip="10.10.253.190"
    storage_system_username="3paradm"
    storage_system_password="3pardata"
    state=present
    clone_name="{{osflavor}}"
    base_volume_name="{{osbase}}"
    dest_cpg="SSD_r6"
    tpvv=false
    tdvv=False
    compression=False
    snap_cpg="SSD_r6"

Unable to create a schedule without retention value & unit

I am unable to create a schedule without the retention unit & value; please find the details below.

Property file:

storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "3pardata"
state: create_schedule
schedule_name: "Ansible_schedule_01"
#snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "An_vol_SS_01"
read_only:
expiration_time: 2
expiration_unit: Hours
#task_freq_custom: "8 * * * *"
task_freq: "8 * * * *"

task file:


- name: Create schedule
  hpe3par_snapshot:
    storage_system_ip="{{ storage_system_ip }}"
    storage_system_username="{{ storage_system_username }}"
    storage_system_password="{{ storage_system_password }}"
    state=create_schedule
    schedule_name="{{ schedule_name }}"
    base_volume_name="{{ base_volume_name }}"
    read_only="{{ read_only }}"
    expiration_time="{{ expiration_time }}"
    expiration_unit="{{ expiration_unit }}"
    task_freq="{{ task_freq }}"

Playbook:


- hosts: localhost
  tasks:
    - name: 'CREATERCG:TS01, Create a volume and snapshot'
      include_vars: 'properties/create_schedule_ts02_properties.yml'

    - import_tasks: 'tasks/create_vv_playbook.yml'

    - name: 'Create a schedule to trigger every hour'
      include_vars: 'properties/create_schedule_ts01_properties.yml'

    - import_tasks: 'tasks/create_schedule_playbook.yml'

    - pause:
        prompt: "Press Enter"

    - import_tasks: 'tasks/delete_schedule_playbook.yml'

Please find attached test log.
scheduling_issue.txt

Please update requirements

hi,

in your requirements you have:

Primera OS

4.0.0

Primera is currently on 4.2.2. After asking HPE support, I got this answer:

I inform you that the 4.0.0 version was never a release customer version. The document refers to all 4.x.x version and you can use it with 4.2.2 version.

So please update to avoid any questions.
