ansible-collections / community.aws
Ansible Collection for Community AWS
License: GNU General Public License v3.0
Batch compute environment fails in check mode when there are updates to make
aws_batch_compute_environment
Slightly modified to remove sensitive info
ansible 2.9.9
config file = ./ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = .../.venv/lib/python3.7/site-packages/ansible
executable location = .../.venv/bin/ansible
python version = 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
Slightly modified to remove sensitive info
DEFAULT_JINJA2_NATIVE = True
DEFAULT_STDOUT_CALLBACK = debug
HOST_KEY_CHECKING = False
Ubuntu 20.04 64bit
Run the module in check mode when a compute environment already exists but with a slightly different configuration
- name: Batch compute environment
  aws_batch_compute_environment:
    compute_environment_name: "{{ aws_batch_compute_environment_name }}"
    state: present
    compute_environment_state: ENABLED
    type: MANAGED
    compute_resource_type: EC2
    minv_cpus: 0
    maxv_cpus: "{{ batch_max_cpus }}"
    desiredv_cpus: "{{ batch_desired_cpus }}"
    instance_types:
      - i3
    subnets:
      - "{{ private_subnets.subnets.0.id }}"
    security_group_ids:
      - "{{ batch_security_groups.security_groups.0.group_id }}"
    instance_role: "{{ aws_batch_ecs_instance_role_name }}"
    tags:
      Project: "{{ project_tag }}"
    service_role: "{{ aws_batch_service_role.iam_role.arn }}"
  register: compute_environment
Expected: the module reports changed but does not fail.
Actual: the module fails with an error:
TASK [aws_batch : Batch compute environment] *****************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false
}
MSG:
Unable to get compute environment information after creating
Needed for Content Collection v1.0
The "Big Migration" has now taken place.
As this collection already exists, we need to carefully check to see if any further commits went into devel since this repo was created.
Please check the contents of https://github.com/ansible-collection-migration/community.grafana against this repo
In particular:
Follow up from ansible-collections/amazon.aws#106 / ansible-collections/amazon.aws#107
Various _info modules currently have no support for check_mode.
$ grep -rL check_mode plugins/modules/*_info.py
aws_region_info
aws_sgw_info
ec2_asg_info
ec2_lc_info
iam_mfa_device_info
iam_server_certificate_info
Simplest option: since they make no changes, it's reasonable for them to simply support check_mode and ignore its value, for consistency with our other modules.
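The simplest option amounts to one extra flag where the module object is built. A minimal sketch (the argument_spec here is hypothetical; real modules pass their full spec to AnsibleModule):

```python
def build_module_kwargs(argument_spec):
    """Sketch of the proposed one-line fix for the _info modules listed above."""
    return dict(
        argument_spec=argument_spec,
        # _info modules make no changes, so they can declare check_mode
        # support and simply ignore its value at runtime.
        supports_check_mode=True,
    )

kwargs = build_module_kwargs(dict(name=dict(type="str")))
```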
Moving this ticket from ansible-base.
If I try to create an IAM role, I can. When I run the task a second time, it fails, because I don't have iam:UpdateAssumeRolePolicy permissions in my IAM role. But if the role policy document hasn't changed, I shouldn't need iam:UpdateAssumeRolePolicy.
iam_role
$ ansible --version
ansible 2.9.0
config file = /home/ec2-user/.ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/.local/lib/python3.6/site-packages/ansible
executable location = /home/ec2-user/.local/bin/ansible
python version = 3.6.10 (default, Feb 10 2020, 19:55:14) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Note that I tried to reproduce this off the devel branch, but got:
ERROR! couldn't resolve module/action 'iam_role'. This often indicates a misspelling, missing collection, or incorrect module path.
It seems that all the cloud modules have been removed from devel? Is that deliberate?
ANSIBLE_PIPELINING(/home/ec2-user/.ansible.cfg) = True
DEFAULT_LOCAL_TMP(/home/ec2-user/.ansible.cfg) = /dev/shm/ansible/tmp_local/ansible-local-12013vjaxtg0x
Amazon Linux
---
- hosts: localhost
  connection: local
  tasks:
    - name: "Create role for SMS logging"
      iam_role:
        name: SNSSMSDeliveryStatusLogging
        assume_role_policy_document:
          Statement:
            - Action:
                - "sts:AssumeRole"
              Effect: Allow
              Principal:
                Service:
                  - "sns.amazonaws.com"
        managed_policy:
          # let SNS log to CloudWatch
          - "arn:aws:iam::aws:policy/service-role/AmazonSNSRole"
        boundary: "arn:aws:iam::aws:policy/PowerUserAccess" # should be "{{ boundary_policy_arn }}"
        create_instance_profile: False # must be false when assigning a boundary policy
Run this playbook twice, as an IAM role with iam:UpdateAssumeRolePolicy denied.
The playbook should succeed: the first run creates the role, and the second run does nothing.
The first run successfully creates the role. When I try the second time:
TASK [Create role for SMS logging] *****************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the UpdateAssumeRolePolicy operation: User: arn:aws:sts::123456:assumed-role/deployer/i-abc is not authorized to perform: iam:UpdateAssumeRolePolicy on resource: role SNSSMSDeliveryStatusLogging
fatal: [localhost]: FAILED! => changed=false
  error:
    code: AccessDenied
    message: 'User: arn:aws:sts::123456:assumed-role/deployer/i-abc is not authorized to perform: iam:UpdateAssumeRolePolicy on resource: role SNSSMSDeliveryStatusLogging'
    type: Sender
  msg: 'Unable to update assume role policy for role SNSSMSDeliveryStatusLogging: An error occurred (AccessDenied) when calling the UpdateAssumeRolePolicy operation: User: arn:aws:sts::123456:assumed-role/deployer/i-abcd is not authorized to perform: iam:UpdateAssumeRolePolicy on resource: role SNSSMSDeliveryStatusLogging'
  response_metadata:
    http_headers:
      content-length: '420'
      content-type: text/xml
      date: Fri, 19 Jun 2020 05:46:33 GMT
      x-amzn-requestid: 576771dd-620d-4ed9-b3e1-d9638f879437
    http_status_code: 403
    request_id: 576771dd-620d-4ed9-b3e1-d9638f879437
    retry_attempts: 0
I wondered whether it's because assume_role_policy_document converts the YAML to JSON in a non-deterministic way. When I extracted that policy into JSON and used lookup('file', 'policy.json'), the result was the same. So I don't think that's the cause.
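One way the module could avoid the unnecessary UpdateAssumeRolePolicy call is to diff the documents before updating. A hedged sketch (assume_role_policy_changed is a hypothetical helper, not iam_role's actual code; IAM returns the current document as URL-encoded JSON text, while the desired one arrives from the task as a mapping):

```python
import json

def assume_role_policy_changed(desired_doc, current_doc_json):
    """Return True only when the desired assume-role policy actually
    differs from the one IAM reports, so the iam:UpdateAssumeRolePolicy
    permission is only needed when something changed (hypothetical helper)."""
    try:
        current = json.loads(current_doc_json)
    except (TypeError, ValueError):
        return True  # cannot compare, fall back to attempting an update
    # dict equality ignores key order, so formatting differences don't matter
    return desired_doc != current

desired = {"Statement": [{"Action": ["sts:AssumeRole"],
                          "Effect": "Allow",
                          "Principal": {"Service": ["sns.amazonaws.com"]}}]}
# Same policy, keys serialised in a different order:
current = json.dumps({"Statement": [{"Effect": "Allow",
                                     "Action": ["sts:AssumeRole"],
                                     "Principal": {"Service": ["sns.amazonaws.com"]}}]})
unchanged = not assume_role_policy_changed(desired, current)
```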
At the time of writing, 11 issues had been raised; none of them referred to ec2_win_password. There is only one branch, and it is affected by the issue.
ec2_win_password fails for given key_data if module is executed with python3. It works with python2.
ec2_win_password
Ansible 2.9.5
We have a pem file which is saved in a variable with linebreaks.
If we have a task like this
- name: get admin pws for systems
  ec2_win_password:
    region: "{{ aws_region }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    security_token: "{{ security_token }}"
    instance_id: "{{ item }}"
    key_data: "{{ sshkeyplain }}"
    key_passphrase: "{{ passphrase }}"
    wait: yes
  no_log: true
  register: passwords
  loop: "{{ system_ids }}"
If you execute this module with python2 backend it works.
If you execute this module with python3 backend it fails. ("unable to parse key data")
I expect the key_data to be parsed correctly with python3 backend.
The error occurs when calling load_pem_private_key directly under plain Python 3:
key = load_pem_private_key(key_data, b_key_passphrase, default_backend())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/var/lib/awx/venv/ansible/lib/python3.6/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 16, in load_pem_private_key
    return backend.load_pem_private_key(data, password)
  File "/var/lib/awx/venv/ansible/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1089, in load_pem_private_key
    password,
  File "/var/lib/awx/venv/ansible/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1282, in _load_key
    mem_bio = self._bytes_to_bio(data)
  File "/var/lib/awx/venv/ansible/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 473, in _bytes_to_bio
    data_ptr = self._ffi.from_buffer(data)
TypeError: from_buffer() cannot return the address of a unicode object
Our tests showed that explicit encoding is compatible with python2 and python3. Please verify.
key = load_pem_private_key(key_data.encode("ascii"), b_key_passphrase, default_backend())
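The reporter's fix can also be written to tolerate both input types. A small sketch (ensure_key_bytes is a hypothetical helper; load_pem_private_key is from the cryptography package and requires bytes on Python 3):

```python
def ensure_key_bytes(key_data):
    """Encode str key material to bytes so it can be passed to
    cryptography's load_pem_private_key under Python 3 as well as 2.
    PEM is ASCII-armored, so 'ascii' is a safe codec here."""
    if isinstance(key_data, str):
        return key_data.encode("ascii")
    return key_data  # already bytes, pass through unchanged

pem_text = "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n"
as_bytes = ensure_key_bytes(pem_text)
```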
The dynamodb_table module doesn't use AWS_URL as described in the documentation, which prevents us from running it against LocalStack.
dynamodb_table
ansible 2.9.3
config file = None
configured module search path = ['{{HOME}}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.1 (default, Dec 27 2019, 18:06:00) [Clang 11.0.0 (clang-1100.0.33.16)]
empty
macOS Catalina
Running the following playbook
---
- hosts: localhost
  remote_user: bob
  tasks:
    - name: Create the Tank table
      dynamodb_table:
        name: Tank
        hash_key_name: id
        hash_key_type: STRING
        read_capacity: 1
        write_capacity: 1
using the following command
AWS_URL=http://localhost:4566 AWS_REGION=us-east-1 AWS_ACCESS_KEY_ID=NONE_FOR_LOCAL AWS_SECRET_ACCESS_KEY=NONE_FOR_LOCAL ansible-playbook ./playbook-dynamo.yml
Expected a DynamoDB table to be created in LocalStack, since AWS_URL points to it.
Instead the command fails with the following output; it looks like the module still hits the default AWS endpoints instead of AWS_URL:
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Create the Tank table] ***************************************************
fatal: [localhost]: FAILED! => {"changed": false, "hash_key_name": "id", "hash_key_type": "STRING", "indexes": [], "msg": "Failed to create/update dynamo table due to error: Traceback (most recent call last):\n File \"/var/folders/y7/bpszvwgj7lv54vjwp99lgg2h0000gn/T/ansible_dynamodb_table_payload_kapu_np8/ansible_dynamodb_table_payload.zip/ansible/modules/cloud/amazon/dynamodb_table.py\", line 213, in create_or_update_dynamo_table\n File \"/var/folders/y7/bpszvwgj7lv54vjwp99lgg2h0000gn/T/ansible_dynamodb_table_payload_kapu_np8/ansible_dynamodb_table_payload.zip/ansible/modules/cloud/amazon/dynamodb_table.py\", line 292, in dynamo_table_exists\n File \"/var/folders/y7/bpszvwgj7lv54vjwp99lgg2h0000gn/T/ansible_dynamodb_table_payload_kapu_np8/ansible_dynamodb_table_payload.zip/ansible/modules/cloud/amazon/dynamodb_table.py\", line 285, in dynamo_table_exists\n File \"/usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/boto/dynamodb2/table.py\", line 356, in describe\n result = self.connection.describe_table(self.table_name)\n File \"/usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/boto/dynamodb2/layer1.py\", line 977, in describe_table\n return self.make_request(action='DescribeTable',\n File \"/usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/boto/dynamodb2/layer1.py\", line 2840, in make_request\n response = self._mexe(http_request, sender=None,\n File \"/usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/boto/connection.py\", line 954, in _mexe\n status = retry_handler(response, i, next_sleep)\n File \"/usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/boto/dynamodb2/layer1.py\", line 2884, in _retry_handler\n raise self.ResponseError(response.status, response.reason,\nboto.exception.JSONResponseError: JSONResponseError: 400 Bad Request\n{'__type': 'com.amazon.coral.service#UnrecognizedClientException', 'message': 'The security token included in the request is invalid.'}\n", 
"range_key_name": null, "range_key_type": "STRING", "read_capacity": 1, "region": "us-east-1", "table_name": "Tank", "write_capacity": 1}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
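The traceback shows the module still goes through the legacy boto library (boto.dynamodb2), which ignores AWS_URL. With boto3, honouring the variable is a matter of passing endpoint_url when building the client. A hedged sketch (client_kwargs_from_env is hypothetical; endpoint_url and region_name are real boto3.client parameters):

```python
import os

def client_kwargs_from_env():
    """Build keyword arguments for boto3.client(), honouring an
    optional AWS_URL endpoint override (e.g. LocalStack)."""
    kwargs = {}
    endpoint = os.environ.get("AWS_URL")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    region = os.environ.get("AWS_REGION")
    if region:
        kwargs["region_name"] = region
    return kwargs

# Usage would look like: boto3.client("dynamodb", **client_kwargs_from_env())
```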
Moving issue #49019 from Ansible repository:
In module "route53_health_check" it would be helpful to be able to deactivate health checks.
My use case is to disable/deactivate health checks during a deployment and afterwards reenable them.
In the AWS Route 53 console it can be done via "Advanced Configuration" by checking "Disable health check". This option considers the health check positive as long as it is deactivated.
route53_health_check
I think the simplest way would be to add a state "disabled", so new health checks can be created in a disabled state and existing health checks can be disabled.
The other option would be to add a "disabled" flag alongside the "present" state.
- route53_health_check:
    state: disabled
    fqdn: host1.example.com
    type: HTTPS
    resource_path: /
  register: my_health_check
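For reference, the Route 53 API exposes this as a boolean Disabled field on HealthCheckConfig, so a "disabled" state could map onto it roughly like this (health_check_config is a hypothetical helper, not module code):

```python
def health_check_config(state, fqdn, check_type, resource_path):
    """Map a proposed 'disabled' state onto Route 53's HealthCheckConfig.
    Disabled=True makes Route 53 stop probing and treat the check as
    healthy, matching the console's "Disable health check" option."""
    return {
        "FullyQualifiedDomainName": fqdn,
        "Type": check_type,
        "ResourcePath": resource_path,
        "Disabled": state == "disabled",
    }
```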
Ansible contained lookups for AWS, like aws_secret: https://github.com/ansible/ansible/blob/stable-2.8/lib/ansible/plugins/lookup/aws_secret.py
Is there a reason for not having them in the collection?
Moving issue from Ansible repository.
route53 module
In some scenarios, the volume_size is being treated as a string instead of an integer. This is causing an "Invalid type" error for that parameter.
ec2_launch_template
ansible 2.9.9
config file = /Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg
configured module search path = ['/Users/dlee/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/dlee/prj/cloud-automation/aws-infrastructure/build/virtualenv/lib/python3.7/site-packages/ansible
executable location = /Users/dlee/prj/cloud-automation/aws-infrastructure/build/virtualenv/bin/ansible
python version = 3.7.7 (default, Mar 10 2020, 15:43:33) [Clang 11.0.0 (clang-1100.0.33.17)]
DEFAULT_FORCE_HANDLERS(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = True
DEFAULT_HOST_LIST(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = ['/Users/dlee/prj/cloud-automation/aws-infrastructure/inventory.ini']
DEFAULT_ROLES_PATH(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = ['/Users/dlee/prj/cloud-automation/aws-infrastructure/build/galaxy']
DEFAULT_STDOUT_CALLBACK(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = yaml
RETRY_FILES_ENABLED(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = False
macOS host
Create a role with the following two tasks, then execute that role.
---
- set_fact:
    some_configs:
      - template_name: some-template-1
        volume_size: 10
      - template_name: some-template-2
        volume_size: 20

- name: ec2_launch_template some-template-X
  ec2_launch_template:
    template_name: "{{ item.template_name }}"
    instance_type: t3.nano
    block_device_mappings:
      - device_name: "/dev/sda1"
        ebs:
          volume_size: "{{ item.volume_size | int }}"
          volume_type: gp2
    image_id: ami-068663a3c619dd892
  with_items: "{{ some_configs }}"
Should create launch template successfully.
TASK [provision-switchvoxen-recovery-launch-configs : ec2_launch_template some-template-X] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter LaunchTemplateData.BlockDeviceMappings[0].Ebs.VolumeSize, value: 10, type: <class 'str'>, valid types: <class 'int'>
failed: [localhost] (item={'template_name': 'some-template-1', 'volume_size': 10}) => changed=false
  ansible_loop_var: item
  boto3_version: 1.13.7
  botocore_version: 1.16.7
  item:
    template_name: some-template-1
    volume_size: 10
  msg: |-
    Couldn't create launch template: Parameter validation failed:
    Invalid type for parameter LaunchTemplateData.BlockDeviceMappings[0].Ebs.VolumeSize, value: 10, type: <class 'str'>, valid types: <class 'int'>
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter LaunchTemplateData.BlockDeviceMappings[0].Ebs.VolumeSize, value: 20, type: <class 'str'>, valid types: <class 'int'>
failed: [localhost] (item={'template_name': 'some-template-2', 'volume_size': 20}) => changed=false
  ansible_loop_var: item
  boto3_version: 1.13.7
  botocore_version: 1.16.7
  item:
    template_name: some-template-2
    volume_size: 20
  msg: |-
    Couldn't create launch template: Parameter validation failed:
    Invalid type for parameter LaunchTemplateData.BlockDeviceMappings[0].Ebs.VolumeSize, value: 20, type: <class 'str'>, valid types: <class 'int'>
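Module-side, the crash could be avoided by coercing fields that boto3 validates as integers before LaunchTemplateData is built, since templated values like "{{ item.volume_size | int }}" can still arrive as strings. A sketch under that assumption (coerce_ebs_ints is hypothetical; volume_size and iops are the module's real snake_case parameter names):

```python
def coerce_ebs_ints(block_device_mappings):
    """Cast EBS fields that boto3 requires as int, in place.
    Hypothetical defensive helper, not ec2_launch_template's actual code."""
    for mapping in block_device_mappings:
        ebs = mapping.get("ebs") or {}
        for field in ("volume_size", "iops"):
            if ebs.get(field) is not None:
                ebs[field] = int(ebs[field])
    return block_device_mappings

bdm = [{"device_name": "/dev/sda1",
        "ebs": {"volume_size": "10", "volume_type": "gp2"}}]
coerce_ebs_ints(bdm)
```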
Python 3.9 needs xml.etree.ElementTree to be used directly rather than importing xml.etree.cElementTree. We're getting import errors from botocore in unit tests:
https://app.shippable.com/github/ansible-collections/community.aws/runs/188/6/console
There's a patch upstream that could close the botocore issue:
boto/botocore#2002
tests/units/
master
shippable
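Until botocore's patch lands, the portable import pattern looks like this (a generic sketch of the try/except fallback, not botocore's actual code):

```python
try:
    # C-accelerated alias on Python 2 and early 3.x; removed in Python 3.9
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET

# Either import path exposes the same API:
root = ET.fromstring("<ok>1</ok>")
```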
I've written a schema for DOCUMENTATION, RETURN, etc. There are a few problems in the structure of the return docs on a pair of modules:
E module -> community.aws.cloudwatchlogs_log_group_metric_filter -> return -> metric_filters -> contains -> creation_time
E none is not an allowed value (type=type_error.none.not_allowed)
E module -> community.aws.cloudwatchlogs_log_group_metric_filter -> return -> metric_filters -> contains -> filter_name
E none is not an allowed value (type=type_error.none.not_allowed)
E module -> community.aws.cloudwatchlogs_log_group_metric_filter -> return -> metric_filters -> contains -> filter_pattern
E none is not an allowed value (type=type_error.none.not_allowed)
E module -> community.aws.cloudwatchlogs_log_group_metric_filter -> return -> metric_filters -> contains -> log_group_name
E none is not an allowed value (type=type_error.none.not_allowed)
E module -> community.aws.cloudwatchlogs_log_group_metric_filter -> return -> metric_filters -> contains -> metric_filter_count
E none is not an allowed value (type=type_error.none.not_allowed)
E module -> community.aws.ecs_service -> doc -> options -> placement_constraints -> suboptions
E none is not an allowed value (type=type_error.none.not_allowed)
Since this schema is new, it is possible that None should be allowed in one or all of the locations listed. If so, let me know and we can discuss whether/how to update the schema to allow that.
On running the cloudformation_stack_set module for the first time, it successfully creates the stack set and deploys the stack instances. On future runs of the play with nothing changed, an additional update operation is run and changed is returned as true.
Each run creates a new operation in the StackSets console, again despite nothing being changed.
cloudformation_stack_set AWS module
ansible 2.9.2
config file = None
configured module search path = ['/home/naslanidis/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/naslanidis/code/github/aws-organizations-poc/env/lib/python3.6/site-packages/ansible
executable location = /home/naslanidis/code/github/aws-organizations-poc/env/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
$ ansible-config dump --only-changed
$
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
- name: Create test stackset
  cloudformation_stack_set:
    name: iam-core-stackset.yml
    description: Test stack in two accounts
    region: ap-southeast-2
    state: present
    capabilities: CAPABILITY_NAMED_IAM
    template: "{{role_path}}/templates/iam-core-stackset.yml"
    accounts: [12345679878]
    regions:
      - ap-southeast-2
On first run it works fine:
TASK [baseline-stacksets : Create test core stackset] *****************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
On a second run without changes it should report changed=0 and change nothing.
On the 2nd (and any future) run of the play:
TASK [baseline-stacksets : Create test core stackset] *****************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Each time I run the module a new stackset operation is created. This should not happen when nothing has changed in the task.
It's possible this was a design choice, but essentially no comparison is made between the existing stack set's facts and the arguments in the Ansible task. In the code below, if there's no existing_stack_set, one is created; otherwise update_stack_set is called unconditionally. There should be some comparison between the stack_params dict and the existing_stack_set dict to determine whether an update is required.
if state == 'present':
    if not existing_stack_set:
        # on create this parameter has a different name, and cannot be referenced later in the job log
        stack_params['ClientRequestToken'] = 'Ansible-StackSet-Create-{0}'.format(operation_uuid)
        changed = True
        create_stack_set(module, stack_params, cfn)
    else:
        stack_params['OperationId'] = 'Ansible-StackSet-Update-{0}'.format(operation_uuid)
        operation_ids.append(stack_params['OperationId'])
        if module.params.get('regions'):
            stack_params['OperationPreferences'] = get_operation_preferences(module)
        changed |= update_stack_set(module, stack_params, cfn)
AWS CloudFormation nested stacks provide an easy way to organize a project into components and to reuse them. Nested stacks are currently not supported in Ansible.
cloudformation
cloudformation_info
CloudFormation nested stacks are recommended by AWS; as a project grows, their usage becomes inevitable. To use Ansible with a CloudFormation root stack (a template that contains a nested stack), users currently need to break down the root stack and organize the nested stacks into the playbook. (Importing a nested stack in my playbook, I get the error: TemplateURL must be an Amazon S3 URL.) This is just extra effort and inconvenient.
If this were supported, users would only need to import the root stack's template file and the rest would be handled internally (no need to take nested stacks out and put them in the playbook).
- name: create a cloudformation root stack
  cloudformation:
    stack_name: "root-stack"
    state: "present"
    region: "us-east-1"
    disable_rollback: true
    template: "files/root-stack.json"
    template_parameters:
While attempting to use an SSM connection where the S3 bucket has KMS server-side encryption enabled, the existing code returns an InvalidArgument response (400 status code) with the message body:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
aws_ssm connection plugin
ansible 2.9.10
config file = elided/ansible/ansible.cfg
configured module search path = ['elided/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /elided/.direnv/python-3.8.2-venv/lib/python3.8/site-packages/ansible
executable location = /elided/.direnv/python-3.8.2-venv/bin/ansible
python version = 3.8.2 (default, May 9 2020, 20:43:08) [GCC 9.3.0]
CACHE_PLUGIN(/elided/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/elided/ansible/ansible.cfg) = .cache/facts
COLLECTIONS_PATHS(/elided/ansible/ansible.cfg) = ['/elided/ansible/collections', '/elided/ansible/galaxy_collections']
DEFAULT_GATHERING(/elided/ansible/ansible.cfg) = smart
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = True
DEFAULT_ROLES_PATH(/elided/ansible/ansible.cfg) = ['/elided/ansible/roles', '/elided/ansible/galaxy_roles']
INJECT_FACTS_AS_VARS(/elided/ansible/ansible.cfg) = False
INVENTORY_CACHE_ENABLED(/elided/ansible/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN_CONNECTION(/elided/ansible/ansible.cfg) = .cache/inventory
RETRY_FILES_ENABLED(/elided/ansible/ansible.cfg) = False
Controller: Ubuntu 20.04 running ansible inside a python venv with boto3 1.14.16, botocore 1.17.16
Target: Amazon Linux 2 with SSM.
aws ssm start-session --target <id>
- hosts: all
  collections:
    - community.aws
  vars:
    ansible_connection: community.aws.aws_ssm
    ansible_aws_ssm_region: your-region
    ansible_aws_ssm_bucket_name: 'your-bucket-name'
  tasks:
    - shell: echo "Hello World"
Expecting "Hello world" to be reported by Ansible.
Ansible uploads the AnsiballZ_setup.py file successfully, but on retrieval by curl within the target system the file is replaced by the body of the failed request. In this case the following content is returned:
<i-065641ea2afa0e2b8> EXEC /usr/bin/python /home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py
<i-065641ea2afa0e2b8> _wrap_command: 'echo mZupBuwTQWDOoanBihZsCvLbXx
sudo /usr/bin/python /home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py
echo $'\n'$?
echo DEVDTjikvphuhKknRHPKGhAOkS
'
<i-065641ea2afa0e2b8> EXEC stdout line: mZupBuwTQWDOoanBihZsCvLbXx
<i-065641ea2afa0e2b8> EXEC stdout line: File "/home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py", line 1
<i-065641ea2afa0e2b8> EXEC stdout line: <?xml version="1.0" encoding="UTF-8"?>
<i-065641ea2afa0e2b8> EXEC stdout line: ^
<i-065641ea2afa0e2b8> EXEC stdout line: SyntaxError: invalid syntax
<i-065641ea2afa0e2b8> EXEC stdout line:
<i-065641ea2afa0e2b8> EXEC stdout line: 1
<i-065641ea2afa0e2b8> EXEC stdout line: DEVDTjikvphuhKknRHPKGhAOkS
<i-065641ea2afa0e2b8> POST_PROCESS: File "/home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py", line 1
<?xml version="1.0" encoding="UTF-8"?>
^
SyntaxError: invalid syntax
1
<i-065641ea2afa0e2b8> (1, ' File "/home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py", line 1\r\r\n <?xml version="1.0" encoding="UTF-8"?>\r\r\n ^\r\r\nSyntaxError: invalid syntax\r\r', '')
<i-065641ea2afa0e2b8> CLOSING SSM CONNECTION TO: i-065641ea2afa0e2b8
<i-065641ea2afa0e2b8> TERMINATE SSM SESSION: me@test
fatal: [i-065641ea2afa0e2b8]: FAILED! => {
    "ansible_facts": {},
    "changed": false,
    "failed_modules": {
        "setup": {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "failed": true,
            "module_stderr": "",
            "module_stdout": " File \"/home/ssm-user/.ansible/tmp/ansible-tmp-1594113678.8116896-25457-211691010476604/AnsiballZ_setup.py\", line 1\r\r\n <?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\r\n ^\r\r\nSyntaxError: invalid syntax\r\r",
            "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
            "rc": 1,
            "warnings": [
                "Platform linux on host i-065641ea2afa0e2b8 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
            ]
        }
    },
    "msg": "The following modules failed to execute: setup\n"
}
The contents of AnsiballZ_setup.py are:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidArgument</Code><Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message><ArgumentName>Authorization</ArgumentName><ArgumentValue>null</ArgumentValue><RequestId>198A9FFBC724E9CE</RequestId><HostId>elided</HostId></Error>
Root causes and work-around:
The curl command is not invoked with --silent --show-error --fail, so the exit code of curl does not reflect the failed HTTP status code (400 in this case), and Ansible mistakenly continues to try to execute AnsiballZ_setup.py as if it were a Python script.
_get_url uses the client.generate_presigned_url function from boto3, but for this to work in the presence of encrypted content it requires passing in a signature version of s3v4 as part of a config object, as follows:

try:
    import boto3
    HAS_BOTO_3 = True
    from botocore.config import Config
except ImportError as e:
    HAS_BOTO_3_ERROR = str(e)
    HAS_BOTO_3 = False
...
def _get_url(self, client_method, bucket_name, out_path, http_method):
    ''' Generate URL for get_object / put_object '''
    config = Config(signature_version='s3v4')
    client = boto3.client('s3', config=config)
    return client.generate_presigned_url(client_method, Params={'Bucket': bucket_name, 'Key': out_path}, ExpiresIn=3600, HttpMethod=http_method)
Module docs of aws_s3_bucket_info contain something that is parsed as a datetime by the YAML parser. (This triggered ansible/ansible#69031.)
aws_s3_bucket_info
2.9
2.10
Issue reported here ansible/ansible#68760 and was asked to be opened here
When trying to delete a target group via elb_target_group, it throws:
target_type is instance but all of the following are missing: vpc_id, protocol, port.
I double-checked, and even the docs say those are only needed if state is present.
elb_target_group
ansible 2.9.9
config file = /home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg
configured module search path = ['/home/rowan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rowan/projects/qwertee.com/infrastructure/aws/ansible/lib/python3.8/site-packages/ansible
executable location = /home/rowan/projects/qwertee.com/infrastructure/aws/ansible/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
ANSIBLE_NOCOWS(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = True
DEFAULT_BECOME(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = sudo
DEFAULT_HOST_LIST(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = ['/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/inventory']
DEPRECATION_WARNINGS(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = True
HOST_KEY_CHECKING(/home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/ansible.cfg) = False
Ubuntu 20.04
- name: Delete site target group
  elb_target_group:
    profile: "{{awsprofile}}"
    region: "{{awsregion}}"
    name: "{{target_group}}"
    state: absent
It should delete the target group with no error.
TASK [Delete site target group] **********************************************************************************************************************************************************************************
task path: /home/rowan/projects/qwertee.com/infrastructure/aws/test-sites/destroy.yml:43
Using module file /home/rowan/projects/qwertee.com/infrastructure/aws/ansible/lib/python3.8/site-packages/ansible/modules/cloud/amazon/elb_target_group.py
Pipelining is enabled.
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: rowan
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
The full traceback is:
  File "/tmp/ansible_elb_target_group_payload_pe6lf4x0/ansible_elb_target_group_payload.zip/ansible/module_utils/basic.py", line 1577, in _check_required_if
    check_required_if(spec, param)
  File "/tmp/ansible_elb_target_group_payload_pe6lf4x0/ansible_elb_target_group_payload.zip/ansible/module_utils/common/validation.py", line 275, in check_required_if
    raise TypeError(to_native(msg))
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "debug_botocore_endpoint_logs": false,
            "modify_targets": true,
            "name": "site-dev-issue-2690",
            "profile": "qwertee",
            "purge_tags": true,
            "region": "eu-west-1",
            "state": "absent",
            "stickiness_type": "lb_cookie",
            "tags": {},
            "target_type": "instance",
            "validate_certs": true,
            "wait": false,
            "wait_timeout": 200
        }
    },
    "msg": "target_type is instance but all of the following are missing: protocol, port, vpc_id"
}
ecs_taskdefinition idempotency checks fail if secrets are supplied in a containerDefinition.
This appears to be because boto3 doesn't actually return the secrets in describe_task_definition, so the _right_has_values_of_left check fails.
Can't really be fixed in Ansible until boto3 is updated (I've raised it with AWS), but useful to have this bug here with the details.
ecs_taskdefinition
ecs_taskdefinition:
state: present
force_create: no
...
containers:
...
- name: ...
secrets: ...
An ecs_taskdefinition call with the same parameters should correctly validate that a task definition revision already exists with the same configuration.
An ecs_taskdefinition with the same parameters (but including secrets config in a containerDefinition) always returns CHANGED, with a new task definition revision created.
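The failure mode can be sketched with a recursive subset comparison in the spirit of the module's `_right_has_values_of_left` helper (this is my reconstruction of the idea, not the module's actual code): if the API response omits the `secrets` key, the requested definition can never be found equal, so a new revision is registered on every run.

```python
# Sketch (assumption): a subset comparison like _right_has_values_of_left.
# If describe_task_definition strips "secrets" from a containerDefinition,
# the requested definition never matches the stored one.

def right_has_values_of_left(left, right):
    """Return True if every key/value present in `left` also appears in `right`."""
    for key, value in left.items():
        if key not in right:
            return False
        if isinstance(value, dict) and isinstance(right[key], dict):
            if not right_has_values_of_left(value, right[key]):
                return False
        elif right[key] != value:
            return False
    return True

# Illustrative data, not real ARNs.
requested = {
    "name": "app",
    "secrets": [{"name": "DB_PASSWORD", "valueFrom": "arn:aws:ssm:example"}],
}
# What the API hands back: the secrets key is missing entirely.
returned = {"name": "app"}
```

With `returned` missing `secrets`, the comparison is False on every run, which matches the always-CHANGED behavior described above.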
From @danlange on Jul 14, 2020 01:29
The ecs_service module does not expose the --platform-version option of "aws ecs create-service".
ECS requires "--platform-version 1.4.0" to create a service using ECS Fargate that mounts an EFS volume.
See: https://docs.aws.amazon.com/AmazonECS/latest/userguide/platform_versions.html
Please expose this option so I can use ecs_service instead of shelling out to "aws ecs create-service"
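A hypothetical `platform_version` option (not currently in `ecs_service`; the name here is an assumption mirroring the CLI's `--platform-version` flag) might look like:

```yaml
- name: SSH service
  ecs_service:
    state: present
    name: ssh-service
    cluster: example-cluster
    task_definition: "{{ ssh_task_definition.taskdefinition.taskDefinitionArn }}"
    desired_count: 1
    launch_type: FARGATE
    platform_version: "1.4.0"  # hypothetical option; maps to --platform-version
    network_configuration:
      assign_public_ip: yes
      subnets:
        - "{{ subnet.subnet.id }}"
```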
ecs_service
ansible 2.9.10
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]
Dockerfile:
FROM docker:latest
RUN set -xe \
&& apk add --no-cache --virtual .build-deps \
autoconf \
cmake \
file \
g++ \
gcc \
libc-dev \
openssl-dev \
python3-dev \
libffi-dev \
make \
pkgconf \
re2c
RUN apk add --no-cache --virtual .persistent-deps \
bash \
wget \
unzip \
vim \
jq \
git \
py-pip \
libffi \
curl \
openssl \
groff \
less \
python3 \
&& pip install --upgrade \
awscli \
ansible \
boto \
boto3 \
botocore \
docker \
pip \
&& mkdir /devops \
&& apk del .build-deps
COPY ./hosts /etc/ansible/
WORKDIR /devops
# Set up the application directory
VOLUME ["/devops"]
# Setup user home
VOLUME ["/root"]
CMD ["/bin/bash"]
./hosts:
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
The following playbook creates a private cloud running an SSH container as an ECS Fargate service, with an EFS volume mounted.
- hosts: localhost
connection: local
gather_facts: False
vars:
vpc_name: "Example VPC"
vpc_cidr: "10.20.0.0/16"
subnet_name: "Example Subnet"
subnet_cidr: "10.20.0.0/24"
subnet_az: "{{ AWS_REGION }}a"
registry_id: "<Substitute your own registry ID here.>"
tasks:
- name: Check that required environment variables are set.
fail: msg="Must set {{ item }}"
when: "lookup('env', item) == ''"
with_items:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- name: Set local variables corresponding to environment variables
set_fact:
AWS_REGION: "{{ lookup('env', 'AWS_REGION') }}"
- name: "VPC '{{ vpc_name }}' with CIDR {{ vpc_cidr }}"
ec2_vpc_net:
name: "{{ vpc_name }}"
cidr_block: "{{ vpc_cidr }}"
tenancy: default
register: vpc
- name: Subnet
ec2_vpc_subnet:
state: present
az: "{{ subnet_az }}"
vpc_id: "{{ vpc.vpc.id }}"
cidr: "{{ subnet_cidr }}"
tags:
Name: "{{ subnet_name }}"
register: subnet
- name: Internet gateway
ec2_vpc_igw:
vpc_id: "{{ vpc.vpc.id }}"
register: igw
- name: Route table
ec2_vpc_route_table:
vpc_id: "{{ vpc.vpc.id }}"
subnets:
- "{{ subnet.subnet.id }}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
register: route_table
- name: ECS Cluster
shell: "aws ecs create-cluster --cluster-name 'example-cluster' --region {{ AWS_REGION }} --capacity-providers FARGATE"
changed_when: false
- name: ECR Credentials
shell: "aws ecr get-login --registry-ids 171421899218 --region {{ AWS_REGION }} --no-include-email | awk '{print $6}'"
register: ecr_credentials
changed_when: false
- name: Stub EFS client security group
ec2_group:
name: "example EFS client security group"
description: Allow example EFS client to make NFS connections
vpc_id: "{{ vpc.vpc.id }}"
register: example_efs_client_security_group
- name: EFS server security group
ec2_group:
name: "example EFS server security group"
description: Allow example EFS server to receive NFS connections
vpc_id: "{{ vpc.vpc.id }}"
rules:
- proto: tcp
ports: 2049
group_id: "{{ example_efs_client_security_group.group_id }}"
rule_desc: Allow Inbound NFS traffic
register: example_efs_server_security_group
- name: Fix example EFS client security group
ec2_group:
name: "example EFS client security group"
description: Allow example EFS client to make NFS connections
vpc_id: "{{ vpc.vpc.id }}"
rules_egress:
- proto: tcp
ports: 2049
group_id: "{{ example_efs_server_security_group.group_id }}"
rule_desc: Allow Outbound NFS traffic
register: example_efs_client_security_group
- name: example EFS
efs:
state: present
name: "example-efs"
tags:
Name: "example-efs"
targets:
- subnet_id: "{{ subnet.subnet.id }}"
security_groups: [ "{{ example_efs_server_security_group.group_id }}" ]
register: example_efs
- name: SSH security group
ec2_group:
name: "SSH security group"
description: Allow SSH access
vpc_id: "{{ vpc.vpc.id }}"
rules:
- proto: tcp
ports:
- 22
cidr_ip: 0.0.0.0/0
rule_desc: allow all on port 22 tcp
register: ssh_security_group
- name: ECR image repository for ssh image
ecs_ecr: name=gotechnies/alpine-ssh
register: image_repository
- name: Log in to ECR repository
docker_login:
registry: "{{ image_repository.repository.repositoryUri }}"
username: "AWS"
password: "{{ ecr_credentials.stdout }}"
reauthorize: yes
register: ecr_repository
changed_when: false
- name: Upload SSH image to ECR
docker_image:
name: gotechnies/alpine-ssh
source: pull
repository: "{{ image_repository.repository.repositoryUri }}"
tag: latest
push: yes
- name: ssh task
ecs_taskdefinition:
state: present
family: ssh
launch_type: FARGATE
cpu: "256"
memory: "0.5GB"
network_mode: awsvpc
execution_role_arn: "arn:aws:iam::{{ registry_id }}:role/ecsTaskExecutionRole"
containers:
- name: ssh
essential: true
image: "{{ image_repository.repository.repositoryUri }}:latest"
mountPoints:
- containerPath: /example
sourceVolume: example-efs
portMappings:
- containerPort: 22
hostPort: 22
volumes:
- name: example-efs
efsVolumeConfiguration:
fileSystemId: "{{ example_efs.efs.file_system_id }}"
transitEncryption: DISABLED
register: ssh_task_definition
- name: SSH service
ecs_service:
state: present
name: ssh-service
cluster: "example-cluster"
task_definition: "{{ ssh_task_definition.taskdefinition.taskDefinitionArn }}"
desired_count: 1
launch_type: FARGATE
network_configuration:
assign_public_ip: yes
subnets:
- "{{ subnet.subnet.id }}"
security_groups:
- "{{ ssh_security_group.group_id }}"
- "{{ example_efs_client_security_group.group_id }}"
The script completes successfully. You can SSH into the public IP address of the deployed service (as root/root) and interact with the EFS volume mounted at /example.
Note that you can see the expected results by replacing the last line of the script with this workaround:
- name: SSH service
shell: |
aws ecs create-service --cluster 'example-cluster' --service-name 'ssh-service' --region '{{ AWS_REGION }}' --platform-version '1.4.0' --task-definition '{{ ssh_task_definition.taskdefinition.taskDefinitionArn }}' --desired-count 1 --launch-type FARGATE --network-configuration '{ "awsvpcConfiguration": { "subnets":["{{ subnet.subnet.id }}"], "securityGroups": ["{{ ssh_security_group.group_id }}", "{{ example_efs_client_security_group.group_id }}"], "assignPublicIp": "ENABLED"}}'
The last step, the creation of the service itself, fails because the default Fargate platform version (1.3.0) is incompatible with EFS. The error message is:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: botocore.errorfactory.PlatformTaskDefinitionIncompatibilityException: An error occurred (PlatformTaskDefinitionIncompatibilityException) when calling the CreateService operation: One or more of the requested capabilities are not supported.
fatal: [localhost]: FAILED! => {"boto3_version": "1.14.19", "botocore_version": "1.17.19", "changed": false, "error": {"code": "PlatformTaskDefinitionIncompatibilityException", "message": "One or more of the requested capabilities are not supported."}, "msg": "Couldn't create service: An error occurred (PlatformTaskDefinitionIncompatibilityException) when calling the CreateService operation: One or more of the requested capabilities are not supported.", "response_metadata": {"http_headers": {"connection": "close", "content-length": "132", "content-type": "application/x-amz-json-1.1", "date": "Tue, 14 Jul 2020 01:12:04 GMT", "x-amzn-requestid": "277dd316-c404-4d99-8d09-c4dbc205b62c"}, "http_status_code": 400, "request_id": "277dd316-c404-4d99-8d09-c4dbc205b62c", "retry_attempts": 0}}
Copied from original issue: ansible/ansible#70625
I have an existing EC2 instance to which I would like to add an additional security group.
When I add one more security group, ec2_instance doesn't report any change.
Originally ansible/ansible#54174
TASK [try to modify the ec2 instance] ********************************************************************************************************************************************************
ok: [localhost]
* Bug Report
ec2_instance
ansible 2.8.3
config file = None
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.7/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.7.2 (default, Mar 20 2019, 08:51:28) [GCC 8.2.0]
$ ansible-config dump --only-changed
$
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.10
DISTRIB_CODENAME=cosmic
DISTRIB_DESCRIPTION="Pop!_OS 18.10"
---
- hosts: localhost
connection: local
gather_facts: False
vars_prompt:
- name: ec2_template
prompt: Which ec2 template file?
private: no
default: mbtest190321.my.instance.de
vars_files:
- "vars/{{ ec2_template }}.yml"
tasks:
###################################
#
# check if instance exists already
###################################
- name: check if instance already exists
include_role:
name: start_stop_terminate
tasks_from: find_instance_id
###################################
# if exists when: instance.instances | count == 1
# try update using ec2_instance module
###################################
- name: try to modify the ec2 instance
ec2_instance:
state: present
name: "{{ ec2_template }}"
instance_ids: "{{ instance.instances[0].instance_id }}"
security_groups: "{{ security_group }}"
cpu_credit_specification: "{{ cpu_credit_specification }}"
ebs_optimized: "{{ ebs_optimized }}"
detailed_monitoring: "{{ detailed_monitoring }}"
purge_tags: no
when: instance.instances | count == 1
---
instance_type: t2.medium
cpu_credit_specification: standard
ebs_optimized: no
detailed_monitoring: no
security_group:
- default
- something_other
cc @ryansb
In case of instance.instances[0].security_groups != security_groups, it should apply the changes.
TASK [try to modify the ec2 instance] ********************************************************************************************************************************************************
changed: [localhost]
Basically it must run aws ec2 modify-instance-attribute --groups <list of {{ security_groups }}> --instance-id <instance_id>
No changes are detected and nothing happens.
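A minimal sketch of the drift check that seems to be missing (my assumption of the intended behavior, not the module's actual internals): compare the instance's current groups against the requested list as sets of group IDs, and apply `modify_instance_attribute(Groups=[...])` when they differ.

```python
# Sketch (assumption): detect security-group drift by comparing resolved
# group IDs as sets; a difference should trigger
# modify_instance_attribute(Groups=[...]).

def groups_need_update(current_groups, desired_group_ids):
    """current_groups: list of {'GroupId': ..., 'GroupName': ...} as returned
    by describe_instances; desired_group_ids: IDs resolved from the task."""
    current_ids = {g["GroupId"] for g in current_groups}
    return current_ids != set(desired_group_ids)

# Illustrative data mirroring the report: one group attached, two requested.
current = [{"GroupId": "sg-default", "GroupName": "default"}]
desired = ["sg-default", "sg-other"]
```

Set comparison also keeps the check order-independent, so reordering groups in the task does not cause a spurious change.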
s3_bucket_notification currently only supports adding events that go to Lambda. S3 events can also go to SNS or SQS.
s3_bucket_notification
An unsuccessful decode call returns:
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
[trimmed]
}
},
"win_password": ""
}
I would expect it to return a failure state.
AWS recently introduced the "Instance Refresh" feature for Auto Scaling groups. Using it is the preferred way to update the launch configuration in an ASG.
ec2_asg
Deploying an ASG from the client depends on internet connectivity and is not transactional: a deployment can fail midway, leaving the ASG in an unpredictable state. Letting AWS infrastructure drive the rollout adds stability (and will probably make it faster).
More about the feature: https://aws.amazon.com/blogs/compute/introducing-instance-refresh-for-ec2-auto-scaling/
Boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.start_instance_refresh
- community.aws.ec2_asg_instance_refresh:
name: some-backend
This issue was first reported in ansible/ansible#67993 (it was closed by the author because they found a workaround).
The default value of lb_cookie for stickiness_type is no longer ignored by AWS for TCP target groups, which causes this module to fail when a stickiness type is not explicitly specified (even when stickiness is off/not set).
It looks like Amazon is changing their API and rolling it out slowly as the linked issue starts on Mar 4, and seems to be hitting people little by little. It showed up today at different times for us on different target groups across some of our accounts.
The default should either be left out in the call to AWS so that it chooses its own default (if that's possible), or this module should choose a default based on the protocol of the target group.
The current workaround I'm using is a conditional in jinja, something like:
- elb_target_group:
protocol: "{{ proto }}"
stickiness_type: "{{ 'source_ip' if proto == 'tcp' else omit }}"
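The protocol-aware defaulting suggested above could be sketched like this (an assumption about how the module might choose, not its current behavior):

```python
# Sketch (assumption): derive the stickiness default from the target group
# protocol instead of hard-coding lb_cookie, which network (TCP/UDP/TLS)
# target groups reject.

def default_stickiness_type(protocol):
    if protocol.lower() in ("tcp", "udp", "tcp_udp", "tls"):
        return "source_ip"  # the stickiness type network target groups accept
    return "lb_cookie"      # HTTP/HTTPS (ALB-style) target groups
```

Leaving the attribute out of the API call entirely when stickiness is disabled would be an equally valid fix, since AWS then keeps its own default.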
elb_target_group
2.9.6
#pseudo
- elb_target_group:
name: test
protocol: tcp
Success
TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***
17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:
An error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation:
Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol
17:21:08 fatal: [localhost]: FAILED! => {"changed": false, "error": {"code": "InvalidConfigurationRequest", "message": "Stickiness type 'lb_cookie'
is not supported for target groups with the TCP protocol", "type": "Sender"}, "msg": "An error occurred (InvalidConfigurationRequest)
when calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol",
"response_metadata": {"http_headers": {"connection": "close", "content-length": "359", "content-type": "text/xml", "date": "Tue, 03 Mar 2020 11:51:08 GMT",
"x-amzn-requestid": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53"}, "http_status_code": 400, "request_id": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53", "retry_attempts": 0}}
Running the module in check mode results in an error.
ec2_vpc_vgw_info
2.9.10
ansible -m ec2_vpc_vgw_info --check localhost
no errors
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: botocore.exceptions.ClientError: An error occurred (DryRunOperation) when calling the DescribeVpnGateways operation: Request would have succeeded, but DryRun flag is set.
fatal: [...]: FAILED! => changed=false
msg: 'An error occurred (DryRunOperation) when calling the DescribeVpnGateways operation: Request would have succeeded, but DryRun flag is set.'
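One possible fix (a sketch under my assumptions, using a stand-in for botocore's ClientError): since DescribeVpnGateways is read-only, the module could skip DryRun entirely in check mode, or treat the DryRunOperation "error" as success, because AWS raises it even when the request would have succeeded.

```python
# Sketch (assumption): treat AWS's DryRunOperation response as success in
# check mode. The ClientError class here is a minimal stand-in for
# botocore.exceptions.ClientError.

class ClientError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.response = {"Error": {"Code": code, "Message": message}}

def describe_vgws(client_call, check_mode):
    try:
        return {"changed": False, "virtual_gateways": client_call(DryRun=check_mode)}
    except ClientError as e:
        if check_mode and e.response["Error"]["Code"] == "DryRunOperation":
            # The request would have succeeded; report success with no data.
            return {"changed": False, "virtual_gateways": []}
        raise

# Fake client reproducing the reported behavior.
def fake_call(DryRun=False):
    if DryRun:
        raise ClientError("DryRunOperation",
                          "Request would have succeeded, but DryRun flag is set.")
    return [{"VpnGatewayId": "vgw-123"}]
```

Since the call never mutates anything, simply never setting DryRun for it would also make check mode behave like a normal run.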
When trying to use ec2_vpc_nacl to create an ICMP rule, it fails with an error.
ec2_vpc_nacl
ansible 2.9.9
config file = /Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg
configured module search path = ['/Users/dlee/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/dlee/prj/cloud-automation/aws-infrastructure/build/virtualenv/lib/python3.7/site-packages/ansible
executable location = /Users/dlee/prj/cloud-automation/aws-infrastructure/build/virtualenv/bin/ansible
python version = 3.7.8 (default, Jul 4 2020, 10:17:17) [Clang 11.0.3 (clang-1103.0.32.62)]
DEFAULT_FORCE_HANDLERS(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = True
DEFAULT_HOST_LIST(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = ['/Users/dlee/prj/cloud-automation/aws-infrastructure/inventory.ini']
DEFAULT_JINJA2_NATIVE(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = ['/Users/dlee/prj/cloud-automation/aws-infrastructure/build/galaxy']
DEFAULT_STDOUT_CALLBACK(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = yaml
RETRY_FILES_ENABLED(/Users/dlee/prj/cloud-automation/aws-infrastructure/ansible.cfg) = False
macOS host
Run a playbook with the following task (providing VPC and subnet ids)
- name: ec2_vpc_nacl some-nacl
ec2_vpc_nacl:
vpc_id: "{{ some_vpc_id }}"
name: some-nacl
subnets: "{{ some_subnet_ids }}"
ingress:
# Allow ping and ping replies
- [100, "icmp", "allow", "0.0.0.0/0", 0, -1, null, null]
- [101, "icmp", "allow", "0.0.0.0/0", 8, -1, null, null]
egress:
- [999, "all", "allow", "0.0.0.0/0", null, null, null, null]
Should create/update the NACL correctly.
Fails with error.
TASK [some-role : ec2_vpc_nacl some-nacl] ***********
task path: /Users/dlee/prj/some-prj/roles/some-role/tasks/main.yml:84
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: dlee
<localhost> EXEC /bin/sh -c 'echo ~dlee && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/dlee/.ansible/tmp `"&& mkdir /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034 && echo ansible-tmp-1594309932.3515022-50187-138366259112034="` echo /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034 `" ) && sleep 0'
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/ec2.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/basic.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/cloud.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/ansible_release.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/six/__init__.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/__init__.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/dict_transformations.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/_text.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/parameters.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/validation.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/_utils.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/file.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/pycompat24.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/process.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/common/collections.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/module_utils/distro/_distro.py
Using module file /Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/ansible/modules/cloud/amazon/ec2_vpc_nacl.py
<localhost> PUT /Users/dlee/.ansible/tmp/ansible-local-49805vro_b5s7/tmpjbvtido8 TO /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034/AnsiballZ_ec2_vpc_nacl.py
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034/ /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034/AnsiballZ_ec2_vpc_nacl.py && sleep 0'
<localhost> EXEC /bin/sh -c '/Users/dlee/prj/some-prj/build/virtualenv/bin/python /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034/AnsiballZ_ec2_vpc_nacl.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /Users/dlee/.ansible/tmp/ansible-tmp-1594309932.3515022-50187-138366259112034/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/var/folders/np/zjgljt4n6jg2jpftl6l7rr740000gn/T/ansible_ec2_vpc_nacl_payload__n68lmw6/ansible_ec2_vpc_nacl_payload.zip/ansible/modules/cloud/amazon/ec2_vpc_nacl.py", line 389, in create_network_acl_entry
File "/Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/dlee/prj/some-prj/build/virtualenv/lib/python3.7/site-packages/botocore/client.py", line 635, in _make_api_call
raise error_class(parsed_response, operation_name)
fatal: [localhost]: FAILED! => changed=false
invocation:
module_args:
aws_access_key: null
aws_secret_key: null
debug_botocore_endpoint_logs: false
ec2_url: null
egress:
- - 999
- all
- allow
- 0.0.0.0/0
- null
- null
- null
- null
ingress:
- - 100
- icmp
- allow
- 0.0.0.0/0
- 0
- -1
- null
- null
- - 101
- icmp
- allow
- 0.0.0.0/0
- 8
- -1
- null
- null
nacl_id: null
name: some-nacl
profile: null
region: null
security_token: null
state: present
subnets:
- '[REDACTED]'
tags: null
validate_certs: true
vpc_id: '[REDACTED]'
msg: 'An error occurred (MissingParameter) when calling the CreateNetworkAclEntry operation: The request must contain the parameter icmpTypeCode.type'
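A plausible reading of the MissingParameter error (my assumption, not a confirmed diagnosis of the module's code) is that an ICMP type of 0 is dropped by a truthiness check when building the CreateNetworkAclEntry parameters. A sketch of building them with explicit None comparisons:

```python
# Sketch (assumption): build CreateNetworkAclEntry parameters so that an ICMP
# type of 0 (echo reply) survives; 0 is falsy, so `if icmp_type:` drops it.

def build_entry_params(rule_no, protocol, action, cidr, icmp_type=None, icmp_code=None):
    # IANA protocol numbers; "-1" is EC2's convention for "all".
    PROTO_NUMBERS = {"icmp": "1", "tcp": "6", "udp": "17", "all": "-1"}
    params = {
        "RuleNumber": rule_no,
        "Protocol": PROTO_NUMBERS[protocol],
        "RuleAction": action,
        "CidrBlock": cidr,
    }
    if protocol == "icmp":
        # Compare against None, not truthiness: type 0 and code -1 are valid.
        params["IcmpTypeCode"] = {
            "Type": icmp_type if icmp_type is not None else -1,
            "Code": icmp_code if icmp_code is not None else -1,
        }
    return params
```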
For Kinesis streams with > 100 shards, client.describe_stream(**params)['StreamDescription'] returns a paginated view, so https://github.com/ansible-collections/community.aws/blob/master/plugins/modules/kinesis_stream.py#L363 will always be true and we get stuck in this infinite while loop, since has_more_shards is always True.
Kinesis Streams
2.9.9
Hard to give you steps to reproduce this issue as it only happens with kinesis streams with > 100 shards.
Creating kinesis streams should not cause a timeout!
Our deployment failed when trying to create a kinesis stream due to a timeout.
TASK [kinesis : create kinesis stream] *****************************************
fatal: [localhost]: FAILED! => {"changed": true, "msg": "Wait time out reached, while waiting for results", "result": {}, "success": false}
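The pagination fix can be sketched as follows (my sketch, using a fake client rather than boto3): each DescribeStream page must advance ExclusiveStartShardId past the last shard seen, otherwise the loop re-reads the same first page forever.

```python
# Sketch (assumption): paginate shards with ExclusiveStartShardId instead of
# re-issuing an identical call while HasMoreShards is True, which never
# advances and loops forever.

def list_all_shards(describe_stream, stream_name):
    shards, start_shard = [], None
    while True:
        params = {"StreamName": stream_name}
        if start_shard:
            params["ExclusiveStartShardId"] = start_shard
        desc = describe_stream(**params)["StreamDescription"]
        shards.extend(desc["Shards"])
        if not desc["HasMoreShards"]:
            return shards
        # Advance the cursor past the last shard we have seen.
        start_shard = shards[-1]["ShardId"]

# Fake client: 250 shards returned in pages of 100, like the real API.
ALL = [{"ShardId": "shardId-%012d" % i} for i in range(250)]

def fake_describe_stream(StreamName, ExclusiveStartShardId=None):
    start = 0
    if ExclusiveStartShardId:
        start = next(i for i, s in enumerate(ALL)
                     if s["ShardId"] == ExclusiveStartShardId) + 1
    page = ALL[start:start + 100]
    return {"StreamDescription": {"Shards": page,
                                  "HasMoreShards": start + 100 < len(ALL)}}
```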
Add AWS Organizations support to the cloudformation_stack_set module.
The additions would be to support the permissions model and deployment-target API actions, as described here:
- name: Create a stack set with instances in two accounts
cloudformation_stack_set:
name: my-stack
description: Test stack in two accounts
state: present
template_url: https://s3.amazonaws.com/my-bucket/cloudformation.template
permissions_model: SERVICE_MANAGED
deployment_targets:
OrganizationalUnitIds:
- o-12345
- o-54321
regions:
- us-east-1
SUMMARY
Doing the s3 sync operation in a shell, as the same user, has no problem, but the s3_sync module in ansible errors with:
4899 MODULE FAILURE
4900 See stdout/stderr for the exact error
4901 MODULE_STDERR:
4902 Traceback (most recent call last):
4903 File "<stdin>", line 102, in <module>
4904 File "<stdin>", line 94, in _ansiballz_main
4905 File "<stdin>", line 40, in invoke_module
4906 File "/usr/lib/python2.7/runpy.py", line 188, in run_module
4907 fname, loader, pkg_name)
4908 File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code
4909 mod_name, mod_fname, mod_loader, pkg_name)
4910 File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
4911 exec code in run_globals
4912 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 544, in <module>
4913 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 526, in main
4914 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 405, in filter_list
4915 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 390, in head_s3
4916 Exception: An error occurred (403) when calling the HeadObject operation: Forbidden
ISSUE TYPE
Bug Report
COMPONENT NAME
s3_sync
ANSIBLE VERSION
ansible 2.9.7
CONFIGURATION
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_PIPELINING(/deployuser/ansible.cfg) = True
ANSIBLE_SSH_RETRIES(/deployuser/ansible.cfg) = 10
DEFAULT_CALLBACK_WHITELIST(/deployuser/ansible.cfg) = [u'profile_tasks']
DEFAULT_GATHER_SUBSET(/deployuser/ansible.cfg) = [u'!hardware # this line may help deal with an issue where a bad nfs mount will prevent ansible from connecting
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = [u'/vagrant/ansible/hosts']
DEFAULT_LOAD_CALLBACK_PLUGINS(/deployuser/ansible.cfg) = True
DEFAULT_LOG_PATH(/deployuser/ansible.cfg) = /deployuser/tmp/ansible_log
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
OS / ENVIRONMENT
Ubuntu 16.04
STEPS TO REPRODUCE
In this example I show a playbook that runs the s3 sync operation fine as a shell command, but fails when using the Ansible module.
The playbook:
- name: Get the current caller identity information
aws_caller_info:
register: caller_info
become_user: deadlineuser
- name: Sync deadline to s3
shell: |
set -x
aws sts get-caller-identity
cd {{ deadline_linux_installers_tar | dirname }}/
aws s3 sync . s3://{{ installers_bucket }}/ --exclude "*" --include "{{ deadline_linux_installers_tar | basename }}"
become_user: deadlineuser
tags:
- install
- sync_installers
- name: "Ensure deadline {{ deadline_linux_installers_tar | dirname }}/{{ deadline_linux_installers_tar | basename }} exists in the s3 bucket {{ installers_bucket }} - Push if it doesn't."
s3_sync:
bucket: "{{ installers_bucket }}"
file_root: "{{ deadline_linux_installers_tar | dirname }}"
include: "{{ deadline_linux_installers_tar | basename }}"
mode: push
become_user: deadlineuser
tags:
- install
- sync_installers
The only thing slightly different in this scenario compared to others I have had success with is that the bucket allows access to two AWS accounts. The permissions on the bucket are:
{
"Version": "2012-10-17",
"Id": "s3ProdDevSharePolicy",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::254735172:root",
"arn:aws:iam::326573574:root"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::software.firehawkvfx.com",
"arn:aws:s3:::software.firehawkvfx.com/*"
]
}
]
}
I have replaced the user IDs in this log with random numbers, but in both tests they match.
EXPECTED RESULTS
s3_sync should function the same as the shell command.
ACTUAL RESULTS
Here is the error:
4863 TASK [deadlinedb : Get the current caller identity information] ****************
4864 Monday 27 April 2020 13:32:44 +0930 (0:00:01.728) 0:00:06.116 **********
4865 ok: [firehawkgateway] => {
4866 "account": "254735172",
4867 "arn": "arn:aws:iam::254735172:user/storage_user",
4868 "changed": false,
4869 "user_id": "DSFHSDFJSFGJSFGJKSFGJ"
4870 }
4871 TASK [deadlinedb : Sync deadline to s3] ****************************************
4872 Monday 27 April 2020 13:32:47 +0930 (0:00:03.204) 0:00:09.320 **********
4873 changed: [firehawkgateway] => {
4874 "changed": true,
4875 "cmd": "set -x\naws sts get-caller-identity\ncd /deployuser/downloads/\naws s3 sync . s3://software.firehawkvfx.com/ --exclude \"*\" --include \"Deadline-10.1.1.3-linux-installers.tar\"\n",
4876 "delta": "0:00:02.143070",
4877 "end": "2020-04-27 13:32:50.163394",
4878 "rc": 0,
4879 "start": "2020-04-27 13:32:48.020324"
4880 }
4881 STDOUT:
4882 {
4883 "Account": "254735172",
4884 "UserId": "DSFHSDFJSFGJSFGJKSFGJ",
4885 "Arn": "arn:aws:iam::254735172:user/storage_user"
4886 }
4887 STDERR:
4888 + aws sts get-caller-identity
4889 + cd /deployuser/downloads/
4890 + aws s3 sync . s3://software.firehawkvfx.com/ --exclude * --include Deadline-10.1.1.3-linux-installers.tar
4891 TASK [deadlinedb : Ensure deadline /deployuser/downloads exists in the s3 bucket software.firehawkvfx.com - Push if it doesn't.] ***
4892 Monday 27 April 2020 13:32:49 +0930 (0:00:02.296) 0:00:11.616 **********
4893 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: An error occurred (403) when calling the HeadObject operation: Forbidden
4894 fatal: [firehawkgateway]: FAILED! => {
4895 "changed": false,
4896 "rc": 1
4897 }
4898 MSG:
4899 MODULE FAILURE
4900 See stdout/stderr for the exact error
4901 MODULE_STDERR:
4902 Traceback (most recent call last):
4903 File "<stdin>", line 102, in <module>
4904 File "<stdin>", line 94, in _ansiballz_main
4905 File "<stdin>", line 40, in invoke_module
4906 File "/usr/lib/python2.7/runpy.py", line 188, in run_module
4907 fname, loader, pkg_name)
4908 File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code
4909 mod_name, mod_fname, mod_loader, pkg_name)
4910 File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
4911 exec code in run_globals
4912 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 544, in <module>
4913 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 526, in main
4914 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 405, in filter_list
4915 File "/tmp/ansible_s3_sync_payload_91htk3/ansible_s3_sync_payload.zip/ansible/modules/cloud/amazon/s3_sync.py", line 390, in head_s3
4916 Exception: An error occurred (403) when calling the HeadObject operation: Forbidden
You can see that the bash shell operation has no problem, only the s3_sync module does.
Moving issue from Ansible repository.
Add route53 GeoLocation support
Either update/upgrade the route53 module, or create a new module, to support adding/changing record sets.
Resubmitted from ansible-collections/amazon.aws#10
elb_application_lb requires the security_groups option when state=present, as explained in the docs (although the docs also say that the default is [], which seems useless since the option cannot be omitted).
When creating a new ALB and supplying security_groups: [] explicitly, the ALB is created successfully with the VPC default SG.
Running the same task again fails with an error that the security_groups option is missing.
I'm not sure if this is reproducible outside of a VPC, since I'm not sure there is such a thing as a default SG in that case.
elb_application_lb
2.9.6
- elb_application_lb:
region: "us-east-1"
name: "repro-delete"
state: "present"
subnets: "{{ my_subnets }}"
listeners:
- Protocol: HTTP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: repro-group-us-east-1a
scheme: internal
security_groups: []
wait: yes
register: alb
loop: [1, 2]
ALB is created, then the second run is OK.
(An acceptable result might also be that the first run fails with an invalid option value, but that would preclude the possibility of using a "default" SG.)
Second run fails.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "state is present but all of the following are missing: security_groups"}
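One way the module could behave consistently is to fall back to the ALB's current security groups (or the VPC default) when the option is omitted, rather than requiring it on every run. A hypothetical sketch of that resolution logic (the helper name and shapes are illustrative, not the module's API):

```python
def resolve_security_groups(requested, existing_alb, vpc_default_sg):
    """Pick the security groups to apply.

    requested      -- value of the security_groups option (None if omitted)
    existing_alb   -- dict describing an ALB that already exists, or None
    vpc_default_sg -- the VPC's default security group id
    """
    if requested:                    # explicitly given and non-empty: use it
        return requested
    if existing_alb:                 # omitted/empty on an existing ALB: keep current
        return existing_alb["SecurityGroups"]
    return [vpc_default_sg]          # new ALB: VPC default, matching AWS behaviour
```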
I want to write a PR for #115 .
Previously, when this was part of the main ansible repo, I was able to follow these instructions. But those instructions no longer apply, because this is a different repo with a different lifecycle.
The first place I looked was the top-level README for this repo.
Nope, that's not useful.
It tells me how to install the collection from the remote server. I want to install from my local repo; how do I do that?
It links to the Using Collections page, which again only explains how to install a collection published on a remote server. Do I have to create my own Galaxy server just to test a one-line change to this repo?
The top-level README of this repo links to the dev guide for the main Ansible repo, which again is not relevant.
Do I need to clone the main ansible repo and follow those instructions? Or can I use any published version of Ansible installed with pip install?
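For reference, a workflow that should allow testing a local change without any Galaxy server, using ansible-galaxy's build and install subcommands (the paths and version glob are illustrative):

```shell
# From the root of your local clone of community.aws:
ansible-galaxy collection build

# Install the built tarball over any previously installed copy
# (the version in the filename may differ):
ansible-galaxy collection install ./community-aws-*.tar.gz --force

# Alternatively, point Ansible directly at a checkout laid out as
# <path>/ansible_collections/community/aws:
export ANSIBLE_COLLECTIONS_PATHS=<path>
```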
community.aws
master branch
When I was trying to create a TGW, the ec2_transit_gateway module returned an error and failed to create one. I captured the error messages below. Around line 340, the module examines each gateway from the API response and checks its "Description" and "State". But it turns out that in my case, the "Description" key doesn't exist.
I simply changed line 340 from accessing the "Description" key directly to using the get() function:
if description == gateway.get('Description', '') and gateway['State'] != 'deleted':
This workaround solved my problem.
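The fix can be expressed as a small, defensive matcher; a sketch of the idea (filter_tgws is a hypothetical stand-in for the module's get_matching_tgw logic):

```python
def filter_tgws(gateways, description):
    """Return gateways whose (possibly absent) Description matches,
    skipping deleted ones. Using dict.get() means a gateway without a
    'Description' key no longer raises KeyError."""
    return [
        gw for gw in gateways
        if description == gw.get('Description', '') and gw['State'] != 'deleted'
    ]

# Example responses: one gateway was created without a description.
gateways = [
    {'State': 'available'},                          # no Description key at all
    {'Description': 'prod', 'State': 'available'},
    {'Description': 'prod', 'State': 'deleted'},
]
```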
ansible 2.10.0.dev0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.7 (default, Apr 23 2020, 15:05:00) [GCC 6.3.0 20170516]
# /root/.ansible/collections/ansible_collections
Collection Version
----------------- -------
amazon.aws 0.1.1
ansible.netcommon 0.0.2
community.aws 0.1.0
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py", line 102, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.aws.plugins.modules.ec2_transit_gateway', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/local/lib/python3.7/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py", line 578, in <module>
File "/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py", line 572, in main
File "/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py", line 270, in process
File "/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py", line 480, in ensure_tgw_present
File "/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py", line 340, in get_matching_tgw
KeyError: 'Description'
fatal: [us-east-2-vpc01]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1589245545.3153915-20091-254377192640234/AnsiballZ_ec2_transit_gateway.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.aws.plugins.modules.ec2_transit_gateway', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/local/lib/python3.7/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/lib/python3.7/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py\", line 578, in <module>\n File \"/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py\", line 572, in main\n File \"/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py\", line 270, in process\n File \"/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py\", line 480, in ensure_tgw_present\n File 
\"/tmp/ansible_community.aws.ec2_transit_gateway_payload_7by178pi/ansible_community.aws.ec2_transit_gateway_payload.zip/ansible_collections/community/aws/plugins/modules/ec2_transit_gateway.py\", line 340, in get_matching_tgw\nKeyError: 'Description'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
I'd like to be able to manage AWS Global Accelerators with ansible
list_global_accelerators
global_accelerator_listener
global_accelerator_endpoint_group
global_accelerator
I'm bad at names
- name: List Global Accelerators
list_global_accelerators:
region: us-west-2
register: accelerators
- name: Print Global Accelerator DNS
debug:
msg: "{{ item.dns_name }}"
loop: "{{ accelerators.accelerators }}"
# https://docs.aws.amazon.com/global-accelerator/latest/api/API_CreateAccelerator.html
- name: Ensure Global Accelerator Exists
global_accelerator:
enabled: true
name: my_accelerator
tags:
application: testing
register: my_accelerator
# https://docs.aws.amazon.com/global-accelerator/latest/api/API_CreateListener.html
- name: Ensure Global Accelerator Listener exists
global_accelerator_listener:
accelerator_arn: "{{ my_accelerator.accelerator_arn }}"
client_affinity: no
protocol: tcp
port_ranges:
- low: 80
high: 80
register: my_listener
# https://docs.aws.amazon.com/global-accelerator/latest/api/API_CreateEndpointGroup.html
- name: Ensure Global Accelerator Endpoint Group exists
global_accelerator_endpoint_group:
endpoint_configurations:
- ip_preservation: true
endpoint_id: i-12345abcd # EC2 Instance ID
weight: 1
region: us-east-1
health_check_interval: 2
health_check_path: /healthcheck
health_check_port: 80
listener_arn: "{{ my_listener.listener_arn }}"
threshold_count: 3
traffic_percentage: 100
register: my_endpoint_group
When using efs_info, neither the ca_bundle setting nor the AWS_CA_BUNDLE environment variable is respected.
efs_info
ansible 2.9.2
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Nov 12 2019, 19:44:08) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
Linux 4.14.154-128.181.amzn2.x86_64 #1 SMP Sat Nov 16 21:49:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
---
- name: efs_meta_facts test
hosts: localhost
environment:
AWS_CA_BUNDLE: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
gather_facts: false
tasks:
- name: get ec2 facts
ec2_instance_info:
- name: get efs facts
efs_info:
register: efs_meta_facts
Inside ~/.aws/config:
[default]
region = us-east-1
output = json
ca_bundle = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
Inside /etc/boto.cfg:
[Boto]
ca_certificates_file = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
I also tried:
[Boto]
https_validate_certificates = False
I expected to get through my proxy for both the ec2_instance_info and the efs_meta_facts tasks above.
ec2_instance_info worked as expected (the only required setting was the one mentioned in ~/.aws/config above, pointing Ansible to the correct certificate bundle).
For efs_meta_facts, I encountered a CERTIFICATE_VERIFY_FAILED error.
I then tried various other ways of setting ca_bundle (old Boto2 style and via an env var) and none of them worked. I also tested with rds_instance_info, which did respect the ca_bundle setting.
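The behaviour I would expect is roughly the following precedence when picking a CA bundle. This is a sketch of the desired resolution order, not what efs_info actually does, and the helper name is invented:

```python
import os

def resolve_ca_bundle(module_param=None, env=None, aws_config_value=None):
    """Resolve the CA bundle path with the precedence botocore users expect:
    explicit module parameter > AWS_CA_BUNDLE env var > ca_bundle from
    ~/.aws/config. Returns None to fall back to the default bundle."""
    env = os.environ if env is None else env
    return (
        module_param
        or env.get("AWS_CA_BUNDLE")
        or aws_config_value
    )
```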
# ansible-playbook efs.yml -vvv
ansible-playbook 2.9.6
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.6 (default, Feb 26 2020, 20:54:15) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
No config file found; using defaults
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: efs.yml ******************************************************************************************************************************************************************
1 plays in efs.yml
PLAY [efs_meta_facts test] *********************************************************************************************************************************************************
META: ran handlers
TASK [get ec2 facts] ***************************************************************************************************************************************************************
task path: /root/diod/efs.yml:8
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942 `" && echo ansible-tmp-1585850506.601956-226336118324942="` echo /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942 `" ) && sleep 0'
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/amazon/ec2_instance_info.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-17763qvkv2dbb/tmpj_ntr4z1 TO /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942/AnsiballZ_ec2_instance_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942/ /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942/AnsiballZ_ec2_instance_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'AWS_CA_BUNDLE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942/AnsiballZ_ec2_instance_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1585850506.601956-226336118324942/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"instances": [
{
"ami_launch_index": 0,
...
TASK [get efs facts] ***************************************************************************************************************************************************************
task path: /root/diod/efs.yml:11
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539 `" && echo ansible-tmp-1585850507.909464-82382026175539="` echo /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539 `" ) && sleep 0'
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/amazon/efs_info.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-17763qvkv2dbb/tmp647lz5l1 TO /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539/AnsiballZ_efs_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539/ /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539/AnsiballZ_efs_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'AWS_CA_BUNDLE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539/AnsiballZ_efs_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1585850507.909464-82382026175539/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 662, in urlopen
self._prepare_proxy(conn)
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 948, in _prepare_proxy
conn.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 360, in connect
ssl_context=context,
File "/usr/local/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib64/python3.7/ssl.py", line 423, in wrap_socket
session=session
File "/usr/lib64/python3.7/ssl.py", line 870, in _create
self.do_handshake()
File "/usr/lib64/python3.7/ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
chunked=self._chunked(request.headers),
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.7/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 662, in urlopen
self._prepare_proxy(conn)
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 948, in _prepare_proxy
conn.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 360, in connect
ssl_context=context,
File "/usr/local/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib64/python3.7/ssl.py", line 423, in wrap_socket
session=session
File "/usr/lib64/python3.7/ssl.py", line 870, in _create
self.do_handshake()
File "/usr/lib64/python3.7/ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_efs_info_payload_1gra7zlw/ansible_efs_info_payload.zip/ansible/modules/cloud/amazon/efs_info.py", line 267, in get_file_systems
File "/tmp/ansible_efs_info_payload_1gra7zlw/ansible_efs_info_payload.zip/ansible/module_utils/cloud.py", line 153, in retry_func
raise e
File "/tmp/ansible_efs_info_payload_1gra7zlw/ansible_efs_info_payload.zip/ansible/module_utils/cloud.py", line 140, in retry_func
return f(*args, **kwargs)
File "/tmp/ansible_efs_info_payload_1gra7zlw/ansible_efs_info_payload.zip/ansible/modules/cloud/amazon/efs_info.py", line 208, in list_file_systems
File "/usr/local/lib/python3.7/site-packages/botocore/paginate.py", line 449, in build_full_result
for response in self:
File "/usr/local/lib/python3.7/site-packages/botocore/paginate.py", line 255, in __iter__
response = self._make_request(current_kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/paginate.py", line 332, in _make_request
return self._method(**current_kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 613, in _make_api_call
operation_model, request_dict, request_context)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 632, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 231, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/usr/local/lib/python3.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 281, in send
raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for https://elasticfilesystem.us-east-1.amazonaws.com/2015-02-01/file-systems [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
fatal: [localhost]: FAILED! => {
"boto3_version": "1.12.16",
"botocore_version": "1.15.16",
"changed": false,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"debug_botocore_endpoint_logs": false,
"ec2_url": null,
"id": null,
"name": null,
"profile": null,
"region": null,
"security_token": null,
"tags": {},
"targets": [],
"validate_certs": true
}
}
}
MSG:
Couldn't get EFS file systems: SSL validation failed for https://elasticfilesystem.us-east-1.amazonaws.com/2015-02-01/file-systems [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
PLAY RECAP *************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Moving issue from ansible repository to community.aws collections.
Tried to create multiple route53 health checks to the same server but with different resource paths, using with_items. Instead of creating one health check for each resource path, only the first is created.
ansible 2.2.2.0
- name: set up route53 health checks
connection: local
become: false
route53_health_check:
failure_threshold: 3
ip_address: '{{ ip_address }}'
port: 80
request_interval: 30
resource_path: '{{ item }}'
state: present
type: HTTP
register: result
with_items:
- /some_status1
- /other_status2
2 health checks created
result contains health_check.ids for each health check
1 health check created
result contains two items, both having the same health_check.id
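The likely cause is that the module's "find existing health check" comparison does not include the resource path, so the second loop iteration matches the check created by the first. A sketch of a matching key that would distinguish them (hypothetical helpers, not the module's code):

```python
def health_check_key(check):
    """Identity of a health check for idempotency matching. Including
    resource_path means /some_status1 and /other_status2 are distinct."""
    return (
        check["ip_address"],
        check["port"],
        check["type"],
        check.get("resource_path", ""),
    )

def find_existing(existing_checks, desired):
    """Return the already-created check that matches `desired`, or None."""
    for check in existing_checks:
        if health_check_key(check) == health_check_key(desired):
            return check
    return None
```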
Support assignment of AWS tags to IAM users.
iam_user
Tags are a standard, often crucial organizational part of AWS objects. It is the opinion of the PR author that tagging should be a requirement of new modules, where the underlying AWS object supports it.
# Create a user and attach a managed policy using its ARN
- iam_user:
name: testuser1
managed_policy:
- arn:aws:iam::aws:policy/AmazonSNSFullAccess
tags:
  department: corpdepartment
  repo: user_deployment_repo
  classification: sensitive
state: present
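Idempotent tag support usually boils down to a diff between current and desired tags; a minimal sketch of that comparison (mirroring the idea behind Ansible's compare_aws_tags helper, but written out here as an illustration):

```python
def tag_diff(current, desired, purge=True):
    """Return (to_set, to_remove): tags to add or update, and tag keys to
    delete. With purge=False, existing tags not in `desired` are kept."""
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_remove = [k for k in current if k not in desired] if purge else []
    return to_set, to_remove
```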
Resubmitted from ansible-collections/amazon.aws#15
The health_check_path option is ignored on an existing target group. Changing it gives a status of ok and the setting is not changed.
I suspect this affects more health check options, but I haven't tried them.
Turns out this is a duplicate of ansible/ansible#50024: you have to supply health_check_protocol in order for health_check_path to be considered. But the documentation still doesn't reflect that.
And really, the purpose of the protocol check is to ignore other params that only apply to the http and https protocols. The check should assume that health_check_protocol is the same as protocol if the former is not supplied, since that's what happens in reality. So I think this should remain open.
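The defaulting described above is a one-liner; a sketch of how the module could treat an omitted health_check_protocol (illustrative, not the current elb_target_group code):

```python
def effective_health_check_protocol(params):
    """If health_check_protocol is not supplied, assume it matches the
    target group's protocol -- which is what AWS does in reality -- so
    options like health_check_path are still considered."""
    return params.get("health_check_protocol") or params["protocol"]
```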
elb_target_group
2.9
Set the health_check_path option to something else.
changed status with updated value
ok status with no change
The current implementation of the aws_ssm connection plugin relies on the exported environment variables, or on a default connection profile being configured on the controller.
An ideal implementation would allow the task caller to pass an STS token, for example in cases where there is a cross-account trust policy and the node is able to retrieve such a session token and execute tasks in the target account.
This would also allow more versatile usage from the API, by dynamically assuming the target role's STS session and passing it to each invocation.
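In practice this means the plugin would build its boto3 client arguments from per-host vars instead of only the ambient environment. A hypothetical sketch of that resolution (the variable names follow the ansible_aws_ssm_* convention used in the example below; the helper itself is invented):

```python
def client_kwargs_from_vars(host_vars):
    """Map ansible_aws_ssm_* connection vars onto boto3 client arguments,
    falling back to the environment/profile when a var is absent."""
    mapping = {
        "ansible_aws_ssm_region": "region_name",
        "ansible_aws_ssm_access_key_id": "aws_access_key_id",
        "ansible_aws_ssm_secret_access_key": "aws_secret_access_key",
        "ansible_aws_ssm_session_token": "aws_session_token",
    }
    return {
        boto_arg: host_vars[var]
        for var, boto_arg in mapping.items()
        if host_vars.get(var) is not None
    }
```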
aws_ssm.py connection plugin
This is how a task can be called with all the sts parameters:
---
- hosts: all
vars:
ansible_aws_ssm_region: us-east-1
bucket_name: helper-bucket-flavioelawi
ansible_aws_ssm_access_key_id: <THE_ACCESS_KEY_ID>
ansible_aws_ssm_secret_access_key: <THE_SECRET_KEY>
ansible_aws_ssm_session_token: <THE_SESSION_TOKEN>
tasks:
- name: test stat
stat:
path: /etc/foo.conf
register: file_details
- debug:
msg: "file or dir exists"
when: file_details.stat.exists
When connecting to a Windows instance, the output may not be correctly parsed because $LASTEXITCODE is empty. I still haven't successfully executed a playbook on Windows using this connection plugin.
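The IndexError in _post_process (visible in the trace further down) comes from unconditionally indexing splitlines()[1] on the trailer. A defensive sketch that treats a missing or empty $LASTEXITCODE as a failure instead of crashing (hypothetical helper, not the plugin's actual implementation):

```python
def parse_exit_code(trailer):
    """Extract the exit code echoed after the end marker.

    The plugin expects the line after the marker to hold $LASTEXITCODE;
    on Windows that variable can be empty, leaving fewer lines than
    expected. Fall back to a nonzero code instead of raising IndexError.
    """
    lines = trailer.splitlines()
    if len(lines) > 1 and lines[1].strip().isdigit():
        return int(lines[1].strip())
    return 1  # unknown -> treat as failure and let the retry logic decide
```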
connection plugin aws_ssm.py
ansible 2.10.0.dev0
config file = None
configured module search path = ['/Users/flavioel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/flavioel/.virtualenvs/ansible_ssm/lib/python3.6/site-packages/ansible
executable location = /Users/flavioel/.virtualenvs/ansible_ssm/bin/ansible
python version = 3.6.10 (default, Jan 27 2020, 23:07:04) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
empty
Source: macOS 10.14.6, on Python 3.6
Target: Microsoft Windows Server 2016 Datacenter
SSM Agent version: 2.3.542.0
- hosts: aws_ec2
tasks:
- name: "Create file"
ansible.windows.win_file:
path: C:\Temp\foo.conf
state: touch
The playbook executes, and it either returns a success or a failure
I have added a traceback.print_exc()
in the wrapped
function, to get the visible stack trace.
TASK [Gathering Facts] ***************************************************************************************************
task path: /Users/flavioel/workplace/controller/windows_create_file.yml:1
<i-04c99feadf7cc554b> ESTABLISH SSM CONNECTION TO: i-04c99feadf7cc554b
<i-04c99feadf7cc554b> SSM COMMAND: ['/usr/local/bin/session-manager-plugin', '{"SessionId": "SESSION_REDACTED-0bf26d5cdb7b1a368", "TokenValue": "AAEAATKgop2KMV5Pm56w1z1JlmqGKzRnAc5WrJ7aaYCdCLYcAAAAAF6N6Fhu+Dq5kCKxyhapfWIM4rH//TaIqFsTfZJAXfN7nc7JjMD18AoVsmN4ZwlC7nF2OsEG3IKKrFNggtBZyLMYHaP+GsWHwS3sNu9nA73q3vIFVDlVlNIOlWrlHfTPoD58aYyx/tteSKPd4SWd3DJedb
Lb/qBkfSisgwgMtnUQLuuN0Hie4+pmRgAMMkeXaRYncwKiCYTm0+FcB170lEArzu+Lz+WJBf8suRAzl60BIePHB0669zqtanUiNwIO/wEocWcwVAeFnH/GERy+5I54jJgq6yhZ6W2nsRECNlV/jIZ9PhIjMVm+bmRpV+jsO60Zrg41cBITs6poa2/HOHf+BJPyIqpmAEgsVTykDOQS/ajvGQ==", "StreamUrl": "wss://ssmmessages.us-east-1.amazonaws.com/v1/data-channel/SESSION_REDACTED-0bf26d5cdb7b1a368?role=publish_subscribe"
, "ResponseMetadata": {"RequestId": "a6b1b9b9-4eda-4d5a-a7e8-649a9f8033d5", "HTTPStatusCode": 200, "HTTPHeaders": {"x-amzn-requestid": "a6b1b9b9-4eda-4d5a-a7e8-649a9f8033d5", "content-type": "application/x-amz-json-1.1", "content-length": "622", "date": "Wed, 08 Apr 2020 15:06:00 GMT"}, "RetryAttempts": 0}}', 'us-east-1', 'StartSession', '', '{"Targe
t": "i-04c99feadf7cc554b"}', 'https://ssm.us-east-1.amazonaws.com']
<i-04c99feadf7cc554b> SSM CONNECTION ID: SESSION_REDACTED-0bf26d5cdb7b1a368
<i-04c99feadf7cc554b> EXEC PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgAkAHQAbQBwAF8AcABhAHQAaAAgAD0AIABbAFMAeQBzAHQAZQBtAC4ARQBuAHYAaQByAG8AbgBtAGUAbgB0AF0AOgA6AEUAeABwAGEAbgBkAEUAbgB2AGkAcgBvAG4AbQBlAG4AdABWAGEAcgBpAGEAYgBsAGUAc
wAoACcAJQBUAEUATQBQACUAJwApAAoAJAB0AG0AcAAgAD0AIABOAGUAdwAtAEkAdABlAG0AIAAtAFQAeQBwAGUAIABEAGkAcgBlAGMAdABvAHIAeQAgAC0AUABhAHQAaAAgACQAdABtAHAAXwBwAGEAdABoACAALQBOAGEAbQBlACAAJwBhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADUAOAA2ADMANQA4ADMANQA5AC4AOAA3ADcANgAxADcALQAxADAANAA4ADcAMwA4ADAAMAAzADIAOAA1ADEANAAnAAoAVwByAGkAdABlAC0ATwB1AHQAcAB1AHQAIAAtAEkAbgBwAHUAd
ABPAGIAagBlAGMAdAAgACQAdABtAHAALgBGAHUAbABsAE4AYQBtAGUACgBJAGYAIAAoAC0AbgBvAHQAIAAkAD8AKQAgAHsAIABJAGYAIAAoAEcAZQB0AC0AVgBhAHIAaQBhAGIAbABlACAATABBAFMAVABFAFgASQBUAEMATwBEAEUAIAAtAEUAcgByAG8AcgBBAGMAdABpAG8AbgAgAFMAaQBsAGUAbgB0AGwAeQBDAG8AbgB0AGkAbgB1AGUAKQAgAHsAIABlAHgAaQB0ACAAJABMAEEAUwBUAEUAWABJAFQAQwBPAEQARQAgAH0AIABFAGwAcwBlACAAewAgAGUAeABpAHQAI
AAxACAAfQAgAH0A
<i-04c99feadf7cc554b> _wrap_command: 'PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgAkAHQAbQBwAF8AcABhAHQAaAAgAD0AIABbAFMAeQBzAHQAZQBtAC4ARQBuAHYAaQByAG8AbgBtAGUAbgB0AF0AOgA6AEUAeABwAGEAbgBkAEUAbgB2AGkAcgBvAG4AbQBlAG4AdABWAGEAcgBpAG
EAYgBsAGUAcwAoACcAJQBUAEUATQBQACUAJwApAAoAJAB0AG0AcAAgAD0AIABOAGUAdwAtAEkAdABlAG0AIAAtAFQAeQBwAGUAIABEAGkAcgBlAGMAdABvAHIAeQAgAC0AUABhAHQAaAAgACQAdABtAHAAXwBwAGEAdABoACAALQBOAGEAbQBlACAAJwBhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADUAOAA2ADMANQA4ADMANQA5AC4AOAA3ADcANgAxADcALQAxADAANAA4ADcAMwA4ADAAMAAzADIAOAA1ADEANAAnAAoAVwByAGkAdABlAC0ATwB1AHQAcAB1AHQAIAAtAE
kAbgBwAHUAdABPAGIAagBlAGMAdAAgACQAdABtAHAALgBGAHUAbABsAE4AYQBtAGUACgBJAGYAIAAoAC0AbgBvAHQAIAAkAD8AKQAgAHsAIABJAGYAIAAoAEcAZQB0AC0AVgBhAHIAaQBhAGIAbABlACAATABBAFMAVABFAFgASQBUAEMATwBEAEUAIAAtAEUAcgByAG8AcgBBAGMAdABpAG8AbgAgAFMAaQBsAGUAbgB0AGwAeQBDAG8AbgB0AGkAbgB1AGUAKQAgAHsAIABlAHgAaQB0ACAAJABMAEEAUwBUAEUAWABJAFQAQwBPAEQARQAgAH0AIABFAGwAcwBlACAAewAgAG
UAeABpAHQAIAAxACAAfQAgAH0A; echo dIpRQVkRkzKwiYCldGbnIXIsxo $LASTEXITCODE
echo xKPQoAYIrotmsawsGiYuMptuFX
'
<i-04c99feadf7cc554b> EXEC stdout line:
<i-04c99feadf7cc554b> EXEC stdout line: Starting session with SessionId: SESSION_REDACTED-0bf26d5cdb7b1a368
<i-04c99feadf7cc554b> EXEC remaining: 60
<i-04c99feadf7cc554b> EXEC stdout line: Windows PowerShell
<i-04c99feadf7cc554b> EXEC stdout line: Copyright (C) 2016 Microsoft Corporation. All rights reserved.
<i-04c99feadf7cc554b> EXEC stdout line:
PS C:\Windows\system32> echo xKPQoAYIrotmsawsGiYuMptuFractive -Exec
<i-04c99feadf7cc554b> EXEC stdout line: >> PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgAkAHQAbQBwAF8AcABhAHQAaAAgAD0AI
<i-04c99feadf7cc554b> EXEC stdout line: ABbAFMAeQBzAHQAZQBtAC4ARQBuAHYAaQByAG8AbgBtAGUAbgB0AF0AOgA6AEUAeABwAGEAbgBkAEUAbgB2AGkAcgBvAG4AbQBlAG4AdABWAGEAcgBpAGEAYgBsAGUAcwAoACcAJQBUAEUATQBQACUAJwApAAoAJAB0AG0AcAAgAD0AIABOAGUAdwAtAEkAdABlAG0AI
<i-04c99feadf7cc554b> EXEC stdout line: AAtAFQAeQBwAGUAIABEAGkAcgBlAGMAdABvAHIAeQAgAC0AUABhAHQAaAAgACQAdABtAHAAXwBwAGEAdABoACAALQBOAGEAbQBlACAAJwBhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADUAOAA2ADMANQA4ADMANQA5AC4AOAA3ADcANgAxADcALQAxADAANAA4ADcAM
<i-04c99feadf7cc554b> EXEC stdout line: wA4ADAAMAAzADIAOAA1ADEANAAnAAoAVwByAGkAdABlAC0ATwB1AHQAcAB1AHQAIAAtAEkAbgBwAHUAdABPAGIAagBlAGMAdAAgACQAdABtAHAALgBGAHUAbABsAE4AYQBtAGUACgBJAGYAIAAoAC0AbgBvAHQAIAAkAD8AKQAgAHsAIABJAGYAIAAoAEcAZQB0AC0AV
<i-04c99feadf7cc554b> EXEC stdout line: gBhAHIAaQBhAGIAbABlACAATABBAFMAVABFAFgASQBUAEMATwBEAEUAIAAtAEUAcgByAG8AcgBBAGMAdABpAG8AbgAgAFMAaQBsAGUAbgB0AGwAeQBDAG8AbgB0AGkAbgB1AGUAKQAgAHsAIABlAHgAaQB0ACAAJABMAEEAUwBUAEUAWABJAFQAQwBPAEQARQAgAH0AI
<i-04c99feadf7cc554b> EXEC stdout line: ABFAGwAcwBlACAAewAgAGUAeABpAHQAIAAxACAAfQAgAH0A; echo dIpRQVkRkzKwiYCldGbnIXIsxo $PS C:\Windows\system32>
<i-04c99feadf7cc554b> EXEC stdout line: >> echo xKPQoAYIrotmsawsGiYuMptuFX
<i-04c99feadf7cc554b> POST_PROCESS:
Traceback (most recent call last):
File "/Users/flavioel/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 209, in wrapped
return_tuple = func(self, *args, **kwargs)
File "/Users/flavioel/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 403, in exec_command
returncode, stdout = self._post_process(stdout, mark_begin)
File "/Users/flavioel/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 441, in _post_process
last_exit_code = trailer.splitlines()[1]
IndexError: list index out of range
<i-04c99feadf7cc554b> ssm_retry: attempt: 0, caught exception(list index out of range) from cmd (PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgAkAHQAbQBwAF8AcABhAHQAaAAgAD0AIABbAFMAeQBzAHQAZQBtAC4ARQBuAHYAaQByAG8AbgBtAGUAbgB0AF0AOgA
6AEUAeABwAGEAbgBkAEUAbgB2AGkAcgBvAG4AbQBlAG4AdABWAGEAcgBpAGEAYgBsAGUAcwAoACcAJQBUAEUATQBQACUAJwApAAoAJAB0AG0AcAAgAD0AIABOAGUAdwAtAEkAdABlAG0AIAAtAFQAeQBwAGUAIABEAGkAcgBlAGMAdABvAHIAeQAgAC0AUABhAHQAaAAgACQAdABtAHAAXwBwAGEAdABoACAALQBOAGEAbQBlACAAJwBhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADUAOAA2ADMANQA4ADMANQA5AC4AOAA3ADcANgAxADcALQAxADAANAA4ADcAMwA4ADAAMAA
zADIAOAA1ADEANAAnAAoAVwByAGkAdABlAC0ATwB1AHQAcAB1AHQAIAAtAEkAbgBwAHUAdABPAGIAagBlAGMAdAAgACQAdABtAHAALgBGAHUAbABsAE4AYQBtAGUACgBJAGYAIAAoAC0AbgBvAHQAIAAkAD8AKQAgAHsAIABJAGYAIAAoAEcAZQB0AC0AVgBhAHIAaQBhAGIAbABlACAATABBAFMAVABFAFgASQBUAEMATwBEAEUAIAAtAEUAcgByAG8AcgBBAGMAdABpAG8AbgAgAFMAaQBsAGUAbgB0AGwAeQBDAG8AbgB0AGkAbgB1AGUAKQAgAHsAIABlAHgAaQB0ACAAJAB
MAEEAUwBUAEUAWABJAFQAQwBPAEQARQAgAH0AIABFAGwAcwBlACAAewAgAGUAeABpAHQAIAAxACAAfQAgAH0A...), pausing for 0 seconds
<i-04c99feadf7cc554b> CLOSING SSM CONNECTION TO: i-04c99feadf7cc554b
^C<i-04c99feadf7cc554b> CLOSING SSM CONNECTION TO: i-04c99feadf7cc554b
[ERROR]: User interrupted execution
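A minimal sketch (illustrative names, not the plugin's actual code) of the parsing assumption that fails above: _post_process expects the exit code on the line following the marker, but when PowerShell echoes the wrapped command back, the marker first appears inside the echoed command text with no line after it.

```python
def parse_exit_code(stdout: str, mark: str) -> int:
    # Mirrors the shape of `last_exit_code = trailer.splitlines()[1]`:
    # take everything from the marker onward and read the next line.
    trailer = stdout[stdout.index(mark):]
    return int(trailer.splitlines()[1])

# Normal case: marker line, then the exit code on its own line.
parse_exit_code("some output\nMARKER\n0\n", "MARKER")  # returns 0

# Failure case: the terminal echoes the wrapped command, so the first
# occurrence of the marker sits inside `; echo MARKER $LASTEXITCODE`
# with nothing after it -> IndexError: list index out of range.
```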
When using the "elb_application_lb" module to create/modify rules for a listener, the modify action for an http-header condition type does not work. If the rule does not already exist, the module creates it correctly, but when it already exists, the module returns an error: KeyError: 'Values'.
I am able to modify this rule with the aws cli.
elb_application_lb for aws
ansible 2.9.6
config file = None
configured module search path = ['/home/dylan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dylan/.devops_envs/devops_smc/lib/python3.6/site-packages/ansible
executable location = /home/dylan/.devops_envs/devops_smc/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
Ubuntu 18
Create a playbook to create/modify the rule. On the first run it should successfully create the rule. On the second run, it returns an error.
- name: create Load Balancer
  elb_application_lb:
    state: present
    name: "{{ load_balancer_name }}"
    security_groups: "{{ elb_security_group_name }}"
    subnets: "{{ vpc_subnet_list }}"
    listeners:
      - Protocol: HTTPS
        Port: 443
        SslPolicy: ELBSecurityPolicy-2016-08
        Certificates:
          - CertificateArn: "{{ https_certificate_arn }}"
        DefaultActions:
          - Type: fixed-response
            FixedResponseConfig:
              ContentType: text/plain
              StatusCode: "503"
        Rules:
          - Conditions:
              - Field: http-header
                HttpHeaderConfig:
                  HttpHeaderName: 'User-Agent'
                  Values: '*Trident/7:0*rv:11*'
            Priority: '1'
            Actions:
              - Type: fixed-response
                FixedResponseConfig:
                  StatusCode: "200"
                  ContentType: "text/html"
                  MessageBody: "<b>Hello World!</b>"
  register: create_lb_result
I expect the http-header rule to be updated if it is changed.
An error is returned: KeyError: 'Values'
I am able to modify the rule with the AWS CLI command:
aws elbv2 modify-rule --actions Type="fixed-response",FixedResponseConfig='{MessageBody="<b>Hello World!</b>",ContentType="text/html",StatusCode="200"}' --conditions Field=http-header,HttpHeaderConfig='{HttpHeaderName='User-Agent',Values='*Trident/7.0*rv:11*'}' --rule-arn arn:myarn
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'Values'
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDERR:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/modules/cloud/amazon/elb_application_lb.py", line 612, in <module>
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/modules/cloud/amazon/elb_application_lb.py", line 606, in main
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/modules/cloud/amazon/elb_application_lb.py", line 488, in create_or_update_elb
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/module_utils/aws/elbv2.py", line 809, in compare_rules
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/module_utils/aws/elbv2.py", line 784, in _compare_rule
File "/tmp/ansible_elb_application_lb_payload_8gfeee49/ansible_elb_application_lb_payload.zip/ansible/module_utils/aws/elbv2.py", line 720, in _compare_condition
KeyError: 'Values'
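A sketch of the comparison bug (the helper shape is illustrative, based on the traceback into _compare_condition): http-header conditions carry their values under HttpHeaderConfig rather than a top-level Values key, so an unconditional current['Values'] lookup raises KeyError for existing http-header rules. A fix would branch on the condition field, e.g.:

```python
def compare_condition(current: dict, new: dict) -> bool:
    # http-header conditions nest their values under HttpHeaderConfig;
    # reading the top-level 'Values' key raises KeyError for them.
    # (Header-name comparison omitted for brevity.)
    if new.get('Field') == 'http-header':
        return (current.get('HttpHeaderConfig', {}).get('Values')
                == new.get('HttpHeaderConfig', {}).get('Values'))
    # host-header, path-pattern, etc. do use a top-level Values list.
    return sorted(current.get('Values', [])) == sorted(new.get('Values', []))
```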
Using the ssm plugin, playbooks fail at the gathering facts stage, with:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: "echo $'\\n'$?"
fatal: [i-xxxxx]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
If using gather_facts: false, the same error occurs at the first task.
As far as I can tell, SSM is configured correctly: I can run aws ssm start-session --target i-xxxxx from my ansible host and successfully get a shell on the target host.
ssm connection plugin
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ubuntu/.local/lib/python3.8/site-packages/ansible
executable location = /home/ubuntu/.local/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['aws_ec2']
Both the instance running Ansible and the target (SSM-managed) instance are regular ubuntu 20.04 AMIs
aws ssm start-session --target i-xxxxx
# Playbook
- hosts: all
  vars:
    ansible_connection: community.aws.aws_ssm
    ansible_aws_ssm_region: eu-west-2 # substitute for your region
  tasks:
    - name: test
      command:
        cmd: ls -l
# aws_ec2.yaml inventory
plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - prefix: tag
    key: tags
  - prefix: aws_region
    key: placement.region
hostnames:
  - instance-id
compose:
  ansible_host: instance-id
Run:
ansible-playbook -i aws_ec2.yaml -c community.aws.aws_ssm test-playbook.yaml
Playbook runs successfully with no errors
Playbook errors on gathering facts stage, with:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: "echo $'\\n'$?"
fatal: [i-xxxxx]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
With -vvvv verbosity:
ansible-playbook 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ubuntu/.local/lib/python3.8/site-packages/ansible
executable location = /home/ubuntu/.local/bin/ansible-playbook
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /home/ubuntu/aws_ec2.yaml inventory source with aws_ec2 plugin
Loading callback plugin default of type stdout, v2.0 from /home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: test_ssm.yaml ****************************************************************************************************************************************************************************************
Positional arguments: test_ssm.yaml
verbosity: 4
connection: community.aws.aws_ssm
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/ubuntu/aws_ec2.yaml',)
forks: 5
1 plays in test_ssm.yaml
PLAY [all] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************************************************
task path: /home/ubuntu/test_ssm.yaml:1
<i-xxxxx> ESTABLISH SSM CONNECTION TO: i-xxxxx
<i-xxxxx> SSM COMMAND: ['/usr/local/bin/session-manager-plugin', '{"SessionId": "i-xxxxx-yyyyy", "TokenValue": "XXXXX", "StreamUrl": "wss://ssmmessages.eu-west-2.amazonaws.com/v1/data-channel/i-yyyyy-zzzzz?role=publish_subscribe", "ResponseMetadata": {"RequestId": "xxxx-xxxx-xxxx-xxxx", "HTTPStatusCode": 200, "HTTPHeaders": {"x-amzn-requestid": "xxxx-xxxx-xxxx-xxxx-xxxx", "content-type": "application/x-amz-json-1.1", "content-length": "626", "date": "Fri, 19 Jun 2020 14:41:52 GMT"}, "RetryAttempts": 0}}', 'eu-west-2', 'StartSession', '', '{"Target": "i-xxxxx"}', 'https://ssm.eu-west-2.amazonaws.com']
<i-xxxxx> SSM CONNECTION ID: i-xxxx-xxxxx
<i-xxxxx> EXEC echo ~
<i-xxxxx> _wrap_command: 'echo XXXXX
echo ~
echo $'\n'$?
echo YYYYY
'
<i-xxxxx> EXEC stdout line:
<i-xxxxx> EXEC stdout line: Starting session with SessionId: i-xxxxx-yyyyy
<i-xxxxx> EXEC remaining: 60
<i-xxxxx> EXEC stdout line: $ stty -echo
<i-xxxxx> EXEC stdout line: PS1=''
<i-xxxxx> EXEC stdout line: echo XXXXX
<i-xxxxx> EXEC stdout line: echo ~
<i-xxxxx> EXEC stdout line: echo $'\n'$?
<i-xxxxx> EXEC stdout line: echo YYYYY
<i-xxxxx> POST_PROCESS: echo ~
echo $'\n'$?
<i-xxxxx> ssm_retry: attempt: 0, caught exception(invalid literal for int() with base 10: "echo $'\\n'$?") from cmd (echo ~...), pausing for 0 seconds
<i-xxxxx> CLOSING SSM CONNECTION TO: i-xxxxx
<i-xxxxx> TERMINATE SSM SESSION: i-xxxxx-yyyyy
<i-xxxxx> ESTABLISH SSM CONNECTION TO: i-xxxxx
<i-xxxxx> SSM COMMAND: ['/usr/local/bin/session-manager-plugin', '{"SessionId": "i-xxxxx-yyyyy", "TokenValue": "XXXXX", "StreamUrl": "wss://ssmmessages.eu-west-2.amazonaws.com/v1/data-channel/i-yyyyy-zzzzz?role=publish_subscribe", "ResponseMetadata": {"RequestId": "xxxx-xxxx-xxxx-xxxx-xxxx", "HTTPStatusCode": 200, "HTTPHeaders": {"x-amzn-requestid": "xxxx-xxxx-xxxx-xxxx-xxxx", "content-type": "application/x-amz-json-1.1", "content-length": "626", "date": "Fri, 19 Jun 2020 14:41:53 GMT"}, "RetryAttempts": 0}}', 'eu-west-2', 'StartSession', '', '{"Target": "i-xxxxx"}', 'https://ssm.eu-west-2.amazonaws.com']
<i-xxxxx> SSM CONNECTION ID: i-xxxx-xxxx
<i-xxxxx> EXEC echo ~
<i-xxxxx> _wrap_command: 'echo YYYYY
echo ~
echo $'\n'$?
echo ZZZZZ
'
<i-xxxxx> EXEC stdout line:
<i-xxxxx> EXEC stdout line: Starting session with SessionId: i-xxxx-yyyy
<i-xxxxx> EXEC remaining: 60
<i-xxxxx> EXEC stdout line: $ stty -echo
<i-xxxxx> EXEC stdout line: PS1=''
<i-xxxxx> EXEC stdout line: echo YYYYY
<i-xxxxx> EXEC stdout line: echo ~
<i-xxxxx> EXEC stdout line: echo $'\n'$?
<i-xxxxx> EXEC stdout line: echo ZZZZZ
<i-xxxxx> POST_PROCESS: echo ~
echo $'\n'$?
<i-xxxxx> CLOSING SSM CONNECTION TO: i-xxxxx
<i-xxxxx> TERMINATE SSM SESSION: i-xxxxx-yyyyy
The full traceback is:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 146, in run
res = self._execute()
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 645, in _execute
result = self._handler.run(task_vars=variables)
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/action/gather_facts.py", line 79, in run
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/action/__init__.py", line 780, in _execute_module
self._make_tmp_path()
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/action/__init__.py", line 343, in _make_tmp_path
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/action/__init__.py", line 664, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/home/ubuntu/.local/lib/python3.8/site-packages/ansible/plugins/action/__init__.py", line 1075, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/home/ubuntu/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 197, in wrapped
return_tuple = func(self, *args, **kwargs)
File "/home/ubuntu/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 389, in exec_command
returncode, stdout = self._post_process(stdout, mark_begin)
File "/home/ubuntu/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 442, in _post_process
returncode = int(stdout.splitlines()[-2])
ValueError: invalid literal for int() with base 10: "echo $'\\n'$?"
fatal: [i-xxxxx]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
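The same post-processing shape fails here for a different reason: the echoed command text is still present in stdout (stty -echo has not taken effect yet), so the slot that should hold the return code holds the literal line echo $'\n'$?. A sketch (function names are illustrative, not the plugin's code):

```python
def parse_returncode(stdout: str) -> int:
    # The plugin reads the return code from near the end of stdout
    # (`returncode = int(stdout.splitlines()[-2])`); when the echoed
    # command text survives, that slot is "echo $'\n'$?" -> ValueError.
    return int(stdout.splitlines()[-2])

def parse_returncode_filtered(stdout: str) -> int:
    # Hypothetical guard: drop echoed command lines before parsing.
    lines = [l for l in stdout.splitlines()
             if not l.lstrip().startswith("echo ")]
    return int(lines[-2])
```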
The current tags option will set the tags on both the instances and volumes created by the launch template. This interferes with use by EC2 Fleet, and generally it would be better if we could specify tags for instances and volumes independently.
ec2_launch_template
Launch templates that tag volumes unfortunately do not work with EC2 Fleet. The policy for the AWSServiceRoleForEC2Fleet service role does not have permission to create tags on volumes (and neither the policy nor the role is editable, so the permission cannot be granted). The net effect is that if you use Ansible to create the launch template, and you use tags, then you can't use that launch template with EC2 Fleet.
I'm thinking that new instance_tags and volume_tags options could be added to tag instances and volumes independently.
- name: Create an ec2 launch template with tags on instances and volumes
  ec2_launch_template:
    name: "my_template"
    image_id: "ami-04b762b4289fba92b"
    key_name: my_ssh_key
    instance_type: t2.micro
    iam_instance_profile: myTestProfile
    instance_tags:
      Name: some_instance
    volume_tags:
      Purpose: some_storage
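For reference, the split would map onto the TagSpecifications structure of the CreateLaunchTemplate API (the ResourceType values are real API names; the helper below is an illustrative sketch, not the module's code):

```python
def tag_specifications(instance_tags: dict, volume_tags: dict) -> list:
    """Build separate TagSpecifications entries for instances and volumes."""
    def to_aws(tags):
        # Convert an Ansible-style tag dict to the AWS Key/Value list form.
        return [{"Key": k, "Value": v} for k, v in sorted(tags.items())]
    specs = []
    if instance_tags:
        specs.append({"ResourceType": "instance", "Tags": to_aws(instance_tags)})
    if volume_tags:
        specs.append({"ResourceType": "volume", "Tags": to_aws(volume_tags)})
    return specs
```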
The sample for the vgw_telemetry return value contains datetime(2015, 1, 1):
vgw_telemetry:
  type: list
  returned: I(state=present)
  description: The telemetry for the VPN tunnel.
  sample:
    vgw_telemetry: [{
      'outside_ip_address': 'string',
      'status': 'up',
      'last_status_change': datetime(2015, 1, 1),
      'status_message': 'string',
      'accepted_route_count': 123
    }]
The inner dict parses as (list of items):
[('outside_ip_address', 'string'), ('status', 'up'), ('last_status_change', 'datetime(2015'), (1, None), ('1)', None), ('status_message', 'string'), ('accepted_route_count', 123)]
When ansible-doc tries to serialize this as JSON, it crashes. The docs are also rendered incorrectly.
(See also ansible/ansible#69031.)
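One way to keep the sample valid YAML is to quote the timestamp (a doc-fix sketch; the ISO 8601 string below is illustrative):

```yaml
vgw_telemetry:
  type: list
  returned: I(state=present)
  description: The telemetry for the VPN tunnel.
  sample:
    vgw_telemetry:
      - outside_ip_address: 'string'
        status: 'up'
        last_status_change: '2015-01-01T00:00:00'
        status_message: 'string'
        accepted_route_count: 123
```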
vgw_telemetry
2.9
2.10
From @arossouw on Jun 19, 2020 09:32
When I try to get a list of EC2 instances with ec2.py --list while I have an EC2 instance active on AWS in the af-south-1 region, I get the following output:
{
"_meta": {
"hostvars": {}
}
}
When I stop that instance and create an instance in the Ohio (us-east-2) region, I do get the expected output of EC2 instances.
I've set region=all in ec2.ini.
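A likely contributing factor (an assumption, not confirmed in this issue): region=all expands to the region list baked into the installed boto release, and boto (v2) predates the af-south-1 opt-in region, so that region is never queried. Listing the region explicitly in ec2.ini would sidestep that:

```ini
# ec2.ini sketch: name opt-in regions explicitly instead of relying on
# 'all', which only covers the regions the installed boto knows about.
regions = af-south-1,us-east-2
```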
Amazon EC2
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/arno/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.7 (default, Mar 24 2020, 11:13:41) [Clang 11.0.0 (clang-1100.0.33.12)]
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
Mac OS Catalina
---
- hosts: all
  gather_facts: yes
  user: ubuntu
  vars_files:
    - vars.yml
  vars:
    ansible_ssh_private_key_file: "{{ key1 }}"
  become: yes
  tasks:
    - shell: /usr/bin/uptime
      register: result
    - debug:
        var: result
        verbosity: 2
    - name: create a test file
      file:
        path: "/home/ubuntu/testfile.txt"
        state: touch
Expected ec2.py --list to return a list of EC2 instances when I have an instance
running within the af-south-1 region.
Get an empty list when running ec2.py --list
ec2.py --list
Copied from original issue: ansible/ansible#70164
cloudfront_distribution compares the current and desired Aliases/LambdaFunctionAssociations configuration directly, meaning changes in the ordering of items in the lists cause an unnecessary update.
cloudfront_distribution
cloudfront_distribution:
  state: present
  ...
  aliases: ...
List ordering for Aliases/LambdaFunctionAssociations should be ignored for idempotency checks.
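A sketch of an order-insensitive check that would make these comparisons idempotent (helper names are illustrative, not the module's code):

```python
def aliases_equal(current: list, desired: list) -> bool:
    # Alias order is irrelevant to CloudFront, so compare sorted copies.
    return sorted(current) == sorted(desired)

def lambda_associations_equal(current: list, desired: list) -> bool:
    # Sort association dicts by a stable key before comparing.
    key = lambda a: (a.get("EventType", ""), a.get("LambdaFunctionARN", ""))
    return sorted(current, key=key) == sorted(desired, key=key)
```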
--- before
+++ after
@@ -1,32 +1,30 @@
{
"Aliases": {
"Items": [
+ "aa.redacted.com",
+ "as.redacted.com",
+ "bs.redacted.com",
+ "ca.redacted.com",
+ "cs.redacted.com",
+ "ha.redacted.com",
+ "is.redacted.com",
+ "la.redacted.com",
+ "ls.redacted.com",
+ "redacted.com",
"rs.redacted.com",
"sa.redacted.com",
- "is.redacted.com",
- "la.redacted.com",
- "ca.redacted.com",
- "as.redacted.com",
- "ls.redacted.com",
- "redacted.com",
- "ss.redacted.com",
- "bs.redacted.com",
- "cs.redacted.com",
- "ha.redacted.com",
- "aa.redacted.com"
+ "ss.redacted.com"
]
},
"DefaultCacheBehavior": {
"LambdaFunctionAssociations": {
"Items": [
{
- "EventType": "origin-response",
- "IncludeBody": false,
+ "EventType": "origin-request",
"LambdaFunctionARN": "arn:aws:lambda:us-east-1:redacted"
},
{
- "EventType": "origin-request",
- "IncludeBody": false,
+ "EventType": "origin-response",
"LambdaFunctionARN": "arn:aws:lambda:us-east-1:redacted"
}
]
On success, ec2_win_password returns a changed state.
However, it doesn't actually change anything.
changed: [pdc.services.hq.adct -> localhost] => {
"changed": true,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"debug_botocore_endpoint_logs": false,
"ec2_url": null,
"instance_id": "i-0216d76aa054cd14d",
"key_data": null,
"key_file": "- cut -",
"key_passphrase": null,
"profile": null,
"region": "ap-southeast-2",
"security_token": null,
"validate_certs": true,
"wait": false,
"wait_timeout": 120
}
},
"win_password": "-cut-"
}
Happy to prepare a pull request to change this; the code change is minor.
However, this is backwards incompatible, as it changes existing behaviour.
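Conceptually, the proposed change is small (an illustrative sketch, not the module's code):

```python
def build_result(win_password: str) -> dict:
    # ec2_win_password only reads and decrypts the instance password;
    # nothing on AWS is mutated, so the module should report
    # changed=False instead of changed=True.
    return {"changed": False, "win_password": win_password}
```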
Copied from ansible/ansible#69017
Updating the Listener Rules of an existing elbv2 load balancer via elb_application_lb successfully modifies the rules but raises an exception while parsing the response. If no update is made (e.g. the playbook is run again after the failed run), then it correctly returns [OK].
elb_application_lb
Run playbook with some ELB rules, creating the ELB for the first time - Observe success
elb_application_lb:
  state: present
  ...
  listeners:
    ...
    Rules:
      ...
      - Type: fixed-response
        FixedResponseConfig:
          StatusCode: "403"
          ContentType: "text/plain"
          MessageBody: Forbidden
Modify parameters of one of the rules slightly, re-run playbook - Observe failure, but note that rules have updated in AWS console.
Re-run playbook again - Observe [OK].
The particular playbook I see this on has one source-ip rule, 5-13 host-header rules with forward actions, and a fixed-response default action. I'm not sure if it's specifically any of those which are causing the issue or if it happens more generally.
[WARNING]: Module invocation had junk after the JSON data:
... (massive blob of JSON representing the elb configuration)
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "", "module_stdout": "modified_rule:" ... (massive blob of JSON representing the elb configuration) ... \\n", "msg": "MODULE FAILURE\\nSee stdout/stderr for the exact error", "rc": 0}
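The symptom matches stray non-JSON output on the module's stdout: Ansible expects a single JSON document there, so any leftover debug print (such as the modified_rule: prefix above) becomes "junk after the JSON data". A minimal illustration (helper is hypothetical):

```python
import json

def module_stdout(result: dict, stray_debug: str = None) -> str:
    # Modules must emit exactly one JSON document on stdout; anything
    # else mixed in makes the controller's parse fail.
    out = (stray_debug + "\n") if stray_debug else ""
    return out + json.dumps(result)
```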
Migrated from ansible/ansible#68711
When using the route53 module with assumed-role-based authentication, the module will fail with an error like:
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py", line 102, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.route53', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/local/lib/python3.7/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_route53_payload_55fkdb4s/ansible_route53_payload.zip/ansible/modules/cloud/amazon/route53.py", line 701, in <module>
File "/tmp/ansible_route53_payload_55fkdb4s/ansible_route53_payload.zip/ansible/modules/cloud/amazon/route53.py", line 595, in main
File "/usr/local/lib/python3.7/site-packages/boto/route53/connection.py", line 88, in __init__
profile_name=profile_name)
File "/usr/local/lib/python3.7/site-packages/boto/connection.py", line 555, in __init__
profile_name)
File "/usr/local/lib/python3.7/site-packages/boto/provider.py", line 201, in __init__
self.get_credentials(access_key, secret_key, security_token, profile_name)
File "/usr/local/lib/python3.7/site-packages/boto/provider.py", line 297, in get_credentials
profile_name)
boto.provider.ProfileNotFoundError: Profile "my-profile" not found!
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1586184741.8429592-23677387734274/AnsiballZ_route53.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.route53', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/local/lib/python3.7/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/lib/python3.7/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_route53_payload_55fkdb4s/ansible_route53_payload.zip/ansible/modules/cloud/amazon/route53.py\", line 701, in <module>\n File \"/tmp/ansible_route53_payload_55fkdb4s/ansible_route53_payload.zip/ansible/modules/cloud/amazon/route53.py\", line 595, in main\n File \"/usr/local/lib/python3.7/site-packages/boto/route53/connection.py\", line 88, in __init__\n profile_name=profile_name)\n File \"/usr/local/lib/python3.7/site-packages/boto/connection.py\", line 555, in __init__\n profile_name)\n File \"/usr/local/lib/python3.7/site-packages/boto/provider.py\", line 201, in __init__\n self.get_credentials(access_key, secret_key, security_token, profile_name)\n File \"/usr/local/lib/python3.7/site-packages/boto/provider.py\", line 297, in get_credentials\n profile_name)\nboto.provider.ProfileNotFoundError: Profile \"my-profile\" not found!\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
May be related to ansible/ansible#41185, but this is a bug, not a feature request, as this method of authentication with boto is available and works fine with other modules.
route53 module
ansible 2.9.6
config file = None
configured module search path = ['/home/gitops/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Aug 21 2019, 00:19:59) [GCC 8.3.0]
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/gitops/inventories/infra-dev']
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /gitops/.vault/infra-dev
$ cat /etc/*release
3.10.2
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.2
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
Using an AWS config that defines a profile route53-role-profile assuming a role, such as:
# content of ~/.aws/config
[profile route53-source-profile]
region = eu-central-1
[profile route53-role-profile]
region = eu-central-1
role_arn = arn:aws:iam::12345678910:role/Route53Role
source_profile = route53-source-profile
# content of ~/.aws/credentials
[route53-source-profile]
aws_access_key_id = XXXX
aws_secret_access_key = secret
A task such as the following will cause the mentioned error:
# Use profile assuming our Role
# Causes the mentioned bug
- route53:
    state: present
    profile: route53-role-profile
    hosted_zone_id: "my.zone.ai"
    record: "*.my.zone.ai"
    type: CNAME
    value: "0.0.0.0"
The same result occurs when using the AWS_PROFILE environment variable instead of profile:.
But using the profile on which the access keys are configured directly works fine:
# Works fine
- route53:
    state: present
    profile: route53-source-profile
    hosted_zone_id: "my.zone.ai"
    record: "*.my.zone.ai"
    type: CNAME
    value: "0.0.0.0"
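A workaround sketch (untested against this setup): assume the role explicitly with the sts_assume_role module and hand the temporary credentials to route53. The role ARN and record values below are taken from the example config above.

```yaml
# Assume the role ourselves, then pass the temporary credentials on.
- sts_assume_role:
    role_arn: arn:aws:iam::12345678910:role/Route53Role
    role_session_name: route53-workaround
  register: assumed
- route53:
    state: present
    aws_access_key: "{{ assumed.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed.sts_creds.secret_key }}"
    security_token: "{{ assumed.sts_creds.session_token }}"
    hosted_zone_id: "my.zone.ai"
    record: "*.my.zone.ai"
    type: CNAME
    value: "0.0.0.0"
```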
Using the AWS CLI to perform similar actions with this config works fine.
Expected the route53 module to use boto and properly assume the configured role to execute the task.
The module fails with the same ProfileNotFoundError ('Profile "my-profile" not found!') traceback and fatal output shown above.