
Kayobe Configuration for "A Universe from Nothing: Containerised OpenStack deployment using Kolla, Ansible and Kayobe"

This repository may be used as a workshop to configure, deploy and get hands-on with OpenStack Kayobe.

It provides a configuration and walkthrough for the Kayobe project based on the configuration provided by the kayobe-config repository. It deploys a containerised OpenStack environment using Kolla, Ansible and Kayobe.

Select the Git branch of this repository for the OpenStack release you are interested in, and follow the README.

Requirements

For this workshop we require the use of a single server, configured as a seed hypervisor. This server should be a bare metal node or VM running CentOS 8, with the following minimum requirements:

  • 32GB RAM
  • 80GB disk

We will also need SSH access to the seed hypervisor, and passwordless sudo configured for the login user.
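A quick way to confirm SSH access and passwordless sudo is to run a non-interactive sudo command over SSH (the IP address is a placeholder):

# Should print "ok" without prompting for a password.
ssh centos@<ip> 'sudo -n true && echo ok'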

Exercise

On the seed hypervisor we will deploy three VMs:

  • 1 seed
  • 1 controller
  • 1 compute node

The seed runs a standalone Ironic service. The controller and compute node are 'virtual bare metal' hosts, and we will use the seed to provision them with an OS. Next we'll deploy OpenStack services on the controller and compute node.

At the end you'll have a miniature OpenStack cluster that you can use to test out booting an instance using Nova, access the Horizon dashboard, etc.

Usage

There are four parts to this guide:

Preparation has instructions for preparing the seed hypervisor for the exercise and fetching the necessary source code.

Deploying a Seed includes all instructions necessary to download and install the Kayobe prerequisites on a plain CentOS 8 cloud image, including provisioning and configuration of a seed VM. Optionally, snapshot the instance after this step to reduce setup time in future.

A Universe from a Seed contains all instructions necessary to deploy from a host running a seed VM. An image suitable for this can be created via Optional: Creating a Seed Snapshot.

Once the control plane has been deployed, see Next Steps for some ideas of what to try next.

Preparation

This shows how to prepare the seed hypervisor for the exercise. It assumes you have created a seed hypervisor instance fitting the requirements above and have already logged in (e.g. ssh centos@<ip>).

# Install git and tmux.
if command -v dnf >/dev/null 2>&1; then
    sudo dnf -y install git tmux
else
    sudo apt update
    sudo apt -y install git tmux
fi

# Disable the firewall.
sudo systemctl is-enabled firewalld && sudo systemctl stop firewalld && sudo systemctl disable firewalld

# Disable SELinux both immediately and permanently.
if command -v setenforce >/dev/null 2>&1; then
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
fi

# Prevent sudo from making DNS queries.
echo 'Defaults  !fqdn' | sudo tee /etc/sudoers.d/no-fqdn
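
# Optional sanity check: validate the sudoers drop-in syntax
# (visudo exits non-zero if the file is malformed).
sudo visudo -c -f /etc/sudoers.d/no-fqdn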

# Optional: start a new tmux session in case we lose our connection.
tmux

# Start at home.
cd

# Clone Kayobe.
git clone https://opendev.org/openstack/kayobe.git -b master
cd kayobe

# Clone the Tenks repository.
git clone https://opendev.org/openstack/tenks.git

# Clone this Kayobe configuration.
mkdir -p config/src
cd config/src/
git clone https://github.com/stackhpc/a-universe-from-nothing.git kayobe-config -b master

# Configure host networking (bridge, routes & firewall)
./kayobe-config/configure-local-networking.sh

# Install kayobe.
cd ~/kayobe
./dev/install-dev.sh

Deploying a Seed

This shows how to create an image suitable for deploying Kayobe. It assumes you have created a seed hypervisor instance fitting the requirements above and have already logged in (e.g. ssh centos@<ip>), and performed the necessary Preparation.

cd ~/kayobe

# Activate the Kayobe environment, to allow running commands directly.
source ~/kayobe-venv/bin/activate
source config/src/kayobe-config/kayobe-env

# Bootstrap the Ansible control host.
kayobe control host bootstrap

# Configure the seed hypervisor host.
kayobe seed hypervisor host configure

# Provision the seed VM.
kayobe seed vm provision

# Configure the seed host, and deploy a local registry.
kayobe seed host configure

# Pull, retag images, then push to our local registry.
./config/src/kayobe-config/pull-retag-push-images.sh

# Deploy the seed services.
kayobe seed service deploy

# Deploying the seed restarts the networking interface, so run
# configure-local-networking.sh again to re-add the routes.
./config/src/kayobe-config/configure-local-networking.sh

# Optional: Shutdown the seed VM if creating a seed snapshot.
sudo virsh shutdown seed

If required, add any additional SSH public keys to /home/centos/.ssh/authorized_keys.
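For example (the key path is illustrative):

# Append an extra public key to the login user's authorized_keys.
cat ~/.ssh/extra_key.pub >> /home/centos/.ssh/authorized_keys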

Optional: Creating a Seed Snapshot

If necessary, take a snapshot of the hypervisor instance at this point to speed up this process in future.
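If the seed hypervisor is itself an OpenStack instance, the snapshot might be taken with the OpenStack CLI; a minimal sketch, run from a host with credentials for that cloud (server and image names are illustrative):

# Stop the instance, then create an image from it.
openstack server stop seed-hypervisor
openstack server image create --name seed-snapshot seed-hypervisor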

You are now ready to deploy a control plane using this host or snapshot.

A Universe from a Seed

This shows how to deploy a control plane from a VM image that contains a pre-deployed seed VM, or a host that has run through the steps in Deploying a Seed.

Having a snapshot image saves us some time if we need to repeat the deployment. If working from a snapshot, create a new instance with the same dimensions as the Seed image and log in to it. Otherwise, continue working with the instance from Deploying a Seed.
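For example, a new instance might be booted from the snapshot as follows; a sketch assuming the snapshot image is named seed-snapshot (flavor and key names are illustrative):

openstack server create --image seed-snapshot \
          --flavor <original-flavor> \
          --key-name <your-key> seed-hypervisor-2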

# Optional: start a new tmux session in case we lose our connection.
tmux

# Set working directory
cd ~/kayobe

# Configure non-persistent networking, if the node has rebooted.
./config/src/kayobe-config/configure-local-networking.sh

Make sure that the seed VM (running Bifrost and supporting services) is present and running.

# Check if the seed VM is present and running.
sudo virsh list --all

# Start up the seed VM if it is shut off.
sudo virsh start seed
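
The seed VM can take a short while to boot; a simple way to wait for it to accept SSH connections (a sketch, using the seed IP listed under Next Steps):

# Poll until SSH to the seed VM succeeds.
until ssh -o ConnectTimeout=5 stack@192.168.33.5 true; do sleep 5; done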

We use the Tenks project to model some 'bare metal' VMs for the controller and compute node. Here we set up our model development environment, alongside the seed VM.

# NOTE: Make sure to use ./tenks, since just 'tenks' will install via PyPI.
export TENKS_CONFIG_PATH=config/src/kayobe-config/tenks.yml
./dev/tenks-deploy-overcloud.sh ./tenks

# Activate the Kayobe environment, to allow running commands directly.
source dev/environment-setup.sh

# Inspect and provision the overcloud hardware:
kayobe overcloud inventory discover
kayobe overcloud hardware inspect
kayobe overcloud introspection data save
kayobe overcloud provision

Configure and deploy OpenStack to the control plane (following Kayobe host configuration documentation):

kayobe overcloud host configure
kayobe overcloud container image pull
kayobe overcloud service deploy
source config/src/kayobe-config/etc/kolla/public-openrc.sh
kayobe overcloud post configure

At this point it should be possible to access the Horizon GUI via the server's public IP address, using port 80 (achieved through port forwarding to the controller VM). Use the admin credentials from OS_USERNAME and OS_PASSWORD to get in.
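The credentials can be read from the environment after sourcing the openrc file, for example:

source config/src/kayobe-config/etc/kolla/public-openrc.sh
echo $OS_USERNAME $OS_PASSWORD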

The following script will register some resources (keys, flavors, networks, images, etc.) in OpenStack to enable booting up a tenant VM:

source config/src/kayobe-config/etc/kolla/public-openrc.sh
./config/src/kayobe-config/init-runonce.sh

Following the instructions displayed by the above script, boot a VM. You'll need to have activated the ~/os-venv virtual environment.

source ~/os-venv/bin/activate
openstack server create --image cirros \
          --flavor m1.tiny \
          --key-name mykey \
          --network demo-net demo1

# Assign a floating IP to the server to make it accessible.
openstack floating ip create public1
fip=$(openstack floating ip list -f value -c 'Floating IP Address' --status DOWN | head -n 1)
openstack server add floating ip demo1 $fip

# Check SSH access to the VM.
ssh cirros@$fip

# If the ssh command above fails, you may need to reconfigure the
# local networking setup again:
~/kayobe/config/src/kayobe-config/configure-local-networking.sh

Note: when accessing the VNC console of an instance via Horizon, you will be sent to the internal IP address of the controller, 192.168.33.2, which will fail. Open the console-only display link in a new browser tab and replace this IP in the address bar with the public IP of the hypervisor host.
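Alternatively, the console URL can be fetched on the command line and the internal IP swapped out; a sketch assuming the OpenStack venv is active and <public-ip> stands for the hypervisor's public IP:

openstack console url show demo1 -f value -c url \
    | sed 's/192.168.33.2/<public-ip>/'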

That's it, you're done!

Next Steps

Here are some ideas for things to explore with the deployment:

Exploring the Deployment

Once the VMs become available, they should be accessible via SSH as the centos or stack user at the following IP addresses:

Host         IP
seed         192.168.33.5
controller0  192.168.33.3
compute0     192.168.33.6

The control plane services are run in Docker containers, so try using the docker CLI to inspect the system.

# List containers
docker ps
# List images
docker images
# List volumes
docker volume ls
# Inspect a container
docker inspect <container name>
# Execute a process in a container
docker exec -it <container> <command>

The Kolla container configuration is generated under /etc/kolla on the seed and overcloud hosts; each container has its own directory that is bind-mounted into the container.

Log files are stored in the kolla_logs docker volume, which is mounted at /var/log/kolla in each container. They can be accessed on the host at /var/lib/docker/volumes/kolla_logs/_data/.
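For example, to follow a service's log file on the controller (the exact log path is illustrative):

ssh stack@192.168.33.3 sudo tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-api.log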

Exploring Tenks & the Seed

Verify that Tenks has created controller0 and compute0 VMs:

sudo virsh list --all

Verify that VirtualBMC is running:

/usr/local/bin/vbmc list
+-------------+---------+--------------+------+
| Domain name | Status  | Address      | Port |
+-------------+---------+--------------+------+
| compute0    | running | 192.168.33.4 | 6231 |
| controller0 | running | 192.168.33.4 | 6230 |
+-------------+---------+--------------+------+

VirtualBMC config is here (on the VM hypervisor host):

/root/.vbmc/controller0/config

Note that the controller and compute node are registered in Ironic, in the bifrost container. Once Kayobe is deployed and configured, compute0 and controller0 are controlled by Bifrost rather than by virsh commands.

ssh stack@192.168.33.5
docker exec -it bifrost_deploy bash
export OS_CLOUD=bifrost
baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| d7184461-ac4b-4b9e-b9ed-329978fc0648 | compute0    | None          | power on    | active             | False       |
| 1a40de56-be8a-49e2-a903-b408f432ef23 | controller0 | None          | power on    | active             | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
exit

Enabling Centralised Logging

In Kolla-Ansible, centralised logging is easily enabled; it results in the deployment of Elasticsearch and Kibana services, configured so that all OpenStack service logging is forwarded to them. Be cautious, as Elasticsearch will consume a significant portion of the available resources on a standard deployment.

To enable the service, one flag must be changed in ~/kayobe/config/src/kayobe-config/etc/kayobe/kolla.yml:

-#kolla_enable_central_logging:
+kolla_enable_central_logging: yes

This will install Elasticsearch and Kibana containers, and configure logging via Fluentd so that logging from all deployed Docker containers is routed to Elasticsearch.

Before this can be applied, it is necessary to download the missing images to the seed VM. Pull, retag and push the centralised logging images:

~/kayobe/config/src/kayobe-config/pull-retag-push-images.sh kibana elasticsearch

To deploy the logging stack:

kayobe overcloud container image pull
kayobe overcloud service deploy

As simple as that...

The new containers can be seen running on the controller node:

$ ssh stack@192.168.33.3 sudo docker ps
CONTAINER ID        IMAGE                                                                   COMMAND                  CREATED             STATUS              PORTS               NAMES
304b197f888b        192.168.33.5:4000/kolla/centos-source-kibana:master                     "dumb-init --single-c"   18 minutes ago      Up 18 minutes                           kibana
9eb0cf47c7f7        192.168.33.5:4000/kolla/centos-source-elasticsearch:master              "dumb-init --single-c"   18 minutes ago      Up 18 minutes                           elasticsearch
...

We can see the log indexes in Elasticsearch:

curl -X GET "192.168.33.3:9200/_cat/indices?v"

To access Kibana, we must first forward connections from our public interface to the kibana service running on our controller0 VM.

The easiest way to do this is to add Kibana's default port (5601) to our configure-local-networking.sh script in ~/kayobe/config/src/kayobe-config/:

--- a/configure-local-networking.sh
+++ b/configure-local-networking.sh
@@ -20,7 +20,7 @@ seed_hv_private_ip=$(ip a show dev $iface | grep 'inet ' | awk '{ print $2 }' |
 # Forward the following ports to the controller.
 # 80: Horizon
 # 6080: VNC console
-forwarded_ports="80 6080"
+forwarded_ports="80 6080 5601"

Then rerun the script to apply the change:

config/src/kayobe-config/configure-local-networking.sh

We can now connect to Kibana using our hypervisor host public IP and port 5601.

The username is kibana, and the password can be extracted from the Kolla-Ansible passwords file (in production these would be vault-encrypted, but they are not here).

grep kibana config/src/kayobe-config/etc/kolla/passwords.yml

Once you're in, Kibana needs some further setup which is not automated. Set the log index to flog-* and you should be ready to go.

Adding the Barbican service

Barbican is the OpenStack secret management service. It is an example of a simple service we can use to illustrate the process of adding new services to our deployment.

As with the Logging service above, enable Barbican by modifying the flag in ~/kayobe/config/src/kayobe-config/etc/kayobe/kolla.yml as follows:

-#kolla_enable_barbican:
+kolla_enable_barbican: yes

This instructs Kolla to install the Barbican API, worker and keystone-listener containers. Pull down the Barbican images:

~/kayobe/config/src/kayobe-config/pull-retag-push-images.sh barbican

To deploy the Barbican service:

# Activate the venv if not already active
cd ~/kayobe
source dev/environment-setup.sh

kayobe overcloud container image pull
kayobe overcloud service deploy

Once Barbican has been deployed it can be tested using the barbicanclient plugin to the OpenStack CLI. This should be installed and tested in the OpenStack venv:

# Deactivate existing venv context if necessary
deactivate

# Activate the OpenStack venv
. ~/os-venv/bin/activate

# Install barbicanclient
pip install python-barbicanclient -c https://releases.openstack.org/constraints/upper/master

# Source the OpenStack environment variables
source ~/kayobe/config/src/kayobe-config/etc/kolla/public-openrc.sh

# Store a test secret
openstack secret store --name mysecret --payload foo=bar

# Copy the 'Secret href' URI for later use
SECRET_URL=$(openstack secret list --name mysecret -f value --column 'Secret href')

# Get secret metadata
openstack secret get ${SECRET_URL}

# Get secret payload
openstack secret get ${SECRET_URL} --payload

Congratulations, you have successfully installed Barbican on Kayobe.
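
Optionally, clean up the test secret afterwards:

openstack secret delete ${SECRET_URL}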


Issues

Error during task "Wait for the ironic node to be inspected"

TASK [Wait for the ironic node to be inspected] *************************************************************************************************************************************
fatal: [controller0]: FAILED! => {"msg": "Unexpected templating type error occurred on ({{ wait_inspected_timeout // wait_inspected_interval }}): unsupported operand type(s) for //: 'str' and 'int'"}
fatal: [compute0]: FAILED! => {"msg": "Unexpected templating type error occurred on ({{ wait_inspected_timeout // wait_inspected_interval }}): unsupported operand type(s) for //: 'str' and 'int'"}

ssh-known-host failed - victoria

centos 8 cloud image
[cloud-user@kvm02 kayobe]$ history
1 top
2 sudo dnf update -y
3 reboot
4 sudo reboot
5 ssh-keygen
6 sudo systemctl is-enabled firewalld && sudo systemctl stop firewalld && sudo systemctl disable firewalld
7 clear
8 sudo setenforce 0
9 sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
10 git clone https://opendev.org/openstack/kayobe.git -b stable/victoria
11 dnf install git -y
12 sudo dnf install git -y
13 git clone https://opendev.org/openstack/kayobe.git -b stable/victoria
14 cd kayobe
15 git clone https://opendev.org/openstack/tenks.git
16 mkdir -p config/src
17 cd config/src/
18 git clone https://github.com/stackhpc/a-universe-from-nothing.git kayobe-config -b stable/victoria
19 /kayobe-config/configure-local-networking.sh
20 cd ~/kayobe
21 date ; ./dev/install-dev.sh ; date
22 cd ~/kayobe
23 date ; ./dev/seed-hypervisor-deploy.sh ; date
24 history

TASK [ssh-known-host : Scan for SSH keys] **************************************************************************************************************************************************************************************************
failed: [seed-hypervisor] (item=192.168.33.4) => {"ansible_loop_var": "item", "changed": false, "cmd": ["ssh-keyscan", "192.168.33.4"], "delta": "0:00:05.010814", "end": "2021-06-04 16:51:55.087427", "item": "192.168.33.4", "msg": "non-zero return code", "rc": 1, "start": "2021-06-04 16:51:50.076613", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

small error in README

The README provides these instructions to list the machines registered in the bifrost_deploy container on the seed:

ssh stack@192.168.33.5
sudo docker exec -it bifrost_deploy bash
source env-vars
openstack baremetal node list

But the file env-vars doesn't exist in my test environment. I had to do source /root/openrc instead:

ssh stack@192.168.33.5
sudo docker exec -it bifrost_deploy bash
source /root/openrc
openstack baremetal node list

No python in seed node.

./dev/seed-deploy.sh

TASK [singleplatform-eng.users : Per-user group creation]
"Shared connection to 192.168.33.5 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}

I logged in to the seed node, manually installed Python and reran ./dev/seed-deploy.sh, and the error was gone.

Provisioning timeout

I have run into a problem: the provisioning timeout is too short for my environment.
I would appreciate seeing the configuration of the environment that the current timeouts correspond to.

Seed VM doesn't use its data volume

By default, the seed VM has a root volume of 50 GiB and a data volume of 100 GiB. We are not making any use of the data volume and Docker data is all written to the root volume.
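
One possible way to make use of the data volume would be to move Docker's data directory onto it; a hedged sketch, assuming the data volume appears as /dev/vdb inside the seed VM (untested, for illustration only):

# Stop Docker, format the data volume, and preserve any existing data.
sudo systemctl stop docker
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt
sudo rsync -a /var/lib/docker/ /mnt/
sudo umount /mnt
# Mount the volume at Docker's data directory and restart Docker.
echo '/dev/vdb /var/lib/docker ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /var/lib/docker
sudo systemctl start docker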

No external network access from kolla container builds

The Seed VM Docker daemon is configured with iptables disabled. This appears to be an unintended effect of this commit:

af360c2

Various options:

  • Set the variable docker_disable_default_iptables_rules to be true using inventory variables, in etc/kayobe/kolla/inventory/group_vars - this may only apply to overcloud
  • Configure host network namespace for container builds with config of this form:
kolla_build_extra_config: |
  [DEFAULT]
  network_mode = host

Update container image pull script to use new image naming scheme

See https://review.opendev.org/c/openstack/kolla-ansible/+/842709 for details.

TASK [bifrost : Starting bifrost deploy container] *********************************************************************************************************************************************************
fatal: [seed]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n  File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 268, in _raise_for_status\\n    response.raise_for_status()\\n  File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/requests/models.py\", line 960, in raise_for_status\\n    raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/images/create?tag=master-centos-stream8&fromImage=192.168.33.5%3A4000%2Fopenstack.kolla%2Fbifrost-deploy\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most recent call last):\\n  File \"/tmp/ansible_kolla_docker_payload_jyq5mhl8/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 381, in main\\n  File \"/tmp/ansible_kolla_docker_payload_jyq5mhl8/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 669, in start_container\\n    self.pull_image()\\n  File \"/tmp/ansible_kolla_docker_payload_jyq5mhl8/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 451, in pull_image\\n    repository=image, tag=tag, stream=True\\n  File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/image.py\", line 430, in pull\\n    self._raise_for_status(response)\\n  File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 270, in _raise_for_status\\n    raise create_api_error_from_http_exception(e)\\n  File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/errors.py\", line 31, in create_api_error_from_http_exception\\n    raise cls(e, response=response, explanation=explanation)\\ndocker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/images/create?tag=master-centos-stream8&fromImage=192.168.33.5%3A4000%2Fopenstack.kolla%2Fbifrost-deploy: Not Found (\"manifest for 192.168.33.5:4000/openstack.kolla/bifrost-deploy:master-centos-stream8 not found: manifest unknown: manifest unknown\")\\n'"}

virt-customize guestfs_launch error on CentOS 8 Train seed-deploy

TASK [Ensure the overcloud host image has bogus name server entries removed] ****************************************************************************************************
fatal: [seed]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "bifrost_deploy", "bash", "-c", " export LIBGUESTFS_BACKEND=direct && ansible localhost --connection local --become -m command -a \"virt-customize -a /httpboot/deployment_image.qcow2 --edit \\\"/etc/resolv.conf:s/^nameserver .*\\..*\\..*\\..*\\$//\\\"\""], "delta": "0:00:07.395318", "end": "2020-07-20 16:57:56.264315", "msg": "non-zero return code", "rc": 2, "start": "2020-07-20 16:57:48.868997", "stderr": "[WARNING]: No inventory was parsed, only implicit localhost is available", "stderr_lines": ["[WARNING]: No inventory was parsed, only implicit localhost is available"], "stdout": "localhost | FAILED | rc=1 >>\n[   0.0] Examining the guest ...virt-customize: error: libguestfs error: guestfs_launch failed.\nThis usually means the libguestfs appliance failed to start or crashed.\nDo:\n  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\nand run the command again.  For further information, read:\n  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can also run 'libguestfs-test-tool' and post the *complete* output\ninto a bug report or message to the libguestfs mailing list.\n\nIf reporting bugs, run virt-customize with debugging enabled and include \nthe complete output:\n\n  virt-customize -v -x [...]non-zero return code", "stdout_lines": ["localhost | FAILED | rc=1 >>", "[   0.0] Examining the guest ...virt-customize: error: libguestfs error: guestfs_launch failed.", "This usually means the libguestfs appliance failed to start or crashed.", "Do:", "  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1", "and run the command again.  For further information, read:", "  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs", "You can also run 'libguestfs-test-tool' and post the *complete* output", "into a bug report or message to the libguestfs mailing list.", "", "If reporting bugs, run virt-customize with debugging enabled and include ", "the complete output:", "", "  virt-customize -v -x [...]non-zero return code"]}

Running virt-customize with more debug in the Bifrost container, I see this output:

qemu-kvm: error: failed to set MSR 0x48e to 0xfff9fffe04006172
qemu-kvm: /builddir/build/BUILD/qemu-4.2.0/target/i386/kvm.c:2695: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.  

A possible root cause appears to be the Bifrost container is now built on CentOS 8.2 but the Seed VM is based on a CentOS 8.1 image.

fatal: [controller0]: FAILED! => { "msg": "Timeout (12s) waiting for privilege escalation prompt: " }

Hi,
The below error appears when I try to inspect the overcloud hardware. Kindly help me to resolve this issue. Many thanks.

==============================
(kayobe-venv) [centos@localhost kayobe]$ kayobe overcloud hardware inspect -vvvv
initialize_app
prepare_to_run_command OvercloudHardwareInspect
Inspecting overcloud
Running command: ansible-playbook -vvvv --inventory /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/bifrost.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/bmc.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ceph.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/compute.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/controllers.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/dell-switch-bmp.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/dns.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/docker-registry.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/docker.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/globals.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/grafana.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/idrac.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inspector.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ipa.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ironic.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/kolla.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/monitoring.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/network-allocation.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/networks.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/neutron.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/nova.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ntp.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/opensm.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/openstack.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/overcloud.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/pip.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed-hypervisor.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed-vm.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ssh.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/storage.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/swift.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/users.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/yum-cron.yml -e @/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/yum.yml /home/centos/kayobe-venv/share/kayobe/ansible/kolla-bifrost-hostvars.yml /home/centos/kayobe-venv/share/kayobe/ansible/overcloud-hardware-inspect.yml
/home/centos/kayobe-venv/lib/python2.7/site-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
ansible-playbook 2.8.13
config file = None
configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible
executable location = /home/centos/kayobe-venv/bin/ansible-playbook
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/groups as it did not pass it's verify_file() method
script declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/groups as it did not pass it's verify_file() method
auto declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/groups as it did not pass it's verify_file() method
Not replacing invalid character(s) "set([u'-'])" in group name (seed-hypervisor)
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

Not replacing invalid character(s) "set([u'-'])" in group name (seed-hypervisor)
Not replacing invalid character(s) "set([u'-'])" in group name (container-image-builders)
Not replacing invalid character(s) "set([u'-'])" in group name (container-image-builders)
Not replacing invalid character(s) "set([u'-'])" in group name (docker-registry)
Not replacing invalid character(s) "set([u'-'])" in group name (docker-registry)
Not replacing invalid character(s) "set([u'-'])" in group name (baremetal-compute)
Not replacing invalid character(s) "set([u'-'])" in group name (baremetal-compute)
Not replacing invalid character(s) "set([u'-'])" in group name (mgmt-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (mgmt-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (ctl-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (ctl-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (hs-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (hs-switches)
Parsed /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/groups inventory source with ini plugin
setting up inventory plugins
host_list declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/hosts as it did not pass it's verify_file() method
script declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/hosts as it did not pass it's verify_file() method
auto declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/hosts as it did not pass it's verify_file() method
Set default localhost to localhost
Not replacing invalid character(s) "set([u'-'])" in group name (seed-hypervisor)
Not replacing invalid character(s) "set([u'-'])" in group name (baremetal-compute)
Not replacing invalid character(s) "set([u'-'])" in group name (mgmt-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (ctl-switches)
Not replacing invalid character(s) "set([u'-'])" in group name (hs-switches)
Parsed /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/hosts inventory source with ini plugin
setting up inventory plugins
host_list declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/overcloud as it did not pass it's verify_file() method
script declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/overcloud as it did not pass it's verify_file() method
auto declined parsing /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/overcloud as it did not pass it's verify_file() method
Parsed /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/overcloud inventory source with ini plugin
[WARNING]: Found both group and host with same name: seed

[WARNING]: Found both group and host with same name: seed-hypervisor

Loading callback plugin default of type stdout, v2.0 from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc

PLAYBOOK: kolla-bifrost-hostvars.yml ***********************************************************************************Positional arguments: /home/centos/kayobe-venv/share/kayobe/ansible/kolla-bifrost-hostvars.yml /home/centos/kayobe-venv/share/kayobe/ansible/overcloud-hardware-inspect.yml
become_method: sudo
inventory: (u'/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory',)
forks: 5
tags: (u'all',)
extra_vars: (u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/bifrost.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/bmc.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ceph.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/compute.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/controllers.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/dell-switch-bmp.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/dns.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/docker-registry.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/docker.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/globals.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/grafana.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/idrac.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inspector.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ipa.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ironic.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/kolla.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/monitoring.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/network-allocation.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/networks.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/neutron.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/nova.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ntp.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/opensm.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/openstack.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/overcloud.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/pip.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed-hypervisor.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed-vm.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/seed.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ssh.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/storage.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/swift.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/users.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/yum-cron.yml', u'@/home/centos/kayobe/config/src/kayobe-config/etc/kayobe/yum.yml')
verbosity: 4
connection: smart
timeout: 10
1 plays in /home/centos/kayobe-venv/share/kayobe/ansible/kolla-bifrost-hostvars.yml

PLAY [Ensure the Bifrost overcloud inventory is populated] *************************************************************META: ran handlers

TASK [Ensure the Bifrost host variables directory exists] **************************************************************task path: /home/centos/kayobe-venv/share/kayobe/ansible/kolla-bifrost-hostvars.yml:29
<192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
<192.168.33.5> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stack"' -o ConnectTimeout=10 -o ControlPath=/home/centos/.ansible/cp/35beaf5ddd 192.168.33.5 '/bin/sh -c '"'"'echo ~stack && sleep 0'"'"''

[ verbose SSH connection, module upload and chmod debug output omitted ]

<192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
<192.168.33.5> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stack"' -o ConnectTimeout=10 -o ControlPath=/home/centos/.ansible/cp/35beaf5ddd -tt 192.168.33.5 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xfwttybrjqfytpbwgkpsbjbjfbhcuxlz ; /opt/kayobe/venvs/kayobe/bin/python /home/stack/.ansible/tmp/ansible-tmp-1597518342.25-4549-187346835488836/AnsiballZ_file.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
fatal: [controller0]: FAILED! => {
"msg": "Timeout (12s) waiting for privilege escalation prompt: "
}

NO MORE HOSTS LEFT *****************************************************************************************************

PLAY RECAP *************************************************************************************************************
controller0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
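
The 12-second figure here appears to come from Ansible's connection timeout (10 seconds by default), so a possible workaround, an assumption rather than something taken from this report, is to raise that timeout before re-running the failing command:

# Possible workaround (an assumption, not part of the original report):
# the wait for the privilege escalation (sudo) prompt is bounded by
# Ansible's connection timeout, so raise it. 60 seconds is arbitrary.
export ANSIBLE_TIMEOUT=60

# Then re-run the kayobe command that failed.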

Change kayobe-config location to be less error-prone

With kayobe-config stored under config/src/kayobe-config inside the Kayobe source tree, it is easy to forget to make changes there and to edit the contents of Kayobe's own etc/kayobe instead.

We should clone kayobe-config outside of the Kayobe source tree and run as much of the tutorial as possible from there.
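
A minimal sketch of that layout, with illustrative paths only:

# Sketch only: clone kayobe-config alongside, rather than inside, the
# kayobe source tree, so edits cannot land in kayobe's own etc/kayobe.
cd
git clone https://github.com/stackhpc/a-universe-from-nothing.git \
    kayobe-config -b master

# The walkthrough would then be run from ~/kayobe-config.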

pull-retag-push-images.sh doesn't work without arguments

Because of the change in 1676762, when run without arguments the pull-retag-push-images.sh script calls kayobe with an empty string ('') after the name of the pull-retag-push playbook, causing the following error:

+ kayobe playbook run /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/ansible/pull-retag-push.yml ''                                                                        
Kayobe playbook  is invalid: Path does not exist
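
A hedged fix, sketched here rather than taken from any actual patch, is to forward arguments only when at least one was supplied:

# Sketch: ${1:+"$@"} expands to the original arguments only when $1 is
# set, so kayobe never receives a bare '' after the playbook path.
# KAYOBE_CONFIG_PATH is assumed to point at kayobe-config's etc/kayobe.
kayobe playbook run \
    "${KAYOBE_CONFIG_PATH}/ansible/pull-retag-push.yml" ${1:+"$@"}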

Incorrect sudo password when running [singleplatform-eng.users : Per-user group creation] on the seed VM

Hello,

This seems to apply to stable/train and stable/rocky.

I have a bare-metal server that I'm trying to run UFN on. It has a stock CentOS 7.8 install, and I have created a centos user with NOPASSWD sudo access; the centos user also has a password set.

Everything seems to run fine up to the point where the seed VM is accessed in ./dev/seed-deploy.sh. I originally struggled with "Timeout (12s) waiting for privilege escalation prompt:", but after adjusting the timeout I'm now faced with "Incorrect sudo password." I can't access the VM via SSH as the centos user either. I've also tried this with ANSIBLE_BECOME_ASK_PASS=1.

Any ideas?
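
Before digging into the dump below, one hedged first check, with the user name and IP taken from the logs and therefore specific to this setup, is to test SSH access and passwordless sudo on the seed VM directly:

# Hypothetical diagnostic: confirm SSH access and passwordless sudo for
# the bootstrap user on the seed VM (192.168.33.5, as seen in the logs).
ssh centos@192.168.33.5 'sudo -n true && echo passwordless sudo OK'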

Here's a dump of that last action with ANSIBLE_DEBUG=1:

PLAY [Ensure the Kayobe Ansible user account exists] **************************************************************************************************************************************************************
 12775 1602069361.70388: Loading StrategyModule 'linear' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py (found_in_cache=True, class_only=False)
 12775 1602069361.70439: getting the remaining hosts for this loop
 12775 1602069361.70451: done getting the remaining hosts for this loop
 12775 1602069361.70464: building list of next tasks for hosts
 12775 1602069361.70473: getting the next task for host seed
 12775 1602069361.70483: done getting next task for host seed
 12775 1602069361.70491:  ^ task is: TASK: Gathering Facts
 12775 1602069361.70499:  ^ state is: HOST STATE: block=0, task=0, rescue=0, always=0, run_state=ITERATING_SETUP, fail_state=FAILED_NONE, pending_setup=True, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069361.70507: done building task lists
 12775 1602069361.70512: counting tasks in each state of execution
 12775 1602069361.70519: done counting tasks in each state of execution:
        num_setups: 1
        num_tasks: 0
        num_rescue: 0
        num_always: 0
 12775 1602069361.70532: advancing hosts in ITERATING_SETUP
12775 1602069361.70538: starting to advance hosts
 12775 1602069361.70545: getting the next task for host seed
 12775 1602069361.70554: done getting next task for host seed
 12775 1602069361.70562:  ^ task is: TASK: Gathering Facts
 12775 1602069361.70570:  ^ state is: HOST STATE: block=0, task=0, rescue=0, always=0, run_state=ITERATING_SETUP, fail_state=FAILED_NONE, pending_setup=True, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069361.70578: done advancing hosts to next task
 12775 1602069361.70633: Loading ActionModule 'gather_facts' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/action/gather_facts.py (found_in_cache=False, class_only=True)
 12775 1602069361.70642: getting variables
 12775 1602069361.70649: in VariableManager get_vars()
 12775 1602069361.70701: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 12775 1602069361.70710: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 12775 1602069361.70720: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 12775 1602069361.70729: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602069361.70824: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 12775 1602069361.70834: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 12775 1602069361.70843: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 12775 1602069361.70852: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 12775 1602069361.70860: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 12775 1602069361.70869: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602069361.70878: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 12775 1602069361.70887: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 12775 1602069361.70896: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 12775 1602069361.70904: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 12775 1602069361.70912: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 12775 1602069361.71027: Calling all_inventory to load vars for seed
 12775 1602069361.71037: Calling groups_inventory to load vars for seed
 12775 1602069361.71051: Calling all_plugins_inventory to load vars for seed
 12775 1602069361.71080: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.71109: Calling all_plugins_play to load vars for seed
 12775 1602069361.71133: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.71159: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bifrost
 12775 1602069361.71172: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bmc
 12775 1602069361.71186: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ceph
 12775 1602069361.71199: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/compute
 12775 1602069361.71212: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/controllers
 12775 1602069361.71225: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dell-switch-bmp
 12775 1602069361.71238: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dnf
 12775 1602069361.71251: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dns
 12775 1602069361.71264: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker
 12775 1602069361.71277: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker-registry
 12775 1602069361.71290: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/globals
 12775 1602069361.71303: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/grafana
 12775 1602069361.71316: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/idrac
 12775 1602069361.71329: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/inspector
 12775 1602069361.71342: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ipa
 12775 1602069361.71357: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ironic
 12775 1602069361.71371: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/kolla
 12775 1602069361.71389: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monasca
 12775 1602069361.71404: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monitoring
 12775 1602069361.71419: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/network
 12775 1602069361.71434: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/neutron
 12775 1602069361.71449: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/nova
 12775 1602069361.71464: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ntp
 12775 1602069361.71479: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/opensm
 12775 1602069361.71494: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/openstack
 12775 1602069361.71509: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/overcloud
 12775 1602069361.71524: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/pip
 12775 1602069361.71538: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed
 12775 1602069361.71553: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-hypervisor
 12775 1602069361.71568: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-vm
 12775 1602069361.71584: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ssh
 12775 1602069361.71599: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/storage
 12775 1602069361.71614: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/swift
 12775 1602069361.71630: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/arista
 12775 1602069361.71647: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/config
 12775 1602069361.71663: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell
 12775 1602069361.71680: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell-powerconnect
 12775 1602069361.71696: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/junos
 12775 1602069361.71712: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/mellanox
 12775 1602069361.71727: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/users
 12775 1602069361.71743: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum
 12775 1602069361.71759: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum-cron
 12775 1602069361.71780: Calling groups_plugins_inventory to load vars for seed
 12775 1602069361.71806: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.71878: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/ansible-python-interpreter
 12775 1602069361.71891: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/network-interfaces
 12775 1602069361.71907: Calling groups_plugins_play to load vars for seed
 12775 1602069361.71933: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.71989: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-host
 12775 1602069361.72001: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-user
 12775 1602069361.72013: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/lvm
 12775 1602069361.72024: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/mdadm
 12775 1602069361.72036: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/network
 12775 1602069361.72047: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/sysctl
 12775 1602069361.72059: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/users
 12775 1602069361.72097: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.72141: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069361.72280: done with get_vars()
 12775 1602069361.72303: done getting variables
 12775 1602069361.72313: sending task start callback, copying the task so we can template it temporarily
 12775 1602069361.72321: done copying, going to template now
 12775 1602069361.72329: done templating
 12775 1602069361.72335: here goes the callback...

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
 12775 1602069361.72352: sending task start callback
 12775 1602069361.72360: entering _queue_task() for seed/gather_facts
 12775 1602069361.72367: Creating lock for gather_facts
 12775 1602069361.72524: worker is 1 (out of 1 available)
 12775 1602069361.72573: exiting _queue_task() for seed/gather_facts
 12775 1602069361.72705: done queuing things up, now waiting for results queue to drain
 12775 1602069361.72716: waiting for pending results...
 13543 1602069361.72889: running TaskExecutor() for seed/TASK: Gathering Facts
 13543 1602069361.73049: in run() - task 848f69fe-4727-ca6b-d252-00000000005d
 13543 1602069361.73179: calling self._execute()
 13543 1602069361.73727: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 13543 1602069361.73746: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 13543 1602069361.73766: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 13543 1602069361.73787: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 13543 1602069361.74149: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 13543 1602069361.74165: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 13543 1602069361.74182: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 13543 1602069361.74199: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 13543 1602069361.74217: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 13543 1602069361.74232: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 13543 1602069361.74247: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 13543 1602069361.74263: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 13543 1602069361.74281: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 13543 1602069361.74296: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 13543 1602069361.74309: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 13543 1602069361.75287: trying /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/connection
 13543 1602069361.75458: Loading Connection 'ssh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/connection/ssh.py (found_in_cache=True, class_only=False)
 13543 1602069361.75493: trying /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell
 13543 1602069361.75544: Loading ShellModule 'sh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
 13543 1602069361.75567: Loading ShellModule 'sh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
 13543 1602069361.75887: Loading ActionModule 'gather_facts' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/action/gather_facts.py (found_in_cache=True, class_only=False)
 13543 1602069361.75912: starting attempt loop
 13543 1602069361.75922: running the handler
 13543 1602069361.75973: _low_level_execute_command(): starting
 13543 1602069361.75994: _low_level_execute_command(): executing: /bin/sh -c 'echo ~centos && sleep 0'
 13543 1602069547.30998: stdout chunk (state=2):
>>>/home/centos
<<<

 13543 1602069547.40082: stdout chunk (state=3):
>>><<<

 13543 1602069547.40100: stderr chunk (state=3):
>>><<<

 13543 1602069547.40129: _low_level_execute_command() done: rc=0, stdout=/home/centos
, stderr=
 13543 1602069547.40151: _low_level_execute_command(): starting
 13543 1602069547.40161: _low_level_execute_command(): executing: /bin/sh -c '( umask 77 && mkdir -p "` echo /home/centos/.ansible/tmp `"&& mkdir /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421 && echo ansible-tmp-1602069547.4-13543-134158360891421="` echo /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421 `" ) && sleep 0'
 13543 1602069628.76756: stdout chunk (state=2):
>>>ansible-tmp-1602069547.4-13543-134158360891421=/home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421
<<<

 13543 1602069628.86428: stdout chunk (state=3):
>>><<<

 13543 1602069628.86445: stderr chunk (state=3):
>>><<<

 13543 1602069628.86474: _low_level_execute_command() done: rc=0, stdout=ansible-tmp-1602069547.4-13543-134158360891421=/home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421
, stderr=
 13543 1602069628.86551: ANSIBALLZ: Using lock for setup
 13543 1602069628.86556: ANSIBALLZ: Acquiring lock
 13543 1602069628.86563: ANSIBALLZ: Lock acquired: 139957566327056
 13543 1602069628.86569: ANSIBALLZ: Creating module
 13543 1602069629.23639: ANSIBALLZ: Writing module
 13543 1602069629.23678: ANSIBALLZ: Renaming module
 13543 1602069629.23689: ANSIBALLZ: Done creating module
 13543 1602069629.23901: transferring module to remote /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/AnsiballZ_setup.py
 13543 1602069629.24745: Sending initial data
 13543 1602069629.24760: Sent initial data (158 bytes)
 13543 1602069710.04786: stdout chunk (state=3):
>>>sftp> put /home/centos/.ansible/tmp/ansible-local-12775RrbMvE/tmpVZfMp3 /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/AnsiballZ_setup.py
<<<

 13543 1602069710.11764: stdout chunk (state=3):
>>><<<

 13543 1602069710.11783: stderr chunk (state=3):
>>><<<

 13543 1602069710.11817: done transferring module to remote
 13543 1602069710.11842: _low_level_execute_command(): starting
 13543 1602069710.11853: _low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/ /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/AnsiballZ_setup.py && sleep 0'
 13543 1602069791.44725: stdout chunk (state=2):
>>><<<

 13543 1602069791.44749: stderr chunk (state=2):
>>><<<

 13543 1602069791.44781: _low_level_execute_command() done: rc=0, stdout=, stderr=
 13543 1602069791.44788: _low_level_execute_command(): starting
 13543 1602069791.44802: _low_level_execute_command(): executing: /bin/sh -c '/usr/libexec/platform-python /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/AnsiballZ_setup.py && sleep 0'
 13543 1602069921.74700: stdout chunk (state=2):
>>>
{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_product_serial": "NA", "ansible_form_factor": "Other", "ansible_product_version": "pc-i440fx-2.0", "ansible_fips": false, "ansible_service_mgr": "systemd", "ansible_fibre_channel_wwn": [], "ansible_selinux_python_present": true, "ansible_userspace_bits": "64", "gather_subset": ["all"], "ansible_system_capabilities_enforced": "True", "ansible_domain": "", "ansible_distribution_version": "7.8", "ansible_local": {}, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_user_shell": "/bin/bash", "ansible_virtualization_type": "kvm", "ansible_real_user_id": 1000, "ansible_processor_cores": 1, "ansible_virtualization_role": "guest", "ansible_distribution_file_variety": "RedHat", "ansible_dns": {"nameservers": ["8.8.8.8", "8.8.4.4"]}, "ansible_effective_group_id": 1000, "ansible_bios_version": "0.5.1", "ansible_processor": ["0", "GenuineIntel", "Intel Xeon E312xx (Sandy Bridge)"], "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20201007T112520", "tz": "UTC", "weeknumber": "40", "hour": "11", "year": "2020", "minute": "25", "tz_offset": "+0000", "month": "10", "epoch": "1602069920", "iso8601_micro": "2020-10-07T11:25:21.002766Z", "weekday": "Wednesday", "time": "11:25:20", "date": "2020-10-07", "iso8601": "2020-10-07T11:25:21Z", "day": "07", "iso8601_basic": "20201007T112520996458", "second": "20"}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": <<<

 13543 1602069921.76095: stdout chunk (state=3):
>>>"on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 3789, "ansible_architecture": "x86_64", "ansible_device_links": {"masters": {}, "labels": {"sr0": ["config-2"]}, "ids": {"sr0": ["ata-QEMU_DVD-ROM_QM00009"]}, "uuids": {"sr0": ["2020-10-07-08-12-39-00"], "vda1": ["6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"]}}, "ansible_default_ipv4": {"macaddress": "52:54:00:c9:8b:6b", "network": "192.168.33.0", "mtu": 1450, "broadcast": "192.168.33.255", "alias": "eth0", "netmask": "255.255.255.0", "address": "192.168.33.5", "interface": "eth0", "type": "ether", "gateway": "192.168.33.4"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_mounts": [{"block_used": 217498, "uuid": "6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02", "size_total": 53675536384, "block_total": 13104379, "mount": "/", "block_available": 12886881, "size_available": 52784664576, "fstype": "xfs", "inode_total": 26213872, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/vda1", "inode_used": 25687, "block_size": 4096, "inode_available": 26188185}], "ansible_system_vendor": "QEMU", "ansible_apparmor": {"status": "disabled"}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "console": "ttyS0", "net.ifnames": "0", "crashkernel": "auto", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1127.el7.x86_64", "ro": true, "root": "UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"}, "ansible_effective_user_id": 1000, "ansible_distribution_release": "Core", "ansible_user_gid": 1000, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_distribution_file_parsed": true, "ansible_os_family": "RedHat", "ansible_userspace_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_product_uuid": "NA", "ansible_fqdn": "seed", "ansible_system": "Linux", "ansible_pkg_mgr": "yum", "ansible_memfree_mb": 3465, "ansible_devices": {"vda": {"scheduler_mode": "mq-deadline", "rotational": "1", "vendor": "0x1af4", "sectors": "104857600", "links": {"masters": [], "labels": [], 
"ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {"vda1": {"sectorsize": 512, "uuid": "6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"]}, "sectors": "104855519", "start": "2048", "holders": [], "size": "50.00 GB"}}, "holders": [], "size": "50.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "ven<<<

 13543 1602069921.76300: stdout chunk (state=3):
>>>dor": "QEMU", "sectors": "752", "links": {"masters": [], "labels": ["config-2"], "ids": ["ata-QEMU_DVD-ROM_QM00009"], "uuids": ["2020-10-07-08-12-39-00"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "2048", "removable": "1", "support_discard": "0", "model": "QEMU DVD-ROM", "partitions": {}, "holders": [], "size": "376.00 KB"}, "vdb": {"scheduler_mode": "mq-deadline", "rotational": "1", "vendor": "0x1af4", "sectors": "209715200", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "100.00 GB"}}, "ansible_user_uid": 1000, "ansible_user_id": "centos", "ansible_distribution": "CentOS", "ansible_user_dir": "/home/centos", "ansible_env": {"LANG": "en_ZA.UTF-8", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/1000", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "_": "/usr/libexec/platform-python", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PWD": "/home/centos", "SELINUX_LEVEL_REQUESTED": "", "PATH": "/usr/local/bin:/usr/bin", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "centos", "USER": "centos", "HOME": "/home/centos", "MAIL": "/var/mail/centos", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "XDG_SESSION_ID": "8", "SSH_CLIENT": "192.168.33.4 38808 22", "SSH_CONNECTION": "192.168.33.4 38808 192.168.33.5 22"}, "ansible_distribution_major_version": "7", "module_setup": true, "ansible_iscsi_iqn": "", "ansible_hostname": "seed", "ansible_processor_vcpus": 1, "ansible_processor_count": 1, "ansible_swaptotal_mb": 0, "ansible_lsb": {}, "ansible_real_group_id": 1000, "ansible_proc_cmdline": {"LANG": "en_US.UTF-8", "console": ["tty0", "ttyS0,115200n8", "ttyS0"], "net.ifnames": "0", "crashkernel": "auto", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1127.el7.x86_64", "ro": true, "root": "UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"}, "ansible_all_ipv6_addresses": ["fe80::5054:ff:fec9:8b6b"], "ansible_interfaces": ["lo", "eth0"], "ansible_uptime_seconds": 
11299, "ansible_machine_id": "cab9605edaa5484da7c2f02b8fd10762", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC/JjBoJX1i3lc4VyqatVa7wkLNJisem0bv13Ufr0rxNgIgOEIiS2keX3FMGylMsgAc4Ni5vODDGKzn60EYnMmttnyAxpi+TjeGob1z9e0DjKcx3QHPNhpGTYjjX2QfyLWaJpBQ7TI4x0AZx3KkSebI9fjEgk/xCd6Dh91b/o0yPQ/7DVIUILmL+4AECgxs8ZCFoGZk4cXf4im0U01hQ2lKYsMqdlSZIlBLzvYnhme/t9sWCmhL2hLW6xbsKKJlOuqrQONtAA2pEhU3vpK0lYsydlHPUqY4D3Mso+fk7QcHO3rAaPNAvoLxTbX6vUHc3bKe9IYar9+4jI52IyjG1Q7R", "ansible_memory_mb": {"real": {"total": 3789, "used": 324, "free": 346<<<

 13543 1602069921.76436: stdout chunk (state=3):
>>>5}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 146, "free": 3643}}, "ansible_user_gecos": "Cloud User", "ansible_system_capabilities": [""], "ansible_python": {"executable": "/usr/libexec/platform-python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_kernel": "3.10.0-1127.el7.x86_64", "ansible_processor_threads_per_core": 1, "ansible_is_chroot": true, "ansible_hostnqn": "", "ansible_eth0": {"macaddress": "52:54:00:c9:8b:6b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "pciid": "virtio0", "module": "virtio_net", "mtu": 1450, "device": "eth0", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "192.168.33.255", "netmask": "255.255.255.0", "network": "192.168.33.0", "address": "192.168.33.5"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5054:ff:fec9:8b6b"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_python_version": "2.7.5", "ansible_product_name": "Standard PC (i440FX + PIIX, 1996)", "ansible_machine": "x86_64", "ansible_all_ipv4_addresses": ["192.168.33.5"], "ansible_nodename": "seed"}}
<<<

 13543 1602069922.02614: stderr chunk (state=3):
>>>Shared connection to 192.168.33.5 closed.
<<<

 13543 1602069922.02654: stdout chunk (state=3):
>>><<<

 13543 1602069922.02663: stderr chunk (state=3):
>>><<<

 13543 1602069922.02703: _low_level_execute_command() done: rc=0, stdout=
{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_product_serial": "NA", "ansible_form_factor": "Other", "ansible_product_version": "pc-i440fx-2.0", "ansible_fips": false, "ansible_service_mgr": "systemd", "ansible_fibre_channel_wwn": [], "ansible_selinux_python_present": true, "ansible_userspace_bits": "64", "gather_subset": ["all"], "ansible_system_capabilities_enforced": "True", "ansible_domain": "", "ansible_distribution_version": "7.8", "ansible_local": {}, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_user_shell": "/bin/bash", "ansible_virtualization_type": "kvm", "ansible_real_user_id": 1000, "ansible_processor_cores": 1, "ansible_virtualization_role": "guest", "ansible_distribution_file_variety": "RedHat", "ansible_dns": {"nameservers": ["8.8.8.8", "8.8.4.4"]}, "ansible_effective_group_id": 1000, "ansible_bios_version": "0.5.1", "ansible_processor": ["0", "GenuineIntel", "Intel Xeon E312xx (Sandy Bridge)"], "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20201007T112520", "tz": "UTC", "weeknumber": "40", "hour": "11", "year": "2020", "minute": "25", "tz_offset": "+0000", "month": "10", "epoch": "1602069920", "iso8601_micro": "2020-10-07T11:25:21.002766Z", "weekday": "Wednesday", "time": "11:25:20", "date": "2020-10-07", "iso8601": "2020-10-07T11:25:21Z", "day": "07", "iso8601_basic": "20201007T112520996458", "second": "20"}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", 
"address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 3789, "ansible_architecture": "x86_64", "ansible_device_links": {"masters": {}, "labels": {"sr0": ["config-2"]}, "ids": {"sr0": ["ata-QEMU_DVD-ROM_QM00009"]}, "uuids": {"sr0": ["2020-10-07-08-12-39-00"], "vda1": ["6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"]}}, "ansible_default_ipv4": {"macaddress": "52:54:00:c9:8b:6b", "network": "192.168.33.0", "mtu": 1450, "broadcast": "192.168.33.255", "alias": "eth0", "netmask": "255.255.255.0", "address": "192.168.33.5", "interface": "eth0", "type": "ether", "gateway": "192.168.33.4"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_mounts": [{"block_used": 217498, "uuid": "6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02", "size_total": 53675536384, "block_total": 13104379, "mount": "/", "block_available": 12886881, "size_available": 52784664576, "fstype": "xfs", "inode_total": 26213872, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/vda1", "inode_used": 25687, "block_size": 4096, "inode_available": 26188185}], "ansible_system_vendor": "QEMU", "ansible_apparmor": {"status": "disabled"}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "console": "ttyS0", "net.ifnames": "0", "crashkernel": "auto", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1127.el7.x86_64", "ro": true, "root": "UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"}, "ansible_effective_user_id": 1000, "ansible_distribution_release": "Core", "ansible_user_gid": 1000, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_distribution_file_parsed": true, "ansible_os_family": "RedHat", "ansible_userspace_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_product_uuid": "NA", "ansible_fqdn": "seed", "ansible_system": "Linux", "ansible_pkg_mgr": "yum", "ansible_memfree_mb": 3465, "ansible_devices": {"vda": {"scheduler_mode": "mq-deadline", "rotational": "1", "vendor": "0x1af4", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {"vda1": {"sectorsize": 512, "uuid": "6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"]}, "sectors": "104855519", "start": "2048", "holders": [], "size": "50.00 GB"}}, "holders": [], "size": "50.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "QEMU", "sectors": "752", "links": {"masters": [], "labels": ["config-2"], "ids": ["ata-QEMU_DVD-ROM_QM00009"], "uuids": ["2020-10-07-08-12-39-00"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "2048", "removable": "1", "support_discard": "0", "model": "QEMU DVD-ROM", "partitions": {}, "holders": [], "size": "376.00 KB"}, "vdb": {"scheduler_mode": "mq-deadline", "rotational": "1", "vendor": "0x1af4", "sectors": "209715200", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "100.00 GB"}}, "ansible_user_uid": 1000, "ansible_user_id": "centos", "ansible_distribution": "CentOS", "ansible_user_dir": "/home/centos", "ansible_env": {"LANG": "en_ZA.UTF-8", "TERM": "screen", "SHELL": "/bin/bash", 
"XDG_RUNTIME_DIR": "/run/user/1000", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "_": "/usr/libexec/platform-python", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PWD": "/home/centos", "SELINUX_LEVEL_REQUESTED": "", "PATH": "/usr/local/bin:/usr/bin", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "centos", "USER": "centos", "HOME": "/home/centos", "MAIL": "/var/mail/centos", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "XDG_SESSION_ID": "8", "SSH_CLIENT": "192.168.33.4 38808 22", "SSH_CONNECTION": "192.168.33.4 38808 192.168.33.5 22"}, "ansible_distribution_major_version": "7", "module_setup": true, "ansible_iscsi_iqn": "", "ansible_hostname": "seed", "ansible_processor_vcpus": 1, "ansible_processor_count": 1, "ansible_swaptotal_mb": 0, "ansible_lsb": {}, "ansible_real_group_id": 1000, "ansible_proc_cmdline": {"LANG": "en_US.UTF-8", "console": ["tty0", "ttyS0,115200n8", "ttyS0"], "net.ifnames": "0", "crashkernel": "auto", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1127.el7.x86_64", "ro": true, "root": "UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02"}, "ansible_all_ipv6_addresses": ["fe80::5054:ff:fec9:8b6b"], "ansible_interfaces": ["lo", "eth0"], "ansible_uptime_seconds": 11299, "ansible_machine_id": "cab9605edaa5484da7c2f02b8fd10762", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC/JjBoJX1i3lc4VyqatVa7wkLNJisem0bv13Ufr0rxNgIgOEIiS2keX3FMGylMsgAc4Ni5vODDGKzn60EYnMmttnyAxpi+TjeGob1z9e0DjKcx3QHPNhpGTYjjX2QfyLWaJpBQ7TI4x0AZx3KkSebI9fjEgk/xCd6Dh91b/o0yPQ/7DVIUILmL+4AECgxs8ZCFoGZk4cXf4im0U01hQ2lKYsMqdlSZIlBLzvYnhme/t9sWCmhL2hLW6xbsKKJlOuqrQONtAA2pEhU3vpK0lYsydlHPUqY4D3Mso+fk7QcHO3rAaPNAvoLxTbX6vUHc3bKe9IYar9+4jI52IyjG1Q7R", "ansible_memory_mb": {"real": {"total": 3789, "used": 324, "free": 3465}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 146, "free": 3643}}, "ansible_user_gecos": "Cloud User", "ansible_system_capabilities": [""], "ansible_python": {"executable": "/usr/libexec/platform-python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, 
"ansible_kernel": "3.10.0-1127.el7.x86_64", "ansible_processor_threads_per_core": 1, "ansible_is_chroot": true, "ansible_hostnqn": "", "ansible_eth0": {"macaddress": "52:54:00:c9:8b:6b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "pciid": "virtio0", "module": "virtio_net", "mtu": 1450, "device": "eth0", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "192.168.33.255", "netmask": "255.255.255.0", "network": "192.168.33.0", "address": "192.168.33.5"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5054:ff:fec9:8b6b"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_python_version": "2.7.5", "ansible_product_name": "Standard PC (i440FX + PIIX, 1996)", "ansible_machine": "x86_64", "ansible_all_ipv4_addresses": ["192.168.33.5"], "ansible_nodename": "seed"}}
, stderr=Shared connection to 192.168.33.5 closed.

 13543 1602069922.03618: done with _execute_module (setup, {'_ansible_version': '2.8.16', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p'], '_ansible_no_log': False, 'gather_timeout': 10, '_ansible_module_name': 'setup', '_ansible_remote_tmp': u'~/.ansible/tmp', '_ansible_verbosity': 0, '_ansible_keep_remote_files': False, '_ansible_syslog_facility': u'LOG_USER', '_ansible_socket': None, '_ansible_string_conversion_action': u'warn', 'gather_subset': ['all'], '_ansible_diff': False, '_ansible_debug': True, '_ansible_shell_executable': u'/bin/sh', '_ansible_check_mode': False, '_ansible_tmpdir': u'/home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/'})
 13543 1602069922.03637: _low_level_execute_command(): starting
 13543 1602069922.03648: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/centos/.ansible/tmp/ansible-tmp-1602069547.4-13543-134158360891421/ > /dev/null 2>&1 && sleep 0'
 13543 1602069963.43839: stdout chunk (state=2):
>>><<<

 13543 1602069963.43864: stderr chunk (state=2):
>>><<<

 13543 1602069963.43896: _low_level_execute_command() done: rc=0, stdout=, stderr=
 13543 1602069963.43908: handler run complete
 13543 1602069963.46185: attempt loop complete, returning result
 13543 1602069963.46204: _execute() done
 13543 1602069963.46210: dumping result to json
 13543 1602069963.46274: done dumping result, returning
 13543 1602069963.46291: done running TaskExecutor() for seed/TASK: Gathering Facts [848f69fe-4727-ca6b-d252-00000000005d]
 13543 1602069963.46309: sending task result for task 848f69fe-4727-ca6b-d252-00000000005d
 13543 1602069963.46376: done sending task result for task 848f69fe-4727-ca6b-d252-00000000005d
 13543 1602069963.46381: WORKER PROCESS EXITING
ok: [seed]
 12775 1602069963.48039: no more pending results, returning what we have
 12775 1602069963.48055: results queue empty
 12775 1602069963.48065: checking for any_errors_fatal
 12775 1602069963.48077: done checking for any_errors_fatal
 12775 1602069963.48086: checking for max_fail_percentage
 12775 1602069963.48095: done checking for max_fail_percentage
 12775 1602069963.48104: checking to see if all hosts have failed and the running result is not ok
 12775 1602069963.48112: done checking to see if all hosts have failed
 12775 1602069963.48120: getting the remaining hosts for this loop
 12775 1602069963.48141: done getting the remaining hosts for this loop
 12775 1602069963.48159: building list of next tasks for hosts
 12775 1602069963.48169: getting the next task for host seed
 12775 1602069963.48185: done getting next task for host seed
 12775 1602069963.48200:  ^ task is: TASK: meta (flush_handlers)
 12775 1602069963.48212:  ^ state is: HOST STATE: block=1, task=1, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069963.48221: done building task lists
 12775 1602069963.48229: counting tasks in each state of execution
 12775 1602069963.48238: done counting tasks in each state of execution:
        num_setups: 0
        num_tasks: 1
        num_rescue: 0
        num_always: 0
 12775 1602069963.48252: advancing hosts in ITERATING_TASKS
 12775 1602069963.48260: starting to advance hosts
 12775 1602069963.48268: getting the next task for host seed
 12775 1602069963.48279: done getting next task for host seed
 12775 1602069963.48291:  ^ task is: TASK: meta (flush_handlers)
 12775 1602069963.48300:  ^ state is: HOST STATE: block=1, task=1, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069963.48308: done advancing hosts to next task
 12775 1602069963.48356: done queuing things up, now waiting for results queue to drain
 12775 1602069963.48366: results queue empty
 12775 1602069963.48373: checking for any_errors_fatal
 12775 1602069963.48385: done checking for any_errors_fatal
 12775 1602069963.48393: checking for max_fail_percentage
 12775 1602069963.48400: done checking for max_fail_percentage
 12775 1602069963.48407: checking to see if all hosts have failed and the running result is not ok
 12775 1602069963.48414: done checking to see if all hosts have failed
 12775 1602069963.48421: getting the remaining hosts for this loop
 12775 1602069963.48433: done getting the remaining hosts for this loop
 12775 1602069963.48448: building list of next tasks for hosts
 12775 1602069963.48456: getting the next task for host seed
 12775 1602069963.48467: done getting next task for host seed
 12775 1602069963.48481:  ^ task is: TASK: singleplatform-eng.users : Creating groups
 12775 1602069963.48491:  ^ state is: HOST STATE: block=2, task=1, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069963.48499: done building task lists
 12775 1602069963.48506: counting tasks in each state of execution
 12775 1602069963.48514: done counting tasks in each state of execution:
        num_setups: 0
        num_tasks: 1
        num_rescue: 0
        num_always: 0
 12775 1602069963.48526: advancing hosts in ITERATING_TASKS
 12775 1602069963.48534: starting to advance hosts
 12775 1602069963.48541: getting the next task for host seed
 12775 1602069963.48552: done getting next task for host seed
 12775 1602069963.48563:  ^ task is: TASK: singleplatform-eng.users : Creating groups
 12775 1602069963.48572:  ^ state is: HOST STATE: block=2, task=1, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602069963.48581: done advancing hosts to next task
 12775 1602069963.48601: getting variables
 12775 1602069963.48611: in VariableManager get_vars()
 12775 1602069963.48708: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 12775 1602069963.48720: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 12775 1602069963.48733: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 12775 1602069963.48743: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602069963.48879: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 12775 1602069963.48890: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 12775 1602069963.48901: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 12775 1602069963.48912: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 12775 1602069963.48922: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 12775 1602069963.48932: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602069963.48942: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 12775 1602069963.48952: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 12775 1602069963.48963: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 12775 1602069963.48973: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 12775 1602069963.48983: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 12775 1602069963.49215: Calling all_inventory to load vars for seed
 12775 1602069963.49227: Calling groups_inventory to load vars for seed
 12775 1602069963.49243: Calling all_plugins_inventory to load vars for seed
 12775 1602069963.49279: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069963.49316: Calling all_plugins_play to load vars for seed
 12775 1602069963.49345: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602069963.49381: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bifrost
 12775 1602069963.49397: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bmc
 12775 1602069963.49412: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ceph
 12775 1602069963.49427: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/compute
 12775 1602069963.49443: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/controllers
 12775 1602070024.10999: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dell-switch-bmp
 12775 1602070024.11028: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dnf
 12775 1602070024.11052: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dns
 12775 1602070024.11074: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker
 12775 1602070024.11099: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker-registry
 12775 1602070024.11121: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/globals
 12775 1602070024.11143: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/grafana
 12775 1602070024.11165: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/idrac
 12775 1602070024.11191: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/inspector
 12775 1602070024.11215: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ipa
 12775 1602070024.11240: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ironic
 12775 1602070024.11264: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/kolla
 12775 1602070024.11297: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monasca
 12775 1602070024.11324: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monitoring
 12775 1602070024.11350: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/network
 12775 1602070024.11379: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/neutron
 12775 1602070024.11408: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/nova
 12775 1602070024.11434: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ntp
 12775 1602070024.11458: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/opensm
 12775 1602070024.11486: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/openstack
 12775 1602070024.11512: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/overcloud
 12775 1602070024.11538: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/pip
 12775 1602070024.11563: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed
 12775 1602070024.11591: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-hypervisor
 12775 1602070024.11618: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-vm
 12775 1602070024.11644: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ssh
 12775 1602070024.11671: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/storage
 12775 1602070024.11702: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/swift
 12775 1602070024.11733: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/arista
 12775 1602070024.11763: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/config
 12775 1602070024.11793: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell
 12775 1602070024.11822: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell-powerconnect
 12775 1602070024.11851: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/junos
 12775 1602070024.11881: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/mellanox
 12775 1602070024.11906: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/users
 12775 1602070024.11931: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum
 12775 1602070024.11957: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum-cron
 12775 1602070024.11996: Calling groups_plugins_inventory to load vars for seed
 12775 1602070024.12047: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.12180: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/ansible-python-interpreter
 12775 1602070024.12205: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/network-interfaces
 12775 1602070024.12228: Calling groups_plugins_play to load vars for seed
 12775 1602070024.12273: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.12379: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-host
 12775 1602070024.12400: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-user
 12775 1602070024.12421: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/lvm
 12775 1602070024.12442: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/mdadm
 12775 1602070024.12462: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/network
 12775 1602070024.12484: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/sysctl
 12775 1602070024.12505: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/users
 12775 1602070024.12574: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.12661: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.16437: done with get_vars()
 12775 1602070024.16475: done getting variables
 12775 1602070024.16492: sending task start callback, copying the task so we can template it temporarily
 12775 1602070024.16502: done copying, going to template now
 12775 1602070024.16513: done templating
 12775 1602070024.16521: here goes the callback...

TASK [singleplatform-eng.users : Creating groups] *****************************************************************************************************************************************************************
 12775 1602070024.16553: sending task start callback
 12775 1602070024.16565: entering _queue_task() for seed/group
 12775 1602070024.16577: Creating lock for group
 12775 1602070024.16830: worker is 1 (out of 1 available)
 12775 1602070024.16899: exiting _queue_task() for seed/group
 12775 1602070024.17086: done queuing things up, now waiting for results queue to drain
 12775 1602070024.17102: waiting for pending results...
 23216 1602070024.17330: running TaskExecutor() for seed/TASK: singleplatform-eng.users : Creating groups
 23216 1602070024.17562: in run() - task 848f69fe-4727-ca6b-d252-000000000038
 23216 1602070024.18010: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 23216 1602070024.18034: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 23216 1602070024.18060: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 23216 1602070024.18083: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 23216 1602070024.18325: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 23216 1602070024.18345: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 23216 1602070024.18366: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 23216 1602070024.18391: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 23216 1602070024.18412: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 23216 1602070024.18430: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 23216 1602070024.18448: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 23216 1602070024.18467: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 23216 1602070024.18490: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 23216 1602070024.18509: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 23216 1602070024.18526: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 23216 1602070024.19095: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 23216 1602070024.19112: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 23216 1602070024.19130: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 23216 1602070024.19147: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 23216 1602070024.19161: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 23216 1602070024.19177: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 23216 1602070024.19191: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 23216 1602070024.19207: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 23216 1602070024.19222: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 23216 1602070024.19237: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 23216 1602070024.19250: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 23216 1602070024.19319: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 23216 1602070024.19334: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 23216 1602070024.19348: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 23216 1602070024.19363: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 23216 1602070024.19445: Loading LookupModule 'items' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/lookup/items.py (found_in_cache=True, class_only=False)
 23216 1602070024.19475: dumping result to json
 23216 1602070024.19495: done dumping result, returning
 23216 1602070024.19513: done running TaskExecutor() for seed/TASK: singleplatform-eng.users : Creating groups [848f69fe-4727-ca6b-d252-000000000038]
 23216 1602070024.19549: sending task result for task 848f69fe-4727-ca6b-d252-000000000038
 23216 1602070024.19657: done sending task result for task 848f69fe-4727-ca6b-d252-000000000038
 23216 1602070024.19739: WORKER PROCESS EXITING
 12775 1602070024.19913: no more pending results, returning what we have
 12775 1602070024.19928: results queue empty
 12775 1602070024.19937: checking for any_errors_fatal
 12775 1602070024.19948: done checking for any_errors_fatal
 12775 1602070024.19957: checking for max_fail_percentage
 12775 1602070024.19966: done checking for max_fail_percentage
 12775 1602070024.19975: checking to see if all hosts have failed and the running result is not ok
 12775 1602070024.19982: done checking to see if all hosts have failed
 12775 1602070024.19990: getting the remaining hosts for this loop
 12775 1602070024.20007: done getting the remaining hosts for this loop
 12775 1602070024.20023: building list of next tasks for hosts
 12775 1602070024.20033: getting the next task for host seed
 12775 1602070024.20046: done getting next task for host seed
 12775 1602070024.20061:  ^ task is: TASK: singleplatform-eng.users : Per-user group creation
 12775 1602070024.20072:  ^ state is: HOST STATE: block=2, task=2, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602070024.20082: done building task lists
 12775 1602070024.20090: counting tasks in each state of execution
 12775 1602070024.20098: done counting tasks in each state of execution:
        num_setups: 0
        num_tasks: 1
        num_rescue: 0
        num_always: 0
 12775 1602070024.20112: advancing hosts in ITERATING_TASKS
 12775 1602070024.20120: starting to advance hosts
 12775 1602070024.20128: getting the next task for host seed
 12775 1602070024.20138: done getting next task for host seed
 12775 1602070024.20149:  ^ task is: TASK: singleplatform-eng.users : Per-user group creation
 12775 1602070024.20158:  ^ state is: HOST STATE: block=2, task=2, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602070024.20166: done advancing hosts to next task
 12775 1602070024.20194: getting variables
 12775 1602070024.20203: in VariableManager get_vars()
 12775 1602070024.20290: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 12775 1602070024.20302: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 12775 1602070024.20314: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 12775 1602070024.20326: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602070024.20557: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 12775 1602070024.20569: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 12775 1602070024.20581: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 12775 1602070024.20592: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 12775 1602070024.20602: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 12775 1602070024.20612: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 12775 1602070024.20623: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 12775 1602070024.20635: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 12775 1602070024.20645: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 12775 1602070024.20655: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 12775 1602070024.20666: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 12775 1602070024.20863: Calling all_inventory to load vars for seed
 12775 1602070024.20876: Calling groups_inventory to load vars for seed
 12775 1602070024.20893: Calling all_plugins_inventory to load vars for seed
 12775 1602070024.20927: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.20962: Calling all_plugins_play to load vars for seed
 12775 1602070024.20992: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.21025: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bifrost
 12775 1602070024.21040: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/bmc
 12775 1602070024.21057: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ceph
 12775 1602070024.21072: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/compute
 12775 1602070024.21089: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/controllers
 12775 1602070024.21106: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dell-switch-bmp
 12775 1602070024.21121: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dnf
 12775 1602070024.21136: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/dns
 12775 1602070024.21153: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker
 12775 1602070024.21168: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/docker-registry
 12775 1602070024.21186: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/globals
 12775 1602070024.21202: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/grafana
 12775 1602070024.21214: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/idrac
 12775 1602070024.21226: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/inspector
 12775 1602070024.21239: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ipa
 12775 1602070024.21253: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ironic
 12775 1602070024.21266: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/kolla
 12775 1602070024.21283: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monasca
 12775 1602070024.21298: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/monitoring
 12775 1602070024.21312: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/network
 12775 1602070024.21327: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/neutron
 12775 1602070024.21341: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/nova
 12775 1602070024.21355: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ntp
 12775 1602070024.21369: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/opensm
 12775 1602070024.21384: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/openstack
 12775 1602070024.21398: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/overcloud
 12775 1602070024.21412: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/pip
 12775 1602070024.21426: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed
 12775 1602070024.21440: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-hypervisor
 12775 1602070024.21454: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/seed-vm
 12775 1602070024.21468: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/ssh
 12775 1602070024.21483: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/storage
 12775 1602070024.21498: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/swift
 12775 1602070024.21515: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/arista
 12775 1602070024.21531: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/config
 12775 1602070024.21545: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell
 12775 1602070024.21560: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/dell-powerconnect
 12775 1602070024.21576: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/junos
 12775 1602070024.21591: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/switches/mellanox
 12775 1602070024.21607: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/users
 12775 1602070024.21622: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum
 12775 1602070024.21636: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/all/yum-cron
 12775 1602070024.21656: Calling groups_plugins_inventory to load vars for seed
 12775 1602070024.21682: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.21750: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/ansible-python-interpreter
 12775 1602070024.21762: Loading data from /home/centos/kayobe/config/src/kayobe-config/etc/kayobe/inventory/group_vars/seed/network-interfaces
 12775 1602070024.21778: Calling groups_plugins_play to load vars for seed
 12775 1602070024.21801: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.21856: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-host
 12775 1602070024.21867: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/ansible-user
 12775 1602070024.21882: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/lvm
 12775 1602070024.21893: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/mdadm
 12775 1602070024.21903: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/network
 12775 1602070024.21914: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/sysctl
 12775 1602070024.21925: Loading data from /home/centos/kayobe-venv/share/kayobe/ansible/group_vars/seed/users
 12775 1602070024.21963: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.22009: Loading VarsModule 'host_group_vars' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
 12775 1602070024.24275: done with get_vars()
 12775 1602070024.24299: done getting variables
 12775 1602070024.24310: sending task start callback, copying the task so we can template it temporarily
 12775 1602070024.24317: done copying, going to template now
 12775 1602070024.24324: done templating
 12775 1602070024.24330: here goes the callback...

TASK [singleplatform-eng.users : Per-user group creation] *********************************************************************************************************************************************************
 12775 1602070024.24346: sending task start callback
 12775 1602070024.24353: entering _queue_task() for seed/group
 12775 1602070024.24475: worker is 1 (out of 1 available)
 12775 1602070024.24521: exiting _queue_task() for seed/group
 12775 1602070024.24624: done queuing things up, now waiting for results queue to drain
 12775 1602070024.24634: waiting for pending results...
 23218 1602070024.24925: running TaskExecutor() for seed/TASK: singleplatform-eng.users : Per-user group creation
 23218 1602070024.25108: in run() - task 848f69fe-4727-ca6b-d252-000000000039
 23218 1602070024.25467: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.25489: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 23218 1602070024.25507: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 23218 1602070024.25522: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.25696: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 23218 1602070024.25710: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.25724: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 23218 1602070024.25740: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 23218 1602070024.25754: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 23218 1602070024.25768: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.25784: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 23218 1602070024.25799: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 23218 1602070024.25815: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 23218 1602070024.25829: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 23218 1602070024.25840: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 23218 1602070024.27968: Loaded config def from plugin (lookup/env)
 23218 1602070024.27987: Loading LookupModule 'env' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/lookup/env.py
 23218 1602070024.28907: Loaded config def from plugin (lookup/file)
 23218 1602070024.28920: Loading LookupModule 'file' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/lookup/file.py
 23218 1602070024.28933: File lookup term: /home/centos/.ssh/id_rsa.pub
 23218 1602070024.29043: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 23218 1602070024.29053: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.29063: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 23218 1602070024.29073: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 23218 1602070024.29085: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 23218 1602070024.29095: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.29104: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 23218 1602070024.29115: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 23218 1602070024.29125: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 23218 1602070024.29135: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 23218 1602070024.29144: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 23218 1602070024.29191: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.29201: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 23218 1602070024.29211: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 23218 1602070024.29220: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.29294: Loading LookupModule 'items' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/lookup/items.py (found_in_cache=True, class_only=False)
 23218 1602070024.29639: Loading TestModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.29650: Loading TestModule 'files' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
 23218 1602070024.29660: Loading TestModule 'functional' from /home/centos/kayobe-venv/share/kayobe/ansible/test_plugins/functional.py (found_in_cache=True, class_only=False)
 23218 1602070024.29670: Loading TestModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.29773: Loading FilterModule 'bmc_type' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/bmc_type.py (found_in_cache=True, class_only=False)
 23218 1602070024.29784: Loading FilterModule 'core' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
 23218 1602070024.29793: Loading FilterModule 'ipaddr' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/ipaddr.py (found_in_cache=True, class_only=False)
 23218 1602070024.29803: Loading FilterModule 'json_query' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/json_query.py (found_in_cache=True, class_only=False)
 23218 1602070024.29813: Loading FilterModule 'k8s' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/k8s.py (found_in_cache=True, class_only=False)
 23218 1602070024.29823: Loading FilterModule 'mathstuff' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
 23218 1602070024.29831: Loading FilterModule 'network' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/network.py (found_in_cache=True, class_only=False)
 23218 1602070024.29842: Loading FilterModule 'networks' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/networks.py (found_in_cache=True, class_only=False)
 23218 1602070024.29852: Loading FilterModule 'switches' from /home/centos/kayobe-venv/share/kayobe/ansible/filter_plugins/switches.py (found_in_cache=True, class_only=False)
 23218 1602070024.29860: Loading FilterModule 'urls' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
 23218 1602070024.29870: Loading FilterModule 'urlsplit' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
 23218 1602070024.31399: trying /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/connection
 23218 1602070024.31519: Loading Connection 'ssh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/connection/ssh.py (found_in_cache=True, class_only=False)
 23218 1602070024.31544: trying /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell
 23218 1602070024.31582: Loading ShellModule 'sh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
 23218 1602070024.31598: Loading ShellModule 'sh' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
 23218 1602070024.31624: trying /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/become
 23218 1602070024.31682: Loading BecomeModule 'sudo' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/become/sudo.py (found_in_cache=True, class_only=False)
 23218 1602070024.32337: Loading ActionModule 'normal' from /home/centos/kayobe-venv/lib/python2.7/site-packages/ansible/plugins/action/normal.py
 23218 1602070024.32355: starting attempt loop
 23218 1602070024.32363: running the handler
 23218 1602070024.32416: _low_level_execute_command(): starting
 23218 1602070024.32428: _low_level_execute_command(): executing: /bin/sh -c 'echo ~centos && sleep 0'
 23218 1602070209.93723: stdout chunk (state=2):
>>>/home/centos
<<<

 23218 1602070210.04945: stdout chunk (state=3):
>>><<<

 23218 1602070210.04971: stderr chunk (state=3):
>>><<<

 23218 1602070210.05024: _low_level_execute_command() done: rc=0, stdout=/home/centos
, stderr=
 23218 1602070210.05084: _low_level_execute_command(): starting
 23218 1602070210.05105: _low_level_execute_command(): executing: /bin/sh -c '( umask 77 && mkdir -p "` echo /home/centos/.ansible/tmp `"&& mkdir /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975 && echo ansible-tmp-1602070210.05-23218-34422041703975="` echo /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975 `" ) && sleep 0'
 23218 1602070291.58973: stdout chunk (state=2):
>>>ansible-tmp-1602070210.05-23218-34422041703975=/home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975
<<<

 23218 1602070291.69613: stdout chunk (state=3):
>>><<<

 23218 1602070291.69636: stderr chunk (state=3):
>>><<<

 23218 1602070291.69685: _low_level_execute_command() done: rc=0, stdout=ansible-tmp-1602070210.05-23218-34422041703975=/home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975
, stderr=
 23218 1602070291.69840: ANSIBALLZ: Using lock for group
 23218 1602070291.69853: ANSIBALLZ: Acquiring lock
 23218 1602070291.69869: ANSIBALLZ: Lock acquired: 139957479612560
 23218 1602070291.69886: ANSIBALLZ: Creating module
 23218 1602070291.97066: ANSIBALLZ: Writing module
 23218 1602070291.97097: ANSIBALLZ: Renaming module
 23218 1602070291.97107: ANSIBALLZ: Done creating module
 23218 1602070291.97215: transferring module to remote /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975/AnsiballZ_group.py
 23218 1602070291.98090: Sending initial data
 23218 1602070291.98107: Sent initial data (158 bytes)
 23218 1602070372.99129: stdout chunk (state=3):
>>>sftp> put /home/centos/.ansible/tmp/ansible-local-12775RrbMvE/tmpNmNSfx /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975/AnsiballZ_group.py
<<<

 23218 1602070373.04339: stdout chunk (state=3):
>>><<<

 23218 1602070373.04364: stderr chunk (state=3):
>>><<<

 23218 1602070373.04428: done transferring module to remote
 23218 1602070373.04496: _low_level_execute_command(): starting
 23218 1602070373.04520: _low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975/ /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975/AnsiballZ_group.py && sleep 0'
 23218 1602070454.58436: stdout chunk (state=2):
>>><<<

 23218 1602070454.58484: stderr chunk (state=2):
>>><<<

 23218 1602070454.58540: _low_level_execute_command() done: rc=0, stdout=, stderr=
 23218 1602070454.58559: _low_level_execute_command(): starting
 23218 1602070454.58581: _low_level_execute_command(): using become for this command
 23218 1602070454.58638: _low_level_execute_command(): executing: /bin/sh -c 'sudo -H -S -n  -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ehsrdsrqcdlqiorlvgpvnvtiybeszcod ; /usr/libexec/platform-python /home/centos/.ansible/tmp/ansible-tmp-1602070210.05-23218-34422041703975/AnsiballZ_group.py'"'"' && sleep 0'
 23218 1602070454.59678: Initial state: awaiting_escalation: BECOME-SUCCESS-ehsrdsrqcdlqiorlvgpvnvtiybeszcod
 23218 1602070536.97097: stdout chunk (state=1):
>>>sudo: a password is required
<<<

 23218 1602070536.97244: become_nopasswd_error: (source=stdout, state=awaiting_escalation): 'sudo: a password is required'
 23218 1602070536.97419: done running TaskExecutor() for seed/TASK: singleplatform-eng.users : Per-user group creation [848f69fe-4727-ca6b-d252-000000000039]
 23218 1602070536.97463: sending task result for task 848f69fe-4727-ca6b-d252-000000000039
 23218 1602070536.97565: done sending task result for task 848f69fe-4727-ca6b-d252-000000000039
 23218 1602070536.97627: WORKER PROCESS EXITING
 12775 1602070536.97779: marking seed as failed
 12775 1602070536.97807: marking host seed failed, current state: HOST STATE: block=2, task=2, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602070536.97825: ^ failed state is now: HOST STATE: block=2, task=2, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
 12775 1602070536.97838: getting the next task for host seed
 12775 1602070536.97850: host seed is done iterating, returning
fatal: [seed]: FAILED! => {"msg": "Missing sudo password"}
 12775 1602070536.97917: no more pending results, returning what we have
 12775 1602070536.97933: results queue empty
 12775 1602070536.97942: checking for any_errors_fatal
 12775 1602070536.97957: done checking for any_errors_fatal
 12775 1602070536.97967: checking for max_fail_percentage
 12775 1602070536.97979: done checking for max_fail_percentage
 12775 1602070536.97989: checking to see if all hosts have failed and the running result is not ok
 12775 1602070536.97998: done checking to see if all hosts have failed
 12775 1602070536.98007: getting the remaining hosts for this loop
 12775 1602070536.98025: done getting the remaining hosts for this loop
 12775 1602070536.98045: building list of next tasks for hosts
 12775 1602070536.98055: getting the next task for host seed
 12775 1602070536.98066: host seed is done iterating, returning
 12775 1602070536.98077: done building task lists
 12775 1602070536.98086: counting tasks in each state of execution
 12775 1602070536.98096: done counting tasks in each state of execution:
        num_setups: 0
        num_tasks: 0
        num_rescue: 0
        num_always: 0
 12775 1602070536.98114: all hosts are done, so returning None's for all hosts
 12775 1602070536.98125: done queuing things up, now waiting for results queue to drain
 12775 1602070536.98134: results queue empty
 12775 1602070536.98143: checking for any_errors_fatal
 12775 1602070536.98152: done checking for any_errors_fatal
 12775 1602070536.98161: checking for max_fail_percentage
 12775 1602070536.98169: done checking for max_fail_percentage
 12775 1602070536.98181: checking to see if all hosts have failed and the running result is not ok
 12775 1602070536.98189: done checking to see if all hosts have failed
 12775 1602070536.98203: getting the next task for host seed
 12775 1602070536.98214: host seed is done iterating, returning
 12775 1602070536.98224: running handlers

PLAY RECAP ********************************************************************************************************************************************************************************************************
seed                       : ok=6    changed=0    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0
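
The recap above shows the run aborting at the first task that requires privilege escalation: "sudo: a password is required" is reported by Ansible as "Missing sudo password". The usual remedy is to give the Ansible login user passwordless sudo on the seed host before re-running the deploy. A minimal sketch, assuming the login user is centos (inferred from the home-directory paths in the log; substitute your own user):

# On the seed host (you will be prompted for a password this one time):
echo 'centos ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/centos
sudo chmod 440 /etc/sudoers.d/centos

# Verify that sudo now works non-interactively, as Ansible requires:
sudo -n true && echo 'passwordless sudo OK'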

Cannot access host after deploying seed service

Hello,

I am trying to deploy OpenStack (Wallaby) on a CentOS 8.4 VM hosted on JetStream. After running the command kayobe seed service deploy, I get kicked out of my SSH session and cannot access the host again. Even if I restart the host, I am not able to SSH to it anymore; I have to delete the instance and create a new one.

Any ideas about what the issue could be?

Log:
(kayobe-venv) [rmadridr@js-156-237 kayobe]$ kayobe seed service deploy
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Found both group and host with same name: seed
[WARNING]: Found both group and host with same name: seed-hypervisor

PLAY [Ensure defined container images are deployed on Seed node] ***************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************
ok: [seed]

TASK [deploy-containers : Deploy containers (loop)] ****************************************************************************************************************************

PLAY RECAP *********************************************************************************************************************************************************************
seed                       : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Found both group and host with same name: seed
[WARNING]: Found both group and host with same name: seed-hypervisor

PLAY [Gather facts for localhost] **********************************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution centos 8.4.2105 on host localhost should use /usr/libexec/platform-python, but is using /usr/bin/python for backward compatibility with
prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]

PLAY [Validate configuration options for kolla-ansible] ************************************************************************************************************************

PLAY [Ensure Kolla Ansible is configured] **************************************************************************************************************************************

TASK [Look for environment file in Kolla configuration path] *******************************************************************************************************************
ok: [localhost]

TASK [Flag that the Kolla configuration path has been used by another environment] *********************************************************************************************
skipping: [localhost]

TASK [Check whether a Kolla extra globals configuration file exists] ***********************************************************************************************************
ok: [localhost]

TASK [Read the Kolla extra globals configuration file] *************************************************************************************************************************
ok: [localhost]

TASK [Validate Kolla Ansible API address configuration] ************************************************************************************************************************
skipping: [localhost] => (item={'var_name': 'kolla_internal_vip_address', 'description': 'Internal API VIP address', 'required': True})
skipping: [localhost] => (item={'var_name': 'kolla_internal_fqdn', 'description': 'Internal API Fully Qualified Domain Name (FQDN)', 'required': True})
skipping: [localhost] => (item={'var_name': 'kolla_external_vip_address', 'description': 'external API VIP address', 'required': True})
skipping: [localhost] => (item={'var_name': 'kolla_external_fqdn', 'description': 'External API Fully Qualified Domain Name (FQDN)', 'required': True})

TASK [kolla-ansible : Check whether the legacy Kolla overcloud inventory files exist] ******************************************************************************************
ok: [localhost] => (item=seed)
ok: [localhost] => (item=overcloud)

TASK [kolla-ansible : Ensure the legacy Kolla overcloud inventory file is absent] **********************************************************************************************
skipping: [localhost] => (item=seed)
skipping: [localhost] => (item=overcloud)

TASK [kolla-ansible : Ensure the Kolla Ansible configuration directories exist] ************************************************************************************************
ok: [localhost] => (item=/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla)
ok: [localhost] => (item=/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/inventory/seed)
ok: [localhost] => (item=/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/inventory/overcloud/group_vars)
ok: [localhost] => (item=/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/config)
[WARNING]: The value "1000" (type int) was converted to "'1000'" (type string). If this does not look like what you expect, quote the entire value to ensure it does not
change.

TASK [kolla-ansible : Write environment file into Kolla configuration path] ****************************************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure the Kolla global configuration file exists] *******************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Ensure the Kolla seed inventory file exists] *************************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Ensure the Kolla overcloud inventory file exists] ********************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Look for custom Kolla overcloud group vars] **************************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Copy over custom Kolla overcloud group vars] *************************************************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure the Kolla passwords file exists] ******************************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Ensure the Kolla passwords file is copied into place] ****************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Ensure external HAProxy TLS directory exists] ************************************************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure the external HAProxy TLS certificate bundle is copied into place] *********************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure internal HAProxy TLS directory exists] ************************************************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure the internal HAProxy TLS certificate bundle is copied into place] *********************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Find certificates] ***************************************************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Find previously copied certificates] *********************************************************************************************************************
ok: [localhost]

TASK [kolla-ansible : Ensure certificates exist] *******************************************************************************************************************************
skipping: [localhost]

TASK [kolla-ansible : Ensure unnecessary certificates are absent] **************************************************************************************************************

PLAY [Generate Kolla Ansible host vars for the seed host] **********************************************************************************************************************

TASK [Set Kolla Ansible host variables] ****************************************************************************************************************************************
ok: [seed]

TASK [kolla-ansible-host-vars : Ensure the Kolla Ansible host vars directory exists] *******************************************************************************************
ok: [seed]

TASK [kolla-ansible-host-vars : Ensure the Kolla Ansible host vars file exists] ************************************************************************************************
ok: [seed]

PLAY [Generate Kolla Ansible host vars for overcloud hosts] ********************************************************************************************************************
skipping: no hosts matched

PLAY RECAP *********************************************************************************************************************************************************************
localhost : ok=14 changed=0 unreachable=0 failed=0 skipped=11 rescued=0 ignored=0
seed : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Found both group and host with same name: seed
[WARNING]: Found both group and host with same name: seed-hypervisor

PLAY [Ensure Kolla Bifrost is configured] **************************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution centos 8.4.2105 on host localhost should use /usr/libexec/platform-python, but is using /usr/bin/python for backward compatibility with
prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]

TASK [Check whether a Kolla Bifrost extra globals configuration file exists] ***************************************************************************************************
ok: [localhost]

TASK [Read the Kolla Bifrost extra globals configuration file] *****************************************************************************************************************
ok: [localhost]

TASK [kolla-bifrost : Ensure the Kolla Bifrost configuration directories exist] ************************************************************************************************
changed: [localhost]

TASK [kolla-bifrost : Ensure the Kolla Bifrost configuration files exist] ******************************************************************************************************
changed: [localhost] => (item={'src': 'bifrost.yml.j2', 'dest': 'bifrost.yml'})
changed: [localhost] => (item={'src': 'dib.yml.j2', 'dest': 'dib.yml'})
changed: [localhost] => (item={'src': 'servers.yml.j2', 'dest': 'servers.yml'})

PLAY RECAP *********************************************************************************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Deploying Bifrost : ansible-playbook -i /home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/inventory/seed -e @/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/globals.yml -e @/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla/passwords.yml -e CONFIG_DIR=/home/rmadridr/kayobe/config/src/kayobe-config/etc/kolla -e kolla_action=deploy /home/rmadridr/kayobe/config/venvs/kolla-ansible/share/kolla-ansible/ansible/bifrost.yml
[WARNING]: Found both group and host with same name: seed

PLAY [Apply role bifrost] ******************************************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************
ok: [seed]

TASK [bifrost : include_tasks] *************************************************************************************************************************************************
included: /home/rmadridr/kayobe/config/venvs/kolla-ansible/share/kolla-ansible/ansible/roles/bifrost/tasks/deploy.yml for seed

TASK [bifrost : Ensuring config directories exist] *****************************************************************************************************************************
changed: [seed] => (item=bifrost)

TASK [bifrost : Generate bifrost configs] **************************************************************************************************************************************
changed: [seed] => (item=bifrost)
changed: [seed] => (item=dib)
changed: [seed] => (item=servers)

TASK [bifrost : Template ssh keys] *********************************************************************************************************************************************
changed: [seed] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
changed: [seed] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
changed: [seed] => (item={'src': 'ssh_config', 'dest': 'ssh_config'})

TASK [bifrost : Starting bifrost deploy container] *****************************************************************************************************************************
changed: [seed]

TASK [bifrost : Ensure log directories exist] **********************************************************************************************************************************
changed: [seed]

TASK [bifrost : Bootstrap bifrost (this may take several minutes)] *************************************************************************************************************
client_loop: send disconnect: Broken pipe

Docker registry not enabled by default, resulting in an impediment when following the tutorial

While running the pull-retag-push-images.sh script, the pull-retag-push.yml playbook fails on the 'Push container images (may take a long time)' task.

Inspection of the Docker logs on the seed VM shows that it fails to connect to the registry:

May 16 11:21:39 seed dockerd[29263]: time="2024-05-16T11:21:39.784093624Z" level=info msg="Attempting next endpoint for push after error: Get \"https://192.168.33.5:4000/v2/\": dial tcp 192.168.33.5:4000: connect: connection refused" spanID=56f2c85d5a1eb66e traceID=1d69a163f4ea809d25bbc3043b651a88
May 16 11:21:39 seed dockerd[29263]: time="2024-05-16T11:21:39.784305527Z" level=info msg="Attempting next endpoint for push after error: Get \"http://192.168.33.5:4000/v2/\": dial tcp 192.168.33.5:4000: connect: connection refused" spanID=56f2c85d5a1eb66e traceID=1d69a163f4ea809d25bbc3043b651a88

Inspecting Docker on the seed VM shows that the registry container is not running.
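
A quick way to confirm this from the seed hypervisor (a hedged sketch: the commands assume SSH access as the stack user at 192.168.33.5, matching the log above):

# Check whether the registry container exists, and whether anything listens on port 4000.
ssh stack@192.168.33.5 'sudo docker ps --all --filter name=docker_registry'
ssh stack@192.168.33.5 'sudo ss -tlnp | grep 4000'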

To debug this, rerun the previous step with verbose output: kayobe seed host configure -vvv

Line 8491 in the following output shows 'enabled': False. This is due to the default value of the docker_registry_enabled variable, explained below.

...
8467
8468 TASK [docker-registry : Ensure Docker registry container is running] ***********
8469 task path: /home/rocky/kayobe/ansible/roles/docker-registry/tasks/deploy.yml:4
8470 redirecting (type: modules) ansible.builtin.docker_container to community.docker.docker_container
8471 <192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
8472 <192.168.33.5> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stac»
8473 <192.168.33.5> (0, b'/home/stack\n', b'')
8474 <192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
8475 <192.168.33.5> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stac»
8476 <192.168.33.5> (0, b'ansible-tmp-1715859604.244917-107649-217300013138574=/home/stack/.ansible/tmp/ansible-tmp-1715859604.244917-107649-217300013138574\n', b'')
8477 redirecting (type: modules) ansible.builtin.docker_container to community.docker.docker_container
8478 Using module file /home/rocky/kayobe-venv/lib64/python3.9/site-packages/ansible_collections/community/docker/plugins/modules/docker_container.py
8479 <192.168.33.5> PUT /home/rocky/.ansible/tmp/ansible-local-106305qo27bpff/tmpc7a4w7y2 TO /home/stack/.ansible/tmp/ansible-tmp-1715859604.244917-107649-217300013138574/AnsiballZ_docker_container.py
8480 <192.168.33.5> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User»
8481 <192.168.33.5> (0, b'sftp> put /home/rocky/.ansible/tmp/ansible-local-106305qo27bpff/tmpc7a4w7y2 /home/stack/.ansible/tmp/ansible-tmp-1715859604.244917-107649-217300013138574/AnsiballZ_docker_container.py\n', b'')
8482 <192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
8483 <192.168.33.5> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stac»
8484 <192.168.33.5> (0, b'', b'')
8485 <192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
8486 <192.168.33.5> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stac»
8487 <192.168.33.5> (0, b'\r\n{"changed": false, "invocation": {"module_args": {"env": {"REGISTRY_HTTP_ADDR": "0.0.0.0:4000"}, "image": "registry:latest", "name": "docker_registry", "network_mode": "host", "ports": [], "restart_policy":»
8488 <192.168.33.5> ESTABLISH SSH CONNECTION FOR USER: stack
8489 <192.168.33.5> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="stac»
8490 <192.168.33.5> (0, b'', b'')
8491 ok: [seed] => (item={'key': 'docker_registry', 'value': {'container_name': 'docker_registry', 'env': {'REGISTRY_HTTP_ADDR': '0.0.0.0:4000'}, 'enabled': False, 'image': 'registry:latest', 'network_mode': 'host', 'ports': [], 'volume»
8492     "ansible_loop_var": "item",
8493     "changed": false,
8494     "invocation": {
8495         "module_args": {
8496             "api_version": "auto",
8497             "auto_remove": null,
...

Recursive grepping for the docker_registry_enabled variable shows that it is
being set in a couple of places. Importantly, the default (False) is set in
ansible/inventory/group_vars/all/docker-registry.

This value does not get overwritten by the role default due to the precedence rules of Ansible variables [1]: inventory group_vars sit above role defaults in the precedence order, so the inventory value wins. The variable defaulting to False is therefore expected behaviour, but it is nevertheless surprising when you follow the tutorial! The following output shows which files touch the variable (a standalone demonstration of the precedence rule follows the output):

[kayobe-venv] rocky@ad-univ-mu ~/kayobe  (stable/2023.1)
> rg registry_enabled
etc/kayobe/docker.yml
28:# Default is false, unless docker_registry_enabled is true and

etc/kayobe/docker-registry.yml
6:#docker_registry_enabled:

doc/source/configuration/reference/docker-registry.rst
18:``docker_registry_enabled``

ansible/roles/docker-registry/defaults/main.yml
9:docker_registry_enabled: true
48:    enabled: "{{ docker_registry_enabled }}"

ansible/roles/docker-registry/README.md
18:``docker_registry_enabled``: Whether the Docker registry is enabled. Defaults

ansible/inventory/group_vars/all/docker-registry
6:docker_registry_enabled: False

ansible/inventory/group_vars/all/docker
28:# Default is false, unless docker_registry_enabled is true and
30:docker_registry_insecure: "{{ docker_registry_enabled | bool and not docker_registry_enable_tls | bool }}"

config/src/kayobe-config/etc/kayobe/kolla.yml
89:# images. Default is false, unless docker_registry_enabled is true and

config/src/kayobe-config/etc/kayobe/docker.yml
29:# Default is false, unless docker_registry_enabled is true and

config/src/kayobe-config/etc/kayobe/docker-registry.yml
6:#docker_registry_enabled:
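
The precedence rule itself is easy to demonstrate in isolation. A minimal, self-contained sketch (all file and variable names here are hypothetical, not from Kayobe): a role default loses to an inventory group_vars/all value.

# Build a throwaway Ansible tree demonstrating the precedence rule.
mkdir -p precedence-demo/roles/demo/{defaults,tasks} precedence-demo/inventory/group_vars
echo 'demo_var: "role default"' > precedence-demo/roles/demo/defaults/main.yml
printf -- '- debug:\n    var: demo_var\n' > precedence-demo/roles/demo/tasks/main.yml
echo 'demo_var: "inventory group_vars (wins)"' > precedence-demo/inventory/group_vars/all
echo 'localhost ansible_connection=local' > precedence-demo/inventory/hosts
printf -- '- hosts: localhost\n  gather_facts: false\n  roles: [demo]\n' > precedence-demo/site.yml
# Prints "inventory group_vars (wins)", not "role default".
ansible-playbook -i precedence-demo/inventory/hosts precedence-demo/site.yml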

Workaround

Set the following in config/src/kayobe-config/etc/kayobe/docker-registry.yml

docker_registry_enabled: true
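
Applied from the seed hypervisor, assuming the source tree layout from the README (re-running the host configure step afterwards is an assumption, based on the debugging step above):

cd ~/kayobe
echo 'docker_registry_enabled: true' >> config/src/kayobe-config/etc/kayobe/docker-registry.yml
# Re-apply the seed host configuration so the registry container is started.
kayobe seed host configure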

Possible solution

Maybe just mention this in the top level readme?

Footnotes

  1. https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence

kayobe seed vm provision fails to start seed-vm when using Rocky 9 image

Check out the master branch and follow the tutorial: the step kayobe seed vm provision fails on the following task:

TASK [stackhpc.libvirt-vm : include_tasks] ******************************************************************
included: /home/rocky/kayobe/ansible/roles/stackhpc.libvirt-vm/tasks/volumes.yml
for seed-hypervisor => (item={'name': 'seed', 'memory_mb': '4096', 'vcpus': '1',
'volumes': [{'name': 'seed-root', 'pool': 'default', 'capacity': '50G',
'format': 'qcow2', 'image': '
https://dl.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud.latest.x86_64.qcow2'},
{'name': 'seed-data', 'pool': 'default', 'capacity': '100G', 'format': 'qcow2'},
{'name': 'seed-configdrive', 'pool': 'default', 'capacity': '385024', 'device':
'cdrom', 'format': 'raw', 'image': '/opt/kayobe/images/seed.iso'}],
'interfaces': [{'network': 'aio', 'net_name': 'aio'}], 'console_log_enabled':
True})

TASK [Wait for SSH access to the seed VM] *******************************************************************
fatal: [seed-hypervisor -> localhost]: FAILED! => {"changed": false, "elapsed":
360, "msg": "Timeout when waiting for 192.168.33.5:22"}

Inspecting /var/log/libvirt-consoles/seed-console.log shows that the VM is stuck in a boot loop:

Machine UUID 27e2ea68-c8ea-49a6-96f4-c5ebc1102097


iPXE (http://ipxe.org) 00:02.0 C000 PCI2.10 PnP PMM+BEFCCA80+BEF0CA80 C000
Press Ctrl-B to configure iPXE (PCI 00:02.0)...
                                                                               


Booting from Hard Disk...
GRUB loading.......
Welcome to GRUB

Workaround

Run kayobe seed vm deprovision

Edit the variable seed_vm_root_image to use a newer image.

From within the source tree described in the README, the variable is found in
the file ~/kayobe/config/src/kayobe-config/etc/kayobe/seed-vm.yml

Set this variable to point to a Rocky 9.3 image:

diff --git a/etc/kayobe/seed-vm.yml b/etc/kayobe/seed-vm.yml
index fe68cee..48a0e6c 100644
--- a/etc/kayobe/seed-vm.yml
+++ b/etc/kayobe/seed-vm.yml
@@ -33,7 +33,7 @@ seed_vm_vcpus: 1
 # or
 # "https://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-20210603.0.x86_64.qcow2"
 # otherwise.
-#seed_vm_root_image:
+seed_vm_root_image: "https://dl.rockylinux.org/vault/rocky/9.3/images/x86_64/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2"

 # Capacity of the seed VM data volume.
 #seed_vm_data_capacity:

Rerun kayobe seed vm provision; the playbook now succeeds.
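
Putting the workaround together (a sketch assuming the source tree from the README; the sed edit is illustrative, and editing seed-vm.yml by hand works just as well):

cd ~/kayobe
kayobe seed vm deprovision
# Point the seed VM root disk at a Rocky 9.3 image.
sed -i 's|^#seed_vm_root_image:.*|seed_vm_root_image: "https://dl.rockylinux.org/vault/rocky/9.3/images/x86_64/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2"|' \
    config/src/kayobe-config/etc/kayobe/seed-vm.yml
kayobe seed vm provision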

Hetzner root server install

Hi: has this been tested on a Hetzner root server? I tried recently, but the networking broke down and I couldn't finish the install. It failed during the second section, when creating the seed: it tries to download Chrony, but the connection to the internet fails. Any pointers appreciated. Cheers, Dave

Seed VM doesn't start after reboot

Despite being configured to autostart, the seed VM is found shut off after reboot or first boot from snapshot:

$ sudo virsh list --all
 Id    Name                           State         
----------------------------------------------------
 -     seed                           shut off

System logs show:

libvirtd[1298]: 2020-08-25 16:20:02.795+0000: 1412: error : qemuAutostartDomain:258 : internal error: Failed to autostart VM 'seed': Unable to delete file /var/log/libvirt-consoles//seed-console.log: Permission denied

This happens when following instructions that only disable SELinux temporarily (setenforce 0); the change also needs to be made permanent in /etc/selinux/config so that it survives a reboot.
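
In the meantime, the VM can be started by hand with standard libvirt commands:

# Start the seed VM manually and confirm its state and autostart flag.
sudo virsh start seed
sudo virsh list --all
sudo virsh dominfo seed | grep -i autostart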

Failed TASK [stackhpc.libvirt-vm : Ensure the VM volumes exist]

Hi, I'm following the tutorial, but the ./dev/seed-deploy.sh phase fails with the error below (apologies for the Italian error messages in the Ansible output; 'errore: impossibile ottenere il vol' means 'error: failed to get vol', and 'Volume di storage non trovato: nessun vol di storage con percorso corrispondente a' means 'storage volume not found: no storage vol with a path matching').

The host is a bare-metal machine with 4 cores and 32GB RAM, running a fresh install of the latest CentOS 7.

TASK [stackhpc.libvirt-vm : Ensure the VM volumes exist] ********************************************************************************************************
failed: [seed-hypervisor] (item={u'image': u'https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2', u'capacity': u'50G', u'name': u'seed-root', u'pool': u'default', u'format': u'qcow2'}) => {"ansible_loop_var": "item", "changed": false, "item": {"capacity": "50G", "format": "qcow2", "image": "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2", "name": "seed-root", "pool": "default"}, "msg": "non-zero return code", "rc": 1, "stderr": "Shared connection to 192.168.33.4 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.33.4 closed."], "stdout": "Unexpected error while getting volume info\r\nerrore: impossibile ottenere il vol 'seed-root'\r\nerrore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-root'\r\n", "stdout_lines": ["Unexpected error while getting volume info", "errore: impossibile ottenere il vol 'seed-root'", "errore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-root'"]}
failed: [seed-hypervisor] (item={u'capacity': u'100G', u'name': u'seed-data', u'pool': u'default', u'format': u'qcow2'}) => {"ansible_loop_var": "item", "changed": false, "item": {"capacity": "100G", "format": "qcow2", "name": "seed-data", "pool": "default"}, "msg": "non-zero return code", "rc": 1, "stderr": "Shared connection to 192.168.33.4 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.33.4 closed."], "stdout": "Unexpected error while getting volume info\r\nerrore: impossibile ottenere il vol 'seed-data'\r\nerrore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-data'\r\n", "stdout_lines": ["Unexpected error while getting volume info", "errore: impossibile ottenere il vol 'seed-data'", "errore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-data'"]}
failed: [seed-hypervisor] (item={u'capacity': u'389120', u'name': u'seed-configdrive', u'format': u'raw', u'device': u'cdrom', u'image': u'/opt/kayobe/images/seed.iso', u'pool': u'default'}) => {"ansible_loop_var": "item", "changed": false, "item": {"capacity": "389120", "device": "cdrom", "format": "raw", "image": "/opt/kayobe/images/seed.iso", "name": "seed-configdrive", "pool": "default"}, "msg": "non-zero return code", "rc": 1, "stderr": "Shared connection to 192.168.33.4 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.33.4 closed."], "stdout": "Unexpected error while getting volume info\r\nerrore: impossibile ottenere il vol 'seed-configdrive'\r\nerrore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-configdrive'\r\n", "stdout_lines": ["Unexpected error while getting volume info", "errore: impossibile ottenere il vol 'seed-configdrive'", "errore: Volume di storage non trovato: nessun vol di storage con percorso corrispondente a 'seed-configdrive'"]}
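
No resolution is recorded here, but a first debugging step might be to inspect the libvirt 'default' storage pool on the seed hypervisor (standard virsh commands; the pool name comes from the task items above):

sudo virsh pool-list --all
sudo virsh vol-list default
# Check free space where the default pool usually lives.
df -h /var/lib/libvirt/images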

Missing python-setuptools

Ubuntu 20.04 deployed via MAAS, running ./dev/seed-hypervisor-deploy.sh:

TASK [kolla-ansible : Ensure the latest version of pip is installed] ***********************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named pkg_resources
failed: [localhost] (item={'name': 'pip'}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "pip"}, "msg": "Failed to import the required Python library (setuptools) on kvm01's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}

commit 688d8e167a6d4f1fd65f3d83566a1f56adf388c7 (HEAD -> stable/victoria)
Author: Pierre Riteau [email protected]
Date: Thu May 27 15:31:06 2021 +0200

Fix host configuration due to missing python3

We recently backported a change to stop using platform-python [1].
However, CentOS cloud images come without python3: only platform-python
is available. We need to install python3 to ensure host configuration
works.

Cherry picked from commit 1c458c24a904b9f54ebf026d7227c801f22f49dd.

[1] https://review.opendev.org/c/openstack/kayobe/+/789753

Change-Id: If105c9a0c4a8ce7de6fd8b7b4da43cde48169f37
Story: 2008930
Task: 42532

The same behaviour occurs with commit 37ab622cde3ad7b0e1b3b703f688cd24a0899ba2 (origin/stable/victoria).

To resolve it, install setuptools on the seed hypervisor: apt install python-setuptools

kayobe/kolla-ansible - Ensure selinux Python package is linked into the virtualenv

TASK [kolla-ansible : Ensure selinux Python package is linked into the virtualenv] *********************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "src file does not exist, use \"force=yes\" if you really want to create the link: /usr/lib64/python2.7/site-packages/selinux", "path": "/root/kolla-venv/lib/python2.7/site-packages/selinux", "src": "/usr/lib64/python2.7/site-packages/selinux", "state": "absent"}
        to retry, use: --limit @/root/kayobe-venv/share/kayobe/ansible/kolla-ansible.retry

The task is defined at line 70 of ~/kayobe-venv/share/kayobe/ansible/roles/kolla-ansible/tasks/install.yml:

# This is a workaround for the lack of a python package for libselinux-python
# on PyPI. Without using --system-site-packages to create the virtualenv, it
# seems difficult to ensure the selinux python module is available. It is a
# dependency for Ansible when selinux is enabled.
- name: Ensure selinux Python package is linked into the virtualenv
  file:
    src: "/usr/lib64/python2.7/site-packages/selinux"
    dest: "{{ kolla_ansible_venv }}/lib/python2.7/site-packages/selinux"
    state: link
  when:
    - ansible_os_family == 'RedHat'
    - ansible_selinux != False
    - ansible_selinux.status != 'disabled'

What can I do? Is it possible to upgrade to Python 3?
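
No resolution is recorded here either, but as a hedged guess for CentOS 7 with Python 2: the missing source path is shipped by the libselinux-python RPM, so installing it before re-running may allow the link task to succeed:

sudo yum -y install libselinux-python
# Confirm the path the task links from now exists.
ls /usr/lib64/python2.7/site-packages/selinux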
