
network's People

Contributors

35niavlys, a-mere-peasant, acardace, cathay4t, dependabot[bot], elvgarrui, ffmancera, i386x, jaedolph, kwevers, laddp, larskarlitski, liangwen12year, marituhone, nhosoi, pcahyna, razaloc, richm, roolebo, rpabel, samdoran, smasc, spetrosi, systemroller, tabowling, thom311, tyll, vbenes, vcrhonek, yontalcar


network's Issues

network - state:absent followed by state:up only works part of the time

# cat network.yml 
---
- hosts: host.domain.com
  vars:
    network_connections:
      - name: enp5s0f0
        state: absent
      - name: enp5s0f0
        interface_name: enp5s0f0
        type: ethernet
        state: up
        autoconnect: yes
        ip:
          dhcp4: no
          auto6: no
          address:
            - 10.0.0.1/30
  roles:
    - role: network 
# ansible-playbook -l host.domain.com network.yml 

Working output:

TASK [network : Configure networking connection profiles] *************************************************************************
[WARNING]: #0, state:absent, "enp5s0f0": delete connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9
[WARNING]: #1, state:up, "enp5s0f0": add connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9
[WARNING]: #1, state:up, "enp5s0f0": up connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9

Failing output:

TASK [network : Configure networking connection profiles] *************************************************************************
[WARNING]: #0, state:absent, "enp5s0f0": delete connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": add connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": up connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": failure: 'ActiveConnection' object has no attribute 'get_state_reason' [[Traceback (most recent call last):
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1776, in run
    self.run_state_up(idx)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1942, in run_state_up
    self.nmutil.connection_activate_wait(ac, wait_time)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1498, in connection_activate_wait
    complete, failure_reason = check_activated(ac, dev)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1468, in check_activated
    ac_reason = ac.get_state_reason()
AttributeError: 'ActiveConnection' object has no attribute 'get_state_reason' ]]
[WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 2103, in <module>
    Cmd.create().run()
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1776, in run
    self.run_state_up(idx)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1942, in run_state_up
    self.nmutil.connection_activate_wait(ac, wait_time)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1498, in connection_activate_wait
    complete, failure_reason = check_activated(ac, dev)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1468, in check_activated
    ac_reason = ac.get_state_reason()
AttributeError: 'ActiveConnection' object has no attribute 'get_state_reason'

fatal: [host.domain.com]: FAILED! => {"changed": false, "failed": true, "msg": "fatal error: 'ActiveConnection' object has no attribute 'get_state_reason'"}
to retry, use: --limit @/usr/local/ansible/network.retry
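
For context, NM.ActiveConnection.get_state_reason() only exists in libnm 1.8 and later, so older hosts hit the AttributeError above. A minimal compatibility sketch for the module's check_activated() path, assuming a missing reason may be treated as unknown:

# Sketch of a guard for check_activated(): treat a missing
# get_state_reason() (libnm < 1.8) as "reason unknown" instead of
# raising AttributeError.
def get_ac_state_reason(ac):
    if hasattr(ac, "get_state_reason"):
        return ac.get_state_reason()
    return None  # older libnm: no reason available, fall back to device state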

Hardware configuration

# lspci
05:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

# ethtool enp5s0f0
Settings for enp5s0f0:
	Supported ports: [ FIBRE ]
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: No
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 100000Mb/s
	Duplex: Full
	Port: Direct Attach Copper
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	Supports Wake-on: d
	Wake-on: d
	Link detected: yes

[RFE] refactor logging of network module

Currently, the module collects all logging statements and, at the end, returns them as "warnings" so that they are shown by ansible. Obviously, these are not really warnings, but rather debug information.

Instead, the logging messages should be returned in a different json field that is ignored by ansible. Then, the tasks/main.yml should have a follow-up debug task that prints the returned variable.

I guess, in the failure case, the network_connections task must run ignoring failures to reach the debug statement. Then, a follow-up task should check whether the network_connections task failed and abort.
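
A rough sketch of what that flow in tasks/main.yml could look like; the debug_messages field and the module parameters shown are illustrative, not the role's current interface:

- name: Configure networking connection profiles
  network_connections:
    provider: "{{ network_provider }}"
    connections: "{{ network_connections }}"
  register: __network_result
  ignore_errors: true     # keep going so the log messages get printed

- name: Show the module's log messages
  debug:
    var: __network_result.debug_messages   # hypothetical return field

- name: Abort if configuring the connections failed
  fail:
    msg: "{{ __network_result.msg }}"
  when: __network_result is failed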

Role does not work with '--become-method su'

The module throws an exception when run as a non-privileged user who escalates privileges using '--become-method su'.

Example playbook

---
- hosts: all
  become: yes
  become_method: su
  become_user: root
  vars:
    network_connections:
    - name: eth0
      state: up
      type: ethernet
      interface_name: eth0
      autoconnect: yes
      ip:
        address: 192.168.1.12/24
        gateway4: 192.168.1.1
        dhcp4: no
        auto6: no
  roles:
  - role: linux-system-roles.network

Output

[xxx@xxx]$ ansible-playbook playbook.yml -i '192.168.1.12,' -bKk
SSH password: 
BECOME password[defaults to SSH password]: 

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [192.168.1.12]

TASK [linux-system-roles.network : Check which services are running] ***********
ok: [192.168.1.12]

TASK [linux-system-roles.network : Check which packages are installed] *********
ok: [192.168.1.12]

TASK [linux-system-roles.network : Print network provider] *********************
ok: [192.168.1.12] => {
    "msg": "Using network provider: nm"
}

TASK [linux-system-roles.network : Install packages] ***************************
skipping: [192.168.1.12]

TASK [linux-system-roles.network : Enable and start NetworkManager] ************
ok: [192.168.1.12]

TASK [linux-system-roles.network : Enable network service] *********************
skipping: [192.168.1.12]

TASK [linux-system-roles.network : Ensure initscripts network file dependency is present] ***
skipping: [192.168.1.12]

TASK [linux-system-roles.network : Configure networking connection profiles] ***
 [WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 2366, in main
    cmd.run()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1738, in run
    self.run_prepare()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1868, in run_prepare
    Cmd.run_prepare(self)
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1787, in run_prepare
    ifname=connection["interface_name"]
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 170, in link_info_find
    for li in cls.link_infos(refresh).values():
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 151, in link_infos
    b = SysUtil._link_infos_fetch()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 130, in _link_infos_fetch
    "perm-address": SysUtil._link_read_permaddress(ifname),
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 107, in _link_read_permaddress
    out = Util.check_output(["ethtool", "-P", ifname])
  File "/tmp/ansible_network_connections_payload_j0L_vd/ansible_network_connections_payload.zip/ansible/module_utils/network_lsr/utils.py", line 35, in check_output
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=DEVNULL, env=env)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

fatal: [192.168.1.12]: FAILED! => {"changed": false, "msg": "fatal error: [Errno 2] No such file or directory"}

PLAY RECAP *********************************************************************
192.168.1.12               : ok=5    changed=0    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0  

Not idempotent when NIC is already configured

When running the playbook a second time, the role does not detect that the interface is already up and errors out.

# ansible-playbook -l util6vm net_demo.yml  -vv
ansible-playbook 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: net_demo.yml ******************************************************************************************************************************
1 plays in net_demo.yml

PLAY [all] ******************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
task path: /home/tbowling/src/virt-demo/ansible/net_demo.yml:7
ok: [util6vm]
META: ran handlers

TASK [linux-system-roles.network : Set version specific variables] **********************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:1
ok: [util6vm] => (item=/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml) => {"ansible_facts": {"network_provider_default": "initscripts"}, "ansible_included_var_files": ["/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml"], "changed": false, "item": "/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml"}

TASK [linux-system-roles.network : Install packages] ************************************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:9
ok: [util6vm] => {"changed": false, "msg": "", "rc": 0, "results": []}

TASK [linux-system-roles.network : Enable network service] ******************************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:14
ok: [util6vm] => {"changed": false, "enabled": true, "name": "network", "state": "started"}

TASK [linux-system-roles.network : Configure networking connection profiles] ************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:20
 [WARNING]: [015] <info>  #0, state:up, "net1": ifcfg-rh profile "net1" already up to date

 [WARNING]: [016] <info>  #0, state:up, "net1": up connection net1 (not-active)

 [WARNING]: [017] <info>  #0, state:up, "net1": call `ifup net1`: rc=1, out=" Determining IP information for eth1... failed. ", err="dhclient(29031)
is already running - exiting.   This version of ISC DHCP is based on the release available on ftp.isc.org.  Features have been added and other
changes have been made to the base software release in order to make it work better with this distribution.  Please report for this software via the
Red Hat Bugzilla site:     http://bugzilla.redhat.com  exiting. "

 [WARNING]: [018] <error> #0, state:up, "net1": call `ifup net1` failed with exit status 1

fatal: [util6vm]: FAILED! => {"changed": true, "msg": "error: call `ifup net1` failed with exit status 1"}
	to retry, use: --limit @/home/tbowling/src/virt-demo/ansible/net_demo.retry

PLAY RECAP ******************************************************************************************************************************************
util6vm                    : ok=4    changed=0    unreachable=0    failed=1   

Trying to change state to down says the connection doesn't exist

error:
TASK [rhel-system-roles.network : Configure networking connection profiles] *********
fatal: [satellite]: FAILED! => {"changed": false, "msg": "configuration error: connections[0].name: state "down" references non-existing connection "ens192""}

playbook:

- hosts: satellite
  vars:
    network_connections:
      - name: ens192
        state: down
  roles:
    - role: rhel-system-roles.network

System:

cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.4 (Maipo)

nmcli co show

NAME UUID TYPE DEVICE
ens192 2ee0adce-59b5-49c3-93f9-7b1df88b2bae 802-3-ethernet ens192
ens224 e4014630-448b-5ad3-4992-f4678202147c 802-3-ethernet ens224

nmcli co down ens192

Connection 'ens192' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/21)
Error: not all active connections found.

[RFE] add profile setting ignore-error-if-absent to ignore errors when downing a profile that does not exist

Currently, the role fails when asked to set a profile to down while the profile is not fully defined in the playbook and does not exist on the target system. One could argue that a profile is down when it does not exist on the system. However, one could also argue that the role should report this as an error in case there is a typo. Introduce a setting called ignore-error-if-absent that, when true, ignores the error when a profile is undefined and should be set to down.
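
Hypothetical usage, with the setting name taken from this proposal (not implemented today):

network_connections:
  - name: ens192
    state: down
    ignore-error-if-absent: true   # proposed setting, not yet implemented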

error setting mtu for vlan interface despite success (eventually)

Playbook:

---
- hosts: all
  vars:
    network_connections:
      - name: eth1
        type: ethernet
        autoconnect: no
        state: up
        mtu: 1492
        ip:
          dhcp4: no
          auto6: no

      - name: eth1.90
        parent: eth1
        type: vlan
        vlan_id: 90
        mtu: 1280
        state: up
        ip:
          dhcp4: no
          auto6: no

  tasks:
    - name: Run network role
      import_role:
        name: linux-system-roles.network

ansible-playbook output:

ansible-playbook -i rhel76-cloud, vlan-mtu.yml 

PLAY [all] ***********************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************************
ok: [rhel76-cloud]

TASK [linux-system-roles.network : Check which services are running] *************************************************************************************************************************************
ok: [rhel76-cloud]

TASK [linux-system-roles.network : Check which packages are installed] ***********************************************************************************************************************************
ok: [rhel76-cloud]

TASK [linux-system-roles.network : Print network provider] ***********************************************************************************************************************************************
ok: [rhel76-cloud] => {
    "msg": "Using network provider: nm"
}

TASK [linux-system-roles.network : Install packages] *****************************************************************************************************************************************************
skipping: [rhel76-cloud]

TASK [linux-system-roles.network : Enable network service] ***********************************************************************************************************************************************
ok: [rhel76-cloud]

TASK [linux-system-roles.network : Configure networking connection profiles] *****************************************************************************************************************************
 [WARNING]: [005] <info>  #0, state:up persistent_state:present, 'eth1': add connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d

 [WARNING]: [006] <info>  #0, state:up persistent_state:present, 'eth1': up connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d (not-active)

 [WARNING]: [007] <info>  #1, state:up persistent_state:present, 'eth1.90': add connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac

 [WARNING]: [008] <info>  #1, state:up persistent_state:present, 'eth1.90': up connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac (not-active)

 [WARNING]: [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this
connection (3)

fatal: [rhel76-cloud]: FAILED! => {"changed": true, "msg": "error: up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3)"}

PLAY RECAP ***********************************************************************************************************************************************************************************************
rhel76-cloud               : ok=5    changed=0    unreachable=0    failed=1   

NM logs:

Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <info>  [1550252720.1114] audit: op="connection-activate" uuid="226409d1-c1a5-456c-8f67-e4494cf540ac" name="eth1.90" result="fail" reason="Failed to find a compatible device for this connection"
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [005] <info>  #0, state:up persistent_state:present, 'eth1': add connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [006] <info>  #0, state:up persistent_state:present, 'eth1': up connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d (not-active)
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [007] <info>  #1, state:up persistent_state:present, 'eth1.90': add connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [008] <info>  #1, state:up persistent_state:present, 'eth1.90': up connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac (not-active)
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2112] platform-linux: event-notification: RTM_NEWADDR, flags 0, seq 0: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 38 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2112] platform: signal: address 6   added: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 38 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2112] device[0x5608d2167890] (eth1.90): queued IP6 config change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2112] platform-linux: event-notification: RTM_NEWROUTE, flags 0, seq 0: ignore
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2113] device[0x5608d2167890] (eth1.90): ip6-config: update (commit=0, new-config=0x5608d21d5000)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] device[0x5608d2167890] (eth1.90): ip6-config: update IP Config instance (/org/freedesktop/NetworkManager/IP6Config/25)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): queueing DNS updates (1)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: set-hostname: updating hostname (ip6 conf)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: set-hostname: hostname already set to 'rhel76-cloud' (from system configuration)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): DNS configuration did not change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): no DNS changes to commit (0)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8086] platform-linux: event-notification: RTM_NEWADDR, flags 0, seq 0: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 3 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8087] platform: signal: address 6   added: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 3 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8088] device[0x5608d2153250] (eth1): queued IP6 config change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8088] platform-linux: event-notification: RTM_NEWROUTE, flags 0, seq 0: ignore
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8091] device[0x5608d2153250] (eth1): ip6-config: update (commit=0, new-config=0x5608d212f390)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8096] device[0x5608d2153250] (eth1): ip6-config: update IP Config instance (/org/freedesktop/NetworkManager/IP6Config/23)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8096] dns-mgr: (device_ip_config_changed): queueing DNS updates (1)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] policy: set-hostname: updating hostname (ip6 conf)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8098] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8098] policy: set-hostname: hostname already set to 'rhel76-cloud' (from system configuration)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8098] dns-mgr: (device_ip_config_changed): DNS configuration did not change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8098] dns-mgr: (device_ip_config_changed): no DNS changes to commit (0)
Feb 15 18:45:25 rhel76-cloud NetworkManager[26901]: <trace> [1550252725.0507] device[0x5608d2153250] (eth1): remove_pending_action (0): 'carrier-wait' not pending (expected)
Feb 15 18:45:25 rhel76-cloud NetworkManager[26901]: <trace> [1550252725.2010] device[0x5608d2167890] (eth1.90): remove_pending_action (0): 'carrier-wait' not pending (expected)

This is with NetworkManager-1.12.0-6.el7

@thom311 any ideas about the [WARNING]: [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3) error? Is the eth1 device not ready yet when the VLAN is activated?

[RFE] don't do any changes to network configuration for state present/absent

The states present and absent are supposed to only deploy/remove the persistent networking profiles on the host. They are not supposed to actually change the networking in any way; that is what the states up and down are for.

For initscripts, this is rather trivially the case, because the present/absent states just ensure that the ifcfg files are there or absent.

For NetworkManager, it's more complicated.

  • deleting a profile that is currently active causes the device to go down. That is because, in NetworkManager, an active device must always have a profile. The solution in 1.10 is Update2() with the NM_SETTINGS_UPDATE2_FLAG_VOLATILE flag. Volatile means that the connection is in-memory only and will be automatically deleted once it goes down. If you make a connection volatile that is currently not active, it will be deleted right away.

  • adding or modifying a profile might always make it a candidate for autoconnecting. For state present, we would like to add/modify the profile without activating anything. For that, 1.10 supports the Update2() flag NM_SETTINGS_UPDATE2_FLAG_BLOCK_AUTOCONNECT. That works for the update case, but doesn't work for the add case. To add a connection that has connection.autoconnect=yes without activating it right away, we would either need new NetworkManager D-Bus API to block autoconnect from the start, or we first add the connection with connection.autoconnect=no and then follow up with an Update2 call that sets connection.autoconnect=yes and uses NM_SETTINGS_UPDATE2_FLAG_BLOCK_AUTOCONNECT (see the sketch after this list).

  • modifying the properties connection.metered or connection.zone takes effect immediately. There is currently no way to prevent that. NetworkManager should get a new flag for Update2 that allows modifying the profile while preventing these two properties from taking effect right away. Then, the role should use this new API.
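
A sketch of the modify-without-activating case with Python GObject introspection, assuming a libnm new enough to expose update2() to bindings (error handling trimmed):

import gi
gi.require_version("NM", "1.0")
from gi.repository import NM, GLib

def persist_without_autoconnect(con):
    """Save changes to an NM.RemoteConnection to disk while blocking
    autoconnect, so 'state: present' never activates anything."""
    loop = GLib.MainLoop()

    def done(source, result, user_data):
        source.update2_finish(result)  # raises GLib.Error on failure
        loop.quit()

    con.update2(
        con.to_dbus(NM.ConnectionSerializationFlags.ALL),  # settings
        NM.SettingsUpdate2Flags.TO_DISK
        | NM.SettingsUpdate2Flags.BLOCK_AUTOCONNECT,
        None,  # no extra args
        None,  # no cancellable
        done,
        None)
    loop.run()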

not idempotent against /etc/resolv.conf with initscripts provider

I tested the role with initscripts network provider in CentOS 7.6.

If I change the ip->dns variable, /etc/resolv.conf is rewritten with the new dns values, but the dns_search value is always appended, never overwritten.

After a few executions of the role, here is the resulting resolv.conf

search domain.org domain.org domain.org domain.org domain.org
nameserver 192.168.121.2
nameserver 192.168.121.3

Moreover, it is a pity that the diff feature of ansible does not work with this role. It would be nice to see the changes applied to the configuration files.

MTU for vlan iface.

Hello,

I have set up a bond interface, e.g. bond0, and then created a VLAN on bond0, e.g. bond0.100. The MTU for bond0 and its slaves is correctly set to 9000; however, I cannot set the MTU for the VLAN interface and it defaults to 1500.

network_provider: nm
network_connections:
  # Bridge interface
  - name: br0
    state: up
    type: bridge
    interface_name: br0
    ip:
      dhcp4: no
      auto6: no
      gateway4: 1.0.0.1
      address:
        - "1.0.0.50/26"

  # Bond interface
  - name: bond0
    state: up
    type: bond
    interface_name: "bond0"
    bond:
      mode: 802.3ad
      miimon: 100
    mtu: 9000
    master: br0
    slave_type: bridge

  # Slave ethernet interfaces
  - name: em2
    state: up
    type: ethernet
    mtu: 9000
    interface_name: em2
    master: bond0

  - name: em1
    state: up
    type: ethernet
    mtu: 9000
    interface_name: em1
    master: bond0

  # VLAN on bond interface
  - name: bond0.100
    state: up
    type: vlan
    parent: bond0
    vlan_id: 100
    ip:
      address:
        - "10.10.10.51/24"

How can I solve this?

Thanks.
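
Presumably the fix is to set the MTU explicitly on the VLAN profile, which the configuration above never does (this assumes the role's mtu option applies to vlan-type profiles the same way it does to bond/ethernet profiles; see also the "document MTU" issue below):

  # VLAN on bond interface
  - name: bond0.100
    state: up
    type: vlan
    parent: bond0
    vlan_id: 100
    mtu: 9000          # explicit MTU on the VLAN profile itself
    ip:
      address:
        - "10.10.10.51/24"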

Not all network_provider: nm options are supported.

Many of the options available via nmcli connection show do not appear to be supported. Even several that are supported in nm-connection-editor don't appear to be present.

From a quick look, some examples are:

connection.lldp
connection.autoconnect <-------------- in nm-connection-editor (general)
connection.autoconnect-priority <----- in nm-connection-editor (general)
connection.autoconnect-retries
connection.zone <--------------------- in nm-connection-editor (general)

802-3-ethernet.cloned-mac-address <--- in nm-connection-editor (ethernet)
802-3-ethernet.mtu <------------------ in nm-connection-editor (ethernet)
802-3-ethernet.wake-on-lan-password <- in nm-connection-editor (ethernet)
802-3-ethernet.auto-negotiate <------- in nm-connection-editor (ethernet)
802-3-ethernet.speed <---------------- in nm-connection-editor (ethernet)
802-3-ethernet.duplex <--------------- in nm-connection-editor (ethernet)

ipv4.may-fail <----------------------- in nm-connection-editor (ipv4)
ipv4.ignore-auto-routes <------------- in nm-connection-editor (ipv4 routes)
ipv4.ignore-auto-dns
ipv4.never-default

ipv6.may-fail <------------------------ in nm-connection-editor (ipv6)
ipv6.ignore-auto-routes <------------- in nm-connection-editor (ipv6 routes)
ipv6.ignore-auto-dns
ipv6.never-default

Then some DHCP options:

  ipv4.dhcp-client-id
  ipv4.dhcp-timeout
  ipv4.dhcp-send-hostname
  ipv4.dhcp-hostname
  ipv4.dhcp-fqdn
  ipv6.dhcp-send-hostname
  ipv6.dhcp-hostname

[RFE] Ability to manage or modify based on hardware address

Need the ability to execute operations based only on hardware address or naming (MAC, PCI address, kernel device name), as opposed to only by "connection profile" name, which is very weak when using initscripts-style naming.

For example, in my hardware inventory, I know all of my MAC addrs, but possibly due to inconsistent net device naming or other reasons, the connection names are inconsistent.

Need the ability to [down/up, delete/rename conn name, modify attributes] of any NICs based on MAC or PCI address.

RFE: Support wake on lan options

Wanting to switch away from editing /etc/sysconfig/network-scripts/ifcfg-... files, I am wondering what the clean way is to get

ETHTOOL_OPTS="wol g"

configured with linux-system-roles / network.

The current README.md does not seem to contain either the string wol or the string wake.

Since enabling wake on LAN (p|u|m|b|a|g|s|f|d) is a rather common procedure, it would be nice if this were covered in the documentation of the module.
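
For reference, the ifcfg line above corresponds to the NetworkManager property 802-3-ethernet.wake-on-lan. Purely as an illustration of what such documentation could describe, a hypothetical profile syntax (this key does not exist in the role today):

network_connections:
  - name: eth0
    type: ethernet
    interface_name: eth0
    ethernet:
      wake_on_lan: magic   # hypothetical key; "wol g" = magic packet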

[RFE] implement async module to support changing IP address

Commonly, the role connects to the host via SSH. Hence, it cannot currently support changing the IP address of the host, because it would cut itself off.

ansible supports rebooting the system or changing the TCP port of the SSH server via async requests that poll for completion. Something similar should be possible for the network role.

The user should be able to specify that the role works asynchronously and that afterwards the host is reachable via a new IP address.

Also interesting is to combine this with the CheckPoint feature of NetworkManager.
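
A rough sketch of how this could look with existing Ansible async primitives; the module parameters, the new address, and the timeouts are illustrative:

- name: Apply the new configuration without waiting for the result
  network_connections:
    connections: "{{ network_connections }}"
  async: 120
  poll: 0

- name: Point further tasks at the address the host should have afterwards
  set_fact:
    ansible_host: 192.0.2.10   # illustrative new address

- name: Wait until the host is reachable again
  wait_for_connection:
    timeout: 120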

Allow downing of removed profiles in initscripts

On RHEL 6, when removing a bond using state=absent, the role removes the config files but does not actually down the bond interface.

Subsequent attempts to down the bond via playbook fails as it states that there is no connection defined.

Needs the ability to modify a physical or virtual interface regardless of whether a configuration or "connection profile" exists.

[RFE] with NetworkManager try to use Reapply as a graceful way to change configuration

Currently, the role does the equivalent of nmcli con up when it notices changes to the profile. If the profile is already active, this causes NM to first tear the device down before reactivating it. This is more disruptive than desired. The Reapply D-Bus command should be used where possible.

Note that with Reapply, NetworkManager first checks whether it can perform the desired actions, and aborts with failure if it cannot (without touching the system). Hence, a suitable approach is to first try to reapply and, if that fails, fall back to full activation, as sketched below.

Note also that Reapply changes the "applied connection", and that there is a version-id that allows performing the reapply only if the state is as expected.

Maybe older NM versions don't yet sufficiently support Reapply. It needs to be seen whether the role needs some special handling.
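
A sketch of the try-Reapply-then-activate flow with libnm via GObject introspection (nm_device_reapply() has existed since NM 1.2; error handling trimmed):

import gi
gi.require_version("NM", "1.0")
from gi.repository import NM, GLib

def apply_profile(client, device, connection):
    try:
        # Reapply validates first and fails without touching the system
        # if the change cannot be done gracefully. version_id=0 skips
        # the applied-connection version check.
        device.reapply(connection, 0, 0, None)
        return
    except GLib.Error:
        pass  # fall back to a full (disruptive) re-activation

    loop = GLib.MainLoop()

    def done(source, result, user_data):
        source.activate_connection_finish(result)
        loop.quit()

    client.activate_connection_async(connection, device, None, None,
                                     done, None)
    loop.run()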

Use proper reserved IP and MAC addresses in documentation

README.md currently uses the d6:06:b9:56:12:5d MAC address in an example. examples/inventory uses the 52:54:00:44:9f:ba and 52:54:00:05:f5:b3 MAC addresses.
The examples under examples/ use the 192.168.174. network prefix.
Is there a special reason why those values were chosen? If not, it might be better to use the addresses which are reserved for documentation use by IANA. For IP addresses, those are the 192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24 blocks (RFC 5735). For MAC addresses, this is the 00-00-5E-00-53-00 through 00-00-5E-00-53-FF block (RFC 7042).
Note that the firewall role uses example addresses as well; it would be good to agree on a common standard.

Support for ETHTOOL_OPTS such as ethtool -K|--features|--offload and others

Currently there is no way via ansible to set device-specific options that are configured via the ethtool command. This allows setting a variety of NIC parameters, from speed and duplex to various offloads and even buffers. Modifying these values is critical to network performance in many enterprise environments, so it would be good if linux-system-roles supported configuring them.

KeyError "type" in connection

When a connection is to be removed, connection['type'] causes a KeyError when using the initscripts backend and no type is defined. This should fix it:

diff --git a/library/network_connections.py b/library/network_connections.py
index 271ab40..4922b13 100755
--- a/library/network_connections.py
+++ b/library/network_connections.py
@@ -2650,7 +2650,7 @@ class Cmd_initscripts(Cmd):
     def run_prepare(self):
         Cmd.run_prepare(self)
         for idx, connection in enumerate(self.connections):
-            if connection['type'] in [ 'macvlan' ]:
+            if connection.get('type') in [ 'macvlan' ]:
                 self.log_fatal(idx, 'unsupported type %s for initscripts provider' % (connection['type']))
 
     def check_name(self, idx, name = None):

The exception was:

  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2813, in <module>
    cmd.run()
  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2398, in run
    self.run_prepare()
  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2654, in run_prepare
    if connection['type'] in [ 'macvlan' ]:
KeyError: 'type'

Ansible 2.7 breaks tests

Ansible 2.7 no longer allows specifying variables inline when including tasks, which breaks this syntax:

- name: Remove interfaces
  hosts: all
  tasks:
    - include_tasks: tasks/manage-test-interface.yml state=absent

A possible alternative is:

- name: Remove interfaces
  hosts: all
  tasks:
    - include_tasks: tasks/manage-test-interface.yml
      vars:
        state: absent

document MTU

There seems to be support for MTU in the code (but see #19), but I can't find it in the README.
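
For what it's worth, a minimal example of the apparent (undocumented) syntax, inferred from the code rather than from the README:

network_connections:
  - name: eth0
    type: ethernet
    interface_name: eth0
    mtu: 9000   # appears to be supported in the code; undocumented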

Support for macvlan connections

I need to create macvlan interfaces and patched the module to support these interfaces. It's just 20 lines and it works for me (tm). You can find it here: macvlan.diff.txt. I've so far used it on CentOS 7 with NetworkManager.

A problem with the initscripts provider could be that by default, there don't seem to be if{down,up}-macvlan scripts. A version of these can be found here: https://github.com/larsks/initscripts-macvlan
I haven't tested these, but they look simple enough.
Another approach would be to simply not support macvlan connections on initscripts systems.

I'll be happy to adapt my patch if needed.
Best,
Roland

Traceback when running as non-root when ethtool is not in path

 [WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2812, in <module>
    cmd.run()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2398, in run
    self.run_prepare()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2473, in run_prepare
    Cmd.run_prepare(self)
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2443, in run_prepare
    li_ifname = SysUtil.link_info_find(ifname = connection['interface_name'])
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 404, in link_info_find
    for li in cls.link_infos(refresh).values():
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 387, in link_infos
    b = SysUtil._link_infos_fetch()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 368, in _link_infos_fetch
    'perm-address': SysUtil._link_read_permaddress(ifname),
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 346, in _link_read_permaddress
    out = Util.check_output(['ethtool', '-P', ifname])
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 76, in check_output
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=DEVNULL, env=ev)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

fatal: [ansibletest]: FAILED! => {"changed": false, "msg": "fatal error: [Errno 2] No such file or directory"}

Happened on CentOS 7.
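
The likely cause is that the non-root (or su) session lacks /usr/sbin in its PATH, which is where ethtool lives. A possible hardening sketch for the module's permanent-address lookup; the function names echo the traceback, and returning None on a missing binary is an assumption about what the callers can tolerate:

import os
import subprocess

SBIN_DIRS = ["/usr/sbin", "/sbin", "/usr/bin", "/bin"]

def find_binary(name):
    """Search the usual sbin dirs in addition to the inherited PATH."""
    dirs = SBIN_DIRS + os.environ.get("PATH", "").split(os.pathsep)
    for d in dirs:
        candidate = os.path.join(d, name)
        if os.access(candidate, os.X_OK):
            return candidate
    return None

def link_read_permaddress(ifname):
    ethtool = find_binary("ethtool")
    if ethtool is None:
        return None  # degrade gracefully instead of raising OSError
    return subprocess.check_output([ethtool, "-P", ifname])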

[RFE] use NetworkManager's Checkpoint feature to rollback on error

NetworkManager's Checkpoint feature allows recording the current configuration (a snapshot), together with a timeout. Commonly, the user/script then goes ahead and changes the networking configuration. If the timeout expires without the checkpoint being destroyed, the configuration of the snapshot is restored. The user can also manually issue a rollback before the timeout expires, or just cancel the timeout (in which case the snapshot is forgotten).

This can be used for two cases:

  • if the module operates and notices an issue in the middle of the operation, it can manually issue a rollback to restore the previous (working) setup.
  • if the module succeeds, but the user is unable to reach the host afterwards -- that is, the new configuration cut the host off the network -- then the role won't be able to cancel the checkpoint, and the timeout will hit, restoring the previously working configuration (see the sketch after this list).
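
A sketch of the intended flow with libnm via GObject introspection; the client-side checkpoint API is assumed available (recent libnm), error handling is trimmed, and apply_requested_configuration() is a placeholder for the role's actual work:

import gi
gi.require_version("NM", "1.0")
from gi.repository import NM, GLib

loop = GLib.MainLoop()
client = NM.Client.new(None)

def done(client, result, user_data):
    loop.quit()

def created(client, result, user_data):
    checkpoint = client.checkpoint_create_finish(result)
    try:
        apply_requested_configuration()  # placeholder for the role's work
        # Success: drop the checkpoint so the rollback timer never fires.
        client.checkpoint_destroy(checkpoint.get_path(), None, done, None)
    except Exception:
        # Failure mid-operation: restore the snapshot immediately.
        client.checkpoint_rollback(checkpoint.get_path(), None, done, None)

# An empty device list snapshots all devices; NM rolls back on its own
# after 60 s unless the checkpoint is destroyed -- this covers the case
# where the new configuration cuts the controller off the network.
client.checkpoint_create([], 60, NM.CheckpointCreateFlags.NONE,
                         None, created, None)
loop.run()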

Ability to delete and down all connection profiles for a given interface

When installing RHEL on a new virtual machine with 6 network interfaces, all interfaces are auto-configured and activated with profile names that may or may not match the linux device name. Because I do not want these and instead want my own connection profile names, I would like the ability to delete all configuration profiles for each network device by mac or interface_name.

[root@rhel7 ~]# nmcli d
DEVICE  TYPE      STATE      CONNECTION 
ens10   ethernet  connected  ens10      
ens11   ethernet  connected  ens11      
ens12   ethernet  connected  ens12      
ens13   ethernet  connected  ens13      
eth4    ethernet  connected  eth2       
eth5    ethernet  connected  eth3       
lo      loopback  unmanaged  --         

However, this does not work:

        network_connections:
          - name: ens11
            #mac: "{{ hostvars[inventory_hostname].net1_mac }}"
            interface_name: "{{ hostvars[inventory_hostname].net1_dev }}"
            state: down
            persistent_state: absent

Whether I use mac or interface_name, I get the following errors:

TASK [rhel-system-roles.network : Configure networking connection profiles] ***************
fatal: [rhel7]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].interface_name: property is not allowed for state 'down' and persistent_state 'absent'"}

TASK [rhel-system-roles.network : Configure networking connection profiles] ***************
fatal: [rhel7]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].mac: property is not allowed for state 'down' and persistent_state 'absent'"}
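
As a workaround for the errors above, the down and absent steps can be expressed as two entries for the same profile name (this sketch only works when the profile name is known; selecting profiles by mac or interface_name remains the actual request):

network_connections:
  # First deactivate the profile (referenced by profile name only)...
  - name: ens11
    state: down
  # ...then remove its persistent configuration.
  - name: ens11
    persistent_state: absent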

Not all network_provider: initscripts options are supported, some crucial for enterprise.

Many of the /etc/sysconfig/network-scripts/ifcfg- options documented in /usr/share/doc/initscripts-*/sysconfig.txt do not appear to be supported. Some key/crucial ones that appear to be missing for enterprise servers are:

1. ETHTOOL_OPTS
2. LINKDELAY
3. IPV4_FAILURE_FATAL

Some others that may be missing include (not tested to validate, just from a grep of the code):

ARPCHECK
DHCLIENTARGS
DHCLIENT_IGNORE_GATEWAY
DHCP_FQDN
DHCP_HOSTNAME
DHCPRELEASE
HOTPLUG
MACADDR
NETMASK
NO_DHCP_HOSTNAME
NOZEROCONF
PEERDNS
PERSISTENT_DHCLIENT
SCOPE
SRCADDR
USERCTL
WINDOW
ZONE

RFE: Allow user-defined attributes in network_connections structure

The network_connections struct is a good general data model for network connections and could be used by more than just the linux-system-roles.network role. The value of this struct would be greatly increased if other attributes could be added beyond what is used by this role. However, adding user-defined attributes to network_connections that are not recognized by this role generates a failure.

Please consider tolerating user-defined attributes within the network_connections struct, without generating a failure in the linux-system-roles.network role.

Example below. Note the addition of the non-standard "libvirt_dev" attribute, for use by another role in the same playbook. I use this to create a VM by generating a domain.xml with matching network devices, which I supply to the libvirt module. Then, when the VM is running, I could apply the linux-system-roles.network role to the guest. However, this does not work, because linux-system-roles.network fails on my added "libvirt_dev" attribute.

host_vars/foo.example.com/main.yml:

network_connections:
  - name: "eth0"
    type: "ethernet"
    mac: "12:34:56:78:9a:bc"
    ip:
      dhcp4: yes
    libvirt_dev: eno1.2

Playbook execution:

TASK [linux-system-roles.network : Configure networking connection profiles] ***************************************************************************************************
fatal: [foo.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "configuration error: connections[0]: invalid key "libvirt_dev""}

If it is necessary to maintain a level of restrictiveness in the schema, I'd request either creating a new "network_user_attrs" data element which contains a list of user-defined attributes to be allowed within the struct, or using a prefix such as "user_" for user-defined attributes.

connections[0].ip: property is not allowed for state 'up' and persistent_state 'present'

Hoping that I did not misunderstand a fundamental aspect of how this role works, I encountered the following issue:

TASK [network : Configure networking connection profiles] ***********************************************************************************************************************************************
fatal: [redacted]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].ip: property is not allowed for state 'up' and persistent_state 'present'"}

Relevant variables:

network_connections:
  - name: "ens3"
    state: up
    ip:
      address:
        - x.x.x.x/32
        - x:x:x::/64

Incidentally, the specified IPs match the current system configuration, so no actual changes should take place anyway, but it still fails. If you need any other info or logs from the system, please let me know.

Fedora 28 Server, current master (a10e72b)

[RFE] improve role's declarative structure

The role aims to accept the desired networking configuration in a declarative way. The user doesn't say to issue a certain command or series of steps (like nmcli). Instead, the user describes the desired outcome in a declarative way, and the role makes it happen (depending on the configured provider).

This is the case with the states present and absent. These states intend to only deploy persistent configuration on the host, without actually changing the current networking configuration. Essentially, they create/delete connection profiles in NetworkManager or write/delete ifcfg files. For these states, the order doesn't really matter (unless you specify the same profile more than once -- "same" in the sense of having the same name).

Then there are the states up and down, which basically translate to nmcli connection up|down or ifup|ifdown calls. For the NetworkManager provider, the order in which you activate/deactivate a set of profiles shouldn't matter too much; NetworkManager should make it happen correctly in either order. That might not be entirely true: for example, whether you activate a slave connection before or after the master makes an actual difference. For initscripts, this is much more of a problem. The role doesn't really know what the current state of the system is, and neither do the initscripts. It essentially leaves the decision of getting the right order for calling ifup to the user.

Currently, the order is the same as specified in the playbook. This should be improved. The user should just say which profiles should be up/down, and the role should figure out how to get that desired outcome (and in which order to call ifup/nmcli connection up).

See also the variable force_state_change. The role should become smarter about this, so that we require it less.

See also bug https://bugzilla.redhat.com/show_bug.cgi?id=1508614

rhel-system-roles does not reload the network profile upon profile modification

SUMMARY

rhel-system-roles network configuration does not reload the existing network profile upon modification when the interface is marked as a bond slave.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

rhel-system-roles-1.0-5.el7.noarch
rhel-system-roles.network

ANSIBLE VERSION
ansible 2.7.8
CONFIGURATION

OS / ENVIRONMENT
RHEL 7.5
kernel-3.10.0-862.14.4.el7.x86_64
NetworkManager-1.12.0-10.el7_6.x86_64

STEPS TO REPRODUCE
  • Create a playbook and use the "rhel-system-roles.network" role for network configuration.
$ cat inventory
[test]
192.168.122.16

$ cat role_test_bond.yml
---

#- name: Test a network system role
- hosts: test
  vars:
#   network_providers: initscripts
   network_connections:

    - autoconnect: 'yes'
      bond:
        miimon: 100
        mode: active-backup
      interface_name: bond0
      ip:
        address:
          - 192.168.40.1/24
        auto6: 'no'
        dhcp4: 'no'
      name: bond0
      state: up
      type: bond
    - name: ens9
      interface_name: ens9
      master: bond0
      type: ethernet
      state: up

  roles:
    - rhel-system-roles.network

$ ansible-playbook role_test_bond.yml
SSH password:

PLAY [test] ********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [192.168.122.16]

TASK [rhel-system-roles.network : Check which services are running] ************
ok: [192.168.122.16]

TASK [rhel-system-roles.network : Check which packages are installed] **********
ok: [192.168.122.16]

TASK [rhel-system-roles.network : Print network provider] **********************
ok: [192.168.122.16] => {
    "msg": "Using network provider: nm"
}

TASK [rhel-system-roles.network : Install packages] ****************************
skipping: [192.168.122.16]

TASK [rhel-system-roles.network : Enable network service] **********************
ok: [192.168.122.16]

TASK [rhel-system-roles.network : Configure networking connection profiles] ****
 [WARNING]: [005] <info>  #0, state:up persistent_state:present, 'bond0': add
connection bond0, 46072568-87ae-456b-b9f9-47b9624bdd24

 [WARNING]: [006] <info>  #0, state:up persistent_state:present, 'bond0': up
connection bond0, 46072568-87ae-456b-b9f9-47b9624bdd24 (not-active)

 [WARNING]: [007] <info>  #1, state:up persistent_state:present, 'ens9': update
connection ens9, 5a8d6d84-a7d7-4aba-a378-0e6db147c008

 [WARNING]: [008] <info>  #1, state:up persistent_state:present, 'ens9': up
connection ens9, 5a8d6d84-a7d7-4aba-a378-0e6db147c008 (is-modified)

changed: [192.168.122.16]

TASK [rhel-system-roles.network : Re-test connectivity] ************************
ok: [192.168.122.16]

PLAY RECAP *********************************************************************
192.168.122.16             : ok=7    changed=1    unreachable=0    failed=0  

EXPECTED RESULTS
# ip -o a | grep -iw inet
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens3    inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\       valid_lft 3597sec preferred_lft 3597sec
10: bond0    inet 192.168.40.1/24 brd 192.168.40.255 scope global noprefixroute bond0\       valid_lft forever preferred_lft forever
ACTUAL RESULTS
  • Before the playbook execution, the managed node has the IP configuration below.
# ip -o a | grep -iw inet
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens3    inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\       valid_lft 3509sec preferred_lft 3509sec
4: ens9    inet 192.168.122.221/24 brd 192.168.122.255 scope global noprefixroute dynamic ens9\       valid_lft 3585sec preferred_lft 3585sec
  • After the playbook execution, the managed node has the IP configuration below. The ens9 configuration should have been changed to a bond slave, but the change is not applied in memory.
# ip -o a | grep -iw inet
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens3    inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\       valid_lft 3108sec preferred_lft 3108sec
4: ens9    inet 192.168.122.221/24 brd 192.168.122.255 scope global noprefixroute dynamic ens9\       valid_lft 3458sec preferred_lft 3458sec
9: bond0    inet 192.168.40.1/24 brd 192.168.40.255 scope global noprefixroute bond0\       valid_lft forever preferred_lft forever
ADDITIONAL INFORMATION

A network restart resolves the issue and implements the change.

Failure message with "state: up" option when creating bonded interface.

Example playbook:

---
- hosts: rhel7-latest
  vars_files:
  become: yes
  become_method: sudo
  become_user: root
  vars:
  roles:
    - role: linux-system-roles.network   # Configure Networking
      network:
        provider: nm    # or initscripts
        connections:

          - name: DBnic
            state: up
            type: ethernet
            interface_name: eth1
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: no

          - name: WebBond
            type: bond
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: no

          - name: WebBond-linkA
            type: ethernet
            interface_name: eth2
            master: WebBond
            slave_type: bond

          - name: WebBond-linkB
            type: ethernet
            interface_name: eth3
            master: WebBond
            slave_type: bond

Results:

  1. The single, regular connection DBnic is created and brought up with the "state: up" option, though it does produce an unexpected

[WARNING]: <warn> #0, state:up, "DBnic": wait for activation is not yet implemented
2. The WebBond interface produces an error message when created and brought up with the "state: up" option, though it does complete in the active state.

TASK [linux-system-roles.network : Configure networking connection profiles] *****************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:20
 [WARNING]: <info>  #0, state:up, "DBnic": add connection DBnic, 1e2ea869-fbe0-49c1-aa7a-65e44bace54a

 [WARNING]: <warn>  #0, state:up, "DBnic": wait for activation is not yet implemented

 [WARNING]: <info>  #0, state:up, "DBnic": up connection DBnic, 1e2ea869-fbe0-49c1-aa7a-65e44bace54a

 [WARNING]: <info>  #1, state:present, "WebBond": add connection WebBond, 56592fb3-3feb-47ae-b908-bd8079758d50

 [WARNING]: <info>  #2, state:present, "WebBond-linkA": add connection WebBond-linkA, 31cdf4b5-7be3-4ec0-b600-a86412328441

 [WARNING]: <info>  #3, state:present, "WebBond-linkB": add connection WebBond-linkB, cefdeb90-22d5-4991-a6e4-cbac42b45ebd

 [WARNING]: <info>  #4, state:up, "MyApp-Team": add connection MyApp-Team, be90452c-7c56-4df0-bf59-d365685835fd

 [WARNING]: <warn>  #4, state:up, "MyApp-Team": wait for activation is not yet implemented

 [WARNING]: <info>  #4, state:up, "MyApp-Team": up connection MyApp-Team, be90452c-7c56-4df0-bf59-d365685835fd

 [WARNING]: <error> #4, state:up, "MyApp-Team": up connection failed: failure to activate connection: nm-client-error-quark: Active
connection removed before it was initialized (2)

fatal: [rhel7-latest]: FAILED! => {"changed": false, "failed": true, "msg": "error: up connection failed: failure to activate connection: nm-client-error-quark: Active connection removed before it was initialized (2)"}
	to retry, use: --limit @/home/tbowling/src/virt-demo/ansible/example-linux-system-roles.network.retry

PLAY RECAP ***********************************************************************************************************************************
rhel7-latest               : ok=4    changed=0    unreachable=0    failed=1   

Expected results:
It seems that when creating an interface, the default behavior is to also activate it. However, the expectation is that if "state: up" is specified, there should be no errors or warnings.

undefined variables in playbook result in unrelated error message

When trying to run a playbook with an undefined variable, the role fails with an error message such as:

TASK [linux-system-roles.network : Install packages] ************************************************************************************************************************************************
fatal: [rhel7b-cloud]: FAILED! => {"msg": "The conditional check 'not network_packages is subset(ansible_facts.packages.keys())' failed. The error was: error while evaluating conditional (not network_packages is subset(ansible_facts.packages.keys())): 'network_iphost' is undefined\n\nThe error appears to have been in '/home/till/scm/network-lsr/tests/roles/linux-system-roles.network/tasks/main.yml': line 14, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Therefore install packages only when rpm does not find them\n- name: Install packages\n  ^ here\n"}

This is the playbook:

# SPDX-License-Identifier: BSD-3-Clause
---
- hosts: all
  vars:
    network_connections:

      # Create a profile for the underlying device of the VLAN.
      - name: prod2
        type: ethernet
        autoconnect: no
        interface_name: "{{ network_interface_name2 }}"
        ip:
          dhcp4: no
          auto6: no

      # on top of it, create a VLAN with ID 100 and static
      # addressing
      - name: prod2.100
        state: up
        type: vlan
        parent: prod2
        vlan_id: 100
        ip:
          address:
            - "192.0.2.{{ network_iphost }}/24"

  roles:
    - linux-system-roles.network

Commandline:

ansible-playbook -i rhel7b-cloud, -e "network_interface_name2=eth1" eth-with-vlan.yml 

Ideally, there would be an independent check of network_connections to produce a clear error message. Maybe just referencing network_connections early would help.
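
Until such a check exists, a pre-task along these lines would surface the real problem; this is just a sketch using the variables from the playbook above:

- name: Fail early with a clear message if required variables are unset
  assert:
    that:
      - network_interface_name2 is defined
      - network_iphost is defined
    msg: "network_interface_name2 and network_iphost must be set"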

[RFE] implement some checks for connectivity

When the role runs and configures the network, it determines success based on whether it was able to perform all requested configurations.

But it would be useful to extend this and determine success based on certain properties by probing the network. For example, is a certain LLDP neighbour visible? Did we get a certain address from DHCP? Is a certain host reachable via ping/arping?

In conjunction with #33, such an extended connectivity check is useful to determine success/failure, and whether to restore the previous configuration (rollback).

Maybe these checks should be implemented as a separate module, not inside network_connections.py.
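
For illustration, checks of this kind can already be approximated with plain tasks after the role has run; the address and the fact name below are only examples:

- name: Verify the gateway answers
  command: ping -c 3 -W 2 192.0.2.1
  changed_when: false

- name: Verify an IPv4 address was obtained
  assert:
    that:
      - ansible_default_ipv4.address is defined
    msg: "no default IPv4 address present after configuration"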

when using initscripts, brctl needs to be present to configure bridges

The initscripts backend requires brctl for bridge support; however, brctl might be missing, for example, on CentOS 6 images:

TASK [linux-system-roles.network : Configure networking connection profiles] ***
task path: /tmp/tmptjhidpez/tests/roles/linux-system-roles.network/tasks/main.yml:21
 [WARNING]: [003] <info>  #0, state:up, "LSR-TST-br31": add ifcfg-rh profile
"LSR-TST-br31"
 [WARNING]: [004] <info>  #0, state:up, "LSR-TST-br31": up connection LSR-TST-
br31 (not-active)
 [WARNING]: [005] <info>  #0, state:up, "LSR-TST-br31": call `ifup LSR-TST-
br31`: rc=1, out="Bridge support not available: brctl not found ", err=""
 [WARNING]: [006] <error> #0, state:up, "LSR-TST-br31": call `ifup LSR-TST-
br31` failed with exit status 1
fatal: [/cache/CentOS-6-x86_64-GenericCloud-1804_02.qcow2c]: FAILED! => {"changed": true, "msg": "error: call `ifup LSR-TST-br31` failed with exit status 1"}
	to retry, use: --limit @/tmp/tmptjhidpez/tests/tests_bridge.retry

I guess we have these options:

  • Always install brctl when the initscripts backend is used, via a task in the role (sketched after this list)
  • Only install it from the module when bridges need to be configured
  • Create a new module to install missing packages after analysing the network state
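
A sketch of the first option as a task in the role, assuming bridge-utils is the package that provides brctl on the affected distributions:

- name: Ensure brctl is available for the initscripts provider
  package:
    name: bridge-utils
    state: present
  when: network_provider == 'initscripts'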

set interface_name to name by default unless mac is specified

name used to specify only the profile name, and interface_name or mac could be specified to define the interface that the profile is restricted to. This seems to be unexpected, since a profile called eth0 might configure eth2 if no interface_name or mac is specified. Therefore, change it to:

  • by default name is used as interface_name
  • if interface_name is explicitly set empty, it will not be set
  • if mac is specified, interface_name defaults to not being set

RFE: allow for network configuration of offline VM images

Tools like virt-builder and virt-customize run commands within libguestfs-managed qemu images to configure linux systems. I thought it might be possible to use the network system role to configure networking within a VM image prior to booting; however, the role seems to want to enable NetworkManager, which is something we don't want to do at image configure time. It would be great if the network role could configure the networking scripts in an offline VM image.

For what it's worth, I'm running...

LIBGUESTFS_BACKEND=direct virt-customize -a testvm.qcow2  --copy-in `pwd`/pb.yml:/root --run-command 'ansible-playbook /root/pb.yml'

where pb.yml looks like:

---
- hosts: 127.0.0.1
  connection: local
  vars:
    network_connections:
      - name: eth0
        state: up
        type: ethernet
        interface_name: eth0
        autoconnect: yes
        ip:
          dhcp4: no
          auto6: no
          gateway4: 10.0.0.1
          dns:
            - 10.0.0.2
          dns_search:
            - atgreen.lab
          address:
            - 10.0.0.112/24
  roles:
    - role: rhel-system-roles.network

And the error I get is...

TASK [rhel-system-roles.network : Enable network service] **********************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Could not find the requested service NetworkManager: "}
	to retry, use: --limit @/root/pb.retry
