linux-system-roles / network
An ansible role to configure networking
Home Page: https://linux-system-roles.github.io/network/
License: BSD 3-Clause "New" or "Revised" License
# cat network.yml
---
- hosts: host.domain.com
  vars:
    network_connections:
      - name: enp5s0f0
        state: absent
      - name: enp5s0f0
        interface_name: enp5s0f0
        type: ethernet
        state: up
        autoconnect: yes
        ip:
          dhcp4: no
          auto6: no
          address:
            - 10.0.0.1/30
  roles:
    - role: network
# ansible-playbook -l host.domain.com network.yml
Working output:
TASK [network : Configure networking connection profiles] *************************************************************************
[WARNING]: #0, state:absent, "enp5s0f0": delete connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9
[WARNING]: #1, state:up, "enp5s0f0": add connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9
[WARNING]: #1, state:up, "enp5s0f0": up connection enp5s0f0, 51292573-07da-4146-9b06-954f52a5f8d9
Failing output:
TASK [network : Configure networking connection profiles] *************************************************************************
[WARNING]: #0, state:absent, "enp5s0f0": delete connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": add connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": up connection enp5s0f0, 6d2a7eff-eadb-45c4-8c4c-6d2add08ede9
[WARNING]: #1, state:up, "enp5s0f0": failure: 'ActiveConnection' object has no attribute 'get_state_reason'
Traceback (most recent call last):
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1776, in run
    self.run_state_up(idx)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1942, in run_state_up
    self.nmutil.connection_activate_wait(ac, wait_time)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1498, in connection_activate_wait
    complete, failure_reason = check_activated(ac, dev)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1468, in check_activated
    ac_reason = ac.get_state_reason()
AttributeError: 'ActiveConnection' object has no attribute 'get_state_reason'
[WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 2103, in <module>
    Cmd.create().run()
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1776, in run
    self.run_state_up(idx)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1942, in run_state_up
    self.nmutil.connection_activate_wait(ac, wait_time)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1498, in connection_activate_wait
    complete, failure_reason = check_activated(ac, dev)
  File "/tmp/ansible_xWURw9/ansible_module_network_connections.py", line 1468, in check_activated
    ac_reason = ac.get_state_reason()
AttributeError: 'ActiveConnection' object has no attribute 'get_state_reason'
fatal: [host.domain.com]: FAILED! => {"changed": false, "failed": true, "msg": "fatal error: 'ActiveConnection' object has no attribute 'get_state_reason'"}
to retry, use: --limit @/usr/local/ansible/network.retry
Hardware configuration
# lspci
05:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
# ethtool enp5s0f0
Settings for enp5s0f0:
Supported ports: [ FIBRE ]
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: No
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 100000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Link detected: yes
Currently, the module collects all logging statements and at the end returns them as "warnings", so that ansible displays them. Obviously, these are not really warnings, but rather debug information.
Instead, the logging messages should be returned in a different JSON field that ansible ignores. Then, tasks/main.yml should have a follow-up debug task that prints the returned variable.
I guess that in the failure case, the network_connections task must run with failures ignored in order to reach the debug statement. A follow-up task should then check whether the network_connections task failed and abort.
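A minimal sketch of what the module's result-building could look like under this proposal. The field name log_lines and the helper itself are hypothetical, not the role's actual API:

```python
def build_module_result(changed, log_records, failed=False, msg=""):
    """Build a module result dict that carries debug logs in a dedicated
    field instead of abusing the "warnings" list.

    log_records is a list of (level, text) tuples collected during the run.
    Unknown result fields are ignored by ansible's default display, so
    log_lines stays quiet unless a follow-up debug task prints it.
    """
    result = {
        "changed": changed,
        "log_lines": ["[%s] %s" % (level, text) for level, text in log_records],
    }
    if failed:
        result["failed"] = True
        result["msg"] = msg
    return result
```

A follow-up debug task in tasks/main.yml would then register the module result and print its log_lines field.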
Module throws an exception when run as a non-privileged user escalating privileges using '--become-method su'
Example playbook
---
- hosts: all
  become: yes
  become_method: su
  become_user: root
  vars:
    network_connections:
      - name: eth0
        state: up
        type: ethernet
        interface_name: eth0
        autoconnect: yes
        ip:
          address: 192.168.1.12/24
          gateway4: 192.168.1.1
          dhcp4: no
          auto6: no
  roles:
    - role: linux-system-roles.network
Output
[xxx@xxx]$ ansible-playbook playbook.yml -i '192.168.1.12,' -bKk
SSH password:
BECOME password[defaults to SSH password]:
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [192.168.1.12]
TASK [linux-system-roles.network : Check which services are running] ***********
ok: [192.168.1.12]
TASK [linux-system-roles.network : Check which packages are installed] *********
ok: [192.168.1.12]
TASK [linux-system-roles.network : Print network provider] *********************
ok: [192.168.1.12] => {
"msg": "Using network provider: nm"
}
TASK [linux-system-roles.network : Install packages] ***************************
skipping: [192.168.1.12]
TASK [linux-system-roles.network : Enable and start NetworkManager] ************
ok: [192.168.1.12]
TASK [linux-system-roles.network : Enable network service] *********************
skipping: [192.168.1.12]
TASK [linux-system-roles.network : Ensure initscripts network file dependency is present] ***
skipping: [192.168.1.12]
TASK [linux-system-roles.network : Configure networking connection profiles] ***
[WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 2366, in main
    cmd.run()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1738, in run
    self.run_prepare()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1868, in run_prepare
    Cmd.run_prepare(self)
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 1787, in run_prepare
    ifname=connection["interface_name"]
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 170, in link_info_find
    for li in cls.link_infos(refresh).values():
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 151, in link_infos
    b = SysUtil._link_infos_fetch()
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 130, in _link_infos_fetch
    "perm-address": SysUtil._link_read_permaddress(ifname),
  File "/tmp/ansible_network_connections_payload_j0L_vd/__main__.py", line 107, in _link_read_permaddress
    out = Util.check_output(["ethtool", "-P", ifname])
  File "/tmp/ansible_network_connections_payload_j0L_vd/ansible_network_connections_payload.zip/ansible/module_utils/network_lsr/utils.py", line 35, in check_output
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=DEVNULL, env=env)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
fatal: [192.168.1.12]: FAILED! => {"changed": false, "msg": "fatal error: [Errno 2] No such file or directory"}
PLAY RECAP *********************************************************************
192.168.1.12 : ok=5 changed=0 unreachable=0 failed=1 skipped=3 rescued=0 ignored=0
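The OSError: [Errno 2] in the traceback is raised by subprocess.Popen when it cannot find the ethtool executable, plausibly because su resets PATH so that /usr/sbin is missing. A defensive wrapper might look like the sketch below; this is an assumption about the failure mode, and the role's real Util.check_output behaves differently:

```python
import os
import subprocess


def check_output_or_none(argv):
    """Run a command and return its stdout bytes, or None when the
    executable cannot be found (e.g. because `su` reset PATH).

    Sketch only: it also re-adds the sbin directories that a su-based
    become method may have dropped from PATH.
    """
    env = dict(os.environ)
    env["PATH"] = env.get("PATH", "") + ":/usr/sbin:/sbin"
    try:
        proc = subprocess.Popen(argv, stdout=subprocess.PIPE, env=env)
    except OSError:
        # Binary not found: report "no data" instead of crashing the module.
        return None
    out, _ = proc.communicate()
    return out
```

The module could then treat a None permanent address as "unknown" rather than failing the whole play.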
When running the playbook a second time, it does not detect that the interface is already up and errors out.
# ansible-playbook -l util6vm net_demo.yml -vv
ansible-playbook 2.5.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible-playbook
python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: net_demo.yml ******************************************************************************************************************************
1 plays in net_demo.yml
PLAY [all] ******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
task path: /home/tbowling/src/virt-demo/ansible/net_demo.yml:7
ok: [util6vm]
META: ran handlers
TASK [linux-system-roles.network : Set version specific variables] **********************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:1
ok: [util6vm] => (item=/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml) => {"ansible_facts": {"network_provider_default": "initscripts"}, "ansible_included_var_files": ["/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml"], "changed": false, "item": "/etc/ansible/roles/linux-system-roles.network/vars/RedHat-6.yml"}
TASK [linux-system-roles.network : Install packages] ************************************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:9
ok: [util6vm] => {"changed": false, "msg": "", "rc": 0, "results": []}
TASK [linux-system-roles.network : Enable network service] ******************************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:14
ok: [util6vm] => {"changed": false, "enabled": true, "name": "network", "state": "started"}
TASK [linux-system-roles.network : Configure networking connection profiles] ************************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:20
[WARNING]: [015] <info> #0, state:up, "net1": ifcfg-rh profile "net1" already up to date
[WARNING]: [016] <info> #0, state:up, "net1": up connection net1 (not-active)
[WARNING]: [017] <info> #0, state:up, "net1": call `ifup net1`: rc=1, out=" Determining IP information for eth1... failed. ", err="dhclient(29031)
is already running - exiting. This version of ISC DHCP is based on the release available on ftp.isc.org. Features have been added and other
changes have been made to the base software release in order to make it work better with this distribution. Please report for this software via the
Red Hat Bugzilla site: http://bugzilla.redhat.com exiting. "
[WARNING]: [018] <error> #0, state:up, "net1": call `ifup net1` failed with exit status 1
fatal: [util6vm]: FAILED! => {"changed": true, "msg": "error: call `ifup net1` failed with exit status 1"}
to retry, use: --limit @/home/tbowling/src/virt-demo/ansible/net_demo.retry
PLAY RECAP ******************************************************************************************************************************************
util6vm : ok=4 changed=0 unreachable=0 failed=1
error:
TASK [rhel-system-roles.network : Configure networking connection profiles] *********
fatal: [satellite]: FAILED! => {"changed": false, "msg": "configuration error: connections[0].name: state 'down' references non-existing connection 'ens192'"}
System:
Red Hat Enterprise Linux Server release 7.4 (Maipo)
NAME                UUID                                  TYPE            DEVICE
ens192              2ee0adce-59b5-49c3-93f9-7b1df88b2bae  802-3-ethernet  ens192
ens224              e4014630-448b-5ad3-4992-f4678202147c  802-3-ethernet  ens224
Connection 'ens192' was deactivated successfully (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/21)
Error: not all active connections were found.
Currently initscripts is only tested on RHEL 6 systems.
Currently the role fails when ask to set a profile to down
when the profile is not completely defined in the profile and does not exist on the target system. One could argue that a profile is down
when it does not exist on the system. However, one could also argue that the role should report this as an error in case there is a typo. Introduce a setting called ignore-error-if-absent
that when it is true ignores the error when a profile is undefined and should be set to down
.
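The proposed behavior can be sketched as a small decision helper. The function name and return values are hypothetical; only the setting name comes from this proposal:

```python
def handle_down_request(profile_exists, ignore_error_if_absent=False):
    """Decide what `state: down` should do for a possibly missing profile.

    Returns one of:
      "down"  - profile exists, deactivate it as usual
      "noop"  - profile absent and the user opted in to treating that as down
      "error" - profile absent, fail loudly in case the name is a typo
    """
    if profile_exists:
        return "down"
    if ignore_error_if_absent:
        return "noop"
    return "error"
```

With the setting defaulting to false, today's strict behavior is preserved.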
Playbook:
---
- hosts: all
  vars:
    network_connections:
      - name: eth1
        type: ethernet
        autoconnect: no
        state: up
        mtu: 1492
        ip:
          dhcp4: no
          auto6: no
      - name: eth1.90
        parent: eth1
        type: vlan
        vlan_id: 90
        mtu: 1280
        state: up
        ip:
          dhcp4: no
          auto6: no
  tasks:
    - name: Run network role
      import_role:
        name: linux-system-roles.network
ansible-playbook output:
ansible-playbook -i rhel76-cloud, vlan-mtu.yml
PLAY [all] ***********************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************
ok: [rhel76-cloud]
TASK [linux-system-roles.network : Check which services are running] *************************************************************************************************************************************
ok: [rhel76-cloud]
TASK [linux-system-roles.network : Check which packages are installed] ***********************************************************************************************************************************
ok: [rhel76-cloud]
TASK [linux-system-roles.network : Print network provider] ***********************************************************************************************************************************************
ok: [rhel76-cloud] => {
"msg": "Using network provider: nm"
}
TASK [linux-system-roles.network : Install packages] *****************************************************************************************************************************************************
skipping: [rhel76-cloud]
TASK [linux-system-roles.network : Enable network service] ***********************************************************************************************************************************************
ok: [rhel76-cloud]
TASK [linux-system-roles.network : Configure networking connection profiles] *****************************************************************************************************************************
[WARNING]: [005] <info> #0, state:up persistent_state:present, 'eth1': add connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d
[WARNING]: [006] <info> #0, state:up persistent_state:present, 'eth1': up connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d (not-active)
[WARNING]: [007] <info> #1, state:up persistent_state:present, 'eth1.90': add connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac
[WARNING]: [008] <info> #1, state:up persistent_state:present, 'eth1.90': up connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac (not-active)
[WARNING]: [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this
connection (3)
fatal: [rhel76-cloud]: FAILED! => {"changed": true, "msg": "error: up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3)"}
PLAY RECAP ***********************************************************************************************************************************************************************************************
rhel76-cloud : ok=5 changed=0 unreachable=0 failed=1
NM logs:
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <info> [1550252720.1114] audit: op="connection-activate" uuid="226409d1-c1a5-456c-8f67-e4494cf540ac" name="eth1.90" result="fail" reason="Failed to find a compatible device for this connection"
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [005] <info> #0, state:up persistent_state:present, 'eth1': add connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [006] <info> #0, state:up persistent_state:present, 'eth1': up connection eth1, dc5eb498-f5b1-4124-b393-a7cfaec4e51d (not-active)
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [007] <info> #1, state:up persistent_state:present, 'eth1.90': add connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [008] <info> #1, state:up persistent_state:present, 'eth1.90': up connection eth1.90, 226409d1-c1a5-456c-8f67-e4494cf540ac (not-active)
Feb 15 18:45:20 rhel76-cloud ansible-network_connections[17596]: [WARNING] [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2112] platform-linux: event-notification: RTM_NEWADDR, flags 0, seq 0: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 38 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2112] platform: signal: address 6 added: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 38 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2112] device[0x5608d2167890] (eth1.90): queued IP6 config change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2112] platform-linux: event-notification: RTM_NEWROUTE, flags 0, seq 0: ignore
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2113] device[0x5608d2167890] (eth1.90): ip6-config: update (commit=0, new-config=0x5608d21d5000)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] device[0x5608d2167890] (eth1.90): ip6-config: update IP Config instance (/org/freedesktop/NetworkManager/IP6Config/25)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): queueing DNS updates (1)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: set-hostname: updating hostname (ip6 conf)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.2114] policy: set-hostname: hostname already set to 'rhel76-cloud' (from system configuration)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): DNS configuration did not change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.2114] dns-mgr: (device_ip_config_changed): no DNS changes to commit (0)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8086] platform-linux: event-notification: RTM_NEWADDR, flags 0, seq 0: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 3 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8087] platform: signal: address 6 added: fe80::5054:ff:fedc:e5f1/64 lft forever pref forever lifetime 622492-0[4294967295,4294967295] dev 3 flags permanent src kernel
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8088] device[0x5608d2153250] (eth1): queued IP6 config change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8088] platform-linux: event-notification: RTM_NEWROUTE, flags 0, seq 0: ignore
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8091] device[0x5608d2153250] (eth1): ip6-config: update (commit=0, new-config=0x5608d212f390)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8096] device[0x5608d2153250] (eth1): ip6-config: update IP Config instance (/org/freedesktop/NetworkManager/IP6Config/23)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8096] dns-mgr: (device_ip_config_changed): queueing DNS updates (1)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] policy: set-hostname: updating hostname (ip6 conf)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8097] hostname: transient hostname retrieval failed
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8098] policy: get-hostname: "rhel76-cloud"
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <trace> [1550252720.8098] policy: set-hostname: hostname already set to 'rhel76-cloud' (from system configuration)
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8098] dns-mgr: (device_ip_config_changed): DNS configuration did not change
Feb 15 18:45:20 rhel76-cloud NetworkManager[26901]: <debug> [1550252720.8098] dns-mgr: (device_ip_config_changed): no DNS changes to commit (0)
Feb 15 18:45:25 rhel76-cloud NetworkManager[26901]: <trace> [1550252725.0507] device[0x5608d2153250] (eth1): remove_pending_action (0): 'carrier-wait' not pending (expected)
Feb 15 18:45:25 rhel76-cloud NetworkManager[26901]: <trace> [1550252725.2010] device[0x5608d2167890] (eth1.90): remove_pending_action (0): 'carrier-wait' not pending (expected)
This is with NetworkManager-1.12.0-6.el7
@thom311 any ideas for the [WARNING]: [009] <error> #1, state:up persistent_state:present, 'eth1.90': up connection failed: failure to activate connection: nm-manager-error-quark: Failed to find a compatible device for this connection (3)
error? Is the eth1 device not yet ready when the vlan is activated?
The states present and absent are supposed to only deploy/remove the persistent networking profiles on the host. They are not supposed to actually change the networking in any way; that is what the states up and down are for.
For initscripts, this is rather trivially the case, because the present/absent states just ensure that the ifcfg files are there or absent.
For NetworkManager, it's more complicated.
- Deleting a profile that is currently active causes the device to go down, because in NetworkManager an active device must always have a profile. The solution in 1.10 is Update2() with the NM_SETTINGS_UPDATE2_FLAG_VOLATILE flag. Volatile means that the connection is in-memory only and will be automatically deleted once it goes down. If you make a connection volatile that is currently not active, it will be deleted right away.
- Adding or modifying a profile might always make it a candidate for autoconnecting. For state present, we would like to add/modify the profile without any other changes. For the update case, 1.10 supports the Update2() flag NM_SETTINGS_UPDATE2_FLAG_BLOCK_AUTOCONNECT, but that doesn't work for the add case. To add a connection that has connection.autoconnect=yes without activating it right away, we would either need new NetworkManager D-Bus API to block autoconnect from the start, or we first add the connection with connection.autoconnect=no and then follow up with an Update2 call that sets connection.autoconnect=yes and uses NM_SETTINGS_UPDATE2_FLAG_BLOCK_AUTOCONNECT.
- Modifying the properties connection.metered or connection.zone takes effect immediately. There is currently no way to prevent that. NetworkManager should get a new flag for Update2 that allows modifying the profile while preventing these two properties from taking effect right away. Then, the role should use this new API.
I tested the role with initscripts network provider in CentOS 7.6.
If I change the ip->dns variable, /etc/resolv.conf is rewritten with the new dns values, but the dns_search value is always appended, never overwritten.
After a few executions of the role, here is the resulting resolv.conf
search domain.org domain.org domain.org domain.org domain.org
nameserver 192.168.121.2
nameserver 192.168.121.3
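The fix amounts to rewriting the search line rather than appending to it. A sketch follows; render_resolv_conf is a hypothetical helper, not the role's actual code:

```python
def render_resolv_conf(search_domains, nameservers):
    """Render resolv.conf content idempotently.

    The search line is rebuilt from the requested domains (deduplicated,
    order preserved) instead of being appended to on every run, which is
    the bug shown above.
    """
    seen = []
    for domain in search_domains:
        if domain not in seen:
            seen.append(domain)
    lines = []
    if seen:
        lines.append("search " + " ".join(seen))
    lines += ["nameserver " + ns for ns in nameservers]
    return "\n".join(lines) + "\n"
```

Running it repeatedly with the same inputs always yields the same file content.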
Moreover, it is a pity that Ansible's diff feature does not work with this role. It would be nice to see the changes applied to the configuration files.
Hello,
I have set up a bond interface (e.g. bond0) and then created a vlan on bond0 (e.g. bond0.100). The MTU for bond0 and its slaves is correctly set to 9000, but I cannot set the MTU for the vlan interface and it defaults to 1500.
network_provider: nm
network_connections:
  # Bridge interface
  - name: br0
    state: up
    type: bridge
    interface_name: br0
    ip:
      dhcp4: no
      auto6: no
      gateway4: 1.0.0.1
      address:
        - "1.0.0.50/26"
  # Bond interface
  - name: bond0
    state: up
    type: bond
    interface_name: "bond0"
    bond:
      mode: 802.3ad
      miimon: 100
    mtu: 9000
    master: br0
    slave_type: bridge
  # Slave ethernet interfaces
  - name: em2
    state: up
    type: ethernet
    mtu: 9000
    interface_name: em2
    master: bond0
  - name: em1
    state: up
    type: ethernet
    mtu: 9000
    interface_name: em1
    master: bond0
  # VLAN on bond interface
  - name: bond0.100
    state: up
    type: vlan
    parent: bond0
    vlan_id: 100
    ip:
      address:
        - "10.10.10.51/24"
How can I solve this?
Thanks.
Many of the options shown by nmcli connection show do not appear to be supported. Even several that are supported in nm-connection-editor don't appear to be present.
From a quick look, some examples are:
connection.lldp
connection.autoconnect <-------------- in nm-connection-editor (general)
connection.autoconnect-priority <----- in nm-connection-editor (general)
connection.autoconnect-retries
connection.zone <--------------------- in nm-connection-editor (general)
802-3-ethernet.cloned-mac-address <--- in nm-connection-editor (ethernet)
802-3-ethernet.mtu <------------------ in nm-connection-editor (ethernet)
802-3-ethernet.wake-on-lan-password <- in nm-connection-editor (ethernet)
802-3-ethernet.auto-negotiate <------- in nm-connection-editor (ethernet)
802-3-ethernet.speed <---------------- in nm-connection-editor (ethernet)
802-3-ethernet.duplex <--------------- in nm-connection-editor (ethernet)
ipv4.may-fail <----------------------- in nm-connection-editor (ipv4)
ipv4.ignore-auto-routes <------------- in nm-connection-editor (ipv4 routes)
ipv4.ignore-auto-dns
ipv4.never-default
ipv6.may-fail <------------------------ in nm-connection-editor (ipv6)
ipv6.ignore-auto-routes <------------- in nm-connection-editor (ipv6 routes)
ipv6.ignore-auto-dns
ipv6.never-default
Then some DHCP options:
ipv4.dhcp-client-id
ipv4.dhcp-timeout
ipv4.dhcp-send-hostname
ipv4.dhcp-hostname
ipv4.dhcp-fqdn
ipv6.dhcp-send-hostname
ipv6.dhcp-hostname
Need the ability to execute operations based on hardware address or naming (MAC, PCI address, kernel device name), as opposed to only by "connection profile" name, which is very weak when using initscripts-style naming.
For example, in my hardware inventory I know all of my MAC addresses, but possibly due to inconsistent net device naming or other reasons, the connection names are inconsistent.
Need the ability to [down/up, delete/rename conn name, modify attributes] of any NICs based on MAC or PCI address.
The documentation states that wait: -1 is the default, but it does not seem to be possible to set this. It also needs to be completely implemented, if it is not already, since it would be useful.
Wanting to switch away from editing /etc/sysconfig/network-scripts/ifcfg-… by hand, I am wondering what the clean way is to get
ETHTOOL_OPTS="wol g"
configured with linux-system-roles / network.
The current README.md does not seem to contain either the string wol or the string wake.
Since enabling wake-on-LAN (p|u|m|b|a|g|s|f|d) is a rather common procedure, it would be nice if this were covered in the documentation of the module.
Commonly, the role connects to the host via SSH. Hence, it currently cannot support changing the IP address of the host, because it would cut itself off.
Ansible supports rebooting the system or changing the TCP port of the SSH server via async requests that poll for completion. Something similar should be possible for the network role.
The user should be able to specify that the role works asynchronously and that afterwards the host is reachable via a new IP address.
Also interesting is to combine this with the CheckPoint feature of NetworkManager.
On RHEL 6, when removing a bond using state=absent, the role removes the config files but does not actually bring the bond interface down.
Subsequent attempts to bring the bond down via playbook fail, stating that there is no connection defined.
Needs the ability to modify a physical or virtual interface, regardless of whether a configuration or "connection profile" exists.
Currently, the role does the equivalent of nmcli con up when it notices changes to the profile. If the profile is already active, this causes NM to first tear the device down before reactivating it. This is more disruptive than desired. The Reapply D-Bus command should be used where possible.
Note that with Reapply, NetworkManager first checks whether it can perform the desired actions, and aborts with failure if it cannot (without touching the system). Hence, a suitable approach is to first try to reapply and, if that fails, fall back to full activation.
Note also how reapply changes the "applied-connection"; there is also a version-id that allows performing the reapply only if the state is as expected.
Older NM versions may not yet sufficiently support Reapply. It needs to be seen whether the role needs some special handling.
README.md currently uses the d6:06:b9:56:12:5d MAC address in an example. examples/inventory uses the 52:54:00:44:9f:ba and 52:54:00:05:f5:b3 MAC addresses.
The examples under examples/ use the 192.168.174. network prefix.
Is there a special reason why those values were chosen? If not, it might be better to use the addresses which are reserved for documentation by IANA. For IP addresses, those are the 192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24 blocks (RFC 5737). For MAC addresses, this is the 00-00-5E-00-53-00 through 00-00-5E-00-53-FF block (RFC 7042).
Note that the firewall role uses example addresses as well; it would be good to agree on a common standard.
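For reference, checking whether an address falls into the IANA documentation blocks is easy with Python's stdlib ipaddress module; this is purely illustrative and not part of the role:

```python
import ipaddress

# IPv4 blocks reserved for documentation (RFC 5737).
DOCUMENTATION_NETS = [
    ipaddress.ip_network(n)
    for n in ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")
]


def is_documentation_ip(addr):
    """True when the address lies in a block reserved for documentation."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DOCUMENTATION_NETS)
```

Such a check could even gate example files in CI so that non-documentation addresses never creep back in.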
Originally reported by @hartsjc in #17 (comment)
Hello,
try_count is declared and checked, but never incremented, here:
network/library/network_connections.py, line 397, at commit ed1d30a
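For illustration, a correct retry loop increments the counter on every attempt; this sketch is not the module's actual code, it only shows the shape of the fix:

```python
def run_with_retries(func, max_tries=5):
    """Call func until it succeeds or max_tries attempts have been made.

    The key point relative to the bug above: try_count must actually be
    incremented each iteration, otherwise the limit check never trips
    and a persistently failing call loops forever.
    """
    try_count = 0
    while True:
        try_count += 1
        try:
            return func()
        except Exception:
            if try_count >= max_tries:
                raise
```

Without the `try_count += 1` line, the `try_count >= max_tries` condition is dead code.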
Currently there is no way via ansible to support device specific options set via ethtool command. This allows for setting a variety of NIC parameters from speed, duplex, variety offloads, and even buffers. modifying these values is critical to network performance in many enterprise environments, and thus would be good if linux-system-roles supported configuring them.
Instead of requiring bond slaves to be specified as individual profiles it might make sense to mention the members directly in the master configuration. This is also suggested here: https://bugzilla.redhat.com/show_bug.cgi?id=1508614#c4
When a connection is to be removed, connection['type'] causes a KeyError when using the initscripts backend and no type is defined. This should fix it:
diff --git a/library/network_connections.py b/library/network_connections.py
index 271ab40..4922b13 100755
--- a/library/network_connections.py
+++ b/library/network_connections.py
@@ -2650,7 +2650,7 @@ class Cmd_initscripts(Cmd):
     def run_prepare(self):
         Cmd.run_prepare(self)
         for idx, connection in enumerate(self.connections):
-            if connection['type'] in [ 'macvlan' ]:
+            if connection.get('type') in [ 'macvlan' ]:
                 self.log_fatal(idx, 'unsupported type %s for initscripts provider' % (connection['type']))
     def check_name(self, idx, name = None):
The exception was:
  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2813, in <module>
    cmd.run()
  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2398, in run
    self.run_prepare()
  File "/tmp/ansible_jD4yqb/ansible_module_network_connections.py", line 2654, in run_prepare
    if connection['type'] in [ 'macvlan' ]:
KeyError: 'type'
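The failure mode is easy to demonstrate in isolation: indexing a missing key raises KeyError, while .get() returns None, so the membership test is simply False (a standalone illustration, not the module's code):

```python
# A connection dict for a profile being removed may lack a "type" key.
connection = {"name": "eth0", "state": "absent"}

# connection["type"] would raise KeyError here; .get() returns None
# instead, so the macvlan membership test evaluates to False.
missing = connection.get("type")
is_macvlan = missing in ["macvlan"]
print(is_macvlan)  # False
```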
Ansible 2.7 no longer allows specifying variables inline when including tasks, which breaks this syntax:
- name: Remove interfaces
  hosts: all
  tasks:
    - include_tasks: tasks/manage-test-interface.yml state=absent
A possible alternative is:
- name: Remove interfaces
  hosts: all
  tasks:
    - include_tasks: tasks/manage-test-interface.yml
      vars:
        state: absent
There seems to be support for MTU in the code (but see #19), but I can't find it in the README.
I need to create macvlan interfaces and patched the module to support these interfaces. It's just 20 lines and it works for me (tm). You can find it here: macvlan.diff.txt. So far I've used it on CentOS 7 with NetworkManager.
A problem with the initscripts provider could be that by default, there don't seem to be if{down,up}-macvlan scripts. A version of these can be found here: https://github.com/larsks/initscripts-macvlan
I haven't tested these, but they look simple enough.
Another approach would be to simply not support macvlan connections on initscripts systems.
I'll be happy to adapt my patch if needed.
Best,
Roland
[WARNING]: exception: Traceback (most recent call last):
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2812, in <module>
    cmd.run()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2398, in run
    self.run_prepare()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2473, in run_prepare
    Cmd.run_prepare(self)
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 2443, in run_prepare
    li_ifname = SysUtil.link_info_find(ifname = connection['interface_name'])
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 404, in link_info_find
    for li in cls.link_infos(refresh).values():
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 387, in link_infos
    b = SysUtil._link_infos_fetch()
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 368, in _link_infos_fetch
    'perm-address': SysUtil._link_read_permaddress(ifname),
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 346, in _link_read_permaddress
    out = Util.check_output(['ethtool', '-P', ifname])
  File "/tmp/ansible_dkZGH7/ansible_module_network_connections.py", line 76, in check_output
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=DEVNULL, env=ev)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
fatal: [ansibletest]: FAILED! => {"changed": false, "msg": "fatal error: [Errno 2] No such file or directory"}
Happened on CentOS 7.
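A hedged sketch of how such a helper could degrade gracefully when the ethtool binary is absent (the function name mirrors the traceback above, but this is an illustration, not the module's actual code):

```python
import subprocess


def read_permaddress(ifname):
    """Return the permanent MAC via `ethtool -P`, or None if ethtool is
    missing or the query fails, instead of crashing the whole module."""
    try:
        out = subprocess.check_output(
            ["ethtool", "-P", ifname], stderr=subprocess.DEVNULL
        )
    except (OSError, subprocess.CalledProcessError):
        # OSError: ethtool not installed; CalledProcessError: query failed
        return None
    return out.decode().strip()
```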
network-scripts in F29 and newer uses iproute to set up bridges instead of brctl; therefore brctl no longer needs to be installed there to support bridges.
NetworkManager's Checkpoint feature allows recording the current configuration (a snapshot), together with a timeout. Typically, the user or script then goes ahead and changes the networking configuration. If the timeout expires before the checkpoint is destroyed, the configuration of the snapshot is restored. The user can also manually issue a rollback before the timeout expires, or just cancel the timeout (in which case the snapshot is forgotten).
This can be used for two cases:
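Independent of the use case, the timeout-based rollback semantics can be sketched as a toy model (this mimics the behavior only; the real feature is driven through NetworkManager's D-Bus API):

```python
import time


class Checkpoint:
    """Toy model of NM's Checkpoint: snapshot state, roll back on timeout."""

    def __init__(self, state, timeout):
        self.snapshot = dict(state)  # record the current configuration
        self.deadline = time.monotonic() + timeout
        self.destroyed = False

    def destroy(self):
        # Caller confirms the new configuration works; forget the snapshot.
        self.destroyed = True

    def maybe_rollback(self, state):
        # Restore the snapshot if the deadline passed without confirmation.
        if not self.destroyed and time.monotonic() >= self.deadline:
            state.clear()
            state.update(self.snapshot)
            return True
        return False


state = {"eth0": "up"}
cp = Checkpoint(state, timeout=0.01)
state["eth0"] = "down"  # simulate a (bad) configuration change
time.sleep(0.02)        # nobody destroyed the checkpoint in time
rolled_back = cp.maybe_rollback(state)
```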
When installing RHEL on a new virtual machine with 6 network interfaces, all interfaces are auto-configured and activated with profile names that may or may not match the Linux device name. Because I do not want these, and instead want my own connection profile names, I would like the ability to delete all connection profiles for each network device, by mac or interface_name.
[root@rhel7 ~]# nmcli d
DEVICE TYPE STATE CONNECTION
ens10 ethernet connected ens10
ens11 ethernet connected ens11
ens12 ethernet connected ens12
ens13 ethernet connected ens13
eth4 ethernet connected eth2
eth5 ethernet connected eth3
lo loopback unmanaged --
However, this does not work:
network_connections:
  - name: ens11
    #mac: "{{ hostvars[inventory_hostname].net1_mac }}"
    interface_name: "{{ hostvars[inventory_hostname].net1_dev }}"
    state: down
    persistent_state: absent
No matter whether I use mac or interface_name, I get the following errors:
TASK [rhel-system-roles.network : Configure networking connection profiles] ***************
fatal: [rhel7]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].interface_name: property is not allowed for state 'down' and persistent_state 'absent'"}
TASK [rhel-system-roles.network : Configure networking connection profiles] ***************
fatal: [rhel7]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].mac: property is not allowed for state 'down' and persistent_state 'absent'"}
Many of the /etc/sysconfig/network-scripts/ifcfg-* options documented in /usr/share/doc/initscripts-*/sysconfig.txt do not appear to be supported. Some key/crucial ones that appear to be missing for enterprise servers are:
1. ETHTOOL_OPTS
2. LINKDELAY
3. IPV4_FAILURE_FATAL
Some others that may be missing (not tested to confirm; based on a grep of the code):
ARPCHECK
DHCLIENTARGS
DHCLIENT_IGNORE_GATEWAY
DHCP_FQDN
DHCP_HOSTNAME
DHCPRELEASE
HOTPLUG
MACADDR
NETMASK
NO_DHCP_HOSTNAME
NOZEROCONF
PEERDNS
PERSISTENT_DHCLIENT
SCOPE
SRCADDR
USERCTL
WINDOW
ZONE
Originally reported by @hartsjc in #17 (comment)
Idea by @thom311 in #33: use the checkpoint feature to revert changes in case the host can no longer be reached after the configuration. A possible approach would be to not remove the checkpoint, but to report it back to the Ansible controller and destroy it in a later step, thereby testing that the host can still be reached.
It seems to be supported in the code but it is not documented.
Support for network scripts moved from the initscripts package to the network-scripts package. The latter needs to be installed on the appropriate systems.
The network_connections struct is a good general data model for network connections, and could be used by more than just the linux-system-roles.network role. The value of this struct would be greatly increased if other attributes could be added, beyond what is useful by this role. However, adding user-defined attributes to network_connections that are not recognized by this role generates a failure.
Please consider tolerating user-defined attributes within the network_connections struct, without generating a failure in the linux-system-roles.network role.
Example below. Note the addition of the non-standard "libvirt_dev" attribute, for use by another role in the same playbook. I use this to create a VM by generating a domain.xml with matching network devices, which I supply to the libvirt module. Then, when the VM is running, I could apply the linux-system-roles.network role to the guest. However, this does not work, because linux-system-roles.network fails on my added "libvirt_dev" attribute.
network_connections:
If it is necessary to maintain a level of restrictiveness in the schema, I'd request either a new "network_user_attrs" data element containing a list of user-defined attributes to be allowed within the struct, or a prefix such as "user_" for user-defined attributes.
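To illustrate the prefix idea, a profile might look like this (the user_ key handling is the proposal being requested, not current role behavior):

```yaml
network_connections:
  - name: eth0
    type: ethernet
    interface_name: eth0
    state: up
    # ignored by the network role, consumed by other roles in the playbook
    user_libvirt_dev: vnet0
```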
Hoping that I did not misunderstand a fundamental aspect of how this role works, I encountered the following issue:
TASK [network : Configure networking connection profiles] ***********************************************************************************************************************************************************************************
fatal: [redacted]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].ip: property is not allowed for state 'up' and persistent_state 'present'"}
Relevant variables:
network_connections:
  - name: "ens3"
    state: up
    ip:
      address:
        - x.x.x.x/32
        - x:x:x::/64
Incidentally, the specified IPs match the current system configuration, so no actual changes should take place anyway, but it still fails. If you need any other info or logs from the system, please let me know.
Fedora 28 Server, current master (a10e72b)
The role aims to accept the desired networking configuration in a declarative way. The user doesn't issue a certain command or series of steps (like nmcli). Instead, the user describes the desired outcome declaratively, and the role makes it happen (depending on the configured provider).
This is the case with the states present and absent. These states are intended to only deploy persistent configuration on the host, without actually changing the current networking configuration. Essentially, they create/delete connection profiles in NetworkManager or write/delete ifcfg files. For these states, the order doesn't really matter (unless you specify the same profile more than once -- "same" in the sense of having the same name).
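For example, a minimal sketch of profiles that only manage persistent configuration, without touching the runtime state (profile names here are made up):

```yaml
network_connections:
  - name: internal
    type: ethernet
    interface_name: eth1
    persistent_state: present  # write the profile, do not activate it
  - name: obsolete
    persistent_state: absent   # delete the profile if it exists
```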
Then there are the states up and down, which basically translate to nmcli connection up|down or ifup|ifdown calls. For the NetworkManager provider, the order in which you activate/deactivate a set of profiles shouldn't matter too much; NetworkManager should make it happen correctly in either order. That might not be entirely true: for example, whether you activate a slave connection before or after its master makes an actual difference. For initscripts, this is much more of a problem. The role doesn't really know what the current state of the system is, and neither do the initscripts. It essentially leaves the decision of calling ifup in the right order to the user.
Currently, the order is the same as specified in the playbook. This should be improved: the user should just say which profiles should be up/down, and the role should figure out how to reach that desired outcome (and in which order to call ifup/nmcli connection up).
See also the variable force_state_change. The role should become smarter about this, so that it is needed less often.
See also bug https://bugzilla.redhat.com/show_bug.cgi?id=1508614
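As a sketch of what "figuring out the order" could mean, the master/slave dependency implied by each profile's master attribute can be turned into an activation order (an illustration, not the role's actual code):

```python
def activation_order(connections):
    """Order profile names so each master comes before its slaves."""
    by_name = {c["name"]: c for c in connections}
    ordered = []
    seen = set()

    def visit(name):
        if name in seen or name not in by_name:
            return
        seen.add(name)
        master = by_name[name].get("master")
        if master:
            visit(master)  # ensure the master is activated first
        ordered.append(name)

    for conn in connections:
        visit(conn["name"])
    return ordered


profiles = [
    {"name": "ens9", "master": "bond0"},
    {"name": "bond0"},
]
order = activation_order(profiles)
print(order)  # bond0 before ens9, regardless of playbook order
```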
The rhel-system-roles network configuration does not reload an existing network profile upon modification when the interface is marked as a bond slave.
rhel-system-roles-1.0-5.el7.noarch
rhel-system-roles.network
ansible 2.7.8
RHEL 7.5
kernel-3.10.0-862.14.4.el7.x86_64
NetworkManager-1.12.0-10.el7_6.x86_64
$ cat inventory
[test]
192.168.122.16
$ cat role_test_bond.yml
---
#- name: Test a network system role
- hosts: test
  vars:
    # network_providers: initscripts
    network_connections:
      - autoconnect: 'yes'
        bond:
          miimon: 100
          mode: active-backup
        interface_name: bond0
        ip:
          address:
            - 192.168.40.1/24
          auto6: 'no'
          dhcp4: 'no'
        name: bond0
        state: up
        type: bond
      - name: ens9
        interface_name: ens9
        master: bond0
        type: ethernet
        state: up
  roles:
    - rhel-system-roles.network
$ ansible-playbook role_test_bond.yml
SSH password:
PLAY [test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [192.168.122.16]
TASK [rhel-system-roles.network : Check which services are running] ************
ok: [192.168.122.16]
TASK [rhel-system-roles.network : Check which packages are installed] **********
ok: [192.168.122.16]
TASK [rhel-system-roles.network : Print network provider] **********************
ok: [192.168.122.16] => {
"msg": "Using network provider: nm"
}
TASK [rhel-system-roles.network : Install packages] ****************************
skipping: [192.168.122.16]
TASK [rhel-system-roles.network : Enable network service] **********************
ok: [192.168.122.16]
TASK [rhel-system-roles.network : Configure networking connection profiles] ****
[WARNING]: [005] <info> #0, state:up persistent_state:present, 'bond0': add
connection bond0, 46072568-87ae-456b-b9f9-47b9624bdd24
[WARNING]: [006] <info> #0, state:up persistent_state:present, 'bond0': up
connection bond0, 46072568-87ae-456b-b9f9-47b9624bdd24 (not-active)
[WARNING]: [007] <info> #1, state:up persistent_state:present, 'ens9': update
connection ens9, 5a8d6d84-a7d7-4aba-a378-0e6db147c008
[WARNING]: [008] <info> #1, state:up persistent_state:present, 'ens9': up
connection ens9, 5a8d6d84-a7d7-4aba-a378-0e6db147c008 (is-modified)
changed: [192.168.122.16]
TASK [rhel-system-roles.network : Re-test connectivity] ************************
ok: [192.168.122.16]
PLAY RECAP *********************************************************************
192.168.122.16 : ok=7 changed=1 unreachable=0 failed=0
# ip -o a | grep -iw inet
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: ens3 inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\ valid_lft 3597sec preferred_lft 3597sec
10: bond0 inet 192.168.40.1/24 brd 192.168.40.255 scope global noprefixroute bond0\ valid_lft forever preferred_lft forever
# ip -o a | grep -iw inet
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: ens3 inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\ valid_lft 3509sec preferred_lft 3509sec
4: ens9 inet 192.168.122.221/24 brd 192.168.122.255 scope global noprefixroute dynamic ens9\ valid_lft 3585sec preferred_lft 3585sec
# ip -o a | grep -iw inet
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: ens3 inet 192.168.122.16/24 brd 192.168.122.255 scope global noprefixroute dynamic ens3\ valid_lft 3108sec preferred_lft 3108sec
4: ens9 inet 192.168.122.221/24 brd 192.168.122.255 scope global noprefixroute dynamic ens9\ valid_lft 3458sec preferred_lft 3458sec
9: bond0 inet 192.168.40.1/24 brd 192.168.40.255 scope global noprefixroute bond0\ valid_lft forever preferred_lft forever
A network restart resolves the issue and implements the change.
Instead of defaulting to a network provider depending on the platform, check whether the network service or NetworkManager is already enabled, and use the respective provider.
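A minimal sketch of that selection logic, assuming service facts shaped like Ansible's service_facts output (the dict shape, service names, and fallback default are assumptions for illustration):

```python
def choose_provider(services, platform_default="nm"):
    """Pick 'nm' or 'initscripts' based on which service is enabled,
    falling back to a platform-dependent default."""
    if services.get("NetworkManager.service", {}).get("status") == "enabled":
        return "nm"
    if services.get("network.service", {}).get("status") == "enabled":
        return "initscripts"
    return platform_default


print(choose_provider({"network.service": {"status": "enabled"}}))  # initscripts
```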
Example playbook:
---
- hosts: rhel7-latest
  vars_files:
  become: yes
  become_method: sudo
  become_user: root
  vars:
  roles:
    - role: linux-system-roles.network # Configure Networking
      network:
        provider: nm # or initscripts
        connections:
          - name: DBnic
            state: up
            type: ethernet
            interface_name: eth1
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: no
          - name: WebBond
            type: bond
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: no
          - name: WebBond-linkA
            type: ethernet
            interface_name: eth2
            master: WebBond
            slave_type: bond
          - name: WebBond-linkB
            type: ethernet
            interface_name: eth3
            master: WebBond
            slave_type: bond
Results:
[WARNING]: <warn> #0, state:up, "DBnic": wait for activation is not yet implemented
2. The WebBond interface produces an error message when created and brought up with the "state: up" option, though it does reach the active state.
TASK [linux-system-roles.network : Configure networking connection profiles] *****************************************************************
task path: /etc/ansible/roles/linux-system-roles.network/tasks/main.yml:20
[WARNING]: <info> #0, state:up, "DBnic": add connection DBnic, 1e2ea869-fbe0-49c1-aa7a-65e44bace54a
[WARNING]: <warn> #0, state:up, "DBnic": wait for activation is not yet implemented
[WARNING]: <info> #0, state:up, "DBnic": up connection DBnic, 1e2ea869-fbe0-49c1-aa7a-65e44bace54a
[WARNING]: <info> #1, state:present, "WebBond": add connection WebBond, 56592fb3-3feb-47ae-b908-bd8079758d50
[WARNING]: <info> #2, state:present, "WebBond-linkA": add connection WebBond-linkA, 31cdf4b5-7be3-4ec0-b600-a86412328441
[WARNING]: <info> #3, state:present, "WebBond-linkB": add connection WebBond-linkB, cefdeb90-22d5-4991-a6e4-cbac42b45ebd
[WARNING]: <info> #4, state:up, "MyApp-Team": add connection MyApp-Team, be90452c-7c56-4df0-bf59-d365685835fd
[WARNING]: <warn> #4, state:up, "MyApp-Team": wait for activation is not yet implemented
[WARNING]: <info> #4, state:up, "MyApp-Team": up connection MyApp-Team, be90452c-7c56-4df0-bf59-d365685835fd
[WARNING]: <error> #4, state:up, "MyApp-Team": up connection failed: failure to activate connection: nm-client-error-quark: Active
connection removed before it was initialized (2)
fatal: [rhel7-latest]: FAILED! => {"changed": false, "failed": true, "msg": "error: up connection failed: failure to activate connection: nm-client-error-quark: Active connection removed before it was initialized (2)"}
to retry, use: --limit @/home/tbowling/src/virt-demo/ansible/example-linux-system-roles.network.retry
PLAY RECAP ***********************************************************************************************************************************
rhel7-latest : ok=4 changed=0 unreachable=0 failed=1
Expected results:
It seems that when creating an interface, the default behavior is to also activate it. However, the expectation is that if "state: up" is specified, there should be no errors or warnings.
When trying to run a playbook with an undefined variable, the role fails with an error message such as:
TASK [linux-system-roles.network : Install packages] ************************************************************************************************************************************************
fatal: [rhel7b-cloud]: FAILED! => {"msg": "The conditional check 'not network_packages is subset(ansible_facts.packages.keys())' failed. The error was: error while evaluating conditional (not network_packages is subset(ansible_facts.packages.keys())): 'network_iphost' is undefined\n\nThe error appears to have been in '/home/till/scm/network-lsr/tests/roles/linux-system-roles.network/tasks/main.yml': line 14, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Therefore install packages only when rpm does not find them\n- name: Install packages\n ^ here\n"}
This is the playbook:
# SPDX-License-Identifier: BSD-3-Clause
---
- hosts: all
  vars:
    network_connections:
      # Create a profile for the underlying device of the VLAN.
      - name: prod2
        type: ethernet
        autoconnect: no
        interface_name: "{{ network_interface_name2 }}"
        ip:
          dhcp4: no
          auto6: no
      # on top of it, create a VLAN with ID 100 and static
      # addressing
      - name: prod2.100
        state: up
        type: vlan
        parent: prod2
        vlan_id: 100
        ip:
          address:
            - "192.0.2.{{ network_iphost }}/24"
  roles:
    - linux-system-roles.network
Commandline:
ansible-playbook -i rhel7b-cloud, -e "network_interface_name2=eth1" eth-with-vlan.yml
Ideally, there would be an independent check of network_connections that produces a clear error message. Maybe just referencing network_connections early would help.
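Until such a check exists, one possible workaround at the playbook level (a sketch using the stock assert module, not a role feature; the variable names come from the example above) is to validate the inputs before the role runs:

```yaml
- name: Fail early on undefined role inputs
  hosts: all
  tasks:
    - name: Validate variables needed by network_connections
      assert:
        that:
          - network_iphost is defined
          - network_interface_name2 is defined
        fail_msg: "set network_iphost and network_interface_name2, e.g. via -e"
```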
When the role runs and configures the network, it determines success based on whether it was able to perform all requested configurations.
But it would be useful to extend this and determine success based on certain properties, by probing the network. For example: is a certain LLDP neighbour visible? Did we get a certain address from DHCP? Is a certain host reachable via ping/arping?
In conjunction with #33, such an extended connectivity check is useful to determine success/failure, and whether to restore the previous configuration (rollback).
Maybe these checks should be implemented as a separate module, not inside network_connections.py.
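As a sketch of one such probe (an illustration of what a separate validation module could do, not part of network_connections.py): a simple TCP reachability check with a timeout:

```python
import socket


def host_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # refused, unreachable, or timed out
        return False
```

ICMP ping or arping would need raw sockets or shelling out; a TCP connect keeps the sketch dependency-free.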
Currently it accepts only an IP address and requires the prefix to contain the router mask.
The initscripts backend requires brctl for bridge support; however, brctl might be missing, for example, on CentOS 6 images:
TASK [linux-system-roles.network : Configure networking connection profiles] ***
task path: /tmp/tmptjhidpez/tests/roles/linux-system-roles.network/tasks/main.yml:21
[WARNING]: [003] <info> #0, state:up, "LSR-TST-br31": add ifcfg-rh profile
"LSR-TST-br31"
[WARNING]: [004] <info> #0, state:up, "LSR-TST-br31": up connection LSR-TST-
br31 (not-active)
[WARNING]: [005] <info> #0, state:up, "LSR-TST-br31": call `ifup LSR-TST-
br31`: rc=1, out="Bridge support not available: brctl not found ", err=""
[WARNING]: [006] <error> #0, state:up, "LSR-TST-br31": call `ifup LSR-TST-
br31` failed with exit status 1
fatal: [/cache/CentOS-6-x86_64-GenericCloud-1804_02.qcow2c]: FAILED! => {"changed": true, "msg": "error: call `ifup LSR-TST-br31` failed with exit status 1"}
to retry, use: --limit @/tmp/tmptjhidpez/tests/tests_bridge.retry
I guess we have these options: currently, name is used to specify only the profile name, and interface_name or mac can be specified to define the interface that the profile is restricted to. This seems unexpected, since a profile called eth0 might configure eth2 if no interface_name or mac is specified. Therefore, change it so that:
1. name is used as interface_name
2. if interface_name is explicitly set empty, it will not be set
3. if mac is specified, interface_name defaults to not being set
Tools like virt-builder and virt-customize run commands within libguestfs-managed qemu images to configure Linux systems. I thought it might be possible to use the network system role to configure networking within a VM image prior to booting; however, the role seems to want to enable NetworkManager, which is something we don't want to do at image-configure time. It would be great if the network role could configure the networking scripts in an offline VM image.
For what it's worth, I'm running...
LIBGUESTFS_BACKEND=direct virt-customize -a testvm.qcow2 --copy-in `pwd`/pb.yml:/root --run-command 'ansible-playbook /root/pb.yml'
where pb.yml looks like:
---
- hosts: 127.0.0.1
  connection: local
  vars:
    network_connections:
      - name: eth0
        state: up
        type: ethernet
        interface_name: eth0
        autoconnect: yes
        ip:
          dhcp4: no
          auto6: no
          gateway4: 10.0.0.1
          dns:
            - 10.0.0.2
          dns_search:
            - atgreen.lab
          address:
            - 10.0.0.112/24
  roles:
    - role: rhel-system-roles.network
And the error I get is...
TASK [rhel-system-roles.network : Enable network service] **********************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Could not find the requested service NetworkManager: "}
to retry, use: --limit @/root/pb.retry
There is a commit in a fork that indicates a problem with package installation: youviewtv@d5d769e. This needs investigation.