
ansible-role-wireguard

This Ansible role is used in my blog series Kubernetes the not so hard way with Ansible, but it can of course be used standalone. I use WireGuard and this Ansible role to set up a fully meshed VPN between all nodes of my little Kubernetes cluster.

In general WireGuard is a network tunnel (VPN) for IPv4 and IPv6 that uses UDP. If you need more information about WireGuard you can find a good introduction here: Installing WireGuard, the Modern VPN.

Linux

This role should work with:

  • Ubuntu 20.04 (Focal Fossa)
  • Ubuntu 22.04 (Jammy Jellyfish)
  • Archlinux
  • Debian 11 (Bullseye)
  • Debian 12 (Bookworm)
  • Fedora 39
  • CentOS 7
  • AlmaLinux 8
  • AlmaLinux 9
  • Rocky Linux 8
  • Rocky Linux 9
  • openSUSE Leap 15.4
  • openSUSE Leap 15.5
  • Oracle Linux 9

Best effort

  • elementary OS 6

Molecule tests are available (see further below). It should also work with Raspbian Buster, but there is no test available for it. MacOS (see below) should also work partially, but only on a best effort basis.

MacOS

While this playbook configures, enables and starts a systemd service on Linux in such a way that no additional action is needed, on MacOS it installs the required packages and just generates the correct wg0.conf file, which is then placed in the specified wireguard_remote_directory (/opt/local/etc/wireguard by default). To bring the VPN up you then need to run:

sudo wg-quick up wg0

and to deactivate it:

sudo wg-quick down wg0

or you can install the official app and import the wg0.conf file.

Versions

I tag every release and try to stick to semantic versioning. If you want to use the role I recommend checking out the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too.

Requirements

By default port 51820 (protocol UDP) should be accessible from the outside. But you can adjust the port by changing the variable wireguard_port. Also IP forwarding needs to be enabled, e.g. via echo 1 > /proc/sys/net/ipv4/ip_forward. I decided not to implement this task in this Ansible role. IMHO that should be handled elsewhere, e.g. you can use my ansible-role-harden-linux role. Besides changing sysctl entries (which you need to enable IP forwarding) it also manages firewall settings, among other things. Nevertheless the PreUp, PreDown, PostUp and PostDown hooks may be a good place to do some network related stuff before a WireGuard interface comes up or goes down.
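If you manage sysctl settings with Ansible anyway, a minimal standalone sketch (not part of this role) using the ansible.posix.sysctl module might look like this:

# Hypothetical standalone task - this role deliberately does not manage sysctl
- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    sysctl_set: true
    state: present
    reload: true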

Changelog

Change history:

See full CHANGELOG.md

Recent changes:

16.0.2

  • OTHER
    • revert change in .github/workflows/release.yml

16.0.1

  • OTHER
    • update .github/workflows/release.yml
    • update meta/main.yml

16.0.0

  • BREAKING

    • removed support for Fedora 37/38 (reached end of life)
  • FEATURE

    • add support for Fedora 39
    • introduce wireguard_conf_backup variable to keep track of configuration changes. Default to false. (contribution by @shk3bq4d)
    • introduce wireguard_install_kernel_module. Allows to skip loading the wireguard kernel module. Default to true (which was the previous behavior). (contribution by @gregorydlogan)
  • Molecule

    • use different IP addresses
    • use generic Vagrant boxes for Rocky Linux
    • use alvistack Vagrant boxes for Ubuntu
    • use official Rocky Linux 9 Vagrant box
    • use official AlmaLinux Vagrant boxes
    • move memory and cpus parameter to Vagrant boxes

15.0.0

  • BREAKING

    • removed support for Ubuntu 18.04 (reached end of life)
    • removed support for Fedora 36 (reached end of life)
  • FEATURE

    • add support for Fedora 37
    • add support for Fedora 38
    • add support for openSUSE 15.5
    • add support for Debian 12
    • prefix host name comment with Name = for wg-info in WireGuard interface configuration (contribution by @tarag)
  • MOLECULE

    • rename kvm scenario to default
    • rename kvm-single-server scenario to single-server
    • upgrade OS and reboot in prepare before converge for Almalinux
  • OTHER

    • fix ansible-lint issues

Installation

  • Directly download from Github (change into Ansible role directory before cloning): git clone https://github.com/githubixx/ansible-role-wireguard.git githubixx.ansible_role_wireguard

  • Via ansible-galaxy command and download directly from Ansible Galaxy: ansible-galaxy role install githubixx.ansible_role_wireguard

  • Create a requirements.yml file with the following content (this will download the role from Github) and install with ansible-galaxy role install -r requirements.yml:

---
roles:
  - name: githubixx.ansible_role_wireguard
    src: https://github.com/githubixx/ansible-role-wireguard.git
    version: 16.0.0

Role Variables

These variables can be changed in group_vars/ e.g.:

# Directory to store WireGuard configuration on the remote hosts
wireguard_remote_directory: "/etc/wireguard"              # On Linux
# wireguard_remote_directory: "/opt/local/etc/wireguard"  # On MacOS

# The default port WireGuard will listen on if not specified otherwise.
wireguard_port: "51820"

# The default interface name that WireGuard should use if not specified otherwise.
wireguard_interface: "wg0"

# The default owner of the wg.conf file
wireguard_conf_owner: root

# The default group of the wg.conf file
wireguard_conf_group: "{{ 'root' if not ansible_os_family == 'Darwin' else 'wheel' }}"

# The default mode of the wg.conf file
wireguard_conf_mode: 0600

# Whether any change to the wg.conf file should be backed up
wireguard_conf_backup: false

# The default state of the wireguard service
wireguard_service_enabled: "yes"
wireguard_service_state: "started"

# By default "wg syncconf" is used to apply WireGuard interface settings if
# they've changed. Older WireGuard tools doesn't provide this option. In that
# case as a fallback the WireGuard interface will be restarted. This causes a
# short interruption of network connections.
#
# So even if "false" is the default, the role figures out if the "syncconf"
# option of the "wg" utility is available and if not falls back to "true"
# (which means interface will be restarted as this is the only possible option
# in this case).
#
# Possible options:
# - false (default)
# - true
#
# Both options have their pros and cons. The default "false" option (do not
# restart the interface)
# - does not need to restart the WireGuard interface to apply changes
# - does not cause a short VPN connection interruption when changes are applied
# - might leave network routes not properly reloaded
#
# Setting the option value to "true" will
# - restart the WireGuard interface as the name suggests in case of changes
# - cause a short VPN connection interruption when changes are applied
# - make sure that network routes are properly reloaded
#
# So it depends a little bit on your setup which option works best. If you
# don't have overly complicated routing that changes very often (or at all),
# using "false" here is most probably good enough for you. E.g. if you just
# want to connect a few servers via VPN and it normally stays this way.
#
# If you have a more dynamic routing setup then setting this to "true" might
# be the safest way to go. This option should also be considered if you want
# to avoid the possibility of creating some hard to detect side effects.
wireguard_interface_restart: false

# Normally the role automatically creates a private key the very first time
# if there isn't already a WireGuard configuration. But this option allows
# you to provide your own WireGuard private key if really needed. As this is
# of course a very sensitive value you might consider a tool like Ansible
# Vault to store it encrypted.
# wireguard_private_key:

# Set to "false" if package cache should not be updated (only relevant if
# the package manager in question supports this option)
wireguard_update_cache: "true"

# Normally the role installs and activates the wireguard kernel module where
# appropriate. In some cases we might not be able to load kernel modules,
# like in unprivileged LXC guests. If you set this to false you have to
# ensure that the wireguard module is available in the kernel!
wireguard_install_kernel_module: true

There are also a few Linux distribution specific settings:

#######################################
# Settings only relevant for:
# - Ubuntu
# - elementary OS
#######################################

# DEPRECATED: Please use "wireguard_update_cache" instead.
# Set to "false" if package cache should not be updated.
wireguard_ubuntu_update_cache: "{{ wireguard_update_cache }}"

# Set package cache valid time
wireguard_ubuntu_cache_valid_time: "3600"

#######################################
# Settings only relevant for CentOS 7
#######################################

# Set wireguard_centos7_installation_method to "kernel-plus"
# to use the kernel-plus kernel, which includes a built-in,
# signed WireGuard module.
#
# The default of "standard" will use the standard kernel and
# the ELRepo module for WireGuard.
wireguard_centos7_installation_method: "standard"

# Reboot host if necessary if the "kernel-plus" kernel is in use
wireguard_centos7_kernel_plus_reboot: true

# The default seconds to wait for machine to reboot and respond
# if "kernel-plus" is in use. Is only relevant if
# "wireguard_centos7_kernel_plus_reboot" is set to "true".
wireguard_centos7_kernel_plus_reboot_timeout: "600"

# Reboot host if necessary if the standard kernel is in use
wireguard_centos7_standard_reboot: true

# The default seconds to wait for machine to reboot and respond
# if "standard" kernel is in use. Is only relevant if
# "wireguard_centos7_standard_reboot" is set to "true".
wireguard_centos7_standard_reboot_timeout: "600"

#########################################
# Settings only relevant for RockyLinux 8
#########################################

# Set wireguard_rockylinux8_installation_method to "dkms"
# to build WireGuard module from source, with wireguard-dkms.
# This is required if you use a custom kernel and/or your arch
# is not x86_64.
#
# The default of "standard" will install the kernel module
# with kmod-wireguard from ELRepo.
wireguard_rockylinux8_installation_method: "standard"

Every host in host_vars/ should configure at least one address via wireguard_address or wireguard_addresses. wireguard_address can only contain a single IPv4 address, so it's recommended to use the wireguard_addresses variable, which can contain an array of both IPv4 and IPv6 addresses.

wireguard_addresses:
  - "10.8.0.101/24"

Of course all IPs should be in the same subnet, like the /24 we see in the example above. If wireguard_allowed_ips is not set then the default values are the IPs defined in wireguard_address and wireguard_addresses, but with the CIDR replaced by /32 (IPv4) or /128 (IPv6), which is basically a host route (have a look at templates/wg.conf.j2). Consider this example, assuming you don't set wireguard_allowed_ips explicitly:

[Interface]
Address = 10.8.0.2/24
PrivateKey = ....
ListenPort = 51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.101/32
Endpoint = controller01.p.domain.tld:51820

This is part of the WireGuard config from my workstation. It has the VPN IP 10.8.0.2 and we've a /24 subnet in which all my WireGuard hosts are located. Also you can see we've a peer here that has the endpoint controller01.p.domain.tld:51820. When wireguard_allowed_ips is not explicitly set the Ansible template will add an AllowedIPs entry with the IP of that host plus /32 or /128. In WireGuard this basically specifies the routing. The config above says: on my workstation with the IP 10.8.0.2 I want to send all traffic destined for 10.8.0.101/32 to the endpoint controller01.p.domain.tld:51820. Now let's assume we set wireguard_allowed_ips: "0.0.0.0/0". Then the resulting config looks like this:

[Interface]
Address = 10.8.0.2/24
PrivateKey = ....
ListenPort = 51820

[Peer]
PublicKey = ....
AllowedIPs = 0.0.0.0/0
Endpoint = controller01.p.domain.tld:51820

Now this is basically the same as above, BUT now the config says: I want to route ALL traffic originating from my workstation to the endpoint controller01.p.domain.tld:51820. Whether that endpoint can handle the traffic is of course another matter, and it's up to you how you configure the endpoint routing ;-)
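For reference, the host_vars setting that produces the config above is simply:

wireguard_allowed_ips: "0.0.0.0/0"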

You can specify further optional settings (they don't have defaults and won't be set if not specified, apart from wireguard_allowed_ips as already mentioned), also per host in host_vars/ (or in your Ansible hosts file if you like). The values for the following variables are just examples and not defaults (for more information and examples see wg-quick.8):

wireguard_allowed_ips: ""
wireguard_endpoint: "host1.domain.tld"
wireguard_persistent_keepalive: "30"
wireguard_dns: "1.1.1.1"
wireguard_fwmark: "1234"
wireguard_mtu: "1492"
wireguard_table: "5000"
wireguard_preup:
  - ...
wireguard_predown:
  - ...
wireguard_postup:
  - ...
wireguard_postdown:
  - ...
wireguard_save_config: "true"

wireguard_(preup|predown|postup|postdown) are specified as lists. Here are two examples:

wireguard_postup:
  - iptables -t nat -A POSTROUTING -o ens12 -j MASQUERADE
  - iptables -A FORWARD -i %i -j ACCEPT
  - iptables -A FORWARD -o %i -j ACCEPT
wireguard_preup:
  - echo 1 > /proc/sys/net/ipv4/ip_forward
  - ufw allow 51820/udp

The commands are executed in order as described in wg-quick.8.

Additionally one can add "unmanaged" peers. Those peers are not handled by Ansible and are not part of the vpn Ansible host group, e.g.:

wireguard_unmanaged_peers:
  client.example.com:
    public_key: 5zsSBeZZ8P9pQaaJvY9RbELQulcwC5VBXaZ93egzOlI=
    # preshared_key: ... e.g. from ansible-vault?
    allowed_ips: 10.0.0.3/32
    endpoint: client.example.com:51820
    persistent_keepalive: 0

One of wireguard_address (deprecated) or wireguard_addresses (recommended) is required, as already mentioned. These are the IPs of the interface defined with the wireguard_interface variable (wg0 by default). Every host needs at least one unique VPN IP of course. If you don't set wireguard_endpoint the playbook will use the hostname defined in the vpn hosts group (the Ansible inventory hostname). If you set wireguard_endpoint to "" (empty string) that peer won't have an endpoint. That means that this host can only access hosts that have a wireguard_endpoint. That's useful for clients that don't expose any services to the VPN and only want to access services on other hosts. So if you only define one host with wireguard_endpoint set and all other hosts have wireguard_endpoint set to "" (empty string), that basically means you've only clients besides one host, which in that case is the WireGuard server.

The third possibility is to set wireguard_endpoint to some hostname. E.g. if you have different hostnames for the private and public DNS of that host and need different DNS entries for that case, setting wireguard_endpoint becomes handy. Take for example the IP above: wireguard_address: "10.8.0.101". That's a private IP and I've created a DNS entry for that private IP like host01.i.domain.tld (i for internal in that case). For the public IP I've created a DNS entry like host01.p.domain.tld (p for public). The wireguard_endpoint needs to be an address that the other members in the vpn group can connect to. So in that case I would set wireguard_endpoint to host01.p.domain.tld, because WireGuard normally needs to be able to connect to the public IP of the other host(s).
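To summarize, a host_vars sketch of the three possibilities:

# 1) Variable unset: the role uses the Ansible inventory hostname as endpoint.
# 2) Empty string: this peer gets no Endpoint entry (client-only host):
wireguard_endpoint: ""
# 3) Explicit hostname, e.g. the public DNS name of the host:
# wireguard_endpoint: "host01.p.domain.tld"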

Here is a little example of what I use the playbook for: I use WireGuard to set up a fully meshed VPN (every host can directly connect to every other host) and run my Kubernetes (K8s) cluster at Hetzner Cloud (but you should be able to use any hoster you want). So the important components like the K8s controller and worker nodes (which include the pods) communicate only via the encrypted WireGuard VPN. Also (as already mentioned) I've two clients. Both have kubectl installed and are able to talk to the internal Kubernetes API server via the WireGuard VPN. One of the two clients also exposes a WireGuard endpoint because the Postfix mailserver in the cloud and my internal Postfix need to be able to talk to each other. I guess that's maybe a not so common use case for WireGuard :D But it shows what's possible. So let me explain the setup, which might help you to use this Ansible role.

First, here is a part of my Ansible hosts file:

[vpn]
controller0[1:3].i.domain.tld
worker0[1:2].i.domain.tld
server.at.home.i.domain.tld
workstation.i.domain.tld

[k8s_controller]
controller0[1:3].i.domain.tld

[k8s_worker]
worker0[1:2].i.domain.tld

As you can see I've three groups here: vpn (all hosts in it will get WireGuard installed), k8s_controller (the Kubernetes controller nodes) and k8s_worker (the Kubernetes worker nodes). The i in the domain name is for internal. All the i.domain.tld DNS entries have an A record that points to the WireGuard IP that we'll define shortly for every host, e.g.: controller01.i.domain.tld. IN A 10.8.0.101. The reason for that is that all Kubernetes components bind and listen only on the WireGuard interface in my setup. And since I need these internal IPs for all my Kubernetes components I specify the internal DNS entries in my Ansible hosts file. That way I can use the Ansible inventory hostnames and variables very easily in the playbooks and templates.

For the Kubernetes controller nodes I've defined the following host variables:

Ansible host file: host_vars/controller01.i.domain.tld

---
wireguard_addresses:
  - "10.8.0.101/24"
wireguard_endpoint: "controller01.p.domain.tld"
ansible_host: "controller01.p.domain.tld"
ansible_python_interpreter: /usr/bin/python3

Ansible host file: host_vars/controller02.i.domain.tld:

---
wireguard_addresses:
  - "10.8.0.102/24"
wireguard_endpoint: "controller02.p.domain.tld"
ansible_host: "controller02.p.domain.tld"
ansible_python_interpreter: /usr/bin/python3

Ansible host file: host_vars/controller03.i.domain.tld:

---
wireguard_addresses:
  - "10.8.0.103/24"
wireguard_endpoint: "controller03.p.domain.tld"
ansible_host: "controller03.p.domain.tld"
ansible_python_interpreter: /usr/bin/python3

I've specified ansible_python_interpreter here for every node as the controller nodes use Ubuntu 18.04, which has Python 3 installed by default. ansible_host is set to the public DNS of that host. Ansible will use this hostname to connect to the host via SSH. I use the same value for wireguard_endpoint for the same reason. The WireGuard peers need to connect to the other peers via a public IP (well, at least via an IP that the WireGuard hosts can connect to - that could of course also be an internal IP if it works for you). IPs specified by wireguard_address or wireguard_addresses need to be unique for every host, of course.

For the Kubernetes worker I've defined the following variables:

Ansible host file: host_vars/worker01.i.domain.tld

---
wireguard_addresses:
  - "10.8.0.111/24"
wireguard_endpoint: "worker01.p.domain.tld"
wireguard_persistent_keepalive: "30"
ansible_host: "worker01.p.domain.tld"
ansible_python_interpreter: /usr/bin/python3

Ansible host file: host_vars/worker02.i.domain.tld:

---
wireguard_addresses:
  - "10.8.0.112/24"
wireguard_endpoint: "worker02.p.domain.tld"
wireguard_persistent_keepalive: "30"
ansible_host: "worker02.p.domain.tld"
ansible_python_interpreter: /usr/bin/python3

As you can see the variables are basically the same as for the controller nodes, with one exception: wireguard_persistent_keepalive: "30". My worker nodes (at Hetzner Cloud) and my internal server (my server at home) are connected because I run Postfix on my cloud nodes and the external Postfix server forwards the received mails to my internal server (and vice versa). I needed the keepalive setting because from time to time the cloud instances and the internal server lost connection, and this setting solved the problem. The reason is of course that my internal server is behind NAT and the firewall/router must keep the NAT/firewall mapping valid (NAT and Firewall Traversal Persistence).

For my internal server at home (connected via DSL router to the internet) we've this configuration:

---
wireguard_addresses:
  - "10.8.0.1/24"
wireguard_endpoint: "server.at.home.p.domain.tld"
wireguard_persistent_keepalive: "30"
ansible_host: 192.168.2.254
ansible_port: 22

By default the SSH daemon is listening on a port other than 22 on all of my public nodes, but internally I use 22, and that's the reason to set ansible_port: 22 here. Also ansible_host is of course an internal IP for that host. The wireguard_endpoint value is a dynamic DNS entry. Since my IP at home isn't static I run a script every minute on my home server that checks if the IP has changed and, if so, adjusts my DNS record. I use OVH's DynHost feature to accomplish this, but you can use any DynDNS provider you want, of course. Also I forward incoming traffic on port 51820/UDP to my internal server to allow incoming WireGuard traffic. IPs from wireguard_address and wireguard_addresses need to be part of our WireGuard subnet, of course.

And finally for my workstation (on which I run all ansible-playbook commands):

wireguard_addresses:
  - "10.8.0.2/24"
wireguard_endpoint: ""
ansible_connection: local
ansible_become: false

As you can see wireguard_endpoint: "" is an empty string here. That means the Ansible role won't set an endpoint for my workstation. Since there is no need for the other hosts to connect to my workstation it doesn't make sense to have an endpoint defined. So in this case I can access all hosts defined in the Ansible group vpn from my workstation, but not the other way round. The resulting WireGuard config for my workstation looks like this:

[Interface]
Address = 10.8.0.2/24
PrivateKey = ....
ListenPort = 51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.101/32
Endpoint = controller01.p.domain.tld:51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.102/32
Endpoint = controller02.p.domain.tld:51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.103/32
Endpoint = controller03.p.domain.tld:51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.111/32
PersistentKeepalive = 30
Endpoint = worker01.p.domain.tld:51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.112/32
PersistentKeepalive = 30
Endpoint = worker02.p.domain.tld:51820

[Peer]
PublicKey = ....
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 30
Endpoint = server.at.home.p.domain.tld:51820

The other WireGuard config files (wg0.conf by default) look similar, but of course [Interface] contains the config of that specific host and the [Peer] entries list the config of the other hosts.

Example Playbooks

- hosts: vpn
  roles:
    - githubixx.ansible_role_wireguard

Or, using tags:

- hosts: vpn
  roles:
    - role: githubixx.ansible_role_wireguard
      tags: role-wireguard

Example inventory using two different WireGuard interfaces on host "multi"

This is a more complex example using the YAML inventory format:

vpn1:
  hosts:
    multi:
      wireguard_addresses:
        - "10.9.0.1/32"
      wireguard_allowed_ips: "10.9.0.1/32, 192.168.2.0/24"
      wireguard_endpoint: multi.example.com
    nated:
      wireguard_addresses:
        - "10.9.0.2/32"
      wireguard_allowed_ips: "10.9.0.2/32, 192.168.3.0/24"
      wireguard_persistent_keepalive: 15
      wireguard_endpoint: nated.example.com
      wireguard_postup:
        - iptables -t nat -A POSTROUTING -o ens12 -j MASQUERADE
        - iptables -A FORWARD -i %i -j ACCEPT
        - iptables -A FORWARD -o %i -j ACCEPT
      wireguard_postdown:
        - iptables -t nat -D POSTROUTING -o ens12 -j MASQUERADE
        - iptables -D FORWARD -i %i -j ACCEPT
        - iptables -D FORWARD -o %i -j ACCEPT

vpn2:
  hosts:
    # Use a different name and define ansible_host to avoid mixing up vars,
    # without needing to prefix vars with the interface name.
    multi-wg1:
      ansible_host: multi
      wireguard_interface: wg1
      # when using several interfaces on one host, we must use different ports
      wireguard_port: 51821
      wireguard_addresses:
        - "10.9.1.1/32"
      wireguard_endpoint: multi.example.com
    another:
      wireguard_addresses:
        - "10.9.1.2/32"
      wireguard_endpoint: another.example.com

Sample playbooks for example above:

- hosts: vpn1
  roles:
    - githubixx.ansible_role_wireguard
- hosts: vpn2
  roles:
    - githubixx.ansible_role_wireguard

Testing

This role has a small test setup that is created using Molecule, libvirt (vagrant-libvirt) and QEMU/KVM. Please see my blog post Testing Ansible roles with Molecule, libvirt (vagrant-libvirt) and QEMU/KVM on how to set this up. The test configuration is here.

Afterwards molecule can be executed:

molecule converge

This will set up quite a few virtual machines (VMs) with different supported Linux operating systems. To run a few tests:

molecule verify

To clean up, run:

molecule destroy

There is also a small Molecule setup that mimics a central WireGuard server with a few clients:

molecule converge -s single-server

License

GNU General Public License v3.0 or later

Author Information

http://www.tauceti.blog


ansible-role-wireguard's Issues

setup-debian.yml not able to setup debian but ubuntu instead ..

The Debian setup should look like this, according to the WireGuard docs:


- name: Setup WireGuard preference
  # "shell" is needed here (not "command") because of the output redirection
  shell: "printf 'Package: *\nPin: release a=unstable\nPin-Priority: 90\n' > /etc/apt/preferences.d/limit-unstable"
  tags:
    - wg-install

- name: Add WireGuard key
  apt_key:
    keyserver: "keyserver.ubuntu.com"
    id: "8B48AD6246925553"
    state: present
  run_once: true
  tags:
    - wg-install

- name: Add WireGuard repository
  apt_repository:
    repo: "deb http://deb.debian.org/debian/ unstable main"
    state: present
    update_cache: yes
  run_once: true
  tags:
    - wg-install

Ubuntu 20.04 missing resolvconf package by default

Hello

The role doesn't check if resolvconf is installed on Ubuntu 20.04. Installing the package resolves the error.

root@vpn:/etc/wireguard# journalctl -u wg-quick@wg0
-- Logs begin at Fri 2021-07-30 14:30:50 UTC, end at Fri 2021-07-30 15:50:08 UTC. --
Jul 30 15:38:41 vpn systemd[1]: Starting WireGuard via wg-quick(8) for wg0...
Jul 30 15:38:41 vpn wg-quick[24657]: [#] ip link add wg0 type wireguard
Jul 30 15:38:41 vpn wg-quick[24657]: [#] wg setconf wg0 /dev/fd/63
Jul 30 15:38:42 vpn wg-quick[24657]: [#] ip -4 address add 192.168.1.52/32 dev wg0
Jul 30 15:38:42 vpn wg-quick[24657]: [#] ip link set mtu 1420 up dev wg0
Jul 30 15:38:42 vpn wg-quick[24681]: [#] resolvconf -a wg0 -m 0 -x
Jul 30 15:38:42 vpn wg-quick[24683]: /usr/bin/wg-quick: line 32: resolvconf: command not found
Jul 30 15:38:42 vpn wg-quick[24657]: [#] ip link delete dev wg0
Jul 30 15:38:42 vpn systemd[1]: wg-quick@wg0.service: Main process exited, code=exited, status=127/n/a
Jul 30 15:38:42 vpn systemd[1]: wg-quick@wg0.service: Failed with result 'exit-code'.
Jul 30 15:38:42 vpn systemd[1]: Failed to start WireGuard via wg-quick(8) for wg0.
root@vpn:/etc/wireguard# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

No support for roaming peers

There is no support for roaming peers because every node expects every other peer to have a resolvable endpoint. Roaming peers (like a laptop) will never have resolvable endpoints, so there must be a way to omit them from being present in other peers' configs.

The issue comes from the config file template around this part.

{% for host in ansible_play_hosts %}
  {% if host != inventory_hostname %}
  
    [Peer]

Suggest adding a new variable wireguard_roaming to omit peers that are roaming.

{% for host in ansible_play_hosts %}
  {% if host != inventory_hostname %}
  
    [Peer]
    ...
    {% if hostvars[host].wireguard_roaming is undefined or not hostvars[host].wireguard_roaming %}
    ...
    {% endif %}
  {% endif %}
{% endfor %}
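A host_vars usage sketch for the proposal (hypothetical - wireguard_roaming only exists in this suggestion, not in the role):

# host_vars/laptop.example.com
wireguard_roaming: true   # proposed: other peers would then omit this host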

Put private key into separate file instead of main config

Reasons:

  • Avoid shoulder surfing and the like. You know the case when you show someone your config and it contains secrets? This not only looks unprofessional, it is also avoidable :)
  • Simplify the parsing of the configuration. Currently some regex parsing is done. That is not even needed.
  • Allow show/log diffs without revealing secrets (etckeeper, Ansible).
  • Will go in line with my Ansible controller based key management approach discussed in #65.
  • The native wg set subcommand only accepts files. When we already have files, we can manipulate wg interfaces easier.
  • Supported natively by systemd netdev. Ref: https://www.freedesktop.org/software/systemd/man/systemd.netdev.html#PrivateKeyFile=

Example:

PostUp = wg set %i private-key /etc/wireguard/%i.privkey

Downsides:

  • Will require automated migration. Doable.

What do you think?

The same should then be done for PSKs, although I have no file naming schema yet.

Variables to set server public/private key

Maybe it cannot be considered a best practice, but when having a lot of clients already configured, it is very impractical to bother everyone to change their configuration when we re-deploy our WireGuard instance.

ansible-vault is used, and the private key is occasionally rotated. Doing this task could be recommended in the documentation.
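For reference, the role meanwhile provides a wireguard_private_key variable (see Role Variables above) that pairs well with Ansible Vault; a host_vars sketch:

# Value created with "ansible-vault encrypt_string" (ciphertext shortened)
wireguard_private_key: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...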

CentOS 7 hardening and ELRepo

Running the role on a hardened CentOS 7 instance requires installing the ELRepo GPG key in order to install the module. More importantly, the module is not signed. While it installs, this error is observed in dmesg: "Request for unknown module key 'The ELRepo Project (http://elrepo.org/): ELRepo.org Secure Boot Key: f365ad3481a7b20e3427b61b2a26635b83fe427b' err -11
wireguard: module verification failed: signature and/or required key missing - tainting kernel". This can be remedied by installing the key (http://elrepo.org/tiki/SecureBootKey), but this requires UEFI key management.

Per https://www.wireguard.com/install/, there are 3 methods to install WireGuard on CentOS 7:
Method 1: a signed module is available as built-in to CentOS's kernel-plus:
$ sudo yum install yum-utils epel-release
$ sudo yum-config-manager --setopt=centosplus.includepkgs=kernel-plus --enablerepo=centosplus --save
$ sudo sed -e 's/^DEFAULTKERNEL=kernel$/DEFAULTKERNEL=kernel-plus/' -i /etc/sysconfig/kernel
$ sudo yum install kernel-plus wireguard-tools
$ sudo reboot

Method 2: users wishing to stick with the standard kernel may use ELRepo's pre-built module:
$ sudo yum install epel-release elrepo-release
$ sudo yum install yum-plugin-elrepo
$ sudo yum install kmod-wireguard wireguard-tools

Method 3: users running non-standard kernels may wish to use the DKMS package instead:
$ sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ sudo curl -o /etc/yum.repos.d/jdoss-wireguard-epel-7.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
$ sudo yum install wireguard-dkms wireguard-tools

The role currently uses the second option, which uses the standard kernel and requires no reboots. I have forked the role here: https://github.com/john-p-potter/ansible-role-wireguard/tree/kernel-plus and submitted MR #129 to upstream to address this issue.

Support use of wireguard-go on non-OS X platforms

Not all Linux hosts have access to their kernel (e.g. LXD containers, OpenVZ guests) to load a kernel module. For such environments, wireguard-go is required: it would be great if this role could provide such support.

manage firewall if present and active

Hi, first of all, thank you very much for your work on this. Many people profited.

I'd be surprised if this never came up before, but one issue that I ran into while deploying this to some of my servers out there was that one of them had a firewall running, ufw to be precise. Do you agree it might make sense to open the respective wireguard_port on all hosts by default?
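Until something like that exists, a standalone task outside this role could open the port; a sketch using the community.general.ufw module:

# Hypothetical task, not part of this role
- name: Allow incoming WireGuard traffic
  community.general.ufw:
    rule: allow
    port: "51820"   # or the value of wireguard_port
    proto: udp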

Need for storing state locally?

Hi,

thanks for writing and sharing this role, this is really nice!

I have a question: is it necessary to store the state locally?
It would be easy to check if the server is already configured, find the private key and derive the public key from it, and if not, create the private key.

Am I missing something?

If you agree that storing the state is not so necessary, I can PR, to remove this need.

Cheers!

Switch to ELRepo for RHEL/Centos

I'm using this role to install wireguard on CentOS 7/8 systems. A recent update to CentOS 7.8 caused problems with wireguard (kernel panic at boot). It updated the kernel, however the wireguard kernel module wasn't updated.

I troubleshot the issue and found that this role installed the dkms-wireguard package from the jdoss repository, which seems to be not maintained properly. I found that the official wireguard webpage recommends using ELRepo and the kmod-wireguard package: https://www.wireguard.com/install/. I manually switched to this repository and it worked perfectly. Extra bonus for getting rid of the dependencies needed for building dkms.

We should also add some task to properly uninstall old repository and dkms package.

I would make PR if you would like.

Tag wg-install is not applied properly

Hi altogether,

first of all: Thank you for this role - helped me rolling out wireguard on my hosts.

As I have some hosts that run custom distros, the install tasks do not cover those.
No problem here - I think I've built myself some edge cases there.

But because of this, I've tried using the existing tag "wg-install" to skip any installation tasks and found it not working.
After a short inspection of the tasks, I've found a solution that I'll provide in a PR later today.

This issue is more for documentation purposes than asking for a fix. :)

setting wg0.conf in only 1 host from group

Hi again, I need to set the configuration on only 1 host from a group.

i have

[aws_fws]
fw1.aws.local
fw2.aws.local

the variables are in the file

inventory/host_var/fw1/wireguard.yml

if I do

ansible-playbook wireguard.yml --limit="fw1.aws.local" 

everything is OK on 1 host... BUT

if I do the same thing in my playbook

- name: Configure Wireguard
  hosts: aws_fws
  gather_facts: yes

  vars:

  vars_files:

  roles:
    - role: wireguard
      when: inventory_hostname == 'fw1.aws.local'

I receive this error...

The full traceback is:
Traceback (most recent call last):
  File "/usr/local/bin/python_virtualenvs/ansible-4.3.0-devops_env9/lib/python3.8/site-packages/ansible/template/__init__.py", line 1100, in do_template
    res = j2_concat(rf)
  File "<template>", line 119, in root
  File "/usr/local/bin/python_virtualenvs/ansible-4.3.0-devops_env9/lib/python3.8/site-packages/jinja2/runtime.py", line 903, in _fail_with_undefined_error
    raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'wireguard__fact_public_key'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/python_virtualenvs/ansible-4.3.0-devops_env9/lib/python3.8/site-packages/ansible/plugins/action/template.py", line 146, in run
    resultant = templar.do_template(template_data, preserve_trailing_newlines=True, escape_backslashes=False)
  File "/usr/local/bin/python_virtualenvs/ansible-4.3.0-devops_env9/lib/python3.8/site-packages/ansible/template/__init__.py", line 1137, in do_template
    raise AnsibleUndefinedVariable(e)
ansible.errors.AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'wireguard__fact_public_key'
fatal: [fw1.aws.local]: FAILED! => changed=false
  msg: 'AnsibleUndefinedVariable: ''ansible.vars.hostvars.HostVarsVars object'' has no attribute ''wireguard__fact_public_key'''

fw1.yml

---
wireguard_interface: wg0
wireguard_address: 192.168.20.1/32
wireguard_endpoint: "{{ ansible_hostname }}.aws.local"
wireguard_dns: "aws.local"
wireguard_port: 51820

wireguard_unmanaged_peers:
  laptop:
    public_key: .......public_key......
    allowed_ips: 192.168.20.2/32"

wireguard_postup:
  - iptables -t nat -A POSTROUTING -o {{ ansible_default_ipv4.interface }} -j MASQUERADE
  - iptables -A FORWARD -i {{ wireguard_interface }} -j ACCEPT
  - iptables -A FORWARD -o {{ wireguard_interface }} -j ACCEPT
  - iptables -A INPUT -i {{ wireguard_interface }} -j ACCEPT
  - iptables -A OUTPUT -o {{ wireguard_interface }} -j ACCEPT

wireguard_postdown:
  - iptables -t nat -D POSTROUTING -o {{ ansible_default_ipv4.interface }} -j MASQUERADE
  - iptables -D FORWARD -i {{ wireguard_interface }} -j ACCEPT
  - iptables -D FORWARD -o {{ wireguard_interface }} -j ACCEPT
  - iptables -D INPUT -i {{ wireguard_interface }} -j ACCEPT
  - iptables -D OUTPUT -o {{ wireguard_interface }} -j ACCEPT

Flush handlers at the end of the role

This ensures the syncconf is called at the end of the role before moving on, otherwise it's only called at the end of the play.

The code would look like:

- name: Force all notified handlers to run
  meta: flush_handlers

Debian vanilla tasks fetches headers ahead of current kernel

The host I'm trying to target currently runs kernel 4.19.0-9-amd64, which is one patch (is this the correct name?) version behind the current release (4.19.0-10). As it's not a -cloud- tagged kernel, the debian-vanilla tasks attempt to fetch linux-headers-amd64, which pulls headers from a version other than my current kernel.

This makes wireguard's modprobe step fail, as it can't find the module for 4.19.0-9. Upgrading the kernel is not an option at the moment.

I can provide execution logs if necessary, just let me know.

Steps to reproduce

I managed to make this reproducible via Vagrant:

Vagrant.configure("2") do |config|
  config.vm.box = "debian/buster64"

  config.vm.define "vpn"

  config.vm.provision "ansible" do |ans|
    ans.playbook = "../config-management/site.yaml" # Or any yaml that just includes the wireguard role
    ans.groups = {
      "vpn" => ["vpn"],
    }
    ans.host_vars = {
      "vpn" => {
        "wireguard_address" => "10.0.0.2/24"
      }
    }
  end
end

Workaround diff

I managed to make the role work on my use case by applying the following diff. However, I do not know the consequences this change could have on other environments:

diff --git a/tasks/setup-debian-vanilla.yml b/tasks/setup-debian-vanilla.yml
index 0b6aa0b..b45ce72 100644
--- a/tasks/setup-debian-vanilla.yml
+++ b/tasks/setup-debian-vanilla.yml
@@ -19,7 +19,7 @@
   changed_when: False

 - set_fact:
-    kernel_header_version: "{{ ('-cloud-' in ansible_kernel) | ternary(ansible_kernel,dpkg_arch.stdout) }}"
+    kernel_header_version: "{{ ansible_kernel }}"

 - name: (Debian) Install kernel headers to compile Wireguard with DKMS
   apt:

AnsibleUndefinedVariable: HostVarsVars object has no attribute 'public_key'

Quite new to Ansible, trying to get wireguard set up between a number of machines but am getting an error as below (let me know if you need the entire log). Is there a step I'm missing here? There doesn't seem to be anything in the readme about setting public keys, and if I understand it correctly, the "Set public key fact" task should set this attribute.

TASK [wireguard : Set public key fact] ***************************************************************************************************
ok: [1.1.1.1]
ok: [2.2.2.2]
ok: [3.3.3.3]

TASK [wireguard : Create WireGuard configuration directory] ******************************************************************************
ok: [1.1.1.1]
ok: [2.2.2.2]
ok: [3.3.3.3]

TASK [wireguard : Generate WireGuard configuration file] *********************************************************************************
fatal: [1.1.1.1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'public_key'"}
fatal: [2.2.2.2]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'public_key'"}
fatal: [3.3.3.3]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'public_key'"}

Debian role fails (ansible_lsb)

Role fails to execute on my Debian hosts due to ansible_lsb being empty.

Is there a better way to detect Raspbian or maybe have a check to see if ansible_lsb is populated?
The host in question is running Buster and all LSB named packages are installed, lsb-base & lsb-release.

[ansible]$ ansible node1 -m setup -a 'filter=ansible_lsb'
node1 | SUCCESS => {
    "ansible_facts": {
        "ansible_lsb": {},
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false
}

Split the main task

I'm tempted to split the main task into various smaller chunks that would be more readable, would it be ok for you?

Generate configuration for non-Ansible peers

Similar to #63, cc @joneskoo but the other way around.

What do you think about an option where the role could generate ready-to-use WireGuard configuration files for devices that are not Ansible managed (for example phones)? The convenient way to do this is to generate the private key, template the configuration and then provide it to the end device.

For implementation, I am thinking about adding those hosts actually to the Ansible inventory but with connection local. Then run the role against them and have the config redirected into a "secret" directory on the Ansible controller. I have done something like this with https://github.com/ypid/ansible-packages already. Works.

See also #65 (comment) and #66 for my background.

NAT / keepalive logic

At the moment, according to the example in the readme, the wireguard endpoint behind a NAT should have keepalive configured. This configures keepalives from all hosts to the host behind NAT. But shouldn't it be the other way around? The NATed endpoint needs to send keepalives to the other endpoints to be reachable.
The problem is, if the NATed endpoint doesn't send any frames over wireguard, the tunnel never comes up, because the other endpoints can't send keepalives to the endpoint behind the NAT. That's problematic if an endpoint, or a network, behind the NAT should be accessible.

I think either the logic needs to be the other way around or maybe a ping from the device behind the nat to the other endpoints added to the role could help to initialize the tunnel.

Or do I miss something?

Execute lifecycle hooks upon reconfiguration to maintain consistency

Managing iptables rules is a common example of using wg-quick's lifecycle hooks PreUp, PostUp, PreDown and PostDown. However, when changing those rules and deploying this role again they're not executed, since just the plain wg configuration is synced. What's worse, after updating the configuration WireGuard can't even be restarted with wg-quick, since deleting a non-existing iptables rule results in an error, which in turn aborts the shutdown halfway through.

Suppose the following PostUp and PreDown rules are deployed:

wireguard_postup:
  - iptables -A FORWARD -i %i -j ACCEPT
  - iptables -A FORWARD -o %i -j ACCEPT
  - iptables -t nat -A POSTROUTING -i %i -o eth0 -j MASQUERADE
# Generate inverse iptables commands ("map" is needed because "replace" operates on strings)
wireguard_predown: "{{ wireguard_postup | reverse | map('replace', '-A', '-D') | list }}"

When I add a new rule (e.g. iptables -A INPUT -i %i -j REJECT) to wireguard_postup and deploy the role again, the configuration file contains two new lines: one PostUp and one PreDown entry (containing the new rule and its counterpart). When I try to restart WireGuard the counterpart of the new rule is already executed, leading to an error.

On the other hand, when I remove an iptables rule from wireguard_postup its counterpart is removed as well and won't be executed when restarting WireGuard, leaving the system in an unclean state.

One way of resolving this issue is to shut down the interface before updating the configuration file and start it up again after the update. This comes with several challenges, though:

  • Active connections will be interrupted (which might not be that big of a problem in a private network environment)
  • To achieve idempotence we don't want to shutdown WireGuard if the configuration didn't change
  • In order to run the *Down rules of the old (currently active) configuration we cannot update the configuration before shutting down the interface

Since there might be even more side effects I don't anticipate I'd like to start a discussion on that topic. Thank you in advance.

More robust error handling

I like this role and use it to manage a 20+ node mesh.
The main issue I have is that if one node fails for whatever reason, it's excluded from the configuration, and when the rest of the nodes are updated they don't contain the key & IP of the failed node. This causes the node to be excluded and lose connectivity.

Most of the times I've hit this problem, a node had some silly minor issue I wasn't aware of (last example: an expired cert for an extra apt source) and, unless I'm watching the script run, ready to cancel it, it ruins a well-running node.

I'd like to see an option where the play run is cancelled on the first error from a node.
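For what it's worth, Ansible's built-in any_errors_fatal play keyword already provides something like this; a sketch:

- hosts: vpn
  any_errors_fatal: true   # abort the whole play as soon as any host fails
  roles:
    - githubixx.ansible_role_wireguard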

CentOS 8

Hi... i'm trying to implement this role by default on CentOS 8 ...

kernel: 4.18
centos version: 8.3.2011

output

fatal: [firewall]: FAILED! => changed=false
  attempts: 10
  failed_when_result: true
  invocation:
    module_args:
      name: wireguard
      params: ''
      state: present
  msg: |-
    modprobe: FATAL: Module wireguard not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64
  name: wireguard
  params: ''
  rc: 1
  state: present
  stderr: |-
    modprobe: FATAL: Module wireguard not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

Ubuntu (and systemd) DNS specifics

Executive summary: when the DNS option is set, wg-quick does not run on Ubuntu without openresolv. Installing (and using) openresolv messes with the standard way resolution is handled (via systemd-resolved). Use systemd-resolve on such distros to properly inject WG DNS servers.

I'm working around this by keeping the DNS empty and using the following command in PostUp:
systemd-resolve -i wg0 --set-dns=10.99.1.10 --set-dns=10.99.1.11 --set-dns=10.99.1.12
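Expressed as host_vars for this role, that workaround might look like this sketch (DNS server IPs taken from the command above; wireguard_dns stays unset):

# Leave wireguard_dns unset and inject the DNS servers via PostUp instead
wireguard_postup:
  - systemd-resolve -i wg0 --set-dns=10.99.1.10 --set-dns=10.99.1.11 --set-dns=10.99.1.12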

Would be nice if the role detected this and implemented the workaround automatically.

Alternative is to use systemd's netdevs, but that's pretty heavy-handed change.

Feature request: add detection for Proxmox kernel

Description

It appears the DKMS kernel header detection doesn't account for the Proxmox kernel. Although Proxmox is derived from Debian, it has a kernel of its own (specifically, Proxmox uses the free Debian repos but a Ubuntu kernel fork with inclusions of ZFS and Ceph -- a weird mix, but ok).

Synopsis

You should see a fatal message in playbook task procedures:

TASK [githubixx.ansible_role_wireguard : (Debian) Install kernel headers for the currently running kernel to compile Wireguard with DKMS] ***********************************************************************
fatal: [storage]: FAILED! => {"changed": false, "msg": "No package matching 'linux-headers-5.4.78-2-pve' is available"}

Indeed, apt install linux-headers-5.4.78-2-pve doesn't show any positive result, and in fact it should be pve-headers.

Solution

Add the Proxmox kernel identifier pve-headers to the conditions that check kernel header installation.

Workaround

Skip that specific task when the distribution is Proxmox based, and ensure you manually installed the kernel headers.

Background

I wished to create my Proxmox cluster in a WireGuard based network fabric over the internet, because it seems you need a LAN IP to make corosync happy.

Adding Peers while wg interface is UP

Adding a peer to the Ansible config results in the existing wg interface being destroyed and restarted.
Instead it would be nice if just the peer was added to the already-existing configuration.

Generate static conf [no connect hosts]

Can you tell me if it is possible to generate keys and put them in a file without connecting via SSH? I have routers that require a specific setup.


Install requires specific version

Hi,

Ansible Galaxy complains and fails to find the version.

You need to specify the exact version: ansible-galaxy install githubixx.ansible_role_wireguard,3.2.0

 gr8pi-production git:(gr8pi-prod) ✗ ansible-galaxy install githubixx.ansible_role_wireguard      
- downloading role 'ansible_role_wireguard', owned by githubixx
 [WARNING]: - githubixx.ansible_role_wireguard was NOT installed successfully: Unable to compare role versions (2.0.1, 3.0.0, 3.0.1, 3.1.0, 3.2.0,
v1.0.2, v1.0.1, 2.0.0, v1.0.0) to determine the most recent version due to incompatible version formats. Please contact the role author to resolve
versioning conflicts, or specify an explicit role version to install.

Combined effort with DebOps?

Hey there, DebOps developer here. I found your role to be the best starting point to add WireGuard support to DebOps and would like to have your feedback on it.

For now, I see these viable options:

  1. Make the role itself auto detect (via Ansible local facts) when it runs against a DebOps managed host and behave in a compatible way. Additionally, as DebOps is a comprehensive approach to configuration management, I would need to maintain a wrapper role in DebOps that manages the firewall. An example of how this can look is DebOps's tor relay role: https://github.com/debops/debops/tree/master/ansible/roles/tor and its corresponding playbook:
    https://github.com/debops/debops/blob/master/ansible/debops-contrib-playbooks/service/tor.yml This is an intermediate option but it would avoid the forking that I don't like.

  2. Fork the role and make it a native DebOps role following https://docs.debops.org/en/master/dep/dep-0002.html

    Edit: That is kinda unavoidable. I started this work at https://github.com/ypid/ansible-wireguard/tree/prepare-for-debops

  3. We update the role and move it to DebOps :)

This is not urgent for me. I will open a improvement PR and a second one to add DebOps compatibility following option 1 and then you can have a look at the code.

Also note that DebOps has a role which solves the similar problem of setting up a distributed public key system: https://docs.debops.org/en/master/ansible/roles/tinc/index.html. I will look into this older role again to compare your design with it. Note that the tinc role has not been reworked since 2017, so it is not the most up-to-date one. There are more high-quality ones if you want to check one out.

See also b-m-f/WirtBot#61 for my strategy.

hide peers without endpoint to enable routing via other wireguard host

Hi!

In my current setup I use wireguard to manage very different hosts. Some are globally reachable servers, but most devices are behind firewalls. I need the possibility to access all devices from my own 'management' laptop, which is itself a device without a global endpoint. Therefore, I have one dedicated wireguard server that serves as a router between all NATed devices. This 'router' has my whole IP range assigned as AllowedIPs. However, the routing does not work if the NATed devices are set as peers without endpoints, because then routing requests (using ping) fail with ping: sendmsg: Destination address required.

Furthermore, devices without an endpoint also don't need a ListenPort, as this port will never be used.

I suggest:

  • to hide ListenPort, if the Endpoint is empty
  • to hide the whole Peer section for devices whose Endpoint is empty

Best regards,
Miroka

Make directory/file mode bits configurable

Currently the directory specified in wireguard_cert_directory has fixed permissions set to 0700. A wireguard_cert_directory_permission variable should be introduced with a default setting of 0700 to make it configurable while keeping the current setting as the default.

Also the file permissions of the certificate files should be configurable while using the current settings as defaults, so as not to change the current behaviour.
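In defaults form the proposal might look like this (hypothetical variable name as suggested above, not an existing variable):

# Keeps the current behaviour when left at the default
wireguard_cert_directory_permission: "0700"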

cannot redefine wireguard_port:

Hi,

I would like to connect a few servers behind NAT to my mesh.
They all have the same wireguard_endpoint:
but a different wireguard_port:

Subsequently, the ports are forwarded to the outside.

But in every generated wg0.conf
all the peers' Endpoint entries always have the same port as the ListenPort.
The endpoint address is the correct one.

No package matching 'linux-headers-4.19.0' is available

Got this error on my debian servers:

TASK [githubixx.ansible_role_wireguard : (Debian) Install kernel headers for the currently running kernel to compile Wireguard with DKMS] ***
fatal: [host1]: FAILED! => {"changed": false, "msg": "No package matching 'linux-headers-4.19.0' is available"}
fatal: [host2]: FAILED! => {"changed": false, "msg": "No package matching 'linux-headers-4.19.0' is available"}

Using ansible-role-wireguard version '8.4.0' from ansible galaxy.
Debian 10.11 with kernel 4.19.0.

I first thought it is similar to #62, but that fix should already be in, right?

SaveConfig is always true

Even when setting wireguard_save_config: "false" explicitly on each host or for the group, all of their configurations will have SaveConfig = true present.

no attribute 'public_key'

Hi,

I am losing my mind. What might I be doing wrong to get this on each host?

FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'public_key'"}

ansible.vars.hostvars.HostVarsVars object has no attribute wireguard__fact_public_key

I could see this error has been mentioned in previous issues, but none of their resolutions seem to help in my case. I hope someone here might have a bright idea.

I previously ran this role months ago successfully. Today I've formatted one of the servers in the network. Since I install it with Ansible, I've updated the role before running the playbook. Unfortunately I now face this error.

TASK [wireguard : Generate WireGuard configuration file] **********************************************************************************************************************************************************************************************************************************************************
fatal: [svr1.local.domain.net]: FAILED! => changed=false
  msg: 'AnsibleUndefinedVariable: ''ansible.vars.hostvars.HostVarsVars object'' has no attribute ''wireguard__fact_public_key'''
fatal: [svr2.local.domain.net]: FAILED! => changed=false
  msg: 'AnsibleUndefinedVariable: ''ansible.vars.hostvars.HostVarsVars object'' has no attribute ''wireguard__fact_public_key'''

My host file looks like the following:

    vpn:
      children:
        vpn_xymon:
          hosts:
            svr1.local.domain.net:
              wireguard_address: "10.8.0.9/24"
            svr2.local.domain.net:
              wireguard_address: "10.8.0.6/24"

What's wrong? :(

certs owned by root

Hi

I am running the role from Ubuntu 16.04 against a bunch of Ubuntu 16.04 machines.
The role fails on TASK [ansible-role-wireguard : Read private key]
with messages like
"Unable to find '/home/kisiel/wireguard/certs/hcloud1.vpn.private.key' in expected paths (use -vvvvv to see paths)"

"kisiel" is the user running Ansible.
The problem is that /home/kisiel/wireguard/ was created with ownership set to root.
Is this an error?

simple configuration

Hi again @githubixx

2 questions

How should a simple host_vars configuration look for a server and a client so that this role works?

If I have:

1. host_vars/aws_wg_server (the WireGuard server)

2. host_vars/on_premise_server (a client that connects to the WireGuard server)

On the server side I put:

    wireguard_address: "1.2.3.4/24"
    wireguard_endpoint: aws.wg.server

and I see the configuration on the server with the private key, but the public key from the peer is missing... so?
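
For reference, a minimal sketch of such host_vars using this role's variables (addresses and keepalive value are illustrative; wireguard_persistent_keepalive is an assumption for a client behind NAT, which leaves wireguard_endpoint empty):

    # host_vars/aws_wg_server
    wireguard_address: "10.8.0.1/24"
    wireguard_endpoint: "aws.wg.server"
    wireguard_port: 51820

    # host_vars/on_premise_server
    wireguard_address: "10.8.0.2/24"
    wireguard_endpoint: ""
    wireguard_persistent_keepalive: "30"  # keepalive for NAT traversal (variable name assumed)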

Wireguard dkms on Ubuntu 16.04

On Ubuntu 16.04 the task (Ubuntu) Ensure WireGuard DKMS package is removed removes wireguard and wireguard-dkms.
But the task (Ubuntu) Install wireguard package then reinstalls wireguard and wireguard-dkms.

`wireguard_cert_owner` demands local root access - is that necessary?

Hi,

Using your role as a non-root user leads to

TASK [githubixx.ansible_role_wireguard : Create WireGuard certificates directory] ***********************************************************************************************
fatal: [master-0.k8s.gr8pi.org -> localhost]: FAILED! => {"changed": false, "gid": 1000, "group": "ieugen", "mode": "0755", "msg": "chown failed: [Errno 1] Operation not permitted: b'/home/user/production/ansible/wireguard-certs'", "owner": "user", "path": "/home/user/production/ansible/wireguard-certs", "size": 4096, "state": "directory", "uid": 1000}

Does the permission change need to be done locally? If not, I think it would make the role much more portable.

    wireguard_cert_owner: "root"
    wireguard_cert_group: "root"

"Add WireGuard repository" fails on Ubuntu 18.04

TASK [githubixx.ansible_role_wireguard : (Ubuntu) Add WireGuard repository (for Ubuntu < 19.10)] *******************************************************************************

fatal: [bastion]: FAILED! =>  {
    "changed": false,
    "invocation": {
        "module_args": {
            "codename": null,
            "filename": null,
            "install_python_apt": true,
            "mode": null,
            "repo": "ppa:wireguard/wireguard",
            "state": "present",
            "update_cache": true,
            "validate_certs": true
        }
    },
    "msg": "failed to fetch PPA information, error was: HTTP Error 404: Not Found"
}

It seems ppa:wireguard/wireguard no longer exists.

I tried the command manually:

$ add-apt-repository ppa:wireguard/wireguard
Cannot add PPA: 'ppa:~wireguard/ubuntu/wireguard'.
The team named '~wireguard' has no PPA named 'ubuntu/wireguard'
Please choose from the following available PPAs:

This is on

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic

Support for multiple WireGuard VPNs on the same host.

Ok, I know I'm asking for a bit more here, but I really don't want to fork your project, and I'd prefer to collaborate here :)

Basically, we run on Hetzner too, and we have a Kubernetes cluster on bare-metal instances, with Ceph and 10G network cards.
So I'd like one WireGuard interface for Kubernetes, one for the Ceph frontend, and one for the Ceph backend.

I see how I can do it with your role, but it will not be backward compatible.

Basically:

    # instance-1 host_vars
    wireguard:
      wg0:
        address: "10.0.0.1/24"
        port: 51820
      ceph-backend:
        address: "10.0.1.1/24"
        port: 51821

    # instance-2 host_vars
    wireguard:
      wg0:
        address: "10.0.0.2/24"
        port: 51820
      ceph-backend:
        address: "10.0.1.2/24"
        port: 51821

And then I'd loop over these keys and derive the interface name from there (see the sketch below).
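
A sketch of what that loop could look like on the role side (assuming the dict structure above; this is a proposal, not the role's current behaviour):

    # sketch (proposal): one template run per interface in the 'wireguard' dict
    - name: Generate one configuration per WireGuard interface
      ansible.builtin.template:
        src: wg.conf.j2
        dest: "/etc/wireguard/{{ item.key }}.conf"
      loop: "{{ wireguard | dict2items }}"

Here item.key is the interface name (wg0, ceph-backend) and item.value carries its address and port.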

What do you think? Can I PR? (I'll do it anyway for me ;) )

Linux header installation fails on Raspberry pi

Installation fails on Raspberry Pi with Raspbian: setup-debian.yml tries to install "linux-headers-{{ dpkg_arch.stdout }}"; however, on Raspbian the package is simply called linux-headers (without the arch suffix).
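
A sketch of a possible special case (detecting Raspbian via the ansible_lsb fact is an assumption; dpkg_arch is the variable from the task named above):

    # sketch: fall back to the unsuffixed package name on Raspbian
    # (Raspbian detection via ansible_lsb is an assumption)
    - name: Install kernel headers
      ansible.builtin.apt:
        name: "{{ 'linux-headers' if ansible_lsb.id | default('') == 'Raspbian'
                  else 'linux-headers-' + dpkg_arch.stdout }}"
        state: present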

Stable config when running against a subset of the initial ansible_play_hosts

The current design is modeled around ansible_play_hosts:

    {% for host in ansible_play_hosts %}

This has one major downside: the role cannot be run against a single host, because doing so would remove all of the other peers from that host's configuration. I am currently looking into using inventory groups for this, like in https://docs.debops.org/en/master/ansible/roles/tinc/index.html, as part of #66. I opened this issue here because I think it is relevant for others as well. Note that there are more changes needed to solve this issue than just inventory groups; a sketch of the inventory-group approach follows below.
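
A minimal sketch of the inventory-group approach (the group name "vpn" is an assumption):

    {# build the peer list from a fixed inventory group instead of ansible_play_hosts #}
    {% for host in groups['vpn'] if host != inventory_hostname %}
    [Peer]
    PublicKey = {{ hostvars[host].wireguard__fact_public_key }}
    {% endfor %}

As noted above, this alone is not enough: hosts outside the current play will not have freshly registered facts, so their public keys would have to be persisted or cached somewhere.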

A common thing to do is to have a "site" playbook which runs all roles against a server and which can fully deploy it. This is kinda incompatible with this role currently.

Fixed in: https://github.com/ypid/ansible-wireguard/tree/prepare-for-debops

Configure additional peers

In my use case some peers share a network (I'll call it the "local network") and another peer is outside of that network. The outside peer is reachable from the peers in the local network, but not the other way around. This Ansible role already works for that use case, but I'd like to configure additional peers so that traffic inside the local network is not sent via the outside peer but directly to the other local peers.
