
ansible-rancher's Introduction

Ansible Playbooks and Roles for Rancher

Disclaimer: We use this as a base for our own and customer setups at Puzzle. It is a heavy work in progress and there is a lot that can be improved. Feel free to contribute; we are happy to assist.

These Ansible playbooks and roles can be used to deploy a Rancher Control Plane (with rke and helm) and to create and manage custom Kubernetes clusters attached to it.

Prerequisites

We recommend running these playbooks inside a pipenv.

All dependencies are managed with pipenv. To set up a virtual environment, run:

# Only if you don't have pipenv yet:
pip install --user pipenv

Switch to the virtual environment and install the dependencies into it:

pipenv shell --three
pipenv install
# Now you can run ansible-playbook commands inside this pipenv shell:
ansible-playbook ...

You can verify the installed dependencies using pipenv graph (inside the pipenv shell):

$ pipenv graph
ansible==2.9.12
  - cryptography [required: Any, installed: 3.2]
  <--- output truncated --->
  - PyYAML [required: Any, installed: 5.3.1]
jmespath==0.10.0
openshift==0.11.2
  - jinja2 [required: Any, installed: 2.11.2]
    - MarkupSafe [required: >=0.23, installed: 1.1.1]
  - kubernetes [required: ~=11.0.0, installed: 11.0.0]
    - certifi [required: >=14.05.14, installed: 2020.6.20]
  <--- output truncated --->
  - six [required: Any, installed: 1.15.0]
selinux==0.2.1
  - distro [required: >=1.3.0, installed: 1.5.0]
  - setuptools [required: >=39.0, installed: 50.3.2]

Inventory

Check inventories/site for a sample inventory.

There are two special ansible groups:

  • rke_rancher_clusters: Hosts in this group represent a Rancher Control Plane instance
  • custom_k8s_clusters: Hosts in this group represent a custom kubernetes cluster added to a Rancher Control Plane

Members (nodes) of the Rancher Control Plane and of the Kubernetes clusters are managed with the following Ansible groups.

Rancher Control Plane

For the Rancher Control Plane: assuming we have a Rancher Control Plane named cluster_rancher, we add the cluster_rancher host to the rke_rancher_clusters group and then add all of its nodes to the group rke_cluster_rancher, i.e. the Rancher Control Plane name with an rke_ prefix.

[rke_rancher_clusters]
cluster_rancher # Belongs to Ansible Group rke_cluster_rancher

[rke_cluster_rancher]
rancher01
rancher02
rancher03

Make sure to set at least the following vars:

Custom Kubernetes Cluster

For a custom Kubernetes cluster managed by a Rancher Control Plane: assuming our cluster is named mycluster, we create a host rancher_mycluster in the custom_k8s_clusters group (the cluster name with a rancher_ prefix). The member nodes of this cluster are then added to a group named mycluster. To run dedicated roles on specific nodes, you can use additional Ansible groups which are children of the mycluster group.

[custom_k8s_clusters]
rancher_mycluster

[mycluster:children]
mycluster_master
mycluster_worker

[mycluster_master]
master01

[mycluster_worker]
worker01

Make sure to set at least the following vars:

Playbooks

site.yml

Playbook to apply the docker, firewalld, rke_rancher_clusters & custom_rk8s_cluster roles. Check plays/prepare_k8s_nodes.yml, plays/deploy_rancher.yml & plays/deploy_k8s_cluster.yml for details.
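A typical invocation might look like the following. The inventory path is the sample from this README; the playbook location is an assumption based on the file name above, so adjust both to your layout:

```shell
# Enter the virtual environment first (see Prerequisites above),
# then run the site playbook against the sample inventory.
# Paths are assumptions; adjust them to your repository layout.
ansible-playbook -i inventories/site site.yml
```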

cleanup_k8snode.yml

With this playbook you can clean up a node which was already added to a Kubernetes cluster. Based on https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/
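To clean up a single node rather than a whole group, the standard Ansible `--limit` option can restrict the play to that host. This is a sketch assuming the sample inventory path and a node named worker01:

```shell
# Clean up one node before re-adding it to a cluster.
# --limit restricts execution to the given host; inventory path and
# hostname are assumptions, adjust to your setup.
ansible-playbook -i inventories/site cleanup_k8snode.yml --limit worker01
```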

Roles

docker

Simple role to install Docker. Check roles/docker/README.md for more details.

firewalld

This role configures firewalld depending on the k8s_role the node has (this behaviour can also be disabled if you want). Based on https://rancher.com/docs/rancher/v2.x/en/installation/options/firewall/

rke_rancher_clusters

Role to deploy a Rancher Control Plane with rke and helm. Check roles/rke_rancher_clusters/README.md for more details.

custom_rk8s_cluster

Role to create a custom Kubernetes cluster on a Rancher Control Plane and add nodes to the cluster. Check roles/custom_k8s_cluster/README.md for more details.

rancher_keepalived

Role to deploy keepalived DaemonSets on the Rancher Control Plane and on custom Kubernetes clusters. It provides one or multiple highly available virtual IPv4/IPv6 addresses to the corresponding cluster. Usually called directly from rke_rancher_clusters and custom_rk8s_cluster.

License

GPLv3

Author Information

  • Sebastian Plattner
  • Philip Schmid

ansible-rancher's People

Contributors

dependabot[bot], philipschmid, splattner, tuxpeople


ansible-rancher's Issues

Question: K8S roles

I have a question regarding the k8s roles assigned based on the inventory groups.

Given this sample inventory excerpt:

[mycluster_master]
node01
node02
node03

[mycluster_worker]
node01
node02
node03

All the nodes only get the worker role assigned. It completely ignores that they should (?) inherit the other two roles from the mycluster_master group. I solved this by adding all three roles to the mycluster_worker group.

I'm fully aware that this is

a) not the best idea of building a Kubernetes cluster
b) not supported by Rancher

but it's useful for testing and/or development scenarios. My question is: is this an issue with how these roles work, or is it the behavior of Ansible? I assume the latter, but I'm wondering.
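One way to investigate this is to inspect the variables a host actually ends up with after Ansible resolves group precedence, using the stock `ansible-inventory` CLI. This is a sketch assuming the sample inventory path from the README:

```shell
# Show the effective variables for a single host, including which
# role-related value survived group-variable precedence.
# Inventory path and hostname are assumptions; adjust to your setup.
ansible-inventory -i inventories/site --host node01

# Or dump the entire resolved inventory as YAML for comparison:
ansible-inventory -i inventories/site --list --yaml
```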

Docker version issue

Hi There

Sorry to bother you again. In the official Docker repo used by the Docker role, the newest Docker version is 20.10.1, which causes an unsupported Docker version error later in rke.

I worked around it by pinning the version in group_vars:

# grep docker inventories/group_vars/all.yml
docker_package: docker-ce-19.03.13

That's not a perfect solution, but I'm not sure what a good solution would be. I see the following possibilities:

  • Adding a version variable (one could think of a variable set to 19.03 which then uses yum to search for the newest 19.03 release)
  • Making use of the Docker install script from Rancher Labs (curl https://releases.rancher.com/install-docker/19.03.sh | sh), which again is not a perfect solution as it would run every time
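For the first option, the available 19.03 builds in the configured repository can be listed with yum before pinning one in group_vars. A sketch, assuming a CentOS/RHEL host with the Docker CE repo enabled:

```shell
# List all docker-ce builds the repo offers, filtered to 19.03 releases,
# so the newest one can be pinned explicitly.
yum --showduplicates list docker-ce | grep 19.03

# Then pin it in group_vars, as in the workaround above:
#   inventories/group_vars/all.yml:
#     docker_package: docker-ce-19.03.13
```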

What do you think?

BR
Thomas

cloudscale api token is mandatory for local keepalived

When I use keepalived_setup_env=local, the keepalived deployment fails:

TASK [rancher_keepalived : Create IP Failover DaemonSets for Internal IP] *********************************************************************************************************************************************
fatal: [rancher_mycluster]: FAILED! =>
  msg: |-
    The task includes an option with an undefined variable. The error was: 'cloudscale_api_token' is undefined

    The error appears to be in '/path/git/ansible-rancher/roles/rancher_keepalived/tasks/configure-keepalived.yml': line 105, column 3, but may
    be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:


    - name: Create IP Failover DaemonSets for Internal IP
      ^ here

As soon as I set the token to a random string (e.g. cloudscale_api_token="n/a"), it works. The string is not used at all, but it is required.
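The workaround described in this report can be passed as extra vars on the command line instead of editing group_vars. A sketch, assuming the sample inventory and playbook paths:

```shell
# Define the unused variable with a dummy value so the role's template
# renders; the value itself is never consumed in the local setup.
# Paths are assumptions; adjust to your layout.
ansible-playbook -i inventories/site site.yml \
  -e keepalived_setup_env=local \
  -e cloudscale_api_token="n/a"
```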

Ansible module missing

Hi there

Environment:

ansible 2.9.15
python 3.6.8
CentOS Stream release 8

The roles/rke_rancher_clusters/tasks/rancher.yml makes use of the k8s module. Although it should exist in Ansible 2.9, no such module was available on my machine. In Ansible 2.10, that particular module was moved to a community collection. I installed it with ansible-galaxy collection install community.kubernetes and changed k8s in roles/rke_rancher_clusters/tasks/rancher.yml to community.kubernetes.k8s. However, community.kubernetes.k8s has a dependency on the oc module, which was deprecated and removed in 2.9. They say one should use openshift_raw instead, which is eventually the same as community.kubernetes.k8s.
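The steps described above, as commands run on the Ansible controller (the pipenv part assumes the environment from the Prerequisites section):

```shell
# Install the collection that provides the k8s module for Ansible >= 2.9:
ansible-galaxy collection install community.kubernetes

# The k8s module also needs the openshift Python client on the controller;
# inside the project's pipenv:
pipenv install openshift
```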

The target servers are all CentOS 7.

How did you work around that?

Kind regards
Thomas
