Preparing the infrastructure for an OpenShift 4 installation by hand is a tedious job. To save you the effort, openshift-auto-upi provides a set of Ansible scripts that automate the infrastructure creation.
openshift-auto-upi is a separate tool, and is not in any way part of the OpenShift product. It enhances the openshift-installer by including automation for the following:
openshift-auto-upi comes with Ansible roles to provision OpenShift cluster hosts on the following target platforms:
openshift-auto-upi comes with Ansible roles to provision and configure:
Note that the infrastructure from the above list provisioned using openshift-auto-upi is NOT meant for production use. It is meant as a temporary stand-in for missing production-grade infrastructure. Using openshift-auto-upi to provision any of the infrastructure from the above list is optional.
- The Helper host is a (virtual) machine that you must provide. It is the machine from which you will run the openshift-auto-upi Ansible scripts. By default, any provisioned infrastructure (DHCP server, DNS server, ...) will also be installed on the Helper host.
- The Helper host requires access to the Internet.
- It is strongly discouraged to use openshift-auto-upi to provision infrastructure components on a bastion host. Services provisioned by openshift-auto-upi are not meant to be exposed to the public Internet.
- If your goal is to deploy OpenShift on your laptop, you can run the openshift-auto-upi directly on your laptop and use the local Libvirt as your target platform.
- OpenShift Hosts will be provisioned for you by openshift-auto-upi unless your target platform is bare metal.
openshift-auto-upi assumes that OpenShift hosts are assigned fixed IP addresses. This is accomplished by pairing the hosts' MAC addresses with IP addresses in the DHCP server configuration. The DHCP server then always assigns the same IP address to a specific host.
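As an illustrative sketch of this pairing, a dnsmasq-based DHCP server maps MAC addresses to fixed IP addresses with `dhcp-host` entries like the following (the MAC addresses, IP addresses, and hostnames below are made up):

```
# Assign a fixed IP address and hostname to each OpenShift host by MAC address
dhcp-host=52:54:00:a1:b2:c3,192.168.150.10,bootstrap
dhcp-host=52:54:00:a1:b2:c4,192.168.150.11,master-0
dhcp-host=52:54:00:a1:b2:c5,192.168.150.12,worker-0
```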
Note that in order to use a DHCP and/or PXE server installed on the Helper host, the Helper host and all of the OpenShift hosts must be provisioned on the same layer 2 network. If you don't use these services, it is sufficient to have a working IP route between the Helper host and the OpenShift hosts.
If the DNS server is managed by openshift-auto-upi, a DNS name will be created for each OpenShift host. These DNS names follow the scheme:
<hostname>.<cluster_name>.<base_domain>
Note that these names are created only for your convenience. openshift-auto-upi doesn't rely on their existence as they are not required for installing OpenShift.
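For example, with `cluster_name` set to `mycluster` and `base_domain` set to `example.com` (both values are made up here), a host named `master-0` would get the following DNS name:

```shell
# Hypothetical example values for the scheme <hostname>.<cluster_name>.<base_domain>
hostname=master-0
cluster_name=mycluster
base_domain=example.com
echo "${hostname}.${cluster_name}.${base_domain}"   # prints master-0.mycluster.example.com
```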
Here is a sample libvirt network configuration. It instructs libvirt not to provide DNS and DHCP servers for this network (DNS is disabled explicitly, DHCP by omitting the <dhcp> element). Instead, DNS and DHCP servers for this network will be provided by openshift-auto-upi.
<network>
  <name>default</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='default' stp='on' delay='0'/>
  <dns enable='no'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
  </ip>
</network>
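Assuming the XML above is saved as `default.xml` (a filename chosen here for illustration), the network can be registered with the standard virsh commands. This is a sketch; the first two commands are only needed if a network named `default` already exists:

```shell
virsh net-destroy default        # stop the running network, if any
virsh net-undefine default       # remove the old definition, if any
virsh net-define default.xml     # register the customized definition
virsh net-autostart default      # start the network automatically on boot
virsh net-start default          # start the network now
```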
The dependency diagram below depicts the dependencies between some of the openshift-auto-upi Ansible playbooks. Execute the Ansible playbooks in dependency order. The following sections describe the installation process in more detail.
There are two options to create a Helper Host:
- Create a Helper Host virtual machine. The recommended Helper Host machine size is 1 vCPU, 4 GB RAM, and 10 GB of disk space. You have to install one of the supported operating systems on this machine.
- If an existing machine already runs one of the supported operating systems, you can use that machine as your Helper Host.
Supported operating systems for the Helper Host are:
- Red Hat Enterprise Linux 7
- Red Hat Enterprise Linux 8
- Fedora release >= 31
Before continuing with the next steps, follow the basic configuration steps described here.
$ yum install git
$ yum install ansible
$ git clone https://github.com/noseka1/openshift-auto-upi.git
$ cd openshift-auto-upi
If you are installing OpenShift in a restricted network, you will need to create a local mirror registry. This registry will contain all OpenShift container images required for the installation. openshift-auto-upi automates the creation of the mirror registry by implementing the steps described in Creating a mirror registry. To set up the mirror registry:
$ cp inventory/group_vars/all/infra/mirror_registry.yml.sample \
inventory/group_vars/all/infra/mirror_registry.yml
$ vi inventory/group_vars/all/infra/mirror_registry.yml
$ ansible-playbook mirror_registry.yml
Create custom openshift_install_config.yml configuration:
$ cp inventory/group_vars/all/openshift_install_config.yml.sample \
inventory/group_vars/all/openshift_install_config.yml
$ vi inventory/group_vars/all/openshift_install_config.yml
Create custom openshift_cluster_hosts.yml configuration:
$ cp inventory/group_vars/all/openshift_cluster_hosts.yml.sample \
inventory/group_vars/all/openshift_cluster_hosts.yml
$ vi inventory/group_vars/all/openshift_cluster_hosts.yml
Download OpenShift clients using Ansible:
$ ansible-playbook clients.yml
Note that the dnsmasq.yml configuration file is shared between the DHCP, DNS, and PXE servers.
$ cp inventory/group_vars/all/infra/dnsmasq.yml.sample inventory/group_vars/all/infra/dnsmasq.yml
$ vi inventory/group_vars/all/infra/dnsmasq.yml
$ cp inventory/group_vars/all/infra/dhcp_server.yml.sample inventory/group_vars/all/infra/dhcp_server.yml
$ vi inventory/group_vars/all/infra/dhcp_server.yml
Provision DHCP server on the Helper host using Ansible:
$ ansible-playbook dhcp_server.yml
Note that the dnsmasq.yml configuration file is shared between the DHCP, DNS, and PXE servers.
$ cp inventory/group_vars/all/infra/dnsmasq.yml.sample inventory/group_vars/all/infra/dnsmasq.yml
$ vi inventory/group_vars/all/infra/dnsmasq.yml
$ cp inventory/group_vars/all/infra/dns_server.yml.sample inventory/group_vars/all/infra/dns_server.yml
$ vi inventory/group_vars/all/infra/dns_server.yml
Provision DNS server on the Helper host using Ansible:
$ ansible-playbook dns_server.yml
A PXE server can be used for booting OpenShift hosts when installing on the bare metal or libvirt target platform. Installation on vSphere doesn't use PXE boot at all.
Note that the dnsmasq.yml configuration file is shared between the DHCP, DNS, and PXE servers.
$ cp inventory/group_vars/all/infra/dnsmasq.yml.sample inventory/group_vars/all/infra/dnsmasq.yml
$ vi inventory/group_vars/all/infra/dnsmasq.yml
Provision PXE server on the Helper host using Ansible:
$ ansible-playbook pxe_server.yml
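For reference, a dnsmasq-based PXE setup typically boils down to a handful of directives like the following sketch (the paths and bootloader filename here are common defaults, not necessarily the exact ones the playbook uses):

```
# Enable the built-in TFTP server and point BIOS clients at the PXELINUX bootloader
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-boot=pxelinux.0
```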
The Web server is used to host installation artifacts such as ignition files and machine images. You can provision a Web server on the Helper host using Ansible:
$ ansible-playbook web_server.yml
Provision load balancer on the Helper host using Ansible:
$ ansible-playbook loadbalancer.yml
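The load balancer fronts the standard OpenShift ports: 6443 (Kubernetes API), 22623 (machine config server), and 80/443 (ingress). As an illustration, an HAProxy configuration for the API frontend might look like the following sketch (backend names and addresses are made up):

```
# Balance API traffic across the bootstrap and master nodes in TCP mode
frontend api
    bind *:6443
    mode tcp
    default_backend api_backend

backend api_backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.150.10:6443 check
    server master-0  192.168.150.11:6443 check
    server master-1  192.168.150.12:6443 check
    server master-2  192.168.150.13:6443 check
```

Once the bootstrap process completes, the bootstrap entry is removed from the backend.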
If you used openshift-auto-upi to deploy a DNS server, you may want to configure the Helper host to resolve OpenShift host names using this DNS server:
$ cp inventory/group_vars/all/infra/dns_client.yml.sample inventory/group_vars/all/infra/dns_client.yml
$ vi inventory/group_vars/all/infra/dns_client.yml
Configure the NetworkManager on the Helper host to forward OpenShift DNS queries to the local DNS server. Note that this playbook will issue systemctl restart NetworkManager to apply the configuration changes.
$ ansible-playbook dns_client.yml
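Forwarding of cluster queries can be achieved with NetworkManager's dnsmasq plugin. A sketch of the resulting configuration follows; the file paths, cluster domain, and DNS server address are examples, not necessarily what the playbook writes:

```
# /etc/NetworkManager/conf.d/openshift.conf (example path): enable the dnsmasq plugin
[main]
dns=dnsmasq

# /etc/NetworkManager/dnsmasq.d/openshift.conf (example path):
# forward queries for the cluster domain to the local DNS server
server=/mycluster.example.com/192.168.150.1
```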
Create your install-config.yaml file:
$ cp files/common/install-config.yaml.sample files/common/install-config.yaml
$ vi files/common/install-config.yaml
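The install-config.yaml follows the upstream openshift-installer format. A minimal sketch for a UPI bare metal install looks like this (all values are examples; consult the sample file for the authoritative template):

```yaml
apiVersion: v1
baseDomain: example.com            # example value
metadata:
  name: mycluster                  # example cluster name
compute:
- name: worker
  replicas: 0                      # in UPI, workers are provisioned outside the installer
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                         # UPI on bare metal uses the 'none' platform
pullSecret: '<your pull secret>'
sshKey: '<your public ssh key>'
```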
Kick off the OpenShift installation by issuing the command:
$ ansible-playbook openshift_baremetal.yml
Create custom libvirt.yml configuration:
$ cp inventory/group_vars/all/infra/libvirt.yml.sample inventory/group_vars/all/infra/libvirt.yml
$ vi inventory/group_vars/all/infra/libvirt.yml
Create your install-config.yaml file:
$ cp files/common/install-config.yaml.sample files/common/install-config.yaml
$ vi files/common/install-config.yaml
Kick off the OpenShift installation by issuing the command:
$ ansible-playbook openshift_libvirt.yml
Create custom vsphere.yml configuration:
$ cp inventory/group_vars/all/infra/vsphere.yml.sample inventory/group_vars/all/infra/vsphere.yml
$ vi inventory/group_vars/all/infra/vsphere.yml
Create your install-config.yaml file:
$ cp files/common/install-config.yaml.sample files/common/install-config.yaml
$ vi files/common/install-config.yaml
Kick off the OpenShift installation by issuing the command:
$ ansible-playbook openshift_vsphere.yml
Add the new hosts to the list of cluster hosts. At the same time, remove (comment out) the bootstrap host from the list to prevent the Ansible scripts from powering the bootstrap node back on:
$ vi inventory/group_vars/all/openshift_cluster_hosts.yml
If you are adding infra hosts and you use the load balancer managed by openshift-auto-upi, refresh the load balancer configuration by re-running the Ansible playbook:
$ ansible-playbook loadbalancer.yml
Re-run the platform-specific playbook to install the new cluster hosts:
$ ansible-playbook openshift_<baremetal|libvirt|vsphere>.yml
To allow the new nodes to join the cluster, you may need to sign their CSRs:
$ oc get csr
$ oc adm certificate approve <name>
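A joining node typically generates two rounds of CSRs (a client CSR, then a serving CSR), so check back after approving the first batch. The OpenShift documentation offers a convenience one-liner that approves all pending CSRs at once:

```shell
# Approve every CSR that has no status yet (i.e. is still pending)
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```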
- Implement Libvirt using fw_cfg
- Support oVirt
- Add documentation on the vm boot order: disk and then network
- Installing python dependencies on RHEL7 (e.g. python-pyvmomi) can be a challenge
- IPMI can be tested on virtual machines using VirtualBMC
- Check Ansible code using ansible-lint *.yml
Projects similar to openshift-auto-upi: