This tutorial demonstrates how to do CICD with DataPower using GitOps on Kubernetes.
In this tutorial, you will:
- Create a Kubernetes cluster and image registry, if required.
- Install ArgoCD applications to manage cluster deployment of DataPower-related Kubernetes resources.
- Create an operational repository to store DataPower resources that are deployed to the Kubernetes cluster.
- Create a source Git repository to store the configuration and development artefacts for a virtual DataPower appliance.
- Run a Tekton pipeline to build, test, version and deliver the DataPower-related resources ready for deployment.
- Gain experience with the IBM-supplied DataPower operator and container.
At the end of this tutorial, you will have a solid foundation of GitOps and CICD for DataPower in a Kubernetes environment.
The following diagram shows a GitOps CICD pipeline for DataPower:
Notice:
- The git repository `dp01-src` holds the source configuration for the DataPower appliance `dp01`.
- The `dp01-src` repository also holds the source for a multi-protocol gateway on `dp01`.
- A Tekton pipeline uses the `dp01-src` repository to build, package, test, version and deliver resources that define the `dp01` DataPower appliance.
- If the pipeline is successful, the YAMLs that define `dp01` are stored in the operational repository `dp01-ops`, and the container image for `dp01` is stored in an image registry.
- Shortly after the changes are committed to the git repository, an ArgoCD application detects the updated YAMLs and applies them to the cluster to update the running `dp01` DataPower appliance.
This tutorial will walk you through the process of setting up this configuration:
- Step 1: Follow the instructions in this README to set up your cluster, ArgoCD and the `dp01-ops` repository.
- Step 2: Follow these instructions to create the `dp01-src` repository, run a Tekton pipeline to populate it, and interact with the new or updated DataPower appliance `dp01`.
Fork this repository from a template.

- In the `Repository name` field, specify `dp01-ops`.

This repository will be cloned to the specified GitHub account.
We're going to use the contents of this repository to configure our cluster for CICD and GitOps. We're going to use a copy of the `dp01-ops` repository on our local machine to do this.
Open a new terminal window. In it, store your Git user ID in the `GITUSER` environment variable, e.g. `odowdaibm`:

```shell
export GITUSER=odowdaibm
```
Now clone the repository to your local machine. It's best practice to store all git repositories under a common root folder called `git`. We will keep both the dp01 source and operational repositories under a subfolder `datapower`.

Issue the following commands to optionally create this folder structure and clone the `dp01-ops` repository:

```shell
mkdir -p $HOME/git/datapower
cd $HOME/git/datapower
git clone [email protected]:$GITUSER/dp01-ops.git
```
The contents of the `dp01-ops` repository will be synchronized with the Kubernetes cluster such that every object in the repository will be deployed to the cluster. Let's briefly explore the contents of this repository.
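This synchronization is a reconcile loop: ArgoCD continually compares the desired state held in git with the live state of the cluster and applies the difference. A minimal Python sketch of that idea, using illustrative resource names rather than real cluster objects:

```python
# Minimal sketch of the GitOps reconcile idea: desired state (from git) is
# compared with live state (from the cluster), and the difference becomes a
# list of actions to apply. Resource names are illustrative only.
def reconcile(desired, live):
    """Return the create/update/delete actions needed to make live match desired."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))
    for name in live:
        if name not in desired:
            actions.append(("delete", name))  # pruning, as with `prune: true`
    return actions

desired = {"namespace/dp01-dev": {"labels": {"name": "dp01-dev"}}}
live = {}
print(reconcile(desired, live))  # [('create', 'namespace/dp01-dev')]
```

Real controllers add watches, retries and ordering on top of this, but the core loop is just "diff desired against live, then act".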
Issue the following command:
```shell
cd dp01-ops
cat setup/namespaces.yaml
```

which shows the following YAML:
```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: dp01-dev
  labels:
    name: dp01-dev
```
This YAML defines the `dp01-dev` namespace, which will be used to store Kubernetes resources for this tutorial. We'll explore the contents of this namespace throughout the tutorial.
Let's use this YAML to define the namespace in our cluster:

```shell
oc apply -f setup/namespaces.yaml
```

which will create the `dp01-dev` namespace in the cluster:

```
namespace/dp01-dev created
```
We'll see how:
- the `dp01-mgmt` namespace is used to store generic Kubernetes resources relating to DataPower.
- the `dp01-dev` namespace is used to store specific Kubernetes resources relating to `dp01`.
As the tutorial proceeds, we'll see how the contents of the `dp01-ops` repository fully define all resources relating to our DataPower deployment. Moreover, we're going to set up the cluster such that it is automatically updated whenever this `dp01-ops` repository changes. This concept is called continuous deployment, and we'll use ArgoCD to achieve it.
Let's install ArgoCD to enable continuous deployment:
Use the following command to create a subscription for ArgoCD:
```shell
oc apply -f setup/argocd-operator-sub.yaml
```

which will create a subscription for ArgoCD:

```
subscription.operators.coreos.com/openshift-gitops-operator created
```
This subscription enables the cluster to keep up to date with new versions of ArgoCD. Each release has an install plan that is used to maintain it. In what might seem like a contradiction, our subscription creates an install plan that requires manual approval; we'll understand why a little later.
Explore the subscription using the following command:
```shell
cat setup/argocd-operator-sub.yaml
```
which details the subscription:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Manual
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```
See if you can understand each YAML node, referring to subscriptions if you need to learn more.
Let's find our install plan and approve it.
```shell
oc get installplan -n openshift-operators | grep "openshift-gitops-operator" | awk '{print $1}' | \
  xargs oc patch installplan \
    --namespace openshift-operators \
    --type merge \
    --patch '{"spec":{"approved":true}}'
```
which will approve the install plan:

```
installplan.operators.coreos.com/install-xxxxx patched
```

where `install-xxxxx` is the name of the ArgoCD install plan.
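The one-liner above is plain text processing: `grep` selects the row for the operator and `awk` prints its first column, the install plan name. A minimal Python sketch of the same extraction, run on illustrative output (the plan name is made up):

```python
# Mimic `grep "openshift-gitops-operator" | awk '{print $1}'` on sample
# `oc get installplan` output. The install plan name here is illustrative.
oc_output = """\
NAME            CSV                                APPROVAL   APPROVED
install-abc12   openshift-gitops-operator.v1.5.7   Manual     false"""

names = [line.split()[0]                               # awk '{print $1}'
         for line in oc_output.splitlines()
         if "openshift-gitops-operator" in line]       # grep
print(names)  # ['install-abc12']
```

The extracted name is then fed to `oc patch` by `xargs`, which sets `spec.approved` to `true`.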
ArgoCD will now install; let's verify the installation has completed successfully by examining the ClusterServiceVersion (CSV) for ArgoCD. A CSV is created for each installation - it holds the exact versions of all dependent software and relevant RBAC permissions.
```shell
oc get clusterserviceversion -n openshift-gitops
```

```
NAME                               DISPLAY                    VERSION   REPLACES                                          PHASE
openshift-gitops-operator.v1.5.7   Red Hat OpenShift GitOps   1.5.7     openshift-gitops-operator.v1.5.6-0.1664915551.p   Succeeded
```
Feel free to explore this CSV, replacing `x.y.z` with the installed version of ArgoCD:

```shell
oc describe csv openshift-gitops-operator.vx.y.z -n openshift-operators
```
Now patch the default ArgoCD instance to set `applicationInstanceLabelKey`, the label ArgoCD uses to track the resources it manages:

```shell
oc patch argocd openshift-gitops \
  --namespace openshift-gitops \
  --type merge \
  --patch '{"spec":{"applicationInstanceLabelKey":"argocd.argoproj.io/instance"}}'
```
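The `--type merge` flag applies a JSON merge patch (RFC 7386): the patch document is merged into the existing resource, leaving unmentioned fields untouched. A minimal Python sketch of those semantics, using an illustrative resource rather than a real ArgoCD object:

```python
# Minimal sketch of JSON merge-patch semantics (RFC 7386), which is what
# `oc patch --type merge` applies server-side. The resource is illustrative.
def merge_patch(target, patch):
    """Recursively merge patch into target; a null value deletes a key."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)      # null deletes the key
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

argocd = {"spec": {"server": {"route": {"enabled": True}}}}
patch = {"spec": {"applicationInstanceLabelKey": "argocd.argoproj.io/instance"}}
merged = merge_patch(argocd, patch)
print(merged["spec"]["applicationInstanceLabelKey"])  # argocd.argoproj.io/instance
print(merged["spec"]["server"])                       # untouched: {'route': {'enabled': True}}
```

This is why the patch only needs to carry the one field being changed; everything else in the `ArgoCD` resource is preserved.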
The final operator we need to add to the cluster is the DataPower operator. It is installed from a specific IBM catalog source, so we first need to add the catalog source to the cluster.
Issue the following command:
```shell
oc apply -f setup/catalog-sources.yaml
```

which will add the sources defined in this YAML to the cluster:

```
catalogsource.operators.coreos.com/ibm-operator-catalog created
```
Feel free to examine the catalog source YAML:

```shell
cat setup/catalog-sources.yaml
```

Notice how two catalog sources are added.
We are going to install the DataPower operator into its own namespace, `dp01-mgmt`. To allow it to access the `dp01-dev` namespace, we create an operator group.

Issue the following command to create the operator group `dp-operator-group`:
```shell
oc apply -f setup/dp-operator-group.yaml
```

which you will see created:

```
operatorgroup.operators.coreos.com/dp-operator-group created
```
You can examine this operator group with the following command:
```shell
cat setup/dp-operator-group.yaml
```

which will show you the details of the operator group:

Notice how this operator group can control the `dp01-dev` namespace, where we are going to initially deploy the `dp01` DataPower appliance.
Now that we've added these catalog sources, we can install the DataPower operator; we're familiar with the process, as it's the same as for ArgoCD.
Issue the following command:
```shell
oc apply -f setup/dp-operator-sub.yaml
```

which will create a subscription for the DataPower operator:

```
subscription.operators.coreos.com/datapower-operator created
```
Explore the subscription using the following command:
```shell
cat setup/dp-operator-sub.yaml
```
which details the subscription:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/datapower-operator.dp01-ns: ''
  name: datapower-operator
  namespace: openshift-operators
spec:
  channel: v1.6
  installPlanApproval: Manual
  name: datapower-operator
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: datapower-operator.v1.6.3
```
Notice how this operator is installed in the `dp01-mgmt` namespace. Note also the use of `channel` and `startingCSV` to be precise about the exact version of the DataPower operator to be installed.
Let's find our install plan and approve it.
```shell
oc get installplan -n openshift-operators | grep "datapower-operator" | awk '{print $1}' | \
  xargs oc patch installplan \
    --namespace openshift-operators \
    --type merge \
    --patch '{"spec":{"approved":true}}'
```
which will approve the install plan:

```
installplan.operators.coreos.com/install-xxxxx patched
```

where `install-xxxxx` is the name of the DataPower install plan.
Again, feel free to verify the DataPower installation with the following commands:
```shell
oc get clusterserviceversion -n dp01-mgmt
```
Replace `x.y.z` with the installed version of the DataPower operator in the following command:

```shell
oc describe csv datapower-operator.vx.y.z -n openshift-operators
```
Our final task is to install Tekton. With it, we can create pipelines that populate the operational repository `dp01-ops` using the DataPower configuration and development artefacts stored in `dp01-src`. Once populated by Tekton, ArgoCD will then synchronize these artefacts with the cluster to ensure the cluster is running the most up-to-date version of `dp01`.
Issue the following command to create a subscription for Tekton:
```shell
oc apply -f setup/tekton-operator-sub.yaml
```

which will create a subscription:

```
subscription.operators.coreos.com/openshift-pipelines-operator created
```
Again, this subscription enables the cluster to keep up to date with new versions of Tekton.
Explore the subscription using the following command:
```shell
cat setup/tekton-operator-sub.yaml
```
which details the subscription:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Manual
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```
Alternatively, Tekton can be installed manually, without the operator:

```shell
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.16.3/release.yaml
```
Let's find our install plan and approve it.
```shell
oc get installplan -n openshift-operators | grep "openshift-pipelines-operator" | awk '{print $1}' | \
  xargs oc patch installplan \
    --namespace openshift-operators \
    --type merge \
    --patch '{"spec":{"approved":true}}'
```
which will approve the install plan:

```
installplan.operators.coreos.com/install-xxxxx patched
```

where `install-xxxxx` is the name of the Tekton install plan.
Again, feel free to verify the Tekton installation with the following commands:

```shell
oc get clusterserviceversion -n openshift-pipelines
```

replacing `x.y.z` with the installed version of Tekton:

```shell
oc describe csv openshift-pipelines-operator-rh.vx.y.z -n openshift-operators
```
To allow Tekton to access GitHub, specifically to create YAMLs in the `dp01-ops` repository, we need to set up appropriate SSH keys for access.
Issue the following command to create an SSH key pair:
```shell
ssh-keygen -t rsa -b 4096 -C "[email protected]" -f ./.ssh/id_rsa -q -N ""
```
Issue the following command to create a `known_hosts` file for SSH access:

```shell
ssh-keyscan -t rsa github.com | tee ./.ssh/github-key-temp | ssh-keygen -lf - && cat ./.ssh/github-key-temp >> ./.ssh/known_hosts
```
Issue the following command to create a secret containing the SSH private key and `known_hosts` file:

```shell
oc create secret generic dp01-ssh-credentials -n dp01-dev --from-file=id_rsa=./.ssh/id_rsa --from-file=known_hosts=./.ssh/known_hosts --from-file=./.ssh/config --dry-run=client -o yaml > .ssh/dp-git-credentials.yaml
```
Issue the following command to create this secret in the cluster:
```shell
oc apply -f .ssh/dp-git-credentials.yaml
```
Finally, add this secret to the `pipeline` service account to allow it to use the `dp01-ssh-credentials` secret to access GitHub:
```shell
oc patch serviceaccount pipeline \
  --namespace dp01-dev \
  --type merge \
  --patch '{"secrets":[{"name":"dp01-ssh-credentials"}]}'
```
To allow the Tekton pipeline to push the generated DataPower Kubernetes resource YAMLs to the `dp01-ops` repository, we need to add the public key we've just generated to GitHub.
Use the following link in a browser to access the GitHub user interface to add an SSH public key:
https://github.com/settings/keys
You'll see the following page:
Copy your public key to the clipboard:

```shell
pbcopy < ./.ssh/id_rsa.pub
```
Click on `New SSH Key` and complete the following details:

- Add the name `dp01 SSH key`
- Paste the key into the box
- Hit the `Add SSH key` button
The Tekton pipeline now has access to your GitHub.
Finally, we're going to create an ArgoCD application to manage `dp01`:

```shell
cat environments/dev/argocd/dp01.yaml
```
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dp01-argo
  namespace: openshift-gitops
  annotations:
    argocd.argoproj.io/sync-wave: "100"
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: dp01-dev
    server: https://kubernetes.default.svc
  project: default
  source:
    path: environments/dev/dp01/
    repoURL: https://github.com/dp-auto/dp01-ops.git
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - Replace=true
```
Notice how this Argo application will monitor GitHub for resources to deploy to the cluster:

```yaml
source:
  path: environments/dev/dp01/
  repoURL: https://github.com/odowdaibm/dp01-ops.git
  targetRevision: main
```
See how:

- `repoURL: https://github.com/odowdaibm/dp01-ops.git` identifies the repository where the YAMLs are located
- `targetRevision: main` identifies the branch within the repository
- `path: environments/dev/dp01/` identifies the folder within the repository
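Taken together, those three fields pin down exactly one git location: a folder, on a branch, in a repository. A small Python sketch that composes them into the equivalent GitHub browse URL (the `watched_location` helper is hypothetical, written for this illustration):

```python
# Compose the three `source` fields of an Argo CD Application into the single
# git location it watches. `watched_location` is a hypothetical helper.
def watched_location(source):
    """Describe the git location an Argo CD Application source points at."""
    repo = source["repoURL"].removesuffix(".git")   # repository
    branch = source["targetRevision"]               # branch
    folder = source["path"].strip("/")              # folder within the repo
    return f"{repo}/tree/{branch}/{folder}"

source = {
    "path": "environments/dev/dp01/",
    "repoURL": "https://github.com/odowdaibm/dp01-ops.git",
    "targetRevision": "main",
}
print(watched_location(source))
# https://github.com/odowdaibm/dp01-ops/tree/main/environments/dev/dp01
```

Any YAML committed under that folder on that branch becomes part of the desired state ArgoCD enforces.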
Let's deploy this ArgoCD application to the cluster:
```shell
oc apply -f environments/dev/argocd/dp01.yaml
```

which will complete with:

```
application.argoproj.io/dp01-argo created
```
We now have an ArgoCD application monitoring our repository, e.g. https://github.com/odowdaibm/dp01-ops.git.
We can use the ArgoCD UI to look at the `dp01-argo` application and the resources it is managing.
Issue the following command to identify the URL for the ArgoCD login page:
```shell
oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{"https://"}{.spec.host}{"\n"}'
```

which will return a URL similar to this:

```
https://openshift-gitops-server-openshift-gitops.vpc-mq-cluster1-d02cf90349a0fe46c9804e3ab1fe2643-0000.eu-gb.containers.appdomain.cloud
```
Issue the following command to determine the ArgoCD password for the `admin` user:

```shell
oc extract secret/openshift-gitops-cluster -n openshift-gitops --keys="admin.password" --to=-
```
Log in to ArgoCD as `admin` with this password.
You will see the following screen: