
openshift-deployment's Introduction

openshift-deployment

End to end walkthrough of deploying and using Reloader and Mosquitto in OpenShift

If you're reading this it's likely that you are just getting started with the Operate First team or you're getting started with Site Reliability Engineering concepts. A little about myself: as of writing this (Dec 2021), I have been with Red Hat for about 8 months where I started as an intern working on a Massachusetts Open Cloud security project. As of October 2021, I am a full time Site Reliability Engineer (SRE) working with the Operate First team. This means I work a lot with OpenShift, and in this walk-through I will take you through my first complete project deployment.

What exactly is an SRE? Well it's a rather broad title but here is the definition of site reliability engineering from the Operate First website: "SRE is a software engineering approach to manage operations for systems, applications and services. We use software as a tool to manage systems, solve problems, and automate operations tasks." In the context of Operate First we deploy, manage and improve applications and resources in OpenShift cloud environments.

And what exactly is Operate First? I'll once again refer to a quote from the Operate First website: "Operate First is a concept where open source developers bring their projects to a production cloud during development where they can interact with a cloud provider’s operators and gain valuable feedback on the operational considerations of their project."

For more info check out: Operate First

To see more of what it's like to be a part of the Operate First/SRE team, you should join the Operate First Slack: OperateFirst slack. Additionally, check out the repositories here: Operate First Github Repo. More specifically, you can look at issues in the apps repo, for instance: apps repo issues

If you’re anything like me, the best way to actually learn this stuff is by jumping in headfirst and working with it. I’ll run you step by step through my own first successful complete deployment to the operate-first team. For this example we’ll be using Reloader as our case study.

Here is the link to the Reloader Github repository: Reloader

First of all, what is Reloader and why is it useful to us? Reloader essentially allows us to monitor for changes to configMap and secret manifests within our cluster. When changes are detected, the Reloader pod triggers the affected pods to be redeployed on a rolling basis. Makes sense, right? If you're anything like me when I got started, the previous two sentences sound dreadfully confusing. For right now, what Reloader does and all of its specifics are not super important; first we need to set some things up so we can actually use it. As we go, we'll cover all the necessary terminology and concepts to flesh this out.

In the following sections I'm going to run through setting up an OpenShift cluster, deploying applications in said cluster, and everything that goes along with it.

The final versions of each of the manifests we'll be using are in the manifests directory, and you'll find the walkthrough broken up into issues in the issues tab of the repository. Start with issue #1 and work your way up.

If you have any questions or issues throughout you can contact me at [email protected].

I hope this guide will help you get started on your journey with OpenShift and Operate First!

-Dylan

openshift-deployment's People

Contributors: dystewart

Stargazers: Bryan Montalvan

openshift-deployment's Issues

Introduction to Kustomize

If you are familiar and comfortable with Kustomize you can safely skip this section.
If you are new to Kustomize or just need a basic refresher, here are some resources that I and a few other beginners found very helpful:

  1. Kustomize installation docs
  2. Getting started with Kustomize tool for Kubernetes
  3. Kustomize tutorial with instructions & examples
  4. Declarative Management of Kubernetes Objects Using Kustomize

Kustomize is an essential tool that helps us declaratively deploy our applications in our OpenShift environment by bundling all of our manifests together. That is, it allows us to easily specify application configurations and fine-tune how the applications are deployed and managed. As you can tell, this idea of declarative deployment is a very powerful one, since it removes the need for an administrator to make configuration changes to an application every time we deploy it.
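To make this concrete, here is a minimal sketch of a kustomization.yaml; the filenames deployment.yaml and config.yaml are hypothetical manifests assumed to sit in the same directory:

```yaml
# kustomization.yaml: a minimal sketch bundling two hypothetical manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml   # hypothetical app deployment manifest
  - config.yaml       # hypothetical configMap manifest

namespace: my-app     # applied to every bundled resource
```

Running kubectl apply -k . from the directory containing this file builds and applies the whole bundle in one shot.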

Make sure you're comfortable with Kustomize before moving on as we'll be using it early and often in this walkthrough.

Deploying an application (Eclipse Mosquitto)

Now to get into the nitty gritty. We are going to deploy an application in our OpenShift cluster to see how it works. The application we'll be using is called Eclipse Mosquitto, an open source MQTT broker. We don't need to worry too much about what it does because we won't be using it in depth, but you can learn more about it here: Eclipse Mosquitto

It's important to remember that everything in OpenShift is defined by a yaml manifest. So what we need to do is create a directory somewhere on our local machine where we will gather and work with the manifests we need to deploy, so go ahead and set that directory up. Before we start though, we want to make sure we know all the Kubernetes terminology we'll be using to deploy this application. I'll include each of the files we'll be using, along with a description, so you can see what they look like. Copy and paste them into your working directory as they appear.

namespace
When we are working we always need to be aware of what namespace we are working in. A namespace allows us to scope our resources to avoid things like naming conflicts. This in turn allows multiple teams or applications to use the cluster. (NOTE: a namespace is also called a project within OpenShift)

More on namespaces: kubernetes namespaces

Let's create a new namespace that we'll deploy mosquitto in:
$ oc new-project mosquitto
Now your OpenShift CLI will be inside your new namespace.
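As an aside, since this walkthrough leans heavily on declarative configuration, a namespace can also be declared as a manifest and applied with oc apply. A minimal sketch, in plain Kubernetes terms (oc new-project additionally sets up OpenShift project metadata):

```yaml
# namespace.yaml: declarative near-equivalent of `oc new-project mosquitto`
apiVersion: v1
kind: Namespace
metadata:
  name: mosquitto
```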

deployment
Another critical object we'll be using is a deployment. We use a deployment to create and configure certain aspects of pods. A pod is essentially a container that runs our application. Deployments can also be used to spin up extra replica pods if for instance resource usage reaches a certain threshold.

More on deployments: kubernetes deployments

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
  labels:
    app: mosquitto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: quay.io/dystewar/mosquitto
          ports:
            - containerPort: 1883
          volumeMounts:
            - name: mosquitto-conf
              mountPath: /mosquitto/config
            - name: mosquitto-secret
              mountPath: /mosquitto/secret  
      volumes:
        - name: mosquitto-conf
          configMap:
            name: mosquitto-config-file
        - name: mosquitto-secret
          secret:
            secretName: mosquitto-secret-file

secrets
We will be mounting a secret in our Mosquitto pod for testing Reloader later on. A secret is a Kubernetes resource that holds some sensitive info like credentials or a token. The benefit of a secret is that we need not include sensitive data in our application code.

More on secrets: kubernetes secrets

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mosquitto-secret-file
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
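A quick note on those data values: they are base64-encoded, not encrypted. The two values above are just the encodings of admin and 1f2d1e2e67df, which you can reproduce (or decode) with the base64 command:

```shell
# Encode values for secret.yaml (printf avoids a trailing newline in the encoding)
printf 'admin' | base64           # YWRtaW4=
printf '1f2d1e2e67df' | base64    # MWYyZDFlMmU2N2Rm

# Decode to double-check what a secret contains
printf 'YWRtaW4=' | base64 -d     # admin
```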

configMap
We will be using a configMap to pass some configuration code into our Mosquitto pod via an environment variable.

More on configMaps: kubernetes configMap

config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config-file
data:
  mosquitto.conf: |
    log_dest stdout
    log_type all
    log_timestamp true
    listener 9001

If you take a look at our deployment.yaml you can see that it depends on a couple of volume mounts, namely our secret and our configMap:

deployment.yaml

...
containers:
        - name: mosquitto
          image: quay.io/dystewar/mosquitto
          ports:
            - containerPort: 1883
          volumeMounts:
            - name: mosquitto-conf
              mountPath: /mosquitto/config
            - name: mosquitto-secret
              mountPath: /mosquitto/secret  
      volumes:
        - name: mosquitto-conf
          configMap:
            name: mosquitto-config-file
        - name: mosquitto-secret
          secret:
            secretName: mosquitto-secret-file
...

So we need to create the secret and configMap before we can actually create our deployment successfully so let's do it:

oc apply -f secret.yaml
oc apply -f config.yaml

Now double check that the resources were indeed created:
oc get configmaps
oc get secrets
You should see your secret (mosquitto-secret-file) and your configMap (mosquitto-config-file) listed in the results. These are the same names we see in the config.yaml and secret.yaml files; you can find them under the metadata.name field in each file.

Finally we can create our deployment to actually get our Mosquitto pod up and running:
oc apply -f deployment.yaml
Now we can see the deployment:
oc get deployment
And we can also see the pod created by our deployment:
oc get pods

We can see all this via the web UI as well by selecting our mosquitto project from the projects tab. If you click on workloads you'll see your deployment, and by clicking on your deployment you will see the pod.

Now Mosquitto is up and running in the cluster!

Setting up Quicklab

The first thing we need to do is set up an OpenShift cluster in quicklab. We will deploy all of our applications and work pretty much exclusively within this environment for now. For more info on OpenShift I recommend you check out the docs here: OpenShift Docs.

To set up the cluster head to https://quicklab.upshift.redhat.com. NOTE: You will need to be on the Red Hat VPN to access the url.

Once signed in, click on the "New Cluster" button and make the selections you see below.

(screenshots: the New Cluster form with the required product and version selections)

Once everything is correctly setup click the button to provision your cluster. The provisioning process will take a few minutes.

Once the cluster is provisioned we need to install a bundle that will allow us to access it via a nice web UI. This is the same web UI that is used for other Operate-First clusters such as the Smaug cluster.

Once provisioning is complete, click on your cluster and click on the "New Bundle" button under "Product Information". Click the drop-down menu and select openshift4upi. A bundle configuration screen will appear; you can leave all the default options and click submit. Once the bundle is installed you'll see that a bunch of information has populated the "Cluster Information" section. You'll see something like the image below.

(screenshot: the Cluster Information section after the bundle install)

Most relevant to us for now are the "OpenShift URL" and "OpenShift Credentials" sections, where you will see something like:

OpenShift URL: https://console-openshift-console.apps.testing.lab.upshift.rdu2.redhat.com


OpenShift Credentials: (username: password)
kubeadmin : your_password

You'll use the kubeadmin : your_password pair to login at your OpenShift URL.

Once you're logged in at the web UI, click the kube:admin drop-down at the top right of your screen and select 'Copy login command'.

Then in the tab that was just opened you'll want to copy and paste the oc login command. Here's what mine looks like for reference:
(screenshot: the oc login command with server URL and token)

Now that you're all logged in, you can start working with your cluster. In the next section we'll see how you can deploy an application (Eclipse Mosquitto).

Testing Reloader with Mosquitto

The final thing we need to do is update our Mosquitto deployment with annotations that Reloader uses to redeploy pods when a specified configmap or secret changes.
Check out the Reloader docs for more info on how it works: Reloader

To track the configmap and secret resources of our Mosquitto pod we need to add the following annotations in the metadata section of the Mosquitto deployment.yaml:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "mosquitto-secret-file"
    configmap.reloader.stakater.com/reload: "mosquitto-config-file"
  name: mosquitto
  labels:
    app: mosquitto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: quay.io/dystewar/mosquitto
          ports:
            - containerPort: 1883
          volumeMounts:
            - name: mosquitto-conf
              mountPath: /mosquitto/config
            - name: mosquitto-secret
              mountPath: /mosquitto/secret  
      volumes:
        - name: mosquitto-conf
          configMap:
            name: mosquitto-config-file
        - name: mosquitto-secret
          secret:
            secretName: mosquitto-secret-file

Once you update the deployment.yaml manifest, click save. Finally we can check whether our Reloader pod is working. In the web UI, open the Secrets tab while in your Mosquitto project and select the mosquitto-secret-file resource. Take a look at the yaml of the secret resource and change the username or password to something else. As always, click save to apply the changes. Now look back at your Reloader pod logs and you should see something like this:
(screenshot: Reloader pod logs showing the detected secret change)

The change was picked up by Reloader! If you look at the Mosquitto pod you'll see that it was just recreated to reflect the changes. This redeployment was triggered by Reloader!

You can also test changes to the Mosquitto configMap by making a change to the configMap yaml, like below:
(screenshot: editing the mosquitto-config-file yaml)

Now looking back at the Reloader logs:
(screenshot: Reloader pod logs showing the detected configMap change)

Reloader detected our configmap changes too! And there you have it. We have successfully configured and deployed 2 applications in our OpenShift cluster! Most basic application deployments will be spun up in a similar way, and you can take what you've seen here to help with debugging your next deployment.

Deploying Reloader

Now that our Mosquitto pod is deployed we can set up a Reloader pod. This Reloader instance will monitor all the namespaces in our cluster, looking for annotations in other applications' deployment manifests. These annotations tell Reloader which configMaps and secrets we want to track; when a change is detected in one of them, Reloader redeploys the affected deployments in order to keep our pods up to date automatically.

The first thing to do is ensure you're logged into your cluster, then create a new local directory to work on the reloader deployment. Also, let's make a new namespace called reloader in our cluster:
$ oc new-project reloader

To deploy Reloader we will use the Kustomize install instructions on the Reloader Github page: Reloader
So we will just copy this code into a kustomization.yaml file for now:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - https://github.com/stakater/Reloader/deployments/kubernetes

namespace: reloader

Let's see if we can deploy our reloader pod with our kustomize file as is. To do that make sure you're in the directory with your kustomization.yaml and run:

$ kubectl apply -k .
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created

Okay that's a good start, we didn't get any errors on the command line. Let's double check that everything is up and running:
$ oc get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
reloader-reloader   0/1     0            0           6m44s
Okay, so something is preventing our deployment from coming up. When debugging, I sometimes find it easiest to work in the web UI, but there are also highly detailed ways to debug using oc or kubectl at the command line. Time to take a closer look at the deployment using the web UI.

One of the first places to look for hints when something goes wrong with a deployment is the Conditions section of the deployment's Details tab. Here we can see a security context issue is causing our deployment to fail:
(screenshot: deployment conditions showing the security context error)
Specifically, the message tells us we need to look at spec.containers[0].securityContext in our deployment.yaml:
(screenshot: the securityContext section of the deployment yaml)

The reason we're getting this error is that the Reloader deployment is configured to run as a user ID that isn't permitted in our environment. To get around this we can simply edit the security context (under spec.template.spec in the deployment yaml manifest) from this:

...
securityContext: 
        runAsNonRoot: true
        runAsUser: 65534
...

To this:

...
securityContext: {}
...

You can edit the manifest right in the YAML tab of your reloader-reloader deployment to see what effect this will have. After you make the change, make sure to hit the save button so the deployment refreshes. Now our pod shows up! Click the Pods tab to take a look. But we're seeing another error now, this time an ImagePullBackOff error.

You'll probably see this a lot when testing in quicklab but the fix is relatively simple. If you want to see a more detailed description of this error click on your pod and look into the Events tab. This tab is very helpful for debugging purposes.

(screenshot: pod Events tab showing the ImagePullBackOff error and Docker Hub rate limit message)

As we can see, Docker is rate limiting us while we attempt to pull the image, so we need another way to get it. To work around this rate limiting issue we will use Quay.io. Quay, much like Docker Hub, allows us to push and pull images, just without the pesky rate limiting.

You will need to either create or login to your quay account at: quay.io
Now let's create a new repository in our quay account which we will use to store our image. Click the "Create New Repository" button and choose the following configuration:
(screenshot: the Quay "Create New Repository" form, creating a public repository named reloader)

Once you're all set up over there, let's look at the Docker Hub website to see the image we were trying to pull: DockerHub Reloader. Be sure to note the version number (v0.0.104 in this case) in the error message of your pod.

Now log in to Quay from your terminal using podman and pull the image you need to your local machine:

$ podman login quay.io #use your quay username and pwd
$ podman pull stakater/reloader:v0.0.104 #pull the image using the appropriate version number
$ podman images #Run this to get the image ID of the image you just pulled
$ podman tag <YOUR_IMAGEID> quay.io/QUAY_USERNAME/reloader:v0.0.104 #tag the image so we can push it to quay
$ podman push quay.io/QUAY_USERNAME/reloader:v0.0.104 #Now push the container image to your quay reloader repo 

With our Quay repo now set up, we can point our deployment to it so we can actually pull the Reloader image. To do so we need to edit the spec.containers.image field from this:

...
          imagePullPolicy: IfNotPresent
          terminationMessagePolicy: File
          image: 'stakater/reloader:v0.0.104'
...

To this:

...
          imagePullPolicy: IfNotPresent
          terminationMessagePolicy: File
          image: 'quay.io/USER_NAME/reloader:v0.0.104'
...

Once you make the change to the deployment yaml file, click save again and view your pod. Its status now says Running! So the changes we've made to the container image source and the security context have gotten our Reloader pod to a running state. Let's now use Kustomize to make these changes declaratively, so we don't need to manually configure the deployment in our cluster every time we deploy it. To do this we will add a patch to our kustomization.yaml. This in essence takes the original manifests from stakater and overwrites them with the changes we specify in our patch.

Let's create a directory within our working directory called patches. Within that folder, create a file called deployment_patch.yaml. It should look like this:

deployment_patch.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reloader-reloader
spec:
  template:
    spec:
      securityContext: null
      containers:
        - image: quay.io/dystewar/reloader:v0.0.104
          name: reloader-reloader

Switching back to our main directory we need to update our kustomization.yaml:

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - https://github.com/stakater/Reloader/deployments/kubernetes
patchesStrategicMerge:
  - patches/deployment_patch.yaml

namespace: reloader

Now let's delete our reloader-reloader deployment and redeploy with our updated kustomization.yaml that reflects our patches:

$ oc delete deployment reloader-reloader
$ kubectl apply -k .

Now our reloader deployment is successfully configured, without our having to make the manual changes from before; our patch took care of that for us. Great, so everything is up and running, but now what? Let's check the Logs tab of our Reloader pod to make sure everything is looking good within the pod itself:
(screenshot: Reloader pod logs showing a deploymentconfigs access error)

Okay, what on Earth is this trying to tell us? Well, it mentions a serviceaccount, which is used by processes in our pods to provide a flexible way to control API access without sharing a regular user's credentials. Further analysis shows that our serviceaccount can't access deploymentconfig resources. So there's something going on with our role based access control (rbac), and we'll need to make a change to our clusterrole. Before we do that, take a look at this link, specifically at the sections regarding clusterroles and clusterrolebindings: rbac

If we take a look in the Reloader repo at the clusterrole: clusterrole.yaml, specifically at the rules section, we don't see a deploymentconfig rule. Permissions under rbac are additive, meaning anything not explicitly granted is denied by default, so now it makes sense why we were getting that error in the pod logs. Also note the name of the clusterrole (reloader-reloader-role).

You might be wondering how we know that our serviceaccount is associated with this clusterrole. The easiest way to see this is by running:

oc get sa
NAME                SECRETS   AGE
builder             2         5d3h
default             2         5d3h
deployer            2         5d3h
reloader-reloader   2         5d3h

So the serviceaccount we have been talking about is called reloader-reloader.
Now take a look at clusterrolebinding.
The roleRef section shows we are referencing our (reloader-reloader-role) clusterrole and the subject section is binding that clusterrole to our (reloader-reloader) serviceaccount.
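Putting those pieces together, the binding looks roughly like this (a sketch based on the resource names we've seen, not a verbatim copy of the upstream clusterrolebinding.yaml):

```yaml
# Sketch of the clusterrolebinding tying the serviceaccount to the clusterrole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reloader-reloader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-reloader-role     # the clusterrole we need to patch
subjects:
  - kind: ServiceAccount
    name: reloader-reloader        # the serviceaccount from `oc get sa`
    namespace: reloader
```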

Now we just need to add permissions for accessing deploymentconfigs to our clusterrole. We can do this by creating another Kustomize patch:

clusterrolepatch.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reloader-reloader-role
rules:
  - apiGroups:
      - "apps.openshift.io"
    resources:
      - deploymentconfigs
    verbs:
      - list
      - get
      - update
      - patch

Add the above file to your patches folder.

A quick caveat before wiring this in: a strategic merge patch replaces the entire rules list of a ClusterRole (the list has no merge key), so a patch containing only the deploymentconfigs rule would wipe out Reloader's existing permissions. The full clusterrolepatch.yaml therefore needs to restate the original rules alongside our new deploymentconfigs rule:

clusterrolepatch.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reloader-reloader-role
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "apps.openshift.io"
    resources:
      - deploymentconfigs
    verbs:
      - list
      - get
      - update
      - patch

Then update your kustomization.yaml to include the new patch:

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - https://github.com/stakater/Reloader/deployments/kubernetes
patchesStrategicMerge:
  - patches/deployment_patch.yaml
  - patches/clusterrolepatch.yaml

namespace: reloader

Finally, let's redeploy to fix our pod:

$ oc delete deployment reloader-reloader
$ kubectl apply -k .

Now we can see the logs of our Reloader pod and everything appears normal! We can see that Reloader is now monitoring all our namespaces in the cluster for changes in configmaps and secrets.

(screenshot: Reloader logs showing it monitoring all namespaces for configmap and secret changes)
