
jenkins-client-plugin's Introduction

OpenShift Jenkins Pipeline (DSL) Plug-in

Overview

The OpenShift Pipeline DSL Plug-in is a Jenkins plug-in which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift API Server. The plug-in leverages an OpenShift command line tool (oc) which must be available on the nodes executing the script (options for getting the binary on your nodes can be found here).

Starting with the 3.7 release of OpenShift, this plug-in is now considered GA, is fully supported, and is included in the OpenShift Jenkins images.

If you are interested in the legacy Jenkins plug-in, you can find it here.

Reader Prerequisites

  • Familiarity with OpenShift command line interface is highly encouraged before exploring the plug-in's features. The DSL leverages the oc binary and, in many cases, passes method arguments directly on to the command line. This document cannot, therefore, provide a complete description of all possible OpenShift interactions -- the user may need to reference the CLI documentation to find the pass-through arguments a given interaction requires.
  • A fundamental level of understanding of the Jenkins Pipeline architecture and basic pipeline steps may be required to appreciate the remainder of this document. Readers may also find it useful to understand basic Groovy syntax, since pipeline scripts are written using Groovy (Note: Jenkins sandboxes and interferes with the use of some Groovy facilities).

Installing and developing

This plug-in is available at the Jenkins Update Center and is included in the OpenShift Jenkins image.

Otherwise, if you are interested in building this plug-in locally, follow these steps:

  1. Install maven (platform specific)
  2. Clone this git repository:
    git clone https://github.com/openshift/jenkins-client-plugin.git
    
  3. In the root of the local repository, run maven
    cd jenkins-client-plugin
    mvn clean package
    
  4. Maven will build target/openshift-client.hpi (the Jenkins plug-in binary)
  5. Open Jenkins in your browser and, as an administrator, navigate to Manage Jenkins > Manage Plug-ins.
  6. Select the "Advanced" tab.
  7. Find the "Upload Plug-in" HTML form and click "Browse".
  8. Select the openshift-client.hpi built in the previous steps.
  9. Submit the file.
  10. Check the box indicating that Jenkins should be restarted.

You should now be able to configure an OpenShift Cluster. Before running a job, you may also need to ensure your Jenkins nodes have the 'oc' binary installed. And for Linux users, there can be some additional requirements for running 'oc' based on which Linux distribution you are using.

If you want to test your changes against a running OpenShift server using the regression test suite located in the OpenShift Origin repository, see these instructions.

Compatibility with Declarative Pipeline

The means by which this plug-in coexists with declarative pipeline has taken a few twists and turns since v1.0 of that feature first arrived in early 2017.

In particular, the recommendations around combining this plug-in's directives with the declarative pipeline { ... } directive have had to be adjusted as the pipeline-model-definition plug-in has evolved.

Currently, there are two requirements of note:

  1. Per the requirements of declarative pipelines, the
pipeline {

...

}

directive must be the outermost closure in order to fully enable all the declarative pipeline semantics and features.

  2. Declarative pipelines currently do not support Groovy bodies/closures that are not pipeline script steps. Because this plug-in integrates into Jenkins as a Global Variable, its closures do not meet that restriction. As such, you must encapsulate all use of this plug-in within the declarative
script {

...

}

directive so that the declarative interpreter is told to treat the enclosed content as scripted pipeline.

In fact, per https://issues.jenkins-ci.org/browse/JENKINS-42360, this situation has been discussed upstream, and it has been stated that this restriction between declarative pipelines and global variables will not be lifted.

As development of this plug-in continues, we will periodically verify that these recommendations remain valid and, if not, either adjust them accordingly or leverage any new declarative integration points that are synergistic with this plug-in's design.

And certainly, if users of this plug-in notice changes before its maintainers do, please advise us by opening issues at https://github.com/openshift/jenkins-client-plugin.
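Putting both requirements together, here is a minimal sketch of the recommended shape (using only steps already shown in this document):

pipeline {
    agent any
    stages {
        stage('example') {
            steps {
                // Wrap all plug-in calls in a script block so the declarative
                // interpreter treats them as scripted pipeline.
                script {
                    openshift.withCluster() {
                        echo "Hello from project ${openshift.project()}"
                    }
                }
            }
        }
    }
}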

Compatibility with parallel step

Because this plug-in is implemented as a Jenkins Global Variable, its step name, openshift, is (as a Global Variable implies) a singleton within the context of a job run. This has ramifications when employing openshift.withCluster(...) within the parallel step's closure, as openshift.withCluster (and the openshift.withProject or openshift.withCredentials calls that can occur within an openshift.withCluster closure) modifies the state of the openshift global variable for a pipeline run.

The current version of groovy in Jenkins, as well as the lack of API for Global Variables to interpret that they are running within the context of a given parallel closure, prevent this plug-in from managing withCluster, withProject, or withCredentials calls in parallel.

As such, any use of the parallel step must be within the innermost closure of whatever combination of openshift.withCluster, openshift.withProject, and openshift.withCredentials your pipeline employs:

openshift.withCluster() {
   openshift.withProject() {
      parallel(
         firstBranch: {
            // ...
         },
         secondBranch: {
            // ...
         }
      )
   }
}

NOTE: while the more recent pipeline documentation discusses parallel as part of the Declarative Syntax, at least at the time of this writing, it was still provided as part of the workflow-cps plug-in (i.e. the original Scripted Syntax). If that were to change, and parallel were only available via the Declarative Syntax, then the above restrictions regarding Declarative Syntax would apply.

If you cannot prevent openshift.withCluster from being called from within a parallel block, the openshift.setLockName method can be called prior to openshift.withCluster to denote a lock resource to be processed by the pipeline lock step provided by the Lockable Resources plug-in. That plug-in is included in the OpenShift Jenkins image; if you are running Jenkins outside of the OpenShift Jenkins image, you need to install it in order to leverage this integration.

An example usage would be:

pipeline {
    agent none
    stages {
        stage('Test init') {
            steps {
                script {
                    // e.g. a lock name that includes the job name and run number
                    openshift.setLockName("${env.JOB_NAME}-${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Run tests') {
            parallel {
                stage('Test run 1') {
                    steps {
                        script {
                            openshift.withCluster(...) {
                            ...
                            }
                        }
                    }
                }

                stage('Test run 2') {
                    steps {
                        script {
                            openshift.withCluster(...) {
                            ...
                            }
                        }
                    }
                }
            }
        }
    }
}

NOTE: there is currently no automatic means of periodically purging any lock resources which your pipelines dynamically create. You'll have to periodically purge that inventory yourself.
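For example, here is a minimal sketch of such a purge, intended to be run from the Jenkins script console rather than a pipeline. It assumes the Lockable Resources plug-in exposes the LockableResourcesManager API shown below, and 'mypipeline-' is a hypothetical naming scheme for the dynamically created locks:

import org.jenkins.plugins.lockableresources.LockableResourcesManager

def mgr = LockableResourcesManager.get()
// Drop any unlocked resources whose names match the dynamic naming scheme.
mgr.getResources().removeAll { r ->
    !r.isLocked() && r.getName().startsWith('mypipeline-')
}
mgr.save()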

Compatibility with load step

Another integration concern resulting from the Global Variable integration point, albeit one dependent on the timing of Jenkins restarts, has been uncovered with respect to the load step.

Also note that these timing windows can be widened when the input step is used, as that step can pause a pipeline run while it waits for user confirmation.

Under the covers, the load step creates a new version of the internal objects the Global Variable is bound to. If the openshift.withCluster and openshift.withProject calls are in separate scripts, and Jenkins is restarted, we have seen state stored by the openshift.withCluster call get lost as the pipeline run is read back into the system after the restart completes. The state is not lost if a restart does not occur.

We'll illustrate with a user-provided example. First, here is some of this plug-in's DSL stored in a Groovy method within a deploy.groovy file:

def execute(String env) {
    openshift.withProject(env) {
        openshift.selector("dc", "simple-python").rollout().latest()
        def latestDeploymentVersion = openshift.selector('dc', "simple-python").object().status.latestVersion
        def rc = openshift.selector('rc', "simple-python-${latestDeploymentVersion}")
        timeout (time: 10, unit: 'MINUTES') {
            rc.untilEach(1) {
                def rcMap = it.object()
                return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
            }
        }
    }
}

return this

Next, some example uses of load. First, our user saw the problem during a Jenkins restart while the pipeline was paused on the second input call (when the pipeline resumed after the restart, an NPE occurred while trying to get the server URL that is stored when openshift.withCluster is called):

node('master') {
    checkout scm
    stage('approval') {
        timeout(time: 30, unit: 'DAYS') {
                input message: "Start first rollout ?"
            }
    }
    stage('first rollout') {
        openshift.withCluster() {
             def steps = load 'deploy.groovy'
             steps.execute("de-sandpit-prod")
             echo "completed first rollout"
        }
    }
    stage('approval') {
        timeout(time: 30, unit: 'DAYS') {
                input message: "Start second rollout ?"
            }
    }
    stage('second rollout') {
        openshift.withCluster() {
             def steps = load 'deploy.groovy'
             steps.execute("de-sandpit-prod")
             echo "completed second rollout"
        }
    }
}

The problem can be avoided in one of three ways:

  • avoid the use of load and in-line the Groovy in question
node('master') {
    checkout scm
    stage('approval') {
        timeout(time: 30, unit: 'DAYS') {
                input message: "Start first rollout ?"
            }
    }
    stage('first rollout') {
           openshift.withCluster() {
                openshift.withProject("de-sandpit-prod") {
                    openshift.selector("dc", "simple-python").rollout().latest()
                    def latestDeploymentVersion = openshift.selector('dc',"simple-python").object().status.latestVersion
                    def rc = openshift.selector('rc', "simple-python-${latestDeploymentVersion}")
                    timeout (time: 10, unit: 'MINUTES') {
                        rc.untilEach(1){
                            def rcMap = it.object()
                            return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
                        }
                    }
                }
             echo "completed first rollout"
         }
    }
    stage('approval') {
        timeout(time: 30, unit: 'DAYS') {
                input message: "Start second rollout ?"
            }
    }
    stage('second rollout') {
          openshift.withCluster() {
                openshift.withProject("de-sandpit-prod") {
                    openshift.selector("dc", "simple-python").rollout().latest()
                    def latestDeploymentVersion = openshift.selector('dc',"simple-python").object().status.latestVersion
                    def rc = openshift.selector('rc', "simple-python-${latestDeploymentVersion}")
                    timeout (time: 10, unit: 'MINUTES') {
                        rc.untilEach(1){
                            def rcMap = it.object()
                            return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
                        }
                    }
                }
             echo "completed second rollout"
         }
    }
}
  • make the openshift.withCluster the outermost closure/context (only fully viable with scripted pipelines ... see the declarative notes above) and still use the load step
openshift.withCluster() {
    node('master') {
        checkout scm
        stage('approval') {
            timeout(time: 30, unit: 'DAYS') {
                input message: "Start first rollout ?"
            }
        }
        stage('first rollout') {
            def steps = load 'deploy.groovy'
            steps.execute("de-sandpit-prod")
            echo "completed first rollout"
        }
        stage('approval') {
            timeout(time: 30, unit: 'DAYS') {
                input message: "Start second rollout ?"
            }
        }
        stage('second rollout') {
            def steps = load 'deploy.groovy'
            steps.execute("de-sandpit-prod")
            echo "completed second rollout"
        }
    }
}
  • add both openshift.withCluster and openshift.withProject to the Groovy file you bring in via the load step
def execute(String env) {
    openshift.withCluster() {
        openshift.withProject(env) {
            openshift.selector("dc", "simple-python").rollout().latest()
            def latestDeploymentVersion = openshift.selector('dc', "simple-python").object().status.latestVersion
            def rc = openshift.selector('rc', "simple-python-${latestDeploymentVersion}")
            timeout (time: 10, unit: 'MINUTES') {
                rc.untilEach(1) {
                    def rcMap = it.object()
                    return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
                }
            }
        }
    }
}

Examples

As the DSL is designed to be intuitive for experienced OpenShift users, the following high level examples may serve to build that intuition before delving into the detailed documentation.

Hello, World

Let's start with a "Hello world" style example.

/** Use of hostnames and OAuth token values in the DSL is heavily discouraged for maintenance and **/
/** security reasons. The global Jenkins configuration and credential store should be used instead. **/
/** Subsequent examples will demonstrate how to do this. **/
openshift.withCluster( 'https://10.13.137.207:8443', 'CO8wPaLV2M2yC_jrm00hCmaz5Jgw...' ) {
    openshift.withProject( 'myproject' ) {
        echo "Hello from project ${openshift.project()} in cluster ${openshift.cluster()}"
    }
}

Centralizing Cluster Configuration

Now let's simplify the first example by moving host, port, token and project information out of the script and into the Jenkins cluster configuration. A single logical name (e.g. "mycluster") can now be used to reference these values. This means that if the cluster information changes in the future, your scripts won't have to!

/** The logical name references a Jenkins cluster configuration which implies **/
/** API Server URL, default credentials, and a default project to use within the closure body. **/
openshift.withCluster( 'mycluster' ) {
    echo "Hello from ${openshift.cluster()}'s default project: ${openshift.project()}"

    // But we can easily change project contexts
    openshift.withProject( 'another-project' ) {
        echo "Hello from a non-default project: ${openshift.project()}"
    }

    // And even scope operations to other clusters within the same script
    openshift.withCluster( 'myothercluster' ) {
        echo "Hello from ${openshift.cluster()}'s default project: ${openshift.project()}"
    }
}

Sticking with the defaults

We can make the previous example even simpler! If you have defined a cluster configuration named "default" or if the Jenkins instance is running within an OpenShift pod, you don't need to specify any cluster information.

openshift.withCluster() { // Use "default" cluster or fallback to OpenShift cluster detection
    echo "Hello from the project running Jenkins: ${openshift.project()}"
}

Introduction to Selectors

Now a quick introduction to Selectors which allow you to perform operations on a group of API Server objects.

openshift.withCluster( 'mycluster' ) {
    /** Selectors are a core concept in the DSL. They allow the user to invoke operations **/
    /** on group of objects which satisfy a given criteria. **/

    // Create a Selector capable of selecting all service accounts in mycluster's default project
    def saSelector = openshift.selector( 'serviceaccount' )

    // Prints `oc describe serviceaccount` to Jenkins console
    saSelector.describe()

    // Selectors also allow you to easily iterate through all objects they currently select.
    saSelector.withEach { // The closure body will be executed once for each selected object.
        // The 'it' variable will be bound to a Selector which selects a single
        // object which is the focus of the iteration.
        echo "Service account: ${it.name()} is defined in ${openshift.project()}"
    }

    // Print the count and names of the current service accounts to the console
    echo "There are ${saSelector.count()} service accounts in project ${openshift.project()}"
    echo "They are named: ${saSelector.names()}"

    // Selectors can also be defined using qualified names
    openshift.selector( 'deploymentconfig/frontend' ).describe()

    // Or Kind + Label information
    openshift.selector( 'dc', [ tier: 'frontend' ] ).describe()

    // Or a static list of names
    openshift.selector( [ 'dc/jenkins', 'build/ruby1' ] ).describe()

    // Also, you can easily test to see if the selector found what
    // you were looking for and vary your pipeline's logic as needed.
    def templateSelector = openshift.selector( "template", "mongodb-ephemeral")
    def templateExists = templateSelector.exists()
    def template
    if (!templateExists) {
        template = openshift.create('https://raw.githubusercontent.com/openshift/origin/master/examples/db-templates/mongodb-ephemeral-template.json').object()
    } else {
        template = templateSelector.object()
    }

}

Actions speak louder than words

Describing things is fine, but let's actually make something happen! Here, notice that new Selectors are regularly returned by DSL operations.

openshift.withCluster( 'mycluster' ) {
    // Run `oc new-app https://github.com/openshift/ruby-hello-world`. It
    // returns a Selector which will select the objects it created for you.
    def created = openshift.newApp( 'https://github.com/openshift/ruby-hello-world' )

    // This Selector exposes the same operations you have already seen.
    // (And many more that you haven't!).
    echo "new-app created ${created.count()} objects named: ${created.names()}"
    created.describe()

    // We can create a Selector from the larger set which only selects
    // the build config which new-app just created.
    def bc = created.narrow('bc')

    // Let's output the build logs to the Jenkins console. bc.logs()
    // would run `oc logs bc/ruby-hello-world`, but that might only
    // output a partial log if the build is in progress. Instead, we will
    // pass '-f' to `oc logs` to follow the build until it terminates.
    // Arguments to logs get passed directly on to the oc command line.
    def result = bc.logs('-f')

    // Many operations, like logs(), return a Result object (even a Selector
    // is a subclass of Result). A Result object contains detailed information about
    // what actions, if any, took place to accomplish an operation.
    echo "The logs operation required ${result.actions.size()} oc interactions"

    // You can even see exactly what oc command was executed.
    echo "Logs executed: ${result.actions[0].cmd}"

    // And even obtain the standard output and standard error of the command.
    def logsString = result.actions[0].out
    def logsErr = result.actions[0].err

    // And if, after some processing within your pipeline, you decide
    // you need to initiate a new build after the one initiated by
    // new-app, simply call the `oc start-build` equivalent:
    def buildSelector = bc.startBuild()
    buildSelector.logs('-f')
}

Peer inside of OpenShift objects

openshift.withCluster( 'mycluster' ) {
    def dcs = openshift.newApp( 'https://github.com/openshift/ruby-hello-world' ).narrow('dc')

    // dcs is a Selector which selects the deployment config created by new-app. How do
    // we get more information about this DC? Turn it into a Groovy object using object().
    // If there was a chance here that more than one DC was created, we should use objects()
    // which would return a List of Groovy objects; however, in this example, there
    // should only be one.
    def dc = dcs.object()

    // dc is not a Selector -- It is a Groovy Map which models the content of the DC
    // new-app created at the time object() was called. Changes to the model are not
    // reflected back to the API server, but the DC's content is at our fingertips.
    echo "new-app created a ${dc.kind} with name ${dc.metadata.name}"
    echo "The object has labels: ${dc.metadata.labels}"

}

Watching and waiting? Of course!

Patience is a virtue.

openshift.withCluster( 'mycluster' ) {
    def bc = openshift.newApp( 'https://github.com/openshift/ruby-hello-world' ).narrow('bc')

    // The build config will create a new build object automatically, but how do
    // we find it? The 'related(kind)' operation can create an appropriate Selector for us.
    def builds = bc.related('builds')

    // There are no guarantees in life, so let's interrupt these operations if they
    // take more than 10 minutes and fail this script.
    timeout(10) {

        // We can use watch to execute a closure body each time the objects selected
        // by a Selector change. The watch will only terminate when the body returns true.
        builds.watch {
            // Within the body, the variable 'it' is bound to the watched Selector (i.e. builds)
            echo "So far, ${bc.name()} has created builds: ${it.names()}"

            // End the watch only once a build object has been created.
            return it.count() > 0
        }

        // But what we actually want is to wait for the build to complete.
        builds.watch {
            if ( it.count() == 0 ) return false

            // A robust script should not assume that only one build has been created, so
            // we will need to iterate through all builds.
            def allDone = true
            it.withEach {
                // 'it' is now bound to a Selector selecting a single object for this iteration.
                // Let's model it in Groovy to check its status.
                def buildModel = it.object()
                if ( it.object().status.phase != "Complete" ) {
                    allDone = false
                }
            }

            return allDone;
        }


        // Uggh. That was actually a significant amount of code. Let's use untilEach(){}
        // instead. It acts like watch, but it only executes the closure body once
        // a minimum number of objects meet the Selector's criteria, and it only
        // terminates once the body returns true for all selected objects.
        builds.untilEach(1) { // We want a minimum of 1 build

            // Unlike watch(), untilEach binds 'it' to a Selector for a single object.
            // Thus, untilEach will only terminate when all selected objects satisfy the
            // condition established in the closure body (or until the timeout(10)
            // interrupts the operation).

            return it.object().status.phase == "Complete"
        }
    }
}

Note: We currently advise against running multiple watches in parallel. Both with our internal testing as well as those from upstream users (see #140), we see issues where the same CpsStepContext is called on multiple watches. There are future plans to upgrade some of this plug-in's integration with pipelines to later levels in the hopes of resolving this restriction.

Looking to Verify a Deployment or Service? We Can Still Do That!

If you are looking for the equivalent of openshiftVerifyDeployment from the OpenShift Jenkins Plug-in, the below performs the same operation.

openshift.withCluster() {
    openshift.withProject( "${DEV_PROJECT}" ) {
        def latestDeploymentVersion = openshift.selector('dc', "${APP_NAME}").object().status.latestVersion
        def rc = openshift.selector('rc', "${APP_NAME}-${latestDeploymentVersion}")
        rc.untilEach(1) {
            def rcMap = it.object()
            return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
        }
    }
}

Also, if you are using an oc binary that has the oc rollout status functionality, waiting to verify a deployment is baked in.

openshift.withCluster() {
 openshift.withProject() {
    def dc = openshift.selector('dc', "${appName}")
    // this will wait until the desired replicas are available
    dc.rollout().status()
 }
}

If you are looking for the equivalent of openshiftVerifyService from the OpenShift Jenkins Plug-in, we added a similar operation. This works with Services with ClusterIP as well as headless Services (with selectors) in the namespace specified by the openshift.withProject().

openshift.withCluster() {
    openshift.withProject() {
        def connected = openshift.verifyService( "${svc_name}" )
        if (connected) {
            echo "Able to connect to ${svc_name}"
        } else {
            echo "Unable to connect to ${svc_name}"
        }
    }
}

ImageStream SCMs? Use Pipeline Build Strategy and Image Change Triggers instead.

This plug-in provides no equivalent of the ImageStream SCM (SCM steps are a special extension point in Jenkins pipelines) provided by openshiftImageStream from the OpenShift Jenkins Plug-in.

That step was introduced prior to the introduction of the OpenShift Pipeline Build Strategy.

With the advent of the OpenShift Pipeline Build Strategy, incorporating your pipeline into such a BuildConfig, along with the use of an Image Change Trigger, is the better choice for triggering pipeline jobs from changes to ImageStreams in OpenShift.
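For illustration, here is a hedged sketch of that combination; all resource and image names are hypothetical, and the ImageStreamTag referenced by the trigger must already exist:

apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
            openshift.withCluster() {
                openshift.withProject() {
                    echo "Run triggered by a change to the myapp:latest ImageStreamTag"
                }
            }
        }
  triggers:
  - type: ImageChange
    imageChange:
      from:
        kind: ImageStreamTag
        name: myapp:latest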

Tagging images across namespaces

Re-tagging existing images or promoting images across different namespaces (e.g. from Staging to Production) can be done easily and only requires valid credentials.

openshift.withCluster( 'mycluster' ) {

    // 'myuser' should have permissions to push/pull images from 'mynamespace'
    openshift.withCredentials( 'myuser' ) {

	// We tag our 'imagename:latest' image to 'imagename:lastStable' so that we can revert if needed
        openshift.tag( 'mynamespace/imagename:latest', 'mynamespace/imagename:lastStable')
    }
}
openshift.withCluster( 'mycluster' ) {

    // 'myuser' should have permissions to push/pull images from 'namespace-1' and push/pull images to 'namespace-2'
    openshift.withCredentials( 'myuser' ) {

        // We tag our image, making it available in another namespace
        openshift.tag( 'namespace-1/imagename:version', 'namespace-2/imagename:version')
    }
}

Deleting objects. Easy.

openshift.withCluster( 'mycluster' ) {

    // Delete all deployment configs with a particular label.
    openshift.selector( 'dc', [ environment:'qe' ] ).delete()

}

Creating objects. Easier than you were expecting... hopefully....

openshift.withCluster( 'mycluster' ) {

        // You can use sub-verbs of create for some simple objects
        openshift.create( 'serviceaccount', 'my-service-account' )

        // But you want to craft your own, don't you? First,
        // model it with Groovy Maps, Lists, and primitives.
        def secret = [
            "kind": "Secret",
            "metadata": [
                "name": "mysecret",
                "labels": [
                    'findme':'please'
                ]
            ],
            "stringData": [
                "username": "myname",
                "password": "mypassword"
            ]
        ]

        // create will marshal the model into JSON and send it to the API server.
        // We will add some passthrough arguments (--save-config and --validate)
        // just for fun.
        def objs = openshift.create( secret, '--save-config', '--validate' )

        // create(...) returns a Selector which will select the resulting object(s).
        objs.describe()

        // But, you say, I've already modeled my object in JSON/YAML! It is in
        // an SCM or accessible via HTTP, or..., or ...
        // Don't worry. Just get it to the current Jenkins workspace any way
        // you want (e.g. using a Jenkins plug-in for your SCM). Then read the
        // file into a String using normal Jenkins steps.
        def fromJSON = openshift.create( readFile( 'myobjects.json' ) )

        // You will get a Selector for the objects created, as always.
        echo "Created objects from JSON file: ${fromJSON.names()}"

}

....aside from some lessons learned by our users

One of the core design points of this plug-in when translating its pipeline syntax into oc invocations is that the output type is hard-coded to "name" for many operations. In other words, we pass the -o=name argument.

This assumption has prevented users from combining --dry-run and -o yaml with openshift.create to get the default yaml for an API object type, and then piping that yaml as a parameter into a subsequent openshift.create or openshift.apply call.

At this time, the use of openshift.raw is required to achieve the expected results of combining --dry-run and -o yaml.

For example:

def apply = openshift.apply(openshift.raw("create configmap frontend-config --dry-run --from-file=config.js --output=yaml").actions[0].out)

Need to update an object without replacing it?

openshift.withCluster( 'mycluster' ) {
    def p = openshift.selector( 'pods/mypod' ).object()
    p.metadata.labels['newlabel']='newvalue' // Adjust the model
    openshift.apply(p) // Patch the object on the server
}

However, be forewarned: controllers or other users can update objects while you are trying to update them, and you can get collision conflicts. Consider:

  1. Pruning any object status or other optional fields you are not concerned with.
  2. If pruning the java.util.Map objects returned from object() on selectors, you'll need to make sure the remove method is in the pipeline Groovy method whitelist.
  3. When pruning, you have to make sure required fields retain at least a minimal or empty setting, even if you are not overriding them.

As examples: first, here is a version where we get the selector for a Deployment Config created by an openshift.newApp call, prune various fields to avoid conflicts, and then update an expected port's name:

openshift.withCluster() {
    openshift.withProject() {
        def app = openshift.newApp('registry.access.redhat.com/jboss-fuse-6/fis-java-openshift')
        def dc = app.narrow('dc')
        def dcmap = dc.object()
        dcmap.remove('status')
        dcmap.metadata.remove('annotations')
        dcmap.metadata.remove('labels')
        dcmap.metadata.remove('creationTimestamp')
        dcmap.metadata.remove('generation')
        dcmap.metadata.remove('resourceVersion')
        dcmap.metadata.remove('selfLink')
        dcmap.metadata.remove('uid')
        dcmap.spec.remove('replicas')
        dcmap.spec.remove('revisionHistoryLimit')
        dcmap.spec.remove('selector')
        dcmap.spec.remove('strategy')
        dcmap.spec.remove('test')
        dcmap.spec.remove('triggers')
        dcmap.spec.template.spec.containers[0].ports[0].name = "jolokia"
        echo "${dcmap}"

        openshift.apply(dcmap)

    }
}

Next, here is the analogous example where we again call openshift.newApp to create the Deployment Config, but then construct from scratch a minimal portion of the API object so that we can update the object in the same way as above:

openshift.withCluster() {
    openshift.withProject() {
        def app = openshift.newApp('registry.access.redhat.com/jboss-fuse-6/fis-java-openshift')
        def dcpatch = [
            "metadata": [
                "name": "fis-java-openshift",
                "namespace": "myproject"
            ],
            "apiVersion": "apps.openshift.io/v1",
            "kind": "DeploymentConfig",
            "spec": [
                "template": [
                    "metadata": [:],
                    "spec": [
                        "containers": [
                            [
                                "image": "registry.access.redhat.com/jboss-fuse-6/fis-java-openshift",
                                "name": "fis-java-openshift",
                                "resources": [:],
                                "ports": [
                                    [
                                        "name": "jolokia",
                                        "containerPort": 8778,
                                        "protocol": "TCP"
                                    ]
                                ]
                            ]
                        ],
                        "securityContext": [:]
                    ]
                ]
            ]
        ]

        openshift.apply(dcpatch)

    }
}

Cannot live without OpenShift templates? No problem.

openshift.withCluster( 'mycluster' ) {

    // One straightforward way is to pass string arguments directly to `oc process`.
    // This includes any parameter values you want to specify.
    def models = openshift.process( "openshift//mongodb-ephemeral", "-p", "MEMORY_LIMIT=600Mi" )

    // A list of Groovy object models that were defined in the template will be returned.
    echo "Creating this template will instantiate ${models.size()} objects"

    // For fun, modify the objects that have been defined by processing the template
    for ( o in models ) {
        o.metadata.labels[ "mylabel" ] = "myvalue"
    }

    // You can pass this list of object models directly to the create API
    def created = openshift.create( models )
    echo "The template instantiated: ${created.names()}"

    // Want more control? You could model the template itself!
    def template = openshift.withProject( 'openshift' ) {
        // Find the named template and unmarshal into a Groovy object
        openshift.selector('template','mysql-ephemeral').object()
    }

    // Explore the template model
    echo "Template contains ${template.parameters.size()} parameters"

    // For fun, modify the template easily while modeled in Groovy
    template.labels["mylabel"] = "myvalue"

    // This model can be specified as the template to process
    openshift.create( openshift.process( template, "-p", "MEMORY_LIMIT=600Mi" ) )

}

Want to promote or migrate objects between environments?

openshift.withCluster( 'devcluster' ) {

    def maps = openshift.selector( 'dc', [ microservice: 'maps' ] )

    // Model the source objects using the 'exportable' flag to strip cluster
    // specific information from the objects (i.e. like 'oc export').
    def objs = maps.objects( exportable:true )

    // Modify the models as you see fit.
    def timestamp = "${System.currentTimeMillis()}"
    for ( obj in objs ) {
        obj.metadata.labels[ "promoted-on" ] = timestamp
    }

    openshift.withCluster( 'qecluster' ) {

        // Might want Jenkins to ask someone before we do this ;-)
        mail (
            to: '[email protected]',
            subject: "Maps microservice (${env.BUILD_NUMBER}) is awaiting promotion",
             body: "Please go to ${env.BUILD_URL}.");
        input "Ready to update QE cluster with maps microservice?"

        // Note that the selector is relative to its closure body and
        // operates on the qecluster now.
        maps.delete( '--ignore-not-found' )

        openshift.create( objs )

        // Let's wait until at least one pod is Running
        maps.related( 'pods' ).untilEach {
            return it.object().status.phase == 'Running'
        }
    }

}

Error handling

Error handling is fairly standard for Jenkins DSL scripts. try/catch blocks can be used to prevent recoverable errors from causing a build to fail.

openshift.withCluster( 'mycluster' ) {

    try {
        openshift.withCredentials( 'some-invalid-token-value' ) {
            openshift.newProject( 'my-new-project' )
            // ...
        }
    } catch ( e ) {
        // The exception is a hudson.AbortException with details
        // about the failure.
        echo "Error encountered: ${e}"
    }

}

The error printed to the Jenkins console would look something like the following (and yes, the token will be masked as shown):

Error encountered: hudson.AbortException: new-project returned an error; sub-step failed:
{reference={}, err=error: You must be logged in to the server (the server has asked for the client to provide credentials),
verb=new-project, cmd=oc my-new-project --skip-config-write --insecure-skip-tls-verify
--server=https://192.168.1.109:8443 --namespace=myproject --token=XXXXX , out=, status=1}

Troubleshooting

Want to see the details of your OpenShift API Server interactions?

openshift.withCluster( 'mycluster' ) {

    openshift.verbose()
    // Get details printed to the Jenkins console and pass a high --loglevel to all oc commands
    openshift.newProject( 'my-new-project' )
    openshift.verbose(false) // Turn it back off

    // If you want verbose output, but at a specific log level
    openshift.logLevel(3)
    openshift.newProject( 'my-new-project-2' )
    ....
}

Who are you, really?

Getting advanced? You might need more than just default credentials associated with your cluster. You can leverage any OpenShift Token credential type in the Jenkins credential store by passing withCredentials the credential's identifier. If you think security is a luxury you can live without (it's not), you can also pass withCredentials a raw token value.

Note: doAs() has been deprecated in favor of withCredentials(), but it is currently still supported for use in existing scripts.

openshift.withCluster( 'mycluster' ) {
    openshift.withCredentials( 'my-normal-credential-id' ) {
        ...
    }

    openshift.withCredentials( 'my-privileged-credential-id' ) {
        ...
    }

    // Raw token value. Not recommended.
    openshift.withCredentials( 'CO8wPaLV2M2yC_jrm00hCmaz5Jgw...' ) {
        ...
    }
}

I need more.

If the available DSL operations are not sufficient, you can always pass a raw command directly to the oc binary. If you do not specify a server, token, or project, normal closure context rules will apply.

openshift.withCluster( 'mycluster' ) {
    def result = openshift.raw( 'status', '-v' )
    echo "Cluster status: ${result.out}"
}

But honestly, wouldn't you rather contribute and add the operation you need? ;-)

Configuring an OpenShift Cluster

Are you running your Jenkins instance within an OpenShift cluster? Does it only interact with resources within that cluster? You might not need to do anything here! Leaving out the cluster name when calling openshift.withCluster will cause the plug-in to try:

  1. To access a Jenkins cluster configuration named "default" and, if one does not exist,
  2. To assume it is running within an OpenShift Pod with a service account. In this scenario, the following cluster information will be used:
  • API Server URL: "https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT_HTTPS}"
  • File containing Server Certificate Authority: /run/secrets/kubernetes.io/serviceaccount/ca.crt
  • File containing Pod's project name: /run/secrets/kubernetes.io/serviceaccount/namespace
  • File containing OAuth Token: /run/secrets/kubernetes.io/serviceaccount/token
openshift.withCluster() {  // find "default" cluster configuration and fallback to OpenShift cluster detection
    // ... operations relative to the default cluster ...
}

If you do need to configure clusters, it is a simple matter. As an authorized Jenkins user, navigate to Manage Jenkins -> Configure System -> and find the OpenShift Plug-in section.

Add a new cluster and you should see a form like the following (cluster-config screenshot).

The cluster "name" (e.g. "mycluster") is the only thing you need to remember when writing scripts. If the cluster configuration has a default credential or project, they will be used automatically when operations are performed relative to that cluster (unless they are explicitly overridden).

openshift.withCluster( 'mycluster' ) {
    // ... operations relative to this cluster ...
}

The cluster configuration can be exported and imported with the Jenkins Configuration as Code Plug-in.

unclassified:
  openshift:
    clusterConfigs:
    - name: "mycluster"
      serverUrl: "example.com"

Setting up Credentials

You can define a new credential using the OpenShift Sync plug-in or directly in the Jenkins credential store.

To define a new credential using the OpenShift Sync plug-in, you can add an Opaque/generic secret where the data has an "openshift-client-token" key.

# Create the secret
oc create secret generic my-privileged-token-id --from-file=openshift-client-token=mysecretToken.txt
# Add a label to mark that it should be synced.
oc label secret my-privileged-token-id credential.sync.jenkins.openshift.io=true

This token will be accessible with the credential ID of "${OPENSHIFT_NAMESPACE}-my-privileged-token-id".
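For example, assuming the secret above was created in the "myproject" namespace, a minimal sketch of using the synced credential from a pipeline would be:

openshift.withCluster( 'mycluster' ) {
    openshift.withCredentials( 'myproject-my-privileged-token-id' ) {
        // Confirm which user the synced token authenticates as.
        echo "Running as: ${openshift.raw( 'whoami' ).out}"
    }
}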

To define a new credential for the DSL in the Jenkins credential store, navigate to Credentials -> System -> Global credentials -> Add Credentials (you can choose the domain based on your particular security requirements).

(token-config screenshot)

This token can then be selected as the default token for a given Jenkins configuration cluster OR used tactically in the DSL with openshift.withCredentials( 'my-privileged-credential-id' ) {...} .

Setting up Jenkins Nodes

Each Jenkins node (master/agents) must have a copy of the OpenShift command line tool (oc) installed and in the Jenkins PATH environment variable. If your Jenkins nodes are running OpenShift images, stop reading here: they are already installed!

If your nodes are running outside of OpenShift, you can install the tool on each node yourself, or use Jenkins' Tool installer to do it for you. To do this, as an authorized Jenkins user, navigate to Manage Jenkins -> Global Tool Configuration -> and find the "OpenShift Client Tools" section.

Here you can define a version of the client tools, where to find them, and whether they should be automatically installed when a node requires them.

In the following example, the logical name "oc1.3.2" is associated with a particular build of the client tools available on GitHub. That archive contains a folder with the 'oc' binary inside, so you must specify this folder as the "Subdirectory of extracted archive" while configuring the tool.

(tool-config-by-url screenshot)

Using this tool is then a simple matter of executing the OpenShift operations with the PATH adjusted to give it preference. If configured as above, the client tools will automatically be installed once on nodes that use the Pipeline 'tool' step.

node('agent1') {
    withEnv(["PATH+OC=${tool 'oc1.3.2'}"]) {
        openshift.withCluster( 'mycluster' ) {
            echo "${openshift.raw( "version" ).out}"
            echo "In project: ${openshift.project()}"
        }
    }
}

Please refer to Jenkins documentation on Global Tool Configuration which allows, for example, Linux and Windows nodes to acquire different builds of a tool.

Understanding the implications of the KUBECONFIG environment variable or presence of .kube/config in the home directory of the user running Jenkins

Users with prior experience with the OpenShift command line tool (oc) may be familiar with the KUBECONFIG environment variable, and the config file location you set for KUBECONFIG, as a means of establishing default settings for the command's various parameters.

Similarly, default settings can be provided by creating a .kube/config file in the home directory of whatever user is invoking the oc command, if no KUBECONFIG variable is set.

The settings in .kube/config are analogous to the arguments this plug-in supplies to oc as a result of the various DSL methods you employ in your pipeline. If you have established a .kube/config file in your Jenkins environment which will be found by the oc invocations your pipeline induces, it may interfere with the intentions of your various openshift.with... directives.

When you run Jenkins in an OpenShift pod via the OpenShift Jenkins image, though, the environment is set up such that conflicts of this nature will not occur.
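If you do hit such interference when running Jenkins outside of that image, one possible workaround is to point KUBECONFIG at a scratch file inside the build workspace so that stray user-level configuration cannot leak in; this is only a sketch, and the file name is illustrative:

node {
    // oc invocations below will only read/write this per-build config file.
    withEnv(["KUBECONFIG=${pwd()}/kubeconfig-scratch"]) {
        openshift.withCluster( 'mycluster' ) {
            echo "In project: ${openshift.project()}"
        }
    }
}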

Moving Images Cluster to Cluster

Running multiple OpenShift clusters is a common practice; separate non-production and production clusters are typical. In a continuous integration pipeline, we want to achieve promotion across all environments and clusters, so we need a way to promote images from cluster to cluster. The oc command line tool has a built-in command, image mirror, that will copy images from one cluster's registry to another.

stage('Move Image') {
    steps {
        withDockerRegistry([credentialsId: "source-credentials", url: "source-registry-url"]) {
            withDockerRegistry([credentialsId: "destination-credentials", url: "destination-registry-url"]) {
                sh """
                    oc image mirror mysourceregistry.com/myimage:latest mydestinationregistry.com/myimage:latest
                """
            }
        }
    }
}

Note the docker related requirements when using oc image mirror.

You call this documentation?!

Not exactly. This is a brief overview of some of the capabilities of the plug-in. The details of the API are embedded within the plug-in's online documentation within a running Jenkins instance. To find it:

  1. Create a new Pipeline Item
  2. Click "Pipeline Syntax" below the DSL text area
  3. On the left navigation menu, click "Global Variables Reference"

A preview is provided below, but please see the Global Variable Reference in a running instance for the latest API information.

(jenkins-online-help screenshot)

jenkins-client-plugin's People

Contributors

adambkaplan, akram, bparees, coopernetes, coreydaley, danielalejandrohc, danmcp, dependabot[bot], gabemontero, garethahealy, gomesp, itwasonlyabug, jcpowermac, joejstuart, jonesbusy, jupierce, nfalco79, openshift-merge-robot, rolfedh, sherl0cks, stevekuznetsov, sylivankenobi, vfreex, waveywaves, xaseron


jenkins-client-plugin's Issues

Command and arguments disappeared when re-configuring a Jenkins project

I created a build step using the OpenShift Generic OC Invocation plugin in my Jenkins project and saved it without problems. But the command line options and arguments field DISAPPEARED when I came back to the configuration user interface to make some changes.

Kindly help!

Unknown command "perform" for "oc" when using DSL method openshift.run()

Working on additional testing for playbook2image, I ran into this error. This was just a test pipeline to determine the return objects for openshift.run().

Versions:
openshift v3.5.5.5
OpenShift Jenkins Client Plugin: 0.9.2

Error

OpenShift Build p2i/createcred-pipeline-1
[Pipeline] node
Running on master in /var/lib/jenkins/jobs/p2i-createcred-pipeline/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (openshift run)
[Pipeline] echo

[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: perform returned an error;
{reference={}, err=Error: unknown command "perform" for "oc"
Run 'oc --help' for usage., verb=perform, cmd=oc perform test-app-jenkins --image=172.30.10.223:5000/p2i/test-app --restart=Never --env OPTS='-vvv -u 1001 --connection local' --env INVENTORY_FILE=inventory --env PLAYBOOK_FILE=test-playbook.yaml --server=https://172.30.0.1:443 --namespace=p2i --token=XXXXX , out=, status=1}

Finished: FAILURE

BuildConfig w/Pipeline

---
apiVersion: v1
kind: BuildConfig
metadata:
  name: "createcred-pipeline" 
spec:
  strategy:
    type: "JenkinsPipeline"
    jenkinsPipelineStrategy:
        jenkinsfile: |-
            node {
                stage('openshift run') {
                    openshift.withCluster() {
                        openshift.withProject() {
                            def run = openshift.run("test-app-jenkins", 
                                "--image=172.30.10.223:5000/p2i/test-app",
                                "--restart=Never",
                                "--env OPTS='-vvv -u 1001 --connection local'",
                                "--env INVENTORY_FILE=inventory",
                                "--env PLAYBOOK_FILE=test-playbook.yaml" )
                            println(run)
                        }
                    }    
                }
            }

Is the "perform" argument to simplePassthrough correct?

https://github.com/openshift/jenkins-client-plugin/blob/master/src/main/resources/com/openshift/jenkins/plugins/OpenShiftDSL.groovy#L658

Potential race condition when using multibranch and newBuild

When using the workflow multibranch plugin with newBuild, a potential race can occur. If more than one build starts simultaneously, the Dockerfile's base image could be added as an ImageStream twice, which will result in an error since the ImageStream already exists.

Example of this issue using oc new-build from the CLI:

jcallen@jcallen ~                                                                                                                 [10:47:12] 
> $ oc new-build -D $'FROM centos\nRUN yum install -y vim wget curl zsh' --name master; oc new-build -D $'FROM centos\nRUN yum install -y vim wget curl zsh' --name foo
--> Found Docker image a8493f5 (2 weeks old) from Docker Hub for "centos"

    * An image stream will be created as "centos:latest" that will track the source image
    * A Docker build using a predefined Dockerfile will be created
      * The resulting image will be pushed to image stream "master:latest"
      * Every time "centos:latest" changes a new build will be triggered

--> Creating resources with label build=master ...
    imagestream "centos" created
    imagestream "master" created
    buildconfig "master" created
--> Success
    Build configuration "master" created and build triggered.
    Run 'oc logs -f bc/master' to stream the build progress.
--> Found Docker image a8493f5 (2 weeks old) from Docker Hub for "centos"

    * An image stream will be created as "centos:latest" that will track the source image
    * A Docker build using a predefined Dockerfile will be created
      * The resulting image will be pushed to image stream "foo:latest"
      * Every time "centos:latest" changes a new build will be triggered

--> Creating resources with label build=foo ...
    error: imagestreams "centos" already exists
    imagestream "foo" created
    buildconfig "foo" created
--> Failed

An optional resolution of this issue that I have tested is to use Jenkins' lockable resources plugin, for example:
https://github.com/jcpowermac/ojalloy/blob/master/vars/newBuildOpenShift.groovy#L25
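A minimal sketch of that lock-based workaround (the lock and build names here are illustrative); serializing the newBuild calls ensures the shared base ImageStream is only created once:

openshift.withCluster() {
    openshift.withProject() {
        // All concurrent branch builds contend for the same lock name.
        lock('newbuild-centos-base') {
            def nb = openshift.newBuild('--name=master', '-D', 'FROM centos\nRUN yum install -y vim wget curl zsh')
            echo "created: ${nb.names()}"
        }
    }
}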

Stop spamming the pipeline with sub-step information

@jupierce used the quiet listener factory or whatever to stop a bunch of output from being shown inside of the watch -- it would be great to have every output of openshift not spam my pipeline with

[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction

[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction

/cc @bparees @gabemontero

Security: deny token retrieval

Hello,
right now you can use the following to retrieve the Jenkins encrypted token:

openshift.raw("whoami","-t")

This should be blocked by the plugin to prevent easy leakage of token information.

adjust to underlying oc/openshift change in types returned on selectors

When running jenkins-image-sample.groovy, we used to be able to run it multiple times within a jenkins instance.

However, there seems to have been a change in the underlying oc/openshift behavior. I suspect, though have not precisely confirmed, from a quick grep of "deploymentconfigs" in the origin code base that it could be related to the recent API group changes.

When first creating the objs in jenkins-image-sample.groovy from the mongodb-ephemeral template, when iterating over the created objects, DCs are prefaced with the type "deploymentconfig/". And the current jenkins-client-plugin code base works fine.

However, if you run the pipeline again, and we find existing objects (and hence don't create them), when we iterate over the objects from the selector, DCs are prefaced with "deploymentconfigs/" (note the 's' at the end of deploymentconfig), and the abbreviation mapping logic in OpenShiftDSL.groovy does not recognize the type and aborts.

Adjusting the abbreviation mapping from 1-1 to 1-many should be straightforward (though it will again tax my already tenuous feelings toward groovy, I'm sure).

@bparees fyi

Plugin forces to install 'oc' client manually

Hi,

I have Jenkins running outside OpenShift. I have to install the 'oc' client manually if I want to use this plugin. It would be great if the plugin came with the 'oc' client to be installed on the Jenkins machine.

Thanks

Add support for "oc rsh"

It seems that "oc rsh" is not working with the openshift.raw() interface of the plugin:

Pipeline:

node {
  def ocDir = tool "oc"
  withEnv(["PATH+OC=${ocDir}"]) {
  openshift.withCluster('minishift') {
    openshift.withProject('myproject') {
      echo "HELLO FROM ${openshift.project()}"
      echo openshift.raw('version').out
      echo openshift.raw('get','pod', 'petclinic2-3-hjx72').out
      echo openshift.raw("rsh","petclinic2-3-hjx72","hostname").out
    }
  }
  }
}

I get the following:

Started by user admin
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/openshift-test
[Pipeline] {
[Pipeline] tool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] echo

[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] echo
HELLO FROM myproject
[Pipeline] _OcAction
[Pipeline] echo
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://xxx.xxx.xxx.xxx:8443/
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7

[Pipeline] _OcAction
[Pipeline] echo
NAME                 READY     STATUS    RESTARTS   AGE
petclinic2-3-hjx72   1/1       Running   4          39d

[Pipeline] _OcAction
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: raw command [rsh, petclinic2-3-hjx72, hostname] returned an error;
{reference={}, err=Error from server (NotFound): pods "petclinic2-3-hjx72" not found, verb=, cmd=oc  rsh petclinic2-3-hjx72 hostname --insecure-skip-tls-verify --server=https://xxx.xxx.xxx.xxx:8443/ --namespace=myproject --token=XXXXX , out=, status=1}

Finished: FAILURE

Error running commands outside withProject block

Hi, I am using plugin version 0.9.5 with OC client 1.5.1 in a Jenkins server outside OpenShift. When I try to run some commands like openshift.newProject or openshift.process outside a withProject block, I get this error:

java.io.FileNotFoundException: /var/run/secrets/kubernetes.io/serviceaccount/namespace (No such file or directory)

If I put them inside a withProject block then they work.

Document changes

Currently it is difficult to quickly see what has changed between releases. Having a changelog file or documenting changes in the README.md would remedy this without being a burden on development.

Checking if resource exists

I am having a problem checking whether a resource exists before I delete it.
It would be good if you could implement a method to handle tasks like this:

def p = openshift.selector( 'hpa/myautoscaler' ).object()
if(p != null){openshift.selector( 'hpa/myautoscaler' ).delete()}

Right now, when I try to execute the first command to get the resource object, it ends up failing my Jenkins pipeline build:

ERROR: Error during delete;
{reference={}, err=Error from server (NotFound): horizontalpodautoscalers.autoscaling "myautoscaler" not found, verb=delete, cmd=oc delete hpa/myautoscaler --insecure-skip-tls-verify --server=https://my.awsome.cluster --namespace=test --token=XXXXX , out=, status=1}
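
Until such a method exists, one workaround grounded in oc's own flags is to pass --ignore-not-found through to delete, as other pipelines in this tracker do:

openshift.withCluster() {
    openshift.withProject() {
        // --ignore-not-found makes 'oc delete' exit 0 even when the resource
        // is absent, so the missing hpa no longer fails the build
        openshift.selector( 'hpa/myautoscaler' ).delete( '--ignore-not-found' )
    }
}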

Proxy configuration

I need to connect to Openshift through a corporate proxy. How can I configure the proxy in this plugin?

confirm proxy capability

@coreydaley - just forwarded an email thread from openshift-sme on this

Quick summary from that thread

  • oc should honor HTTP_PROXY or HTTPS_PROXY (I dove into this a few months back)
  • users should either set that env var via pipeline build strategy env vars, build params with that name, or use of env.HTTP_PROXY=<value> in their scripts (see the sketch after this list)
  • (re)confirm in the plugin java code that the EnvVar is properly propagated to the durable task plugin task that leads to the fork/exec of oc (OcAction.java)
  • at the moment, I think the maven/slave related stuff is orthogonal, but let's see what we get back from the customer
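
A minimal sketch of the second bullet (the proxy URL and cluster name are placeholders, not real endpoints):

// export the proxy for the duration of the oc interactions
withEnv(['HTTPS_PROXY=http://proxy.corp.example:3128']) {
    openshift.withCluster('mycluster') {
        echo openshift.raw('version').out
    }
}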

@bparees fyi

openshift.create() creates file on master in master/slave setup resulting in error

In a master/slave scenario, when using openshift.create() within a node('foo'){} block, a temporary file named /tmp/create<randomness>.markup is created on the master, but the oc create -f <tempfile> is executed on the slave, resulting in a file-not-found error.

Sample syntax that causes an issue:

openshift.withCluster() {
  openshift.withProject() {
    node('nodejs') {
      stage('create template') {
        git url: 'https://github.com/openshift/nodejs-ex.git'
        def template = openshift.create(readFile('openshift/templates/nodejs-mongodb.json')).object()
      }
    }
  }
}

Sample error:
err=error: the path "/tmp/create739820317386924410.markup" does not exist, verb=create, cmd=oc create -f /tmp/create739820317386924410.markup -o=name --server=https://172.30.0.1:443 --namespace=testing --token=XXXXX , out=, status=1}

Using username/password credentials?

I could not get this to work. The only way I could get authentication to work was to use a token, as per the docs. I am not an expert on authentication in OpenShift, but to get a token I need to log in first? Also, it expires every 24 hours, so it's not something static? Does anyone have an example of using a username/password? Even better would be to update the docs with an example, or to explain why username/password should not be used. I would add that I added the username/password credentials as type 'OpenShift Username Password'.

Certificate authority is always expected

The plugin either expects that the CA is configured via "Cluster Configuration" (i.e. via the Manage Jenkins page), or it defaults to the mounted ca.crt under /var/run.

This is fine if you are using the plugin to talk to the same cluster you are running in, or if you want to explicitly set the CA. But in some instances, customers will include the signer of the CA in the Jenkins slaves, so the certificate-authority is not needed.

For example, if my jenkins is running on cluster1, but trying to talk to cluster2, with the current plugin, it would use the following command:

oc get projects --server=https://cluster2.corp:8443 --namespace=dev --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --token=XXXXX

This will fail with an x509 cert issue.

If I manually run the same command without the certificate-authority specified, the command succeeds.

Is it possible to provide a flag, either by Cluster Config or Env, which can turn off the use of certificate-authority?

FYI: I am a Red Hatter (part of UK Consulting, @gahealy) on-site with a customer.

facilitate CD team annotate scenario

@coreydaley - I've forwarded an email thread from aos-cicd-devops to you.

There were exchanges between @stevekuznetsov, @csrwng, @jupierce, and myself around a create-imagestream flow, followed by an annotation of that flow, along with some permutations of that flow.

Minimally, let's add an annotate method on the resource selector (to match the label method).
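
For example, something shaped like this (hypothetical usage of the proposed method, mirroring the existing selector label() call; nothing here exists in the plugin yet):

// hypothetical proposed API: annotate the selected imagestream the same way
// the selector label() method applies labels
def isSelector = openshift.selector( 'imagestream/my-app' )
isSelector.annotate( [ 'promoted-by' : 'cd-team' ] )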

But as you digest the email and we iterate further in this issue, I expect we'll agree on more nuances we'll want to add.

@bparees fyi (I'm forwarding the email to you as well as reference).

expanded environment variables in raw oc invocation

When using the build-step "generic oc invocation", environment variables are not expanded.

Build log output:

Executing: oc start-build testapp --from-dir=${OSE_BINARYBUILD_DIR} --server=https://xxxx --namespace=testapp-demo --token=XXXXX 
error: stat /var/jenkins_home/${OSE_BINARYBUILD_DIR}: no such file or directory
Client tool terminated with status: 1

No such property: objs for class: groovy.lang.Binding

I created a pipeline using the migrate example, changing it up for my needs. Most of the examples work; just the part creating objects doesn't appear to.

openshift.withCluster( '' ) {
  openshift.withProject( 'test-project' ) {
    echo "Hello from a Source project: ${openshift.project()}"
    def maps = openshift.selector( 'deploymentconfig/fun-stuff' )
    def objs = maps.objects( exportable:true )
    def timestamp = "${System.currentTimeMillis()}"
    for ( obj in objs ) {
      obj.metadata.labels[ "promoted-on" ] = timestamp
    }
  }
  openshift.withProject( 'stage' ) {
    echo "Hello from a nDestination project: ${openshift.project()}"
    openshift.create( objs )
    maps.related( 'pods' ).untilEach {
      return it.object().status.phase == 'Running'
    }
  }
}

Got this error in jenkins

OpenShift Build deployment/test-pipeline-1
[Pipeline] echo

[Pipeline] node
Running on master in /var/lib/jenkins/jobs/deployment-test-pipeline/workspace
[Pipeline] {
[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] echo
Hello from a Source project: test-project
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] _OcContextInit
[Pipeline] echo
Hello from a nDestination project: stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: objs for class: groovy.lang.Binding
	at groovy.lang.Binding.getVariable(Binding.java:63)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:224)
	at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
	at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
	at WorkflowScript.run(WorkflowScript:13)
	at com.openshift.jenkins.plugins.OpenShiftDSL.withProject(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:317)
	at com.openshift.jenkins.plugins.OpenShiftDSL$Context.run(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:106)
	at com.openshift.jenkins.plugins.OpenShiftDSL.withProject(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:316)
	at WorkflowScript.run(WorkflowScript:11)
	at com.openshift.jenkins.plugins.OpenShiftDSL.withCluster(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:305)
	at com.openshift.jenkins.plugins.OpenShiftDSL$Context.run(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:106)
	at com.openshift.jenkins.plugins.OpenShiftDSL.withCluster(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:304)
	at com.openshift.jenkins.plugins.OpenShiftDSL.node(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1289)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
	at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66)
	at sun.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:173)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:162)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:162)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:330)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:82)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:242)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:230)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
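
For what it's worth, the trace suggests Groovy scoping rather than a plugin bug: def objs is local to the first withProject closure, so the second closure falls back to the script Binding and fails with MissingPropertyException. A minimal sketch of that reading, hoisting the declarations (assuming this is indeed the cause):

def maps = null
def objs = null
openshift.withCluster( '' ) {
  openshift.withProject( 'test-project' ) {
    maps = openshift.selector( 'deploymentconfig/fun-stuff' )
    // assign instead of re-declaring with 'def', so the values outlive this closure
    objs = maps.objects( exportable: true )
  }
  openshift.withProject( 'stage' ) {
    openshift.create( objs )
    maps.related( 'pods' ).untilEach {
      return it.object().status.phase == 'Running'
    }
  }
}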

Failed Stage Indicating Weirdly in Web UI

From this snippet of Jenkinsfile:

    stage("Generate ConfigMaps") {
      steps {
        script {
          openshift.withCluster() {
            // Create the build and run it
            def builderObjects = openshift.process("--filename=jenkins/setup/resources/config-generator-build.yaml")
            openshift.apply(builderObjects)
            def jobBuilder = openshift.selector("buildconfig/configmap-job-builder")
            sh "cp -R jenkins/library/src jenkins/setup/images/config-job-image"
            openshift.startBuild("configmap-job", "--from-dir=./jenkins/setup/images/config-job-image", "--follow")

            // Wait for an imagestream tag for the builder to be available
            def imageStream = openshift.selector("imagestream/configmap-job")
            def jobImageRef = ""
            imageStream.watch {
              is = it.object()
              if ((is.status.tags != null) && (is.status.tags.size() > 0)) {
                jobImageRef = is.status.tags[0].items[0].dockerImageReference
                return true
              }
            }

            openshift.selector("job/configmap-job").delete("--ignore-not-found")

            // Run the job to create config maps
            def configMapJob = openshift.process(
              "--filename=jenkins/setup/resources/config-generator-job.yaml",
              "-p", "JOB_IMAGE=${jobImageRef}",
              "-p", "JENKINS_ADMIN_SA=${env.JENKINS_ADMIN_SA}")
            openshift.apply(configMapJob)
          }
        }
      }
    }

I see in the logs:

apply returned an error;
{
  "reference": {
    "/tmp/apply1725148177158694347.markup": {
      "apiVersion": "v1",
      "kind": "List",
      "items": [
        {
          "metadata": {
            "name": "configmap-job",
            "labels": {
              "template": "configmap-job",
              "job-name": "configmap-job"
            }
          },
          "apiVersion": "batch/v1",
          "kind": "Job",
          "spec": {
            "template": {
              "metadata": {
                "labels": {
                  "job-name": "configmap-job"
                }
              },
              "spec": {
                "dnsPolicy": "ClusterFirst",
                "containers": [
                  {
                    "name": "configmap-setup",
                    "image": "172.30.1.1:5000/jenkins/configmap-job@sha256:ec005904f8f950d128deb6216ae0dd0a2a79bad8d0bcf93164285394787d6930",
                    "imagePullPolicy": "IfNotPresent",
                    "resources": {}
                  }
                ],
                "securityContext": {},
                "restartPolicy": "Never",
                "serviceAccountName": "jenkins-admin"
              }
            },
            "completions": 1,
            "activeDeadlineSeconds": 600,
            "parallelism": 1
          }
        }
      ]
    }
  },
  "err": "Error from server (NotFound): the server could not find the requested resource",
  "verb": "apply",
  "cmd": "oc apply -f /tmp/apply1725148177158694347.markup -o=name --server=https://172.30.0.1:443 --namespace=jenkins --token=XXXXX",
  "out": "",
  "status": "1"
}

But the web UI does not show a failed step:

[screenshot: jenkins_bad]

Enable CA Verify with insecure-skip-tls-verify: true is broken

When the following is configured in ~jenkins/.kube/config at the OS level:

- cluster:
    insecure-skip-tls-verify: true

I cannot enable SSL verification at the cluster level in Jenkins. I always get:

  action failed: {reference={}, err=error: specifying a root certificates file with the insecure flag is not allowed, verb=get

This should be possible. The plugin should not depend on system configuration:

  • setting KUBECONFIG to something sane, or
  • always overwriting insecure-skip-tls-verify (even if false), or (best)
  • allowing ENV VARS to be set at the global cluster level.

Workaround: wrap in withEnv(["KUBECONFIG=${workspace}"]), with deleteDir() in a finally block (see the sketch below).
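
A rough sketch of that workaround; it assumes an empty kubeconfig file in the workspace is enough to shadow ~jenkins/.kube/config, and that 'mycluster' is defined in the Jenkins cluster configuration:

node {
    def kubeConfig = "${env.WORKSPACE}/kubeconfig.tmp"
    try {
        // point oc at a throwaway kubeconfig so the on-disk config is ignored
        writeFile file: kubeConfig, text: ''
        withEnv(["KUBECONFIG=${kubeConfig}"]) {
            openshift.withCluster('mycluster') {
                echo openshift.raw('version').out
            }
        }
    } finally {
        deleteDir()
    }
}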

Selector does not work with multiple labels

When I have 2 BuildConfigs in Openshift like this:

BuildConfig 1 labels:

ref: master
app: my-app

BuildConfig 2 labels:

ref: master
app: my-second-app

I expect

def buildConfigSelector = openshift.selector('bc', [ref: 'master', app: 'my-third-app'])
if (buildConfigSelector.count() > 0) {
 sh 'echo more than 0'
}
else {
 sh 'echo no more than 0'
}

to print no more than 0. But currently it selects either BuildConfig 1 or BuildConfig 2. I can verify this using

def result = buildConfigSelector.describe()
sh "echo $result"

Describe prints BuildConfig 1 or BuildConfig 2.
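
Until the map form works, a possible stop-gap sketch using the raw interface (on the CLI, comma-separated label selectors are ANDed, which is the behavior expected above):

// -o=name prints one name per line, or nothing when no BuildConfig matches
def result = openshift.raw( 'get', 'bc', '-l', 'ref=master,app=my-third-app', '-o=name' )
if (result.out.trim()) {
    sh 'echo more than 0'
} else {
    sh 'echo no more than 0'
}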

start build with -F option doesn't print logs

I'm using plugin version 0.9.6 with Jenkins 2.74.
$ oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

Server xxxxxxxxxxxxx
openshift v3.6.173.0.5
kubernetes v1.6.1+5115d708d7

When I try to start a build from Jenkins using the -F flag with this command
def bc = openshift.selector("bc/jws-app")
bc.startBuild('--follow') or bc.startBuild('-F')

Jenkins waits until the build is finished, but it does not print the logs to the console. If I start the build from the command line
oc start-build bc/jws-app -F

then it works correctly and I can see the logs in the console.
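
A possible workaround sketch, assuming startBuild() returns a selector for the build it created and that the selector logs() method accepts pass-through flags:

def bc = openshift.selector( 'bc/jws-app' )
// assumption: startBuild() hands back a selector for the new build
def build = bc.startBuild()
// stream the build logs into the Jenkins console as they are produced
build.logs( '-f' )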

Mechanism to get user token

I am currently with a customer (I am a Red Hatter, part of UK Consulting) who wants to use a Jenkins pipeline to deploy certain resources (DCs, secrets, etc.) into a project owned by a certain user -- our Jenkins instance does not have permissions within their project, and the project might be in a different cluster.

To deploy those resources, we get the user's token and then use that token as part of withCluster() later in the pipeline, so as to "impersonate" the user.

The following bit of code works, and gets back a token:

stages {
    stage ('Get token via Plugins') {
        steps {
            script {
                openshift.withCluster("${params.CLUSTER_ID}") {
                    openshift.withProject("${params.PROJECT_NAME}") {
                        openshift.raw("login", "--token='' --username=${params.SYSTEM_ACCOUNT_NAME} --password=${params.SYSTEM_ACCOUNT_PASSWORD}")

                        def token = openshift.raw("whoami", "--token='' -t")
                        echo "token == ${token.out}"
                    }   
                }
            }
        }
    }
}

However, it includes a hack, in that I provide the token as empty to trick the plugin into not adding it onto the command line.

Would it be possible to either provide a "whoami" method that returns the token, or an env var that tells the plugin not to set the token, so that I don't need to provide the empty one?

Or, can the above be achieved in a different way?

I know it's probably a bit of a strange use case, but it is something more customers might want to do.

NPE when missing object is not helpful to users

Whenever I forget to wrap a call in withCluster() or try to .object() on something that doesn't exist, I get:

java.lang.NullPointerException: Cannot invoke method isSkipTlsVerify() on null object
	at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:35)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:19)
	at com.openshift.jenkins.plugins.OpenShiftDSL.buildCommonArgs(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:368)
	at com.openshift.jenkins.plugins.OpenShiftDSL.buildCommonArgs(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:351)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector._asSingleMap(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1069)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector.object(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1085)

This is not really a helpful stack trace. If we could wrap it in some context that reminds the user to use withCluster(), or that suggests no object was selected, it would be much more helpful.

/cc @gabemontero @bparees @jupierce

new oc rsh partially broken

Hello, I tested the code introduced by #81 and ran into an issue. I built the plugin with:

docker run -it --rm -v $(pwd):/jenkins-plugin -v $(pwd)/../.m2:/root/.m2 -w /jenkins-plugin maven:latest mvn package

I installed the plugin; however, some commands still fail:

Pipeline Code:

  echo openshift.rsh(shortname,"ps ax").out

Result:

ERROR: rsh returned an error;
{reference={}, err=error: unknown gnu long option

Usage:
 ps [options]

 Try 'ps --help <simple|list|output|threads|misc|all>'
  or 'ps --help <s|l|o|t|m|a>'
 for additional help text.

For more details see ps(1).
command terminated with exit code 1, verb=rsh, cmd=oc --server=https://xxx.xxx.xxx.xxx:8443/ --namespace=myproject --token=XXXXX rsh centos7-im0t0k1k-1-62s9v ps ax --insecure-skip-tls-verify , out=, status=1}

It seems you have to move ALL the --param options in front of the command.

This works:

Pipeline:

echo openshift.rsh(shortname,"echo").out

Result:

[Pipeline] _OcAction
[Pipeline] echo
--insecure-skip-tls-verify

untilEach: does not work with a static selector?

Here is my Jenkinsfile:

pipeline {
    agent any
    stages {
        stage ('test') {
            steps {
                script {
                    List<String> names = Arrays.asList("pods/release-ci-binary-1-build", "pods/release-ci-binary-2-build", "pods/release-ci-binary-3-build", "pods/release-ci-binary-4-build", "pods/release-ci-binary-6-build")
                    openshift.withCluster() {
                        def selector = openshift.selector(names[0])
                        if (names.size() > 1) {
                            for (String name : names[1..-1]) {
                                selector = selector.union(openshift.selector(name))
                            }
                        }
                        selector.untilEach(1) {
                            echo "Saw ${it.name()}"
                        }
                    }
                }
            }
        }
    }
}

Here is the output (+100000000000 points for the error message):

Started by user stevekuznetsov
[Pipeline] node
Running on master in /var/lib/jenkins/jobs/testing/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo

[Pipeline] _OcContextInit
[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcWatch
Entering watch
Running watch closure body
[Pipeline] {
[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] echo
Saw pods/release-ci-binary-3-build
[Pipeline] echo
Saw pods/release-ci-binary-6-build
[Pipeline] echo
Saw pods/release-ci-binary-2-build
[Pipeline] echo
Saw pods/release-ci-binary-4-build
[Pipeline] echo
Saw pods/release-ci-binary-1-build
[Pipeline] }
[Pipeline] // _OcWatch
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: watch terminated with an error: 1
Finished: FAILURE

Argument in wrong position when executing Selector.rollout().status("-w")

Hi, I tried to run the command Selector.rollout().status("-w"), but it looks like the -w argument is placed in the wrong position, and the oc command returns an error:

ERROR: rollout:status returned an error;
{reference={}, err=Error: unknown shorthand flag: 'w' in -w

Usage:
oc rollout SUBCOMMAND [options]

Available Commands:
cancel cancel the in-progress deployment
history View rollout history
latest Start a new rollout for a deployment config with the latest state from its triggers
pause Mark the provided resource as paused
resume Resume a paused resource
retry Retry the latest failed rollout
status Show the status of the rollout
undo Undo a previous rollout

Use "oc --help" for more information about a given command.
Use "oc options" for a list of global command-line options (applies to all commands)., verb=rollout, cmd=oc rollout -w status deploymentconfig/jws-app --insecure-skip-tls-verify --server=xxxxxx --namespace=pablodemo-test --token=XXXXX , out=, status=1}

Groovy DSL support seems to be missing

While getting the pipeline syntax from a URL like http://localhost:8080/job/pipetti/pipeline-syntax/ and then clicking "IntelliJ IDEA GDSL", I get a text file which sadly does not reveal the openshift API (e.g. openshift.withCluster()), only the global variable openshift related to your client-plugin API.

Instead, the legacy OpenShift pipeline plugin reveals methods like openshiftBuild(), etc.
Having a proper and full GDSL file would make editing OpenShift-related APIs quicker and more effective.

The related gdsl is shown below:

//The global script scope
def ctx = context(scope: scriptScope())
contributor(ctx) {
method(name: 'build', type: 'Object', params: [job:'java.lang.String'], doc: 'Build a job')
method(name: 'build', type: 'Object', namedParams: [parameter(name: 'job', type: 'java.lang.String'), parameter(name: 'parameters', type: 'Map'), parameter(name: 'propagate', type: 'boolean'), parameter(name: 'quietPeriod', type: 'java.lang.Integer'), parameter(name: 'wait', type: 'boolean'), ], doc: 'Build a job')
method(name: 'echo', type: 'Object', params: [message:'java.lang.String'], doc: 'Print Message')
method(name: 'emailext', type: 'Object', namedParams: [parameter(name: 'subject', type: 'java.lang.String'), parameter(name: 'body', type: 'java.lang.String'), parameter(name: 'attachLog', type: 'boolean'), parameter(name: 'attachmentsPattern', type: 'java.lang.String'), parameter(name: 'compressLog', type: 'boolean'), parameter(name: 'from', type: 'java.lang.String'), parameter(name: 'mimeType', type: 'java.lang.String'), parameter(name: 'postsendScript', type: 'java.lang.String'), parameter(name: 'presendScript', type: 'java.lang.String'), parameter(name: 'recipientProviders', type: 'Map'), parameter(name: 'replyTo', type: 'java.lang.String'), parameter(name: 'to', type: 'java.lang.String'), ], doc: 'Extended Email')
method(name: 'emailextrecipients', type: 'Object', params: [recipientProviders:'Map'], doc: 'Extended Email Recipients')
method(name: 'error', type: 'Object', params: [message:'java.lang.String'], doc: 'Error signal')
method(name: 'input', type: 'Object', params: [message:'java.lang.String'], doc: 'Wait for interactive input')
method(name: 'input', type: 'Object', namedParams: [parameter(name: 'message', type: 'java.lang.String'), parameter(name: 'id', type: 'java.lang.String'), parameter(name: 'ok', type: 'java.lang.String'), parameter(name: 'parameters', type: 'Map'), parameter(name: 'submitter', type: 'java.lang.String'), parameter(name: 'submitterParameter', type: 'java.lang.String'), ], doc: 'Wait for interactive input')
method(name: 'isUnix', type: 'Object', params: [:], doc: 'Checks if running on a Unix-like node')
method(name: 'library', type: 'Object', params: [identifier:'java.lang.String'], doc: 'Load a shared library on the fly')
method(name: 'library', type: 'Object', namedParams: [parameter(name: 'identifier', type: 'java.lang.String'), parameter(name: 'changelog', type: 'java.lang.Boolean'), parameter(name: 'retriever', type: 'Map'), ], doc: 'Load a shared library on the fly')
method(name: 'libraryResource', type: 'Object', params: [resource:'java.lang.String'], doc: 'Load a resource file from a shared library')
method(name: 'mail', type: 'Object', namedParams: [parameter(name: 'subject', type: 'java.lang.String'), parameter(name: 'body', type: 'java.lang.String'), parameter(name: 'bcc', type: 'java.lang.String'), parameter(name: 'cc', type: 'java.lang.String'), parameter(name: 'charset', type: 'java.lang.String'), parameter(name: 'from', type: 'java.lang.String'), parameter(name: 'mimeType', type: 'java.lang.String'), parameter(name: 'replyTo', type: 'java.lang.String'), parameter(name: 'to', type: 'java.lang.String'), ], doc: 'Mail')
method(name: 'milestone', type: 'Object', params: [ordinal:'java.lang.Integer'], doc: 'The milestone step forces all builds to go through in order')
method(name: 'milestone', type: 'Object', namedParams: [parameter(name: 'ordinal', type: 'java.lang.Integer'), parameter(name: 'label', type: 'java.lang.String'), ], doc: 'The milestone step forces all builds to go through in order')
method(name: 'node', type: 'Object', params: [label:java.lang.String, body:'Closure'], doc: 'Allocate node')
method(name: 'properties', type: 'Object', params: [properties:'Map'], doc: 'Set job properties')
method(name: 'readTrusted', type: 'Object', params: [path:'java.lang.String'], doc: 'Read trusted file from SCM')
method(name: 'resolveScm', type: 'Object', namedParams: [parameter(name: 'source', type: 'Map'), parameter(name: 'targets', type: 'Map'), parameter(name: 'ignoreErrors', type: 'boolean'), ], doc: 'Resolves an SCM from an SCM Source and a list of candidate target branch names')
method(name: 'retry', type: 'Object', params: [count:int, body:'Closure'], doc: 'Retry the body up to N times')
method(name: 'script', type: 'Object', params: [body:'Closure'], doc: 'Run arbitrary Pipeline script')
method(name: 'sleep', type: 'Object', params: [time:'int'], doc: 'Sleep')
method(name: 'sleep', type: 'Object', namedParams: [parameter(name: 'time', type: 'int'), parameter(name: 'unit', type: 'java.util.concurrent.TimeUnit'), ], doc: 'Sleep')
method(name: 'stage', type: 'Object', params: [name:java.lang.String, body:'Closure'], doc: 'Stage')
method(name: 'stage', type: 'Object', params: [body:Closure], namedParams: [parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'concurrency', type: 'java.lang.Integer'), ], doc: 'Stage')
method(name: 'timeout', type: 'Object', params: [time:int, body:'Closure'], doc: 'Enforce time limit')
method(name: 'timeout', type: 'Object', params: [body:Closure], namedParams: [parameter(name: 'time', type: 'int'), parameter(name: 'unit', type: 'java.util.concurrent.TimeUnit'), ], doc: 'Enforce time limit')
method(name: 'timestamps', type: 'Object', params: [body:'Closure'], doc: 'Timestamps')
method(name: 'tool', type: 'Object', params: [name:'java.lang.String'], doc: 'Use a tool from a predefined Tool Installation')
method(name: 'tool', type: 'Object', namedParams: [parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'type', type: 'java.lang.String'), ], doc: 'Use a tool from a predefined Tool Installation')
method(name: 'waitUntil', type: 'Object', params: [body:'Closure'], doc: 'Wait for condition')
method(name: 'withCredentials', type: 'Object', params: [bindings:Map, body:'Closure'], doc: 'Bind credentials to variables')
method(name: 'withEnv', type: 'Object', params: [overrides:Map, body:'Closure'], doc: 'Set environment variables')
method(name: 'ws', type: 'Object', params: [dir:java.lang.String, body:'Closure'], doc: 'Allocate workspace')
method(name: 'catchError', type: 'Object', params: [body:'Closure'], doc: 'Advanced/Deprecated Catch error and set build result')
method(name: 'dockerFingerprintRun', type: 'Object', params: [containerId:'java.lang.String'], doc: 'Advanced/Deprecated Record trace of a Docker image run in a container')
method(name: 'dockerFingerprintRun', type: 'Object', namedParams: [parameter(name: 'containerId', type: 'java.lang.String'), parameter(name: 'toolName', type: 'java.lang.String'), ], doc: 'Record trace of a Docker image run in a container')
method(name: 'envVarsForTool', type: 'Object', namedParams: [parameter(name: 'toolId', type: 'java.lang.String'), parameter(name: 'toolVersion', type: 'java.lang.String'), ], doc: 'Fetches the environment variables for a given tool in a list of 'FOO=bar' strings suitable for the withEnv step.')
method(name: 'getContext', type: 'Object', params: [type:'Map'], doc: 'Advanced/Deprecated Get contextual object from internal APIs')
method(name: 'podTemplate', type: 'Object', params: [body:Closure], namedParams: [parameter(name: 'label', type: 'java.lang.String'), parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'activeDeadlineSeconds', type: 'int'), parameter(name: 'annotations', type: 'Map'), parameter(name: 'cloud', type: 'java.lang.String'), parameter(name: 'containers', type: 'Map'), parameter(name: 'envVars', type: 'Map'), parameter(name: 'idleMinutes', type: 'int'), parameter(name: 'imagePullSecrets', type: 'Map'), parameter(name: 'inheritFrom', type: 'java.lang.String'), parameter(name: 'instanceCap', type: 'int'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'nodeSelector', type: 'java.lang.String'), parameter(name: 'nodeUsageMode', type: 'java.lang.String'), parameter(name: 'serviceAccount', type: 'java.lang.String'), parameter(name: 'slaveConnectTimeout', type: 'int'), parameter(name: 'volumes', type: 'Map'), parameter(name: 'workingDir', type: 'java.lang.String'), parameter(name: 'workspaceVolume', type: 'Map'), ], doc: 'Define a podTemplate to use in the kubernetes plugin')
method(name: 'withContext', type: 'Object', params: [context:java.lang.Object, body:'Closure'], doc: 'Advanced/Deprecated Use contextual object from internal APIs within a block')
property(name: 'openshift', type: 'com.openshift.jenkins.plugins.pipeline.OpenShiftGlobalVariable')
property(name: 'docker', type: 'org.jenkinsci.plugins.docker.workflow.DockerDSL')
property(name: 'pipeline', type: 'org.jenkinsci.plugins.pipeline.modeldefinition.ModelStepLoader')
property(name: 'env', type: 'org.jenkinsci.plugins.workflow.cps.EnvActionImpl.Binder')
property(name: 'params', type: 'org.jenkinsci.plugins.workflow.cps.ParamsVariable')
property(name: 'currentBuild', type: 'org.jenkinsci.plugins.workflow.cps.RunWrapperBinder')
property(name: 'scm', type: 'org.jenkinsci.plugins.workflow.multibranch.SCMVar')
}
//Steps that require a node context
def nodeCtx = context(scope: closureScope())
contributor(nodeCtx) {
def call = enclosingCall('node')
if (call) {
method(name: 'bat', type: 'Object', params: [script:'java.lang.String'], doc: 'Windows Batch Script')
method(name: 'bat', type: 'Object', namedParams: [parameter(name: 'script', type: 'java.lang.String'), parameter(name: 'encoding', type: 'java.lang.String'), parameter(name: 'returnStatus', type: 'boolean'), parameter(name: 'returnStdout', type: 'boolean'), ], doc: 'Windows Batch Script')
method(name: 'checkout', type: 'Object', params: [scm:'Map'], doc: 'General SCM')
method(name: 'checkout', type: 'Object', namedParams: [parameter(name: 'scm', type: 'Map'), parameter(name: 'changelog', type: 'boolean'), parameter(name: 'poll', type: 'boolean'), ], doc: 'General SCM')
method(name: 'containerLog', type: 'Object', params: [name:'java.lang.String'], doc: 'Get container log from Kubernetes')
method(name: 'containerLog', type: 'Object', namedParams: [parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'limitBytes', type: 'int'), parameter(name: 'returnLog', type: 'boolean'), parameter(name: 'sinceSeconds', type: 'int'), parameter(name: 'tailingLines', type: 'int'), ], doc: 'Get container log from Kubernetes')
method(name: 'deleteDir', type: 'Object', params: [:], doc: 'Recursively delete the current directory from the workspace')
method(name: 'dir', type: 'Object', params: [path:java.lang.String, body:'Closure'], doc: 'Change current directory')
method(name: 'fileExists', type: 'Object', params: [file:'java.lang.String'], doc: 'Verify if file exists in workspace')
method(name: 'git', type: 'Object', params: [url:'java.lang.String'], doc: 'Git')
method(name: 'git', type: 'Object', namedParams: [parameter(name: 'url', type: 'java.lang.String'), parameter(name: 'branch', type: 'java.lang.String'), parameter(name: 'changelog', type: 'boolean'), parameter(name: 'credentialsId', type: 'java.lang.String'), parameter(name: 'poll', type: 'boolean'), ], doc: 'Git')
method(name: 'load', type: 'Object', params: [path:'java.lang.String'], doc: 'Evaluate a Groovy source file into the Pipeline script')
method(name: 'openshiftBuild', type: 'Object', params: [bldCfg:'java.lang.String'], doc: 'Trigger OpenShift Build')
method(name: 'openshiftBuild', type: 'Object', namedParams: [parameter(name: 'bldCfg', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'buildName', type: 'java.lang.String'), parameter(name: 'checkForTriggeredDeployments', type: 'java.lang.String'), parameter(name: 'commitID', type: 'java.lang.String'), parameter(name: 'env', type: 'Map'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'showBuildLogs', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'Trigger OpenShift Build')
method(name: 'openshiftCreateResource', type: 'Object', params: [jsonyaml:'java.lang.String'], doc: 'Create OpenShift Resource(s)')
method(name: 'openshiftCreateResource', type: 'Object', namedParams: [parameter(name: 'jsonyaml', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Create OpenShift Resource(s)')
method(name: 'openshiftDeleteResourceByJsonYaml', type: 'Object', params: [jsonyaml:'java.lang.String'], doc: 'Delete OpenShift Resource(s) from JSON/YAML')
method(name: 'openshiftDeleteResourceByJsonYaml', type: 'Object', namedParams: [parameter(name: 'jsonyaml', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Delete OpenShift Resource(s) from JSON/YAML')
method(name: 'openshiftDeleteResourceByKey', type: 'Object', namedParams: [parameter(name: 'types', type: 'java.lang.String'), parameter(name: 'keys', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Delete OpenShift Resource(s) by Key')
method(name: 'openshiftDeleteResourceByLabels', type: 'Object', namedParams: [parameter(name: 'types', type: 'java.lang.String'), parameter(name: 'keys', type: 'java.lang.String'), parameter(name: 'values', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Delete OpenShift Resource(s) using Labels')
method(name: 'openshiftDeploy', type: 'Object', params: [depCfg:'java.lang.String'], doc: 'Trigger OpenShift Deployment')
method(name: 'openshiftDeploy', type: 'Object', namedParams: [parameter(name: 'depCfg', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'Trigger OpenShift Deployment')
method(name: 'openshiftExec', type: 'Object', params: [pod:'java.lang.String'], doc: 'OpenShift Exec')
method(name: 'openshiftExec', type: 'Object', namedParams: [parameter(name: 'pod', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'arguments', type: 'Map'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'command', type: 'java.lang.String'), parameter(name: 'container', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'OpenShift Exec')
method(name: 'openshiftImageStream', type: 'Object', params: [:], doc: 'OpenShift ImageStreams')
method(name: 'openshiftImageStream', type: 'Object', namedParams: [parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'tag', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'changelog', type: 'boolean'), parameter(name: 'poll', type: 'boolean'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'OpenShift ImageStreams')
method(name: 'openshiftScale', type: 'Object', namedParams: [parameter(name: 'depCfg', type: 'java.lang.String'), parameter(name: 'replicaCount', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'verifyReplicaCount', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'Scale OpenShift Deployment')
method(name: 'openshiftTag', type: 'Object', namedParams: [parameter(name: 'srcStream', type: 'java.lang.String'), parameter(name: 'srcTag', type: 'java.lang.String'), parameter(name: 'destStream', type: 'java.lang.String'), parameter(name: 'destTag', type: 'java.lang.String'), parameter(name: 'alias', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'destinationAuthToken', type: 'java.lang.String'), parameter(name: 'destinationNamespace', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Tag OpenShift Image')
method(name: 'openshiftVerifyBuild', type: 'Object', params: [bldCfg:'java.lang.String'], doc: 'Verify OpenShift Build')
method(name: 'openshiftVerifyBuild', type: 'Object', namedParams: [parameter(name: 'bldCfg', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'checkForTriggeredDeployments', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'Verify OpenShift Build')
method(name: 'openshiftVerifyDeployment', type: 'Object', params: [depCfg:'java.lang.String'], doc: 'Verify OpenShift Deployment')
method(name: 'openshiftVerifyDeployment', type: 'Object', namedParams: [parameter(name: 'depCfg', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'replicaCount', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), parameter(name: 'verifyReplicaCount', type: 'java.lang.String'), parameter(name: 'waitTime', type: 'java.lang.String'), parameter(name: 'waitUnit', type: 'java.lang.String'), ], doc: 'Verify OpenShift Deployment')
method(name: 'openshiftVerifyService', type: 'Object', params: [svcName:'java.lang.String'], doc: 'Verify OpenShift Service')
method(name: 'openshiftVerifyService', type: 'Object', namedParams: [parameter(name: 'svcName', type: 'java.lang.String'), parameter(name: 'apiURL', type: 'java.lang.String'), parameter(name: 'authToken', type: 'java.lang.String'), parameter(name: 'namespace', type: 'java.lang.String'), parameter(name: 'retryCount', type: 'java.lang.String'), parameter(name: 'verbose', type: 'java.lang.String'), ], doc: 'Verify OpenShift Service')
method(name: 'powershell', type: 'Object', params: [script:'java.lang.String'], doc: 'PowerShell Script')
method(name: 'powershell', type: 'Object', namedParams: [parameter(name: 'script', type: 'java.lang.String'), parameter(name: 'encoding', type: 'java.lang.String'), parameter(name: 'returnStatus', type: 'boolean'), parameter(name: 'returnStdout', type: 'boolean'), ], doc: 'PowerShell Script')
method(name: 'pwd', type: 'Object', params: [:], doc: 'Determine current directory')
method(name: 'pwd', type: 'Object', namedParams: [parameter(name: 'tmp', type: 'boolean'), ], doc: 'Determine current directory')
method(name: 'readFile', type: 'Object', params: [file:'java.lang.String'], doc: 'Read file from workspace')
method(name: 'readFile', type: 'Object', namedParams: [parameter(name: 'file', type: 'java.lang.String'), parameter(name: 'encoding', type: 'java.lang.String'), ], doc: 'Read file from workspace')
method(name: 'sh', type: 'Object', params: [script:'java.lang.String'], doc: 'Shell Script')
method(name: 'sh', type: 'Object', namedParams: [parameter(name: 'script', type: 'java.lang.String'), parameter(name: 'encoding', type: 'java.lang.String'), parameter(name: 'returnStatus', type: 'boolean'), parameter(name: 'returnStdout', type: 'boolean'), ], doc: 'Shell Script')
method(name: 'stash', type: 'Object', params: [name:'java.lang.String'], doc: 'Stash some files to be used later in the build')
method(name: 'stash', type: 'Object', namedParams: [parameter(name: 'name', type: 'java.lang.String'), parameter(name: 'allowEmpty', type: 'boolean'), parameter(name: 'excludes', type: 'java.lang.String'), parameter(name: 'includes', type: 'java.lang.String'), parameter(name: 'useDefaultExcludes', type: 'boolean'), ], doc: 'Stash some files to be used later in the build')
method(name: 'step', type: 'Object', params: [delegate:'Map'], doc: 'General Build Step')
method(name: 'svn', type: 'Object', params: [url:'java.lang.String'], doc: 'Subversion')
method(name: 'svn', type: 'Object', namedParams: [parameter(name: 'url', type: 'java.lang.String'), parameter(name: 'changelog', type: 'boolean'), parameter(name: 'poll', type: 'boolean'), ], doc: 'Subversion')
method(name: 'tm', type: 'Object', params: [stringWithMacro:'java.lang.String'], doc: 'Expand a string containing macros')
method(name: 'unstash', type: 'Object', params: [name:'java.lang.String'], doc: 'Restore files previously stashed')
method(name: 'validateDeclarativePipeline', type: 'Object', params: [path:'java.lang.String'], doc: 'Validate a file containing a Declarative Pipeline')
method(name: 'wrap', type: 'Object', params: [delegate:Map, body:'Closure'], doc: 'General Build Wrapper')
method(name: 'writeFile', type: 'Object', namedParams: [parameter(name: 'file', type: 'java.lang.String'), parameter(name: 'text', type: 'java.lang.String'), parameter(name: 'encoding', type: 'java.lang.String'), ], doc: 'Write file to workspace')
method(name: '_OcAction', type: 'Object', namedParams: [parameter(name: 'server', type: 'java.lang.String'), parameter(name: 'project', type: 'java.lang.String'), parameter(name: 'verb', type: 'java.lang.String'), parameter(name: 'verbArgs', type: 'java.util.List'), parameter(name: 'userArgs', type: 'java.util.List'), parameter(name: 'options', type: 'java.util.List'), parameter(name: 'verboseOptions', type: 'java.util.List'), parameter(name: 'token', type: 'java.lang.String'), parameter(name: 'streamStdOutToConsolePrefix', type: 'java.lang.String'), parameter(name: 'reference', type: 'Map'), parameter(name: 'logLevel', type: 'int'), ], doc: 'Internal utility function for OpenShift DSL')
method(name: '_OcContextInit', type: 'Object', params: [:], doc: 'Advanced/Deprecated Internal utility function for OpenShift DSL')
method(name: '_OcWatch', type: 'Object', params: [body:Closure], namedParams: [parameter(name: 'server', type: 'java.lang.String'), parameter(name: 'project', type: 'java.lang.String'), parameter(name: 'verb', type: 'java.lang.String'), parameter(name: 'verbArgs', type: 'java.util.List'), parameter(name: 'userArgs', type: 'java.util.List'), parameter(name: 'options', type: 'java.util.List'), parameter(name: 'verboseOptions', type: 'java.util.List'), parameter(name: 'token', type: 'java.lang.String'), parameter(name: 'logLevel', type: 'int'), ], doc: 'Internal utility function for OpenShift DSL')
method(name: 'archive', type: 'Object', params: [includes:'java.lang.String'], doc: 'Advanced/Deprecated Archive artifacts')
method(name: 'archive', type: 'Object', namedParams: [parameter(name: 'includes', type: 'java.lang.String'), parameter(name: 'excludes', type: 'java.lang.String'), ], doc: 'Archive artifacts')
method(name: 'container', type: 'Object', params: [name:java.lang.String, body:'Closure'], doc: 'Advanced/Deprecated Run build steps in a container')
method(name: 'dockerFingerprintFrom', type: 'Object', namedParams: [parameter(name: 'dockerfile', type: 'java.lang.String'), parameter(name: 'image', type: 'java.lang.String'), parameter(name: 'toolName', type: 'java.lang.String'), ], doc: 'Record trace of a Docker image used in FROM')
method(name: 'unarchive', type: 'Object', params: [:], doc: 'Advanced/Deprecated Copy archived artifacts into the workspace')
method(name: 'unarchive', type: 'Object', namedParams: [parameter(name: 'mapping', type: 'Map'), ], doc: 'Copy archived artifacts into the workspace')
method(name: 'withDockerContainer', type: 'Object', params: [image:java.lang.String, body:'Closure'], doc: 'Advanced/Deprecated Run build steps inside a Docker container')
method(name: 'withDockerContainer', type: 'Object', params: [body:Closure], namedParams: [parameter(name: 'image', type: 'java.lang.String'), parameter(name: 'args', type: 'java.lang.String'), parameter(name: 'toolName', type: 'java.lang.String'), ], doc: 'Run build steps inside a Docker container')
method(name: 'withDockerRegistry', type: 'Object', params: [registry:Map, body:'Closure'], doc: 'Advanced/Deprecated Sets up Docker registry endpoint')
method(name: 'withDockerServer', type: 'Object', params: [server:Map, body:'Closure'], doc: 'Advanced/Deprecated Sets up Docker server endpoint')
}
}

// Errors on:
// class org.jenkinsci.plugins.workflow.cps.steps.ParallelStep: There's no @DataBoundConstructor on any constructor of class org.jenkinsci.plugins.workflow.cps.steps.ParallelStep

Auth token not masked when using openshift.verbose()

When running with openshift.verbose() enabled, the token stored in Jenkins as the os-developer "OpenShift Token for OpenShift Client Plugin" credential is logged in plain text.

Example pipeline:

pipeline {
    agent any
    parameters {
        string(name: 'PROJECT', description: 'The project to describe resources from.')
    }
    stages {
        stage('Describe resources'){
            steps {
                script{
                    openshift.verbose()
                    openshift.withCluster( ) {
                        openshift.withProject( "${params.PROJECT}" ) {
                            openshift.doAs( 'os-developer' ) {
                                def selector = openshift.selector( 'all' )
                                selector.describe()
                            }
                        }
                    }
                    
                }
            }
        }
    }
}

The command is masked correctly:

Command> oc --server=https://172.30.0.1:443 --namespace=testproj --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --loglevel=8 --token=XXXXX describe all

But the token is printed in clear text when http traffic is logged:

...
StdErr> I1221 21:11:49.507844 1094 round_trippers.go:414] GET https://172.30.0.1:443/oapi/v1/namespaces/testproj/buildconfigs?includeUninitialized=true
I1221 21:11:49.507949 1094 round_trippers.go:421] Request Headers:
I1221 21:11:49.507955 1094 round_trippers.go:424] Accept: application/json
I1221 21:11:49.507960 1094 round_trippers.go:424] User-Agent: oc/v1.8.1+0d5291c (linux/amd64) kubernetes/0d5291c
I1221 21:11:49.507963 1094 round_trippers.go:424] Authorization: Bearer hrHoOFyeANcnhVKsXgLYs2o5M4Q-INYf72bTek6Ou_A
I1221 21:11:49.536286 1094 round_trippers.go:439] Response Status: 200 OK in 28 milliseconds
I1221 21:11:49.536326 1094 round_trippers.go:442] Response Headers:
I1221 21:11:49.536332 1094 round_trippers.go:445] Content-Length: 797
...

Compilation error with latest jenkins image

Using the latest Jenkins image (which was upgraded to use the Pipeline Shared Groovy Libraries Plugin v2.5, https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Shared+Groovy+Libraries+Plugin), a compilation exception is thrown when we try to use the client plugin:

hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy: 38: unable to resolve class java.lang$Enum 
 @ line 38, column 5.
       enum ContextId implements Serializable{
       ^

jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy: -1: unable to resolve class java.lang$Enum 
 @ line -1, column -1.
2 errors

	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
	at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:946)
	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:593)
	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:542)
	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:254)
	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:250)
	at groovy.lang.GroovyClassLoader.recompile(GroovyClassLoader.java:766)
	at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:718)
	at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:787)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
	at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:677)
	at groovy.lang.GroovyClassLoader$InnerLoader.loadClass(GroovyClassLoader.java:425)
	at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:787)
	at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:775)
	at com.openshift.jenkins.plugins.pipeline.OpenShiftGlobalVariable.getValue(OpenShiftGlobalVariable.java:34)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.getProperty(CpsScript.java:121)
	at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:172)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:456)
	at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:243)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onGetProperty(GroovyInterceptor.java:52)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:308)
	at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
	at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
	at WorkflowScript.run(WorkflowScript:3)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
	at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:74)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE

Issue switching to another cluster

Hi, I changed the hostname in my cluster configuration in Jenkins and now when I run my pipeline I get this error:

ERROR: new-project returned an error;
{reference={}, err=error: unable to read certificate-authority /var/run/secrets/kubernetes.io/serviceaccount/ca.crt for oldhost.com:8443 due to open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory, verb=new-project, cmd=oc new-project petclinic-build --skip-config-write --server=https://newhost.com --token=XXXXX , out=, status=1}

I am using an external Jenkins instance outside OpenShift.

How to verify a deployment?

Hi,

With https://github.com/openshift/jenkins-plugin, you can use the helper step openshiftVerifyDeployment to check whether the deployment actually fulfills your specs. It's easy.

So how do you do the same thing with the OpenShift client plug-in?

For a build, the status is expressed clearly: we wait for the build to become Completed, and if it does, the build finished successfully.
There is no such status for a deployment, which makes it harder to verify.

Regards,
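For reference, something similar is possible with this plug-in's DSL. The sketch below is a hedged approximation, not an official equivalent of openshiftVerifyDeployment: it assumes a DeploymentConfig named 'petclinic' (a placeholder) and that your plug-in version provides the selector's rollout() and object() helpers.

openshift.withCluster() {
    openshift.withProject() {
        def dc = openshift.selector('dc', 'petclinic') // placeholder name

        // 'oc rollout status' waits for the latest deployment to finish
        // and fails the step if the rollout does not succeed.
        dc.rollout().status()

        // Alternatively, inspect the live object and compare replica counts.
        def obj = dc.object()
        if (obj.status.availableReplicas != obj.spec.replicas) {
            error "DeploymentConfig ${obj.metadata.name} is not fully available"
        }
    }
}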

Status -2 returned by oc client

Hi, when I try to run any oc command using the Jenkins plug-in, I get an error like this:

ERROR: tag returned an error;
{reference={}, err=, verb=tag, cmd=oc tag pablo-test/petclinic:latest pablodemo-test/petclinic:latest --insecure-skip-tls-verify --server=xxxxxxx --namespace=pablo-test --token=XXXXX , out=, status=-2}

Not sure if it is a proxy issue; if it were, I would expect some error message. I tried to set the env variable http_proxy, but nothing changed.
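For what it's worth, a status of -2 with an empty err stream suggests the oc process never produced output at all (for example, it failed to launch), rather than oc reporting a failure; that is an inference from the output above, not confirmed plug-in behavior. If a proxy really is required, one hedged thing to try is making the proxy variables visible to the node step that launches oc, for example via the core withEnv step. The agent label, cluster name, and proxy URL below are placeholders:

node('maven') { // placeholder agent label
    withEnv(['HTTP_PROXY=http://proxy.example.com:3128',  // placeholder proxy URL
             'HTTPS_PROXY=http://proxy.example.com:3128']) {
        openshift.withCluster('mycluster') { // placeholder cluster name
            // Processes launched inside this block, including oc,
            // inherit the environment injected by withEnv on this node.
            echo openshift.raw('version').out
        }
    }
}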

'incorrect namespace' error from create() with objects for other namespaces

This issue was recently raised on the OpenShift Red Hat email list and is recreated here for comment/consideration.

I have a template that can be successfully processed and the objects created using oc from the command line. The template is supposed to run in one namespace (let's call it Y), but it creates secrets that are placed in another namespace/project (let's call that X). The namespaces are managed by the same user. Both namespaces exist, and the following command, when run on the command line, is valid and produces the expected results:

oc process -f <template-file> | oc create -f -

But you cannot do this in a Jenkins pipeline without getting an error. Here's my pipeline code...

openshift.withCluster("${CLUSTER}") {
    openshift.withProject("${Y}") {
        def objs = openshift.process('--filename=<template-file>')
        openshift.create(objs)
    }
}

When I do this I get the following error reported in the Jenkins Job output:

err=error: the namespace from the provided object "X" does not match the namespace "Y". You must pass '--namespace=X' to perform this operation., verb=create

How do you replicate, with the pipeline plug-in, actions that are legitimate from the command line? The plug-in appears to assume that the objects passed to create() must reside in the project namespace, but they do not have to.

Incidentally, I can work around the problem by iterating through the list of objects returned by the call to process(), i.e. by doing this...

def objs = openshift.process('--filename=<template-file>')
for (obj in objs) {
    if (obj.metadata.namespace == "X") {
        openshift.create(obj, "--namespace=X")
    } else {
        openshift.create(obj)
    }
}

Shouldn't create(), like its command-line counterpart, honour the object namespace?

Prevent linefeeds from being included on oc command line

When a linefeed is accidentally included in the token value for a credential used by the plug-in, the script generated to execute oc contains an unexecutable line with a portion of the token on it.

In ClientCommandBuild.java (constructor or buildCommand(..)), an exception should be thrown if the token variable contains linefeeds (\r or \n). Every other component of the command line could technically be checked as well, but the others are less likely to contain a linefeed since they come from simple input fields in the Jenkins UI rather than textareas. A minimal sketch of such a guard follows.
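The sketch is written in Groovy for illustration; the method name and message are hypothetical, not the plug-in's actual API.

// Hypothetical helper: reject any command component containing a linefeed
// before it is placed on the generated oc command line.
static void assertNoLinefeeds(String componentName, String value) {
    if (value != null && (value.contains('\n') || value.contains('\r'))) {
        throw new IllegalArgumentException(
            "Command component '" + componentName + "' contains a linefeed and would corrupt the oc invocation")
    }
}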

Error when using plugin in custom DSL

I am trying to extract a piece of code that I will be reusing in different scripts. Currently, it is directly in the Jenkinsfile. For example:

stage('Check service directly from Jenkinsfile') {
    steps {
        script {
            openshift.withCluster("${OPENSHIFT_CLUSTER}") {
                openshift.withProject("${OPENSHIFT_PROJECT}") {
                    def serviceSelector = openshift.selector('service', "${OPENSHIFT_SERVICE}")
                    def serviceDescription = serviceSelector.describe()
                    echo "service: ${serviceDescription}"
                }
            }
        }
    }
}

This is working as expected.

When extracting the code into a Groovy method (file vars/getServiceDescription.groovy):

def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body

    script {
        openshift.withCluster("${config.cluster}") {
            openshift.withProject("${config.project}") {
                def serviceSelector = openshift.selector('service', "${config.service}")
                def serviceDescription = serviceSelector.describe()
                echo "service: ${serviceDescription}"
            }
        }
    }
}

and calling it from my Jenkinsfile:

stage('Check service from external method') {
    steps {
        getServiceDescription cluster: "${OPENSHIFT_CLUSTER}", project: "${OPENSHIFT_PROJECT}", service: "${OPENSHIFT_SERVICE}"
    }
}

the resulting error/stacktrace is:

java.nio.file.NoSuchFileException: /var/run/secrets/kubernetes.io/serviceaccount/token
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
	at java.nio.file.Files.newByteChannel(Files.java:361)
	at java.nio.file.Files.newByteChannel(Files.java:407)
	at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
	at java.nio.file.Files.newInputStream(Files.java:152)
	at hudson.FilePath.read(FilePath.java:1771)
	at org.jenkinsci.plugins.workflow.steps.ReadFileStep$Execution.run(ReadFileStep.java:96)
	at org.jenkinsci.plugins.workflow.steps.ReadFileStep$Execution.run(ReadFileStep.java:86)
	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1$1.call(SynchronousNonBlockingStepExecution.java:49)
	at hudson.security.ACL.impersonate(ACL.java:260)
	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1.run(SynchronousNonBlockingStepExecution.java:46)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Any help would be appreciated.
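One hedged observation: the Jenkinsfile invokes getServiceDescription with named parameters (i.e. a Map), while the var treats body as a closure and never actually invokes it, so config remains empty; withCluster("null") then falls back to the default in-cluster configuration and tries to read the service-account token, which matches the stack trace above. A sketch of the var rewritten to accept the map-style call directly (assuming the same parameter names):

def call(Map config) {
    // config.cluster, config.project, and config.service come from the
    // named parameters at the call site in the Jenkinsfile.
    openshift.withCluster("${config.cluster}") {
        openshift.withProject("${config.project}") {
            def serviceSelector = openshift.selector('service', "${config.service}")
            def serviceDescription = serviceSelector.describe()
            echo "service: ${serviceDescription}"
        }
    }
}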

Conditional "when" directive is ignored

When the "when" directive is used within a declarative pipeline, it is ignored if the pipeline is wrapped in the OpenShift client plug-in's openshift.withCluster() block.

In the following pipeline, both the Build and Dev stages run. If the openshift.withCluster() block is removed, then only Dev runs.

openshift.withCluster() {
    pipeline {
        agent {
            label 'maven'
        }
        stages {
            stage('Build') {
                when {
                    expression {
                        return false
                    }
                }
                steps {
                    echo "Build"
                }
            }
            stage('Dev') {
                when {
                    expression {
                        return true
                    }
                }
                steps {
                    echo "Dev"
                }
            }
        }
    }
}

Tested with plug-in 1.0.2 and Jenkins 2.73.3 on OpenShift v3.7.0-rc.0+d076bb5-181.
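A hedged rearrangement that may avoid this: keep pipeline { ... } as the outermost closure and defer withCluster() to a script step inside each stage, so the declarative model (including "when") is evaluated normally. Sketch for the Build stage (Dev is analogous):

pipeline {
    agent {
        label 'maven'
    }
    stages {
        stage('Build') {
            when {
                expression {
                    return false
                }
            }
            steps {
                script {
                    // withCluster() moved inside the stage instead of
                    // wrapping the whole pipeline.
                    openshift.withCluster() {
                        echo "Build"
                    }
                }
            }
        }
    }
}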

Command not found when using tag action with insecure

OpenShift Version: v3.6.173.0.21
Jenkins Version: 2.46.3
Plugins: https://github.com/RHsyseng/jenkins-on-openshift/blob/master/jenkins/plugins.txt

Observations: judging from the output and the error, it looks like --insecure-skip-tls-verify ends up on a new line.

Example Pipeline script section

script {
    withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: "registry-api", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {

        openshift.withCluster('insecure://internal-registry.host.prod.eng.rdu2.redhat.com:8443', env.PASSWORD) {
            openshift.withProject('lifecycle') {

                env.VERSION = readFile('app/VERSION')
                echo "Hello from project: ${openshift.project()}"

                istag = openshift.selector('istag')

                echo "${istag}"

                println("${openshift.raw('status').out}")

                openshift.verbose()
                openshift.tag("${openshift.project()}/${env.IMAGE_STREAM_NAME}:${env.GIT_COMMIT}", "${openshift.project()}/${env.IMAGE_STREAM_NAME}:${env.VERSION}")
                openshift.verbose(false)

            }
        }
    }
}

Output and error

Hello from project: lifecycle
[Pipeline] echo
selector([name=istag],[labels=null],[list=null])
[Pipeline] _OcAction
[Pipeline] echo
In project Software lifecycle (lifecycle) on server https://internal-registry.host.prod.eng.rdu2.redhat.com:8443

1 warning identified, use 'oc status -v' to see details.

[Pipeline] _OcAction
Verbose sub-step output:
	Command> oc tag lifecycle/nodejs-mongo-persistent:b9ba509 lifecycle/nodejs-mongo-persistent:1.0
 --insecure-skip-tls-verify --server=https://internal-registry.host.prod.eng.rdu2.redhat.com:8443 --namespace=lifecycle --loglevel=8 --token=XXXXX 
	Status> 127
	StdOut>
	StdErr> /var/lib/jenkins/jobs/jcallen/workspace@tmp/durable-a15b1de2/script.sh: line 3: --insecure-skip-tls-verify: command not found
	Reference> {}
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Stage - OpenShift DeploymentConfig)
Stage 'Stage - OpenShift DeploymentConfig' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Stage - Test)
Stage 'Stage - Test' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Production - Push Image)
Stage 'Production - Push Image' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: tag returned an error;
{reference={}, err=/var/lib/jenkins/jobs/jcallen/workspace@tmp/durable-a15b1de2/script.sh: line 3: --insecure-skip-tls-verify: command not found, verb=tag, cmd=oc tag lifecycle/nodejs-mongo-persistent:b9ba509 lifecycle/nodejs-mongo-persistent:1.0
 --insecure-skip-tls-verify --server=https://internal-registry.host.prod.eng.rdu2.redhat.com:8443 --namespace=lifecycle --loglevel=8 --token=XXXXX , out=, status=127}
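Judging from the output, the linefeed most likely comes from readFile('app/VERSION'): readFile returns the file contents verbatim, trailing newline included, and the interpolated ${env.VERSION} then splits the generated oc command across lines. A hedged fix is to trim the value before using it:

// Trim the trailing newline that readFile preserves from the file on disk.
env.VERSION = readFile('app/VERSION').trim()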
