deis / jenkins-jobs
DSL representations of Jenkins jobs for Deis
License: MIT License
Right now our release process is such that someone creates a PR that adds a chart with some semver notation, such as workflow-beta2, and commits using our standard commit message style: chore(release): ...
We then generate a special Jenkins job just to test that one PR.
It would be neat if the Jenkins job that tests PRs were aware of this special key (chore(release): ...) in a git commit message and then figured out two things:
It would combine the findings above, "run the right tests", and vet our release for us.
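Detecting the special key could start with a small parser. This is a hypothetical sketch, assuming the marker sits on the commit subject line with the chart version as the remainder; the function name and exact message layout are illustrative, not a documented convention:

```python
import re

# Assumed layout: "chore(release): <version>" on the subject line.
RELEASE_RE = re.compile(r"^chore\(release\):\s*(?P<version>\S+)")

def parse_release_commit(message):
    """Return the chart version from a release commit message, or None."""
    match = RELEASE_RE.match(message.strip())
    return match.group("version") if match else None
```

A job could call this on the triggering commit's message and branch into the release-vetting path only when it returns a version.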
As the release automation improvements for beta4 wrap up, update the Release Checklist with new processes added via the corresponding meta ticket
This proposes a solution for eliminating the manual Step #2 from the release-checklist.md, which currently involves cloning this repo, updating a value, and pushing a commit to master.
Instead, we can create a cron (daily?) job that makes the appropriate GitHub API call and determines the next immediate 'open' milestone. Once this value is acquired, the job would determine whether it differs from the current value. If so, it would update via a commit to master; if not, it would finish the build with a no-op.
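The milestone lookup and the no-op decision could look roughly like this; a sketch only, assuming the GitHub milestones API (which supports `state=open` and `sort=due_on`), with hypothetical function names:

```python
import json
from urllib.request import Request, urlopen

GITHUB_API = "https://api.github.com"

def fetch_open_milestones(owner, repo, token):
    """Ask the GitHub API for open milestones, soonest due date first."""
    url = (f"{GITHUB_API}/repos/{owner}/{repo}/milestones"
           "?state=open&sort=due_on&direction=asc")
    req = Request(url, headers={"Authorization": f"token {token}"})
    with urlopen(req) as resp:
        return json.load(resp)

def next_milestone_update(milestones, current):
    """Return the new milestone title if it differs from `current`,
    else None (the cron job would finish the build as a no-op)."""
    if not milestones:
        return None
    nxt = milestones[0]["title"]
    return nxt if nxt != current else None
```

When `next_milestone_update` returns a title, the job would commit the new value to master; otherwise it exits cleanly.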
We need a way to vet a given DSL change set -- minimally, verify that it parses OK -- rather than running it manually through our prod Jenkins server.
A promising path forward is to set up a solution using the Gradle Jenkins Plugin, ideally with an on-demand Dockerized Jenkins server.
So close...
https://ci.deis.io/job/deis-seed-repos/5/console
The GitHub API token present in the job is not present in the container.
We need a way for each <component>(-pr) job to show the downstream workflow-test(-pr) job it kicked off. (Currently one has to hunt through the latter jobs looking for the one referencing the right upstream job.)
The current idea is to use https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin, which will necessitate a Jenkins restart (scheduled for this evening, 3/30, at 6:30 MST).
Before we can execute #29 we have to move the following jobs to the Jenkins Job DSL in this repo:
Delete (might require a jenkins restart):
Part of Meta ticket #117
Estimated size: [M-L]
Convert the remaining jobs in jenkins-jobs to the Jenkinsfile approach developed in #119 and #120.
The workflow-test job has logic (seen here) which will auto-increment a vetted component image tag in the charts repo. This same auto-increment commit, however, invokes another (redundant) build of workflow-test, since this job is looking for git pushes in said charts repo.
It would be nice to detect this and not use resources to run redundant tests.
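One detection approach is to mark the auto-increment commit and have the job skip itself when the triggering commit carries that marker. A minimal sketch; the "chore(bump):" prefix is an assumed marker, not the repo's actual commit style:

```python
# Assumed marker written by the auto-increment logic itself.
AUTOBUMP_PREFIX = "chore(bump):"

def is_redundant_trigger(commit_message):
    """True when the triggering commit is our own tag auto-increment,
    in which case the job should exit early instead of re-running e2e."""
    return commit_message.strip().startswith(AUTOBUMP_PREFIX)
```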
This job should run the logic needed to claim a k8s cluster for use by the downstream e2e test job.
Reference the meta issue for upstream/downstream configuration.
Testing for Deis v1 should be less urgent but will not go away for a long time, so the all-important "test-ec2" job and its friends should be made Groovy and checked in here.
#104 adds the ability to require other commits for inclusion in the build/test pipeline.
It would be preferable if, instead of having to supply the commit sha in Requires <repo_name>#<commit_sha>, one could just reference the PR number (Requires <repo_name>#<PR_number>), as is the usual protocol when referencing another issue/PR on GitHub.
The API call would probably be this one; see here for an example of curl-ing the commit status GH API.
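A hedged sketch of how the lookup could work: parse the Requires <repo_name>#<PR_number> token, then resolve the PR number to its head commit sha via the GitHub pulls API (GET /repos/:owner/:repo/pulls/:number returns head.sha). Function names here are illustrative:

```python
import json
import re
from urllib.request import Request, urlopen

REQUIRES_RE = re.compile(r"Requires\s+(?P<repo>[\w.-]+)#(?P<pr>\d+)")

def parse_requires(line):
    """Extract (repo, pr_number) from a 'Requires <repo_name>#<PR_number>' line."""
    m = REQUIRES_RE.search(line)
    return (m.group("repo"), int(m.group("pr"))) if m else None

def head_sha_for_pr(owner, repo, pr_number, token):
    """Resolve a PR number to its head commit sha via the GitHub pulls API."""
    req = Request(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}",
        headers={"Authorization": f"token {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["head"]["sha"]
```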
After #124 we are set up to build out workflow-cli(-pr) jobs. They won't necessarily follow most of the common logic in the existing component_jobs.groovy, and the build/(unit) test/deploy logic is currently handled by Travis/AppVeyor, so we just want jobs to be kicked off on commits to master/PR branches (respectively) and to hand the commit sha, when applicable (for PRs), to the downstream e2e test job.
TODOs
See deis/workflow-e2e#188 for details of SKIP_FLAG and how to run them.
Let's bring up Jenkins 2.0 host+slaves on a k8s cluster as part of the effort to 'Sip our own champagne'. Learnings from this 'Spike' will inform how best to move forward with a Jenkins Champagne cluster.
Thinking a combo of a Jenkins Master Docker container, a handful of Jenkins Slave Docker containers and this Jenkins K8s Plugin, plus perhaps creating a Jenkins Helm chart...
Part of Meta ticket #117
Estimated size: [M]
Take one Workflow component repo and convert it to using Jenkins Pipeline/Jenkinsfile, ensuring that changes to the Jenkinsfile on origin are tested. Currently there is no way to know what job/build kicked off a current job.
This job should run the logic needed to bump the chart version of workflow-dev if the upstream e2e test job is successful.
Reference the meta issue for upstream/downstream configuration.
Address the TODO seen here.
As we move our CI infrastructure to Jenkins, less functionality will be needed in Travis. Eventually, we are planning to cease the dependence on Travis entirely, but this ticket tracks the reduced scope of moving build and deploy steps for master commits to Jenkins.
Add build/deploy steps to Jenkins component_job(s):
Remove Travis build/deploy steps from components:
This meta tracks the tickets needed to replace existing job definitions with in-repo Jenkins Pipeline representations:
Bonus/if still warranted: Wrap up global DSL into 'official' Pipeline plugin(s)
Once we have confidence in https://ci.deis.io/job/workflow-test-parallel/ we should have both workflow-test(-pr) jobs run in parallel by default.
TODOs
workflow_test.groovy, but keep the PARALLEL_TESTS env var as true
This job should run the logic needed to 'clean up' a k8s cluster after the upstream e2e test job finishes (regardless of build status.)
Reference the meta issue for upstream/downstream configuration.
Modify the workflow_test such that it is only responsible for running the e2e tests. When finished, it will kick off the applicable downstream job(s): #24 regardless of job result, and #25 on success.
Our release process is still a highly manual process. This ticket attempts to list/track items for improving said process.
replace values that change with each release with template values deis/charts#228
add CHANGELOG.md generator deis/docker-go-dev#35
jenkins job for testing (release) chart pr: #48
workflow-test-release jenkins job #68
automate release chart(s) creation deis/deisrel#1
(more to come)
If we set $K8S_CLUSTER_NAME then we can define it to be something like ${JOB_NAME}-${BUILD_NUMBER}, which makes it more discoverable which clusters are running which jobs.
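A sketch of how a helper script could derive that name from the standard Jenkins JOB_NAME and BUILD_NUMBER env vars. The sanitization step is an assumption added to satisfy k8s-style naming rules (lowercase alphanumerics and dashes); the function name is invented:

```python
import os
import re

def cluster_name(env=os.environ):
    """Compose a discoverable cluster name from Jenkins build env vars,
    mapping anything outside [a-z0-9-] to a dash."""
    raw = f"{env['JOB_NAME']}-{env['BUILD_NUMBER']}"
    return re.sub(r"[^a-z0-9-]", "-", raw.lower())
```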
This job will apply seed-repo to any component under our control. This will help with applying any changes made to seed-repo immediately without any user intervention.
Usage should be:
related to work at #55 for automating release tasks.
Right now we only have 2 jobs in the component 'release' pipeline: the <component>(-pr) job, which builds and deploys its immutable artifact (Docker image tag), and workflow-test(-pr), which runs the tests against that artifact, officially promoting it to the chart on success.
We have relied on having the first job block on the downstream test job in order to update the GitHub PR status. This already presents bottleneck issues and will only be exacerbated as more jobs are added to the pipeline (see #20 for more info on this endeavor).
Therefore, we would rather remove this blocking behavior and instead have each job in the pipeline be able to run a script that updates the GitHub PR status with its build result. See https://developer.github.com/v3/repos/statuses/ for info on how this can be done.
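Per the linked statuses API, each job would POST its own result against the commit sha. A sketch with invented helper names; the states and endpoint shape come from the API docs, everything else is illustrative:

```python
import json
from urllib.request import Request, urlopen

VALID_STATES = {"pending", "success", "failure", "error"}  # per the statuses API

def status_payload(state, target_url, context):
    """Build the JSON body for POST /repos/:owner/:repo/statuses/:sha."""
    if state not in VALID_STATES:
        raise ValueError(f"invalid commit state: {state}")
    return {"state": state, "target_url": target_url, "context": context}

def post_commit_status(owner, repo, sha, token, **kwargs):
    """Hypothetical helper each pipeline job could call with its own result."""
    req = Request(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(status_payload(**kwargs)).encode(),
        headers={"Authorization": f"token {token}",
                 "Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```

With something like this, each job reports independently (e.g. with context "ci/jenkins/e2e") and no job needs to block on its downstream.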
See https://ci.deis.io/job/controller-pr/911/console:
14:05:48 Successfully built 8fb640adf706
14:05:48 docker tag deisci/controller:git-d7d1c9d deisci/controller:canary
14:05:48 docker push deisci/controller:git-d7d1c9d
14:05:48 The push refers to a repository [docker.io/deisci/controller]
14:05:49 8c1fc4ab0488: Preparing
14:05:49 6b772acdb3a3: Preparing
14:05:49 056edaa0a2eb: Preparing
14:05:49 9eb130d61eec: Preparing
14:05:49 f7e7f3d0c903: Preparing
14:05:49 5f70bf18a086: Preparing
14:05:49 f34e865127cf: Preparing
14:05:49 5f70bf18a086: Waiting
14:05:49 f34e865127cf: Waiting
14:05:50 6b772acdb3a3: Layer already exists
14:05:50 8c1fc4ab0488: Layer already exists
14:05:50 f7e7f3d0c903: Layer already exists
14:05:50 056edaa0a2eb: Layer already exists
14:05:50 9eb130d61eec: Layer already exists
14:05:50 f34e865127cf: Layer already exists
14:05:50 5f70bf18a086: Layer already exists
14:05:51 git-d7d1c9d: digest: sha256:d161ee40d57dd3ab8c95855ccc514ea9aa078f47caf8904b069c338eedf265c2 size: 1760
14:05:52 WARNING: login credentials saved in /home/jenkins/.docker/config.json
14:05:52 Login Succeeded
14:06:23 Unable to connect to the server: dial tcp 104.197.71.127:443: i/o timeout
14:06:23 docker build --rm -t quay.io/deisci/controller:git-d7d1c9d rootfs
14:06:24 Sending build context to Docker daemon 319 kB
And eventually:
14:06:30 Triggering projects: workflow-test-pr
14:06:30 Setting status of 86cb3e34a7b1fc11df7435b71c13b05c5640c3b7 to SUCCESS with url https://ci.deis.io/job/controller-pr/911/ and message: 'Merge with caution! Test job(s) may still be in progress...
14:06:30 '
14:06:30 Using context: ci/jenkins/pr
14:06:30 Finished: SUCCESS
Ultimately this job should've failed on the i/o timeout and gone red at that point, because the downstream job failed pulling the image: https://ci.deis.io/job/workflow-test-pr/3188/
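One way to make the job go red at the first failing step, rather than pressing on to the downstream trigger, is to run each shell step through a wrapper that stops on a non-zero exit. A minimal sketch under the assumption that the build steps are shell commands; the step list and function name are illustrative, not the job's actual configuration:

```python
import subprocess

def run_steps(cmds, runner=subprocess.call):
    """Run build steps in order; stop at the first failure.
    Returns the exit code of the first failing step, or 0 -- so e.g.
    the i/o timeout above would fail the build before triggering
    workflow-test-pr."""
    for cmd in cmds:
        rc = runner(cmd, shell=True)
        if rc != 0:
            return rc
    return 0
```

(The shell-script equivalent is `set -e` at the top of the build step.)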
It was copy-pasted from deis/controller, which will eventually confuse someone.
This job should run the unit tests for a given component (see list of currently tracked components here).
Reference the meta issue for upstream/downstream configuration.
Part of Meta ticket #117
Estimated size: [L]
Extract common logic used in Workflow component pipelines created in #118 and #119 into the Global DSL library
Once logic is in Global DSL Library, iterate through each component repo and issue refactor PRs updating with use of common DSL (may amend this description with TBD PRs as they are created)
This job should run the integration tests for a given component (see list of currently tracked components here).
Reference the meta issue for upstream/downstream configuration.
Our CI pipeline currently only supports running e2e against one specific commit in one repo. However, our e2e jenkins job in isolation actually does have the capacity to track and test multiple commits (see all <COMPONENT>_SHA env vars here).
One angle might be to make use of the ghprbLongDescription value that we get when using the GitHub Pull Request Builder Plugin. As long as the committer supplied a standard message in the commit body, something along the lines of:
Requires slugrunner sha 80281d0
Requires builder sha 45ac6da
...
and so on, for the one or more required/'paired' changes, then our test job could feasibly be provided said shas in the form of the appropriate env vars from above.
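The commit-body convention above could be turned into env vars with a small parser. A sketch only: the <COMPONENT>_SHA naming (uppercased, dashes mapped to underscores) follows the env vars referenced above, while the helper name and regex are invented:

```python
import re

REQUIRES_SHA_RE = re.compile(
    r"Requires\s+(?P<component>[\w-]+)\s+sha\s+(?P<sha>[0-9a-f]{7,40})"
)

def required_shas(pr_description):
    """Turn 'Requires <component> sha <sha>' lines from the PR body
    (ghprbLongDescription) into a <COMPONENT>_SHA env-var mapping."""
    env = {}
    for m in REQUIRES_SHA_RE.finditer(pr_description):
        key = m.group("component").upper().replace("-", "_") + "_SHA"
        env[key] = m.group("sha")
    return env
```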
We should make the seed job the source of record for all jenkins configuration.
workflow-test-seed-job and component-seed-job
This ticket tracks the work needed to revise the current CI pipeline for a given Deis component change into the following state:
unit test job DSL: #21
integration test job DSL: #22
claim cluster job DSL: #23
e2e test job (workflow_test.groovy) to only run tests, afterwards kicking off the following two jobs concurrently: #26
cluster cleanup job DSL: #24
chart bumpver job DSL: #25
TODO
We have the need for a component-release-job that would track git push events to a release* branch on any/all of our Workflow component repos.
Use case: if someone creates a release branch, we want a Docker image created and pushed from that release branch.
Further, if they modify that branch (i.e. via fixes, cherry-picking, or rebase), this job should fire again and push a new image, ready for use in a release chart.
As of this writing, we are leaning toward one .groovy file set up to track all component repos, rather than adding separate -release jobs for each component via additional configuration in the existing component_jobs.groovy file.
As we consolidate documentation into deis.com, we need to have a build pipeline for all of our docs/websites.
Upstream websites or docs projects (current):
Any green build of master from the above should trigger a downstream deis/gutenberg job to deploy to staging. A successful staging deploy should then deploy to production.
Will have staging and prod IAM credentials for s3 upload and cloudfront distribution invalidation.
We need to ensure that the in-place upgrade path is fully tested by Jenkins for a release.
continuation of deis/deis#4517
This way we can start a job which will create self-installing packages, much like what we did for the deis v1 client and deisctl release process. Alternatively, if we can just copy over the existing jobs and change the repository target then we're golden.
See https://ci.deis.io/job/build-deis-cli-installer-darwin/
See https://ci.deis.io/job/build-deis-cli-installer-linux/
... and use the fork in our jobs
We need an overview of the CI pipeline with details on all the moving parts.
I'm thinking a text description added to the README, plus a visual aid (perhaps an HTML representation somewhere in Jenkins, represented using the Job DSL plugin).
See https://github.com/deis/workflow/blob/master/src/roadmap/release-checklist.md#step-2-create-a-new-helm-chart for the steps needing to be covered.
Related to the meta ticket #55
cross-post of deis/deis#4037
@jchauncey has found some interesting problems when running significant load through deis. I'd like to see an automated version of this test (or similar) so that we can watch deis' performance over the course of future releases. We could even run this on the various providers and compare performance.
This change will allow tests to acquire a clean k8s cluster very quickly, instead of creating and tearing down a cluster on each run. See https://github.com/deis/k8s-claimer for more details on the claimer
The component-seed-repo job is now failing as of e30a74f. See https://ci.deis.io/job/component-seed-job/56/console
09:45:17 Processing DSL script monitor_jobs.groovy
09:45:17 ERROR: (monitor_jobs.groovy, line 43) No such property: isPr for class: javaposse.jobdsl.dsl.helpers.scm.GitContext
09:45:17 [BFA] Scanning build for known causes...
09:45:17 [BFA] No failure causes found
09:45:17 [BFA] Done. 0s
09:45:17 Finished: FAILURE