
charts's Introduction

Deis Workflow is no longer maintained.
Please read the announcement for more detail.
09/07/2017 Deis Workflow v2.18 final release before entering maintenance mode
03/01/2018 End of Workflow maintenance: critical patches no longer merged
Hephy is a fork of Workflow that is actively developed and accepts code contributions.

Deis Chart Repository

Kubernetes Helm has replaced Helm Classic. The charts in this repository are now deprecated. Workflow v2.9.0 will be the last release installable with Helm Classic (helmc).

This repository contains Helm Classic Charts for Deis Workflow and its components. For more information about Deis Workflow, please visit the main project page at https://github.com/deis/workflow.

charts's People

Contributors

aboyett, aledbf, arschles, helgi, iancoffey, jackfrancis, jchauncey, joshua-anderson, kmala, krancour, mboersma, nicdoye, rimusz, slack, technosophos, vdice


charts's Issues

collapse `deis-tests` and `deis-tests-parallel` into one chart

Unless I'm missing something, I don't believe we need both charts anymore. (CI defaults to using the parallel version.)

We should collapse them into one deis-tests chart (perhaps renamed to indicate its role as the tests chart paired with the latest/dev version of our workflow chart, workflow-dev).

Ideally, this chart could run the test suites either in serial or in parallel via a configuration, e.g. env var.
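A minimal sketch of that switch, assuming a hypothetical `TEST_MODE` env var and ginkgo node count (neither is the chart's actual contract):

```shell
#!/bin/sh
# Hypothetical test entrypoint: pick the ginkgo invocation from an env var.
# TEST_MODE and the node count are assumed names/values for illustration.
run_tests() {
  case "${1:-parallel}" in
    parallel) echo "ginkgo -nodes=5 ./tests" ;;
    *)        echo "ginkgo ./tests" ;;
  esac
}

# Echoed here for illustration; a real entrypoint would exec the command.
run_tests "${TEST_MODE:-parallel}"
```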

Add more checks to make sure kubernetes is fully able to run e2e tests

Another common error case to wreak havoc on e2e test runs is when the router is seemingly unable to serve traffic on the internal kubernetes network. See the following output for more info:

Ensuring Deis cluster is up...
---> No chart named "deis-tests" in your workspace. Nothing to delete.
Waiting for all pods to be in running state...
..........All pods are running!
Ensuring non-null router ip...
....Checking DNS (http://deis.104.154.105.103.xip.io/v2/)...
..Deis is responding at http://deis.104.154.105.103.xip.io/v2/.
Successfully interacted with Deis platform 1 time(s).
Deis is responding at http://deis.104.154.105.103.xip.io/v2/.
Successfully interacted with Deis platform 2 time(s).
Deis is responding at http://deis.104.154.105.103.xip.io/v2/.
Successfully interacted with Deis platform 3 time(s).
Deis is responding at http://deis.104.154.105.103.xip.io/v2/.
Successfully interacted with Deis platform 4 time(s).
Deis is responding at http://deis.104.154.105.103.xip.io/v2/.
Successfully interacted with Deis platform 5 time(s).
Running e2e tests...
---> Fetched chart into workspace /var/lib/jenkins/.helm/workspace/charts/deis-tests
---> Done
Pod/deis-tests
---> Running `kubectl delete` ...
[WARN] Could not delete Pod deis-tests (Skipping): exit status 1
---> Error from server: pods "deis-tests" not found

---> Done
---> Running `kubectl create -f` ...
pod "deis-tests" created

---> Done
========================================
# Deis 2.0.0-beta Tests

End-to-end tests for the Deis v2 open source PaaS.

NOTE: This assumes a fresh `helm install` of Deis v2. The test suite creates
an initial user with admin privileges (who is deleted when tests complete).
See https://github.com/deis/workflow-e2e for more detail.

console
$ helm uninstall -n deis -y deis-tests ; helm install deis-tests
$ kubectl --namespace=deis logs -f deis-tests

========================================
Waiting for all pods to be in running state...
......All pods are running!
Running Suite: Deis Workflow
============================
Random Seed: 1456285917
Will run 34 of 57 specs

$ deis register http://deis.10.107.249.168.xip.io --username=admin --password=admin [email protected]
Error: http://deis.10.107.249.168.xip.io does not appear to be a valid Deis controller.
Make sure that the Controller URI is correct and the server is running.
Failure [5.175 seconds]
[BeforeSuite] BeforeSuite 
/go/src/github.com/deis/workflow-e2e/tests/tests_suite_test.go:113

  No future change is possible.  Bailing out early after 0.000s.
  Expected
      <int>: 1
  to match exit code:
      <int>: 0

  /go/src/github.com/deis/workflow-e2e/tests/tests_suite_test.go:166

Every Deis component should have its own service account

Ran into this a while back with the router...

Router depends on a service account token mounted at /var/run/kubernetes/secret/token in order to access the k8s API. By default, if you don't explicitly specify a service account in a manifest, the default service account for the namespace in question is used. The rub, however, is that depending on what admission controls are in place on any given k8s cluster, default service accounts may not be created automatically when new namespaces are created.

To get around this, the deis service account was explicitly added to the deis namespace. Router has been using this for quite some time and it works great. At the time this SA was added, the expectation was that other Deis components might start using it as well. So far that hasn't happened, although it should.

In talking with @jchauncey about this, he has suggested explicitly creating a service account per component instead of all components sharing one. This will circumvent the condition where the default service account was not auto-generated in the deis namespace, and has the added benefit that operators can grant more finely-grained permissions to each component. For example, router simply has no need to write to the API; read-only permissions will suffice. Workflow, by contrast, requires r/w permissions.

fwiw, k8s clusters that are adequately secured aren't something we're really running into often... yet. But once people start to wrap their heads around what can happen without ACLing in place, more clusters will be doing this, and these changes will help us future-proof ourselves against that.
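A sketch of the per-component approach, assuming an illustrative (not complete) component list:

```shell
#!/bin/sh
# Sketch: emit one ServiceAccount manifest per component instead of the single
# shared "deis" account. The component list here is illustrative only.
emit_service_accounts() {
  for component in deis-router deis-workflow deis-builder; do
    cat <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${component}
  namespace: deis
---
EOF
  done
}

emit_service_accounts
# Apply with: emit_service_accounts | kubectl create -f -
```

Each manifest would then set its pod spec's service account to the matching name, and operators could scope API permissions per component.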

Docker images not stored in S3 Backend

Hi,
I've configured Deis v2 with an S3 backend.
I've created two S3 buckets, one for database backups and another for the apps' images.
The database bucket contains the first backup.
I've deployed two Heroku apps and expected to see the images in S3, but the bucket is empty.
Is this behaviour normal?

Some logs:

Deis Registry:

2016/04/11 08:55:48 INFO: Starting registry...
2016/04/11 08:55:48 INFO: using s3 as the backend
2016/04/11 08:55:48 INFO: registry started.
time="2016-04-11T08:55:48.938542651Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.5.3 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

time="2016-04-11T08:55:48.942004056Z" level=info msg="Starting upload purge in 0" go.version=go1.5.3 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

time="2016-04-11T08:55:48.942387245Z" level=info msg="redis not configured" go.version=go1.5.3 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

time="2016-04-11T08:55:48.94431019Z" level=info msg="listening on [::]:5000" go.version=go1.5.3 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

time="2016-04-11T08:55:48.942452798Z" level=info msg="PurgeUploads starting: olderThan=2016-04-04 08:55:48.942433596 +0000 UTC, actuallyDelete=true"
time="2016-04-11T08:55:48.956100457Z" level=info msg="Purge uploads finished.  Num deleted=0, num errors=1"

time="2016-04-11T08:55:48.956169118Z" level=info msg="Starting upload purge in 24h0m0s" go.version=go1.5.3 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

and every 15 seconds:

time="2016-04-11T08:55:53.142921204Z" level=info msg="response completed" go.version=go1.5.3 http.request.host="10.244.0.6:5000" http.request.id=c37b302b-e9c9-4cad-b553-ad28e80ad538 http.request.method=GET http.request.remoteaddr="10.244.0.1:33842" http.request.uri="/v2/" http.request.useragent="Go 1.1 package http" http.response.contenttype="application/json; charset=utf-8" http.response.duration=2.115647ms http.response.status=200 http.response.written=2 instance.id=6b02048d-2376-4426-860f-75de2a7023b2 service=registry version=v2.1.1-495-g1a90b2b

10.244.0.1 - - [11/Apr/2016:08:55:53 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "Go 1.1 package http"

e2e test run passed but job failed

In https://ci.deis.io/job/deis-v2-e2e/1034/console the e2e test passed and yet the job was marked failed:

Ran 111 of 114 Specs in 2225.458 seconds
SUCCESS! -- 111 Passed | 0 Failed | 2 Pending | 1 Skipped --- PASS: TestTests (2225.47s)
PASS
---------------------------------------------
[info] chart-mate:test: ==> Waiting for pod exit code......
0
[warn] chart-mate:test: --> Retrieving information about the kubernetes/deis cluster before exiting...
HEAD detached at de8b45e
nothing to commit, working directory clean
Tearing down kubernetes cluster...
[info] chart-mate:down: ==> Destroying cluster helm-testing-a954...
Build step 'Execute shell' marked build as failure
Build did not succeed and the project is configured to only push after a successful build, so no pushing will occur.
[BFA] Scanning build for known causes...
............[BFA] No failure causes found
[BFA] Done. 12s
Finished: FAILURE

It seems this was due to a non-zero exit code from chart-mate:down. If so, do we want this to mark the job as a failure? cc @sgoings

Checking endpoint for expected HTTP status code errors out

In CI we've seen:

15:08:42 ========================================
15:08:42 # Workflow 2.0.0-beta
15:08:42 
15:08:42 WARNING: this chart is for testing only! Features may not work, and there are likely to be bugs.
15:08:42 
15:08:42 Please report any issues you find in testing Workflow to the appropriate GitHub repository:
15:08:42 - builder: https://github.com/deis/builder
15:08:42 - chart: https://github.com/deis/charts
15:08:42 - database: https://github.com/deis/postgres
15:08:42 - helm: https://github.com/helm/helm
15:08:42 - minio: https://github.com/deis/minio
15:08:42 - registry: https://github.com/deis/registry
15:08:42 - router: https://github.com/deis/router
15:08:42 - controller: https://github.com/deis/controller
15:08:42 - workflow manager: https://github.com/deis/workflow-manager
15:08:42 ========================================
15:08:45 deployment "slugruntesting" deleted
15:08:47 deployment "slugbuildtesting" deleted
15:08:47 [info] chart-mate:check: ==> Waiting for all pods to be running...
15:12:08 .....................................................................................................................[info] chart-mate:check: ==> All pods are running!...
15:12:08 [info] chart-mate:check: ==> Ensuring non-null router ip...
15:12:09 [info] chart-mate:check: ==> Checking endpoint http://deis.104.197.82.219.xip.io/v2/ for expected HTTP status code...
15:12:09 [info] chart-mate:down: ==> Destroying cluster helm-testing-4364...
15:12:11 Build step 'Execute shell' marked build as failure
15:12:11 Archiving artifacts
15:12:12 [BFA] Scanning build for known causes...
15:12:12 [BFA] No failure causes found
15:12:12 [BFA] Done. 0s
15:12:12 Finished: FAILURE

This output makes it hard to understand what exactly went wrong. Let's add some more debugging info and resolve this issue.

Unable to reach workflow using vagrant

Hi,

I just tried to start a Deis v2 cluster on Vagrant.
Everything starts correctly using the deis chart (not deis-dev), but I'm getting stuck right after retrieving the router IP.

"kubectl --namespace=deis get svc deis-router" only gives me a cluster IP starting with 10.X.X.X, which cannot be reached from outside the cluster, so the deis CLI is not able to contact the controller behind it.

Did I miss something?
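One possible stopgap, assuming kubectl access from the host: a ClusterIP is only routable inside the cluster, so tunnel to the router pod instead of dialing the service IP. The helper below just strips the `pod/` prefix that `kubectl get -o name` emits; the port numbers are assumptions.

```shell
#!/bin/sh
# Stopgap sketch for Vagrant: port-forward to the router pod from the host.
pod_name() { echo "${1#pod/}"; }

# POD=$(kubectl --namespace=deis get pods -l name=deis-router -o name | head -n1)
# kubectl --namespace=deis port-forward "$(pod_name "$POD")" 8080:8080
pod_name "pod/deis-router-o4tmi"
```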

Fix unconventional labels and selectors

There are numerous places in the chart's many manifests where labels and selectors such as name: foo have been used instead of the more conventional app: foo. This should be corrected.
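A small illustration of the convention change, keying both labels and selectors on `app:` rather than `name:` (the fragment shape is a sketch, not a full manifest):

```shell
#!/bin/sh
# Emit a manifest fragment using the conventional `app:` label/selector.
emit_labels() {
  cat <<EOF
metadata:
  labels:
    app: ${1}
spec:
  selector:
    app: ${1}
EOF
}

emit_labels deis-router
```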

Need for bumpver make task

Currently, the version bump logic (for promoting immutable git SHA-based image tags) is entwined in the chart/chart-mate install command which requires a running k8s cluster (with the intention of installing the deis-dev chart).

We have at least one case, in the workflow-manager component, where (at this time) we don't want to run the e2e tests (meaning there is no need to bring up a k8s cluster). All we would like to do is bump the version and make the commit. Hence the request for a make bumpver target in charts (which, following the current pattern, would simply invoke a to-be-created chart-mate counterpart).
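A sketch of what such a step could do without touching a cluster: rewrite a component's image tag in manifest text. The function name and the quay.io path layout are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical bumpver core: swap a component's image tag in manifest text.
bump_image_tag() {
  # $1 = component, $2 = new immutable tag; manifest text on stdin
  sed -e "s|\(image: quay.io/deis/$1:\).*|\1$2|"
}

echo "image: quay.io/deis/workflow:canary" | bump_image_tag workflow git-abc1234
```

A `make bumpver` target would run this over the chart's manifests and then commit the result, with no running k8s cluster required.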

Alter charts to allow 100% reproducibility

The current Helm charts for deis will produce different manifest files each time they are run (actually, each time generate is run). Some situations may dictate that the manifests be reproducible.

The reason for this is that many templates generate random secrets. Some users may prefer to supply their own secrets rather than have random ones.

The solution seems to be to simply support template variables for the params that currently only allow generated values.

{{$pw := rand 16}}
Secret: {{default $pw .MyPassword}}

That would allow people to supply values via a TOML/YAML/JSON file like we do with the object storage stuff, but would default back to a randomly generated password otherwise.
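A sketch of the operator-supplied side, assuming a hypothetical key name and file: with the `default` fallback above, `rand 16` only kicks in when MyPassword is absent from the values file.

```shell
#!/bin/sh
# Illustrative values file an operator could supply; key name is assumed.
emit_values_toml() {
  cat <<'EOF'
MyPassword = "operator-chosen-secret"
EOF
}

emit_values_toml
# helm generate deis   # template's `default` would pick this value up;
#                      # omit the key to fall back to `rand 16`
```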

Create workflow-beta1 chart

This will house a set of manifests that define an immutable reference to a version of each deis component needed to have a working Deis cluster.

Needs in this chart:

  • reference artifacts in deis org (quay.io)
  • set imagePullPolicy to IfNotPresent (not Always)
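A manifest fragment illustrating both checklist items (the image name and tag are placeholders, not the actual release artifacts):

```shell
#!/bin/sh
# Emit an example container spec: quay.io deis-org image pinned to an
# immutable tag, pulled only if not already present on the node.
emit_container() {
  cat <<'EOF'
containers:
- name: deis-workflow
  image: quay.io/deis/workflow:v2.0.0-beta1
  imagePullPolicy: IfNotPresent
EOF
}

emit_container
```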

rename deis-dev components from deis-* to *

This seems ugly and redundant:

kubectl --namespace=deis get svc deis-router

If it serves no purpose then I propose that we rename service names to their bare names, i.e.

kubectl --namespace=deis get svc router

I understand that this is a holistic change (since it affects dns and the pod's environment) and probably shouldn't be done pre-beta but we should consider it.

Get more info on `make [check]: Error 56`

On many ci e2e test runs we have seen a make: ... Error 56 during a make check. My guess is that there's something wonky happening in wait-for-output - a method that needs some love and better error handling/tracing.

---> Done
========================================
# Deis 2.0.0-beta

WARNING: this chart is for testing only! Features may not work, the
components are not HA, and there are likely to be bugs.

Please report any issues you find in testing Deis v2 on Kubernetes
to the appropriate GitHub repository:
- builder: https://github.com/deis/builder
- chart: https://github.com/deis/charts
- database: https://github.com/deis/postgres
- helm: https://github.com/helm/helm
- minio: https://github.com/deis/minio
- registry: https://github.com/deis/registry
- router: https://github.com/deis/router
- workflow: https://github.com/deis/workflow
========================================
Ensuring Deis cluster is up...
---> No chart named "deis-tests" in your workspace. Nothing to delete.
Waiting for all pods to be in running state...
.....................................error: couldn't read version from server: Get https://104.197.13.57/api: dial tcp 104.197.13.57:443: i/o timeout
All pods are running!
Ensuring non-null router ip...
Checking DNS (http://deis.104.197.104.198.xip.io/v2/)...
make: *** [check] Error 56
Tearing down kubernetes cluster...
[info] chart-mate:down: ==> Destroying cluster helm-testing-3e88...
Build step 'Execute shell' marked build as failure
Build did not succeed and the project is configured to only push after a successful build, so no pushing will occur.
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Finished: FAILURE
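If the DNS check shells out to curl, exit status 56 is CURLE_RECV_ERROR ("failure receiving network data"), a transient-looking network error worth retrying and logging rather than surfacing as a bare make failure. A sketch of a more forgiving check, with assumed retry counts and delay:

```shell
#!/bin/sh
# Sketch of a wait-for-output-style check with explicit error reporting
# and retries; attempt count and RETRY_DELAY are assumptions.
check_endpoint() {
  url="$1"
  attempts="${2:-10}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # Success: endpoint answered.
    curl -fsS -o /dev/null "$url" && return 0
    rc=$?
    echo "check of $url failed (curl exit $rc), attempt $((i + 1))/$attempts" >&2
    i=$((i + 1))
    sleep "${RETRY_DELAY:-5}"
  done
  return 1
}
```

Even when it ultimately fails, the log then records which curl exit code was seen on each attempt instead of an opaque `make: *** [check] Error 56`.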

Use a support email address for maintainer

I put my email in the deis Chart.yaml initially, but this is a group effort. It would be better either to include all relevant maintainers, or to use a designated support email like "[email protected]".

If we prefer the latter, then we need to ensure the alias is set up and targeted to the relevant group.

No endpoints for deis-workflow

[kent@mbp router]$ k describe service deis-workflow
Name:           deis-workflow
Namespace:      deis
Labels:         heritage=deis,routable=true
Selector:       name=deis-workflow
Type:           ClusterIP
IP:         10.3.0.124
Port:           http    80/TCP
Endpoints:      
Session Affinity:   None
No events.

And I can't see anything glaringly wrong with the labels or selectors.

Rename charts from deis-* to workflow-*

Consistent with the directive to brand the PaaS as "Workflow," while "Deis" is the org / umbrella over all our "certified workloads," the chart names should probably change to "deis/workflow" and "deis/workflow-dev".

cc @slack

Add resilience to kubectl logs -f on test pod

We fairly consistently see a situation where a kubectl logs -f [deis-tests] --namespace=deis exits (like it got a connection reset or got an EOF) on ci e2e runs. This makes the test logic think that the tests are done running and therefore kick off the "wait for pod exit code" rigamarole. We should figure out a way to ensure that tests have either finished successfully, still are running, or have errored by adding in a check for pod status (other than container exit code) which can then rekick off a tailing of the test logs.
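A sketch of that re-attach loop: keep tailing until the pod actually leaves the Running phase, rather than trusting a single `kubectl logs -f` to survive the whole run. The kubectl template flag syntax here is the helmc-era form and is an assumption.

```shell
#!/bin/sh
# Re-attach the log tail whenever it drops, until the pod finishes.
should_reattach() {
  # Only a pod still in phase Running warrants another `logs -f`.
  [ "$1" = "Running" ]
}

tail_until_done() {
  pod="$1"; ns="${2:-deis}"
  while true; do
    kubectl --namespace="$ns" logs -f "$pod" || true
    phase=$(kubectl --namespace="$ns" get pod "$pod" \
      -o template --template='{{.status.phase}}' 2>/dev/null)
    should_reattach "$phase" || break   # Succeeded/Failed/unknown: stop
    sleep 2
  done
}
```

Once the loop exits, the existing "wait for pod exit code" logic can run knowing the tests are genuinely finished.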

Remove workflow chart

As per our discussion this morning, we'd like to remove the "workflow" chart in this repository which currently contains v2 alpha. We should only do this once we have completed and merged the work for #167.

error when generating deis-dev

Updated to latest chart, using helm 0.4.0:

><> rm -rf ~/.helm/workspace/
><> helm fetch deis-dev
---> Fetched chart into workspace /home/bacongobbler/.helm/workspace/charts/deis-dev
---> Done
><> helm generate deis-dev
[ERROR] Error opening value file: Near line 45 (last key parsed 'gcs.key_json'): Expected value but found '`' instead.
[ERROR] Failed to complete generation: failed to execute helm template -o /home/bacongobbler/.helm/workspace/charts/deis-dev/manifests/deis-registry-rc.yaml -d /home/bacongobbler/.helm/workspace/charts/deis-dev/tpl/objectstorage.toml /home/bacongobbler/.helm/workspace/charts/deis-dev/tpl/deis-registry-rc.yaml (/home/bacongobbler/.helm/workspace/charts/deis-dev/tpl/deis-registry-rc.yaml): exit status 1

running helm uninstall deis gives the following error:

---> Running `kubectl delete` ...
[WARN] Could not delete Service deis-builder (Skipping): Error from server: services "deis-builder" not found
: exit status 1
[WARN] Could not delete Service deis-database (Skipping): Error from server: services "deis-database" not found
: exit status 1
[WARN] Could not delete Service deis-etcd-discovery (Skipping): Error from server: services "deis-etcd-discovery" not found
: exit status 1
[WARN] Could not delete Service deis-etcd-1 (Skipping): Error from server: services "deis-etcd-1" not found
: exit status 1
[WARN] Could not delete Service deis-logger (Skipping): Error from server: services "deis-logger" not found
: exit status 1
[WARN] Could not delete Service deis-minio (Skipping): Error from server: services "deis-minio" not found
: exit status 1
[WARN] Could not delete Service deis-registry (Skipping): Error from server: services "deis-registry" not found
: exit status 1
[WARN] Could not delete Service deis-router (Skipping): Error from server: services "deis-router" not found
: exit status 1
[WARN] Could not delete Service deis-workflow (Skipping): Error from server: services "deis-workflow" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-builder (Skipping): Error from server: replicationControllers "deis-builder" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-database (Skipping): Error from server: replicationControllers "deis-database" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-etcd-discovery (Skipping): Error from server: replicationControllers "deis-etcd-discovery" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-etcd-1 (Skipping): Error from server: replicationControllers "deis-etcd-1" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-logger (Skipping): Error from server: replicationControllers "deis-logger" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-minio (Skipping): Error from server: replicationControllers "deis-minio" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-registry (Skipping): Error from server: replicationControllers "deis-registry" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-router (Skipping): Error from server: replicationControllers "deis-router" not found
: exit status 1
[WARN] Could not delete ReplicationController deis-workflow (Skipping): Error from server: replicationControllers "deis-workflow" not found
: exit status 1
[WARN] Could not delete DaemonSet deis-logger-fluentd (Skipping): Error from server: daemonsets "deis-logger-fluentd" not found
: exit status 1
[WARN] Could not delete Secret deis-etcd-discovery-token (Skipping): Error from server: secrets "deis-etcd-discovery-token" not found
: exit status 1
[WARN] Could not delete Secret minio-admin (Skipping): Error from server: secrets "minio-admin" not found
: exit status 1
[WARN] Could not delete Secret minio-ssl (Skipping): Error from server: secrets "minio-ssl" not found
: exit status 1
[WARN] Could not delete Secret minio-user (Skipping): Error from server: secrets "minio-user" not found
: exit status 1
[WARN] Could not delete ServiceAccount deis (Skipping): Error from server: serviceaccounts "deis" not found
: exit status 1
---> Done

It deletes everything as expected, but I'm not sure why we're getting these errors.

Helm manifests vs. repo manifests

Right now, each repo has its own "generic" manifests, while the Helm repo has "standard" manifests.

The goal of the Helm repo is to make it really easy to install, so we want those manifests to work cooperatively.

What should the rules be for "standard" manifests, though? Completely stand-alone? Sorta Deis-aware?

Migrate deis/charts related jobs to Jenkins Job DSL

Persistent storage for Deis-Lite

I would like to cut the Deis Lite chart (to work on a one-node k8s cluster) with persistent storage from the host.
manifests (rc) and storage:

  • database
    • /var/lib/postgresql/data
  • etcd
    • /var/lib/etcd2
  • minio
    • ???
  • registry
    • /var/lib/registry

Please correct me if anything is wrong, or add missing folders/manifests.

etcd crashloop

I am still getting the etcd crashloop when trying to start the deis chart. I thought we had this fixed.

builder and workflow pods fail to start

With the current deis-dev chart, builder and workflow pods consistently fail to start for me on a new k8s cluster:

$ kubectl --namespace=deis get po
NAME                        READY     STATUS          RESTARTS   AGE
deis-builder-d6wok          0/1       ImageNotReady   0          5m
deis-database-s8z9l         1/1       Running         0          5m
deis-etcd-1-2p99n           1/1       Running         0          5m
deis-etcd-1-yn4i5           1/1       Running         1          5m
deis-etcd-1-zcubc           1/1       Running         0          5m
deis-etcd-discovery-zi268   1/1       Running         0          5m
deis-minio-qayzv            1/1       Running         0          5m
deis-registry-f7xad         1/1       Running         0          5m
deis-router-o4tmi           1/1       Running         0          5m
deis-workflow-ro7sc         0/1       ImageNotReady   0          5m

I can docker pull the images in question locally and on the minion node.

Revisit immutable tag propagation for workflow(-dev/-canary) chart

Our current CI model for deis component changes to the workflow-dev chart is as follows:

  1. submit component PR
  2. CI creates/pushes immutable git-<SHA> tag and mutable canary tag of said component
  3. CI runs e2e tests against updated chart with immutable tag from step 2, sending back success or failure
  4. Once PR is green, commit is merged to master
  5. Steps 2 and 3 are re-run against this master commit
  6. If tests are green, CI pushes commit to charts repo substituting immutable tag produced from step 2 for appropriate component Docker image

This process is great in theory. Unfortunately, in practice, the e2e tests aren't 100% reliable and so some perfectly good component changes (immutable tags) fail to get into the master version of the workflow-dev chart in a timely manner.

However, if we went the alternate route of having every component image in the workflow-dev chart use the canary tag, we run the risk of hitting a situation where the master version of the chart is unstable, since the mutable canary tag is produced during component build/deploy stage of CI and before e2e has a chance to notify with a failure.

These are a few options I see moving forward:

  1. Keep model same for now, meaning workflow-dev uses immutable tags but chart may not be as fresh
  2. Switch to canary model so chart is always fresh but offer no guarantees that chart will be stable at any given point
  3. Switch to canary model and revisit CI build/deploy logic thusly:
    • on component PR build/deploy, only push immutable tag
    • if e2e green (and other prerequisites met, i.e. LGTMs), merge to master
    • on component master merge build/deploy, push immutable tag and canary tag
    • run e2e:
      • if green we are all good and workflow-dev components already using canary tags
      • if red, chart may be unstable/broken and could remain until a fix is provided

The benefit of option 3 is that the chance of a broken chart moves to a later stage (after a greenlit PR is merged, rather than before). There is still a chance of an unstable/broken chart, but hopefully such cases will be rarer, assuming PR vetting is dependable.

Note: if we go any of the canary routes, I vote we change the chart name to workflow-canary

Looking for feedback on those 3 options and/or other ideas in this area. Thanks!

Run multiple Deis clusters inside of one Kubernetes cluster

We need to make sure that we can run multiple Deis clusters inside of the same Kubernetes cluster. In fact, it's highly likely that some of our early testers will be using this feature.

This means:

  • We should not assume the namespace is "deis"
  • We should make sure that the "child" namespaces we create for apps do not collide. We could probably accomplish this by prepending the "parent" namespace to the child (e.g. "wild-socks" becomes "deis1-wild-socks")
  • We need to think long and hard about how we handle Ingress

What other things (label queries? service accounts?) are impacted?
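The child-namespace prefixing idea from the list above can be sketched in one line: prepend the "parent" Deis namespace so two Workflow installs cannot collide on app namespaces.

```shell
#!/bin/sh
# Build a collision-free child namespace for an app, given the parent
# Deis namespace. Naming scheme follows the example in the issue.
child_namespace() {
  echo "${1}-${2}"
}

child_namespace deis1 wild-socks   # -> deis1-wild-socks
```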
