flux's Introduction

Flux v1

This repository contains the source code of Flux Legacy (v1).

Flux v1 has reached end of life and has been entirely replaced by fluxcd/flux2 and its controllers.

If you are considering using Flux, please take a look at https://fluxcd.io/flux/get-started to get started with v2.

If you are on Flux Legacy, please follow the migration guide. If you need hands-on help migrating, you can contact one of the companies listed here.

History

Flux was initially developed by Weaveworks and made open source in 2016. Over the years the community around Flux & GitOps grew and in 2019 Weaveworks decided to donate the project to CNCF.

After joining CNCF, the Flux project has seen massive adoption by various organisations. With adoption came a wave of feature requests that required a major overhaul of Flux's monolithic code base and security stance.

In April 2020, the Flux team decided to redesign Flux from the ground up using modern technologies such as Kubernetes controller runtime and Custom Resource Definitions. The decision was made to break Flux functionally into specialised components and APIs with a focus on extensibility, observability and security. These components are now called the GitOps Toolkit.

In 2021, the Flux team launched Flux v2 with many long-requested features, such as support for multi-tenancy, syncing an arbitrary number of Git repositories, better observability and a solid security stance. The new version made it possible to extend Flux's capabilities beyond its original GitOps design. The community rallied around the new design; with an overwhelming number of early adopters and new contributions, Flux v2 gained new features at a very rapid pace.

In 2022, Flux v2 underwent several security audits, and its code base and APIs became stable and production ready. A dedicated UI had been the most requested feature since the project started. For v2, Weaveworks launched a free and open source Web UI for Flux called Weave GitOps, which made Flux and GitOps practices more accessible.

Today Flux is an established continuous delivery solution for Kubernetes, trusted by organisations around the world. Various vendors like Amazon AWS, Microsoft Azure, VMware, Weaveworks and others offer Flux as-a-service to their users. The Flux team is very grateful to the community who supported us over the years and made Flux what it is today.

flux's People

Contributors

2opremio, aaron7, alex-shpak, awh, dimitropoulos, ellieayla, hiddeco, lelenanam, marccarre, ncabatoff, ogerbron, ordovicia, paulbellamy, paulfarver, peterbourgon, petervandenabeele, phillebaba, philwinder, rade, renovate-bot, richardcase, rndstr, rytswd, samb1729, scjudd, squaremo, stefanprodan, stephenmoloney, tamarakaufler, yebyen

flux's Issues

Cloud-ready history database

The current database writes to disk. It'd be good to have one that keeps things in a more resilient place when running in AWS.

Show nature of in-progress releases in `service show`

$ fluxctl service show --service=helloworld
Service: helloworld
State: In progress

CONTAINER   IMAGE                          CREATED
helloworld  quay.io/weaveworks/helloworld  
            .-> master-9a16ff945b9e        20 Jul 16 13:19 UTC
            |   master-b31c617a0fe3        20 Jul 16 13:19 UTC
            '-- master-a000002             12 Jul 16 17:17 UTC
                master-a000001             12 Jul 16 17:16 UTC

This entails looking at the RC config while in motion and determining what's changing, of course.

Deal with many, many image tags

Some image repos will have many, many tags. For example, the sock shop has a new image for every commit.

Fluxy struggles with this because it fetches image metadata for each tag, to get the creation timestamp. And to make things worse, Docker Hub is super-slow. Like, really slow.

Things that might help:

  • use the registry API v2?
  • balk if there are lots of tags
  • (use the expanded quay.io API when on quay.io)
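
Whichever registry API is used, another mitigation could be to bound and parallelise the per-tag metadata fetches rather than issuing them one at a time. A minimal sketch, where fetchCreated is a hypothetical stand-in for the per-tag registry request:

package registry

import (
	"sync"
	"time"
)

// fetchAllCreated fetches creation timestamps for many tags with bounded
// concurrency. fetchCreated stands in for the per-tag registry request
// (the slow part when talking to Docker Hub).
func fetchAllCreated(tags []string, fetchCreated func(tag string) (time.Time, error)) map[string]time.Time {
	const workers = 10
	sem := make(chan struct{}, workers) // limits in-flight registry requests

	var (
		mu      sync.Mutex
		created = make(map[string]time.Time, len(tags))
		wg      sync.WaitGroup
	)
	for _, tag := range tags {
		wg.Add(1)
		go func(tag string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			if t, err := fetchCreated(tag); err == nil {
				mu.Lock()
				created[tag] = t
				mu.Unlock()
			}
		}(tag)
	}
	wg.Wait()
	return created
}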

Implement automated workflow

As described here. The goal of this work is to make Fluxy viable for current Weave Cloud deployment shenanigans.

Proposed design

There is a component called an automator which sits on top of the platform and registry.

+------------------------+
| automator              |
+------------------------+
  ^             ^
  | services    | images 
  |             |
+----------+  +----------+
| platform |  | registry | 
+----------+  +----------+

The automator is modeled as a Kubernetes-style reconciliation loop. It receives services from the platform. To start, services are received by polling; eventually, with some kind of subscription endpoint.

Certain services are marked so that they will be continuously released by the automator. To start, this list of services is set by fluxctl and stored in-memory in fluxd; eventually, state may move somewhere else.

Those services are modeled as state machines inside of the automator.

                     +---------+
              .------| Waiting |<-----------------------.
              |      +---------+                        |
              |           ^                             |
              |           |                             |
              |    Don't need release                   |
              |           |                             |
+-------+     v      +----------+                 +-----------+
| Empty |--Refresh-->| UpToDate |--Need release-->| Releasing | 
+-------+            +----------+                 +-----------+
    ^                     |
    |             No longer candidate
    |                     |
    '---------------------'               

When a service is first discovered on the platform, it is refreshed. Refreshing means reading the current image from the platform, and polling the image registry for all available images. Once that is complete, the service enters the UpToDate state.

A service is in the UpToDate state very briefly; only long enough to decide the next state. If the service is no longer a candidate for automation, it is discarded. If the most recent registry image is newer than the currently active platform image, the service needs reconciliation and enters the Releasing state. Otherwise, the service is stable and enters the Waiting state with a long (ca. 1m) timeout.

In the Releasing state, the automator simply invokes service.Release and waits for it to complete. No matter the outcome, the managed service moves to the Waiting state with a short (ca. 15s) timeout.

In the Waiting state, the service is idle for the specified timeout. When the timeout expires, the service is again refreshed, and enters the UpToDate state. In the future, services may stay in the Waiting state indefinitely, until kicked out by a hook from e.g. CircleCI.
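
A minimal sketch of that per-service loop in Go; refresh, needsRelease, release and candidate are hypothetical stand-ins for the real platform and registry calls:

package automator

import "time"

type state int

const (
	stateEmpty state = iota
	stateUpToDate
	stateWaiting
	stateReleasing
)

// run sketches the per-service reconciliation loop described above.
func run(refresh func() error, needsRelease func() bool, release func() error, candidate func() bool) {
	s := stateEmpty
	wait := time.Duration(0)
	for {
		switch s {
		case stateEmpty:
			_ = refresh() // read current image from the platform, poll the registry
			s = stateUpToDate
		case stateUpToDate:
			if !candidate() { // no longer a candidate for automation: discard
				return
			}
			if needsRelease() {
				s = stateReleasing
			} else {
				wait = time.Minute // long timeout when stable
				s = stateWaiting
			}
		case stateReleasing:
			_ = release() // no matter the outcome, go back to Waiting
			wait = 15 * time.Second
			s = stateWaiting
		case stateWaiting:
			time.Sleep(wait)
			s = stateEmpty // refresh again on the next iteration
		}
	}
}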

Document the expected workflow

It might be helpful to write down a "guide" through the expected workflow a user would follow with this.

Both the goal workflow and the current one.

-n, --dry-run mode

Useful for seeing what fluxy would do, particularly for fluxctl service release --latest --all --dry-run.

User authentication

Users of fluxctl should probably have a way of authenticating with fluxd. The details are TBD.

Deal with multi-document yaml files

The Sock shop uses k8s definition files with multiple resources in them. Fluxy messes this up by replacing the wrong bits of them.

Solution sketch: break files up by ---, and treat each piece separately, then reassemble at the end.
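
A minimal sketch of that split/reassemble step, assuming the separator always appears on its own line (a real implementation would also need to cope with leading document markers and trailing separators):

package kubernetes

import "strings"

// splitDocuments breaks a multi-document YAML manifest on "---" separators
// so each resource can be updated independently; joinDocuments reassembles
// the result afterwards.
func splitDocuments(manifest string) []string {
	return strings.Split(manifest, "\n---\n")
}

func joinDocuments(docs []string) string {
	return strings.Join(docs, "\n---\n")
}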

More detailed commit messages

E.g., after deploy, "Deploy weaveworks/fluxy:master-4727ef3 to dev by user".

(A full record in the commit body might be nice too)

service images -s flag seems odd

I naturally expected the command to be fluxctl service images consul; doing fluxctl service images -s consul seems odd when the -s flag is required.

Refactor or otherwise mitigate boilerplate

There are a lot of very similar snippets of code scattered through {endpoints,transport,client,service}.go, and writing a new method involves adding a fresh batch of snippets. The prospect of adding a service method is more daunting than it really ought to be.

Surely there's some commonality that can be exploited?

More flexible manual deployment targeting

To fluxctl service release ... add the following flags:

--image=...     update the git repo and release a specific image version
--latest        update all images in the service to the latest available
--all           apply the image update to all services

If --file=... is not specified, all changes are made, deployed, then committed to the git repo.
One of --all or --service=... is required.

Handle unversioned images in deployments/rcs

It's possible that an image used in a pod template doesn't have a version/tag, e.g.,

    spec:
      containers:
      - name: fluxy
        image: weaveworks/fluxy

We should be able to handle this. One way would be to return an error and make the user force the operation (assuming we had that force option). Another would be to just replace it with a versioned image, silently or otherwise.

Coming from the other direction, it'd be good to let people supply an unversioned image name to fluxctl release, to mean "use the latest".
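
Detecting the unversioned case could be as simple as checking whether the last path segment of the image reference contains a tag. A simplified sketch (digest references and other corner cases are ignored):

package kubernetes

import "strings"

// hasTag reports whether an image reference carries an explicit tag, e.g.
// "weaveworks/fluxy" -> false, "quay.io/weaveworks/helloworld:v1" -> true.
// Only the segment after the last "/" is checked, so registry ports
// ("localhost:5000/img") are not mistaken for tags.
func hasTag(image string) bool {
	last := image
	if i := strings.LastIndex(image, "/"); i >= 0 {
		last = image[i+1:]
	}
	return strings.Contains(last, ":")
}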

Troubleshooting releases

I was rehearsing a demo earlier and managed to get the image name wrong in a release. At the minute the result of such a mistake is that the service release command sits for about five minutes, then fails.

In the meantime, I checked kubectl get pods and kubectl get replicationcontrollers to see what was happening, and figured out I'd got the image wrong and would have to abort the release. But how to do so was not obvious.

There's a few things we can do to mitigate this:

  • more sanity checking -- we could e.g., check that the image supplied is actually available, rather than relying on Kubernetes to (eventually) fail. In fact we could do that earlier, when we update config, as well.
  • A running report of how the release is going, while the fluxctl command waits
  • It should be possible to see the state of the release(s), and to abort a release

Instrument API

The API server should be instrumented; possibly both method calls and HTTP (e.g., so it catches 404s).
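
A stdlib-only sketch of HTTP instrumentation that also captures the status code, so 404s are visible; in practice the observations would be exported as metrics rather than logged:

package api

import (
	"log"
	"net/http"
	"time"
)

// statusWriter captures the response code so 404s and 500s show up in the
// instrumentation, not just successful calls.
type statusWriter struct {
	http.ResponseWriter
	status int
}

func (w *statusWriter) WriteHeader(code int) {
	w.status = code
	w.ResponseWriter.WriteHeader(code)
}

// instrument wraps an http.Handler and records method, path, status and
// duration for every request.
func instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sw := &statusWriter{ResponseWriter: w, status: http.StatusOK}
		begin := time.Now()
		next.ServeHTTP(sw, r)
		log.Printf("method=%s path=%s status=%d took=%s", r.Method, r.URL.Path, sw.status, time.Since(begin))
	})
}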

Account for N:M services:images in automation

The current automation does roughly this:

  1. For a service, find the images it uses (if more than one, bail out)
  2. For the service's image, see if there is a newer version available
  3. If so: clone the configured repository, and find the first file in the configured path that mentions the service's image
  4. update that file to the new image, and attempt to release the service with that file as the new definition
  5. If successful, commit and push the change to the git repo

This works well when you have each service use a distinct image. To generalise, we'll need to be more flexible and precise in how services relate to images:

  • we should look at all images used in a service (and decide which, if any, to upgrade)
  • we should look for the specific file(s) that corresponds to the service's replication controller or deployment, and update that
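
A sketch of what the per-service step could look like when a service runs several images; the Container and ImageDescription shapes mirror those proposed in the API refactor issue further down:

package automator

import "time"

type ImageDescription struct {
	ID        string
	CreatedAt time.Time
}

type Container struct {
	Name      string
	Current   ImageDescription
	Available []ImageDescription
}

// candidateUpdates picks, per container, the newest available image that is
// newer than the one currently running, so a single release can cover
// several images used by one service.
func candidateUpdates(containers []Container) map[string]ImageDescription {
	updates := map[string]ImageDescription{}
	for _, c := range containers {
		best := c.Current
		for _, img := range c.Available {
			if img.CreatedAt.After(best.CreatedAt) {
				best = img
			}
		}
		if best.ID != c.Current.ID {
			updates[c.Name] = best
		}
	}
	return updates
}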

Slack notifications

Will need some way to configure it. @squaremo and I discussed it and decided the best place to configure it is wherever the repo is configured (CLI flags to fluxd for now).

release hangs when config repo does not exist

ts=2016-08-30T13:09:37Z caller=helper.go:57 description="Clone the config repo."
Cloning into '/var/folders/dn/nkfvgz8n0k51lptgh1m8jfnw0000gn/T/fluxy-gitclone162305493/repo'...
Username for 'https://github.com': Password for 'https://github.com':

--dry-run and requiring confirmation

To encourage confidence in using the system, being able to show what would happen without committing to actually doing it is quite useful. So it would be nice to have a --dry-run option for manual releases, which would detail all the image updates that would happen were the command to be run.

Sometimes a command does not have an unambiguously defined effect, e.g., because of the loose coupling between kubernetes services and deployments (and file-based config). So it's also handy to be able to say "please do this general thing" and be led through it. In fact, for commands that have wide-ranging effects, like upgrading all services with a particular image, I would want it to be the default mode (with -y or something meaning "just go ahead and don't ask").

For example,

$ fluxctl release --all --image=quay.io/weaveworks/helloworld:master-a000002
This will result in the following regrades:
Service helloworld
  Deployment helloworld
    Container hello: quay.io/weaveworks/helloworld:master-a000001->master-a000002
Service worldhello
  Deployment worldhello
    Container hello: quay.io/weaveworks/helloworld:master-a000001->master-a000002
Service foobar
  Deployment foobar
    Container hello already uses quay.io/weaveworks/helloworld:master-a000002

Do you want to proceed? Y/n

--dry-run is similar but doesn't give you the option of proceeding -- it's just informational.

The fact that the system may have moved on between planning a release and actually doing it has some implications. One mitigation would be to enforce consistency: a plan gets a checksum, and if it doesn't correspond to the system when the plan is run, it will abort.
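
A minimal sketch of the proposed confirmation step, with -y mapped onto a yes flag:

package cli

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirm prints the planned actions and asks "Y/n" unless yes (-y) was
// given, in which case it proceeds without asking.
func confirm(plan string, yes bool) bool {
	fmt.Println(plan)
	if yes {
		return true
	}
	fmt.Print("Do you want to proceed? Y/n ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return false
	}
	answer := strings.ToLower(strings.TrimSpace(line))
	return answer == "" || answer == "y" || answer == "yes"
}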

Validate --url before performing any action

Currently, doing a release with no FLUX_URL set looks like:

Starting release of helloworld with an update period of 5s... error! Get https://:8443/api/v1/namespaces/default/services: dial tcp :8443: getsockopt: connection refused
took 3.504386ms

Which is obscure.

Service deployment policy

A sketch for now:

fluxctl service policy --service=helloworld --deploy=latest -> a new image for any container in the RC/Deployment triggers a release

fluxctl service policy --service=helloworld --lock --message="..." -> lock the policy where it is and require an --unlock before it can change, to prevent accidental modification. Attempts to change policy or manually release an image return an error, citing the message given.

fluxctl service policy --service=helloworld --deploy=manual -> (the default) assume someone will trigger a release if they want one to happen.
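
A minimal sketch of how the lock could be enforced; the type and field names are illustrative, not part of the current code:

package flux

import "errors"

// DeployPolicy sketches the per-service policy described above.
type DeployPolicy struct {
	Deploy      string // "latest" or "manual"
	Locked      bool
	LockMessage string
}

// checkRelease is what a release (or policy change) would consult first:
// a locked service refuses the operation, citing the lock message.
func (p DeployPolicy) checkRelease() error {
	if p.Locked {
		return errors.New("service is locked: " + p.LockMessage)
	}
	return nil
}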

Operations that update config do not always make it match the running system

If your config is different to the running system -- say, because you have changed either one manually -- it is possible after a release to have config that (still) does not reflect the running system.

Arguably, the operation should only change the thing you asked it to, so if you ask to update to a particular image, and another container is different, it shouldn't change that. However, if you've asked for all images to be updated, then the result should be that both the running system and the config have the latest images, and this doesn't happen.

I think the problem is that the changes are calculated by looking at the system but are then applied to the files.

Easier kubernetes config

It'd be nice to auto-config kubernetes. For example, if you're running fluxd locally we could look up the current kubectl config. If you're running it in a cluster, we could look up the in-cluster config. Then you could just do fluxd --kubernetes and away you go.
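
For illustration, this is roughly how the fallback looks with today's client-go (which postdates this issue), preferring in-cluster config and falling back to the local kubeconfig:

package platform

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset prefers in-cluster configuration; if fluxd is running
// outside a cluster it falls back to the same kubeconfig kubectl uses.
func newClientset() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		config, err = clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			return nil, err
		}
	}
	return kubernetes.NewForConfig(config)
}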

Auto-negotiate https for k8s api

Is this possible? When launching fluxd, I did --kubernetes-host 192.168.99.100:8443, but forgot to prepend https:// to the IP and was confused.

EOF on service images

» fluxctl -u http://localhost:3030/v0 service images -s consul
Error: Do: Get http://localhost:3030/v0/service/consul/images?namespace=default: EOF

pod logs say:

ts=2016-07-22T16:58:30Z caller=kubernetes.go:89 component=platform method=services namespace=default count=13 err=null
ts=2016-07-22T16:58:30Z caller=kubernetes.go:166 component=platform method=replicationControllerFor namespace=default serviceName=consul rc=consul
ts=2016-07-22T16:58:30Z caller=kubernetes.go:241 component=platform method=ImagesFor namespace=default serviceName=consul containers="[{Name:consul Image:consul:v0.6.4} {Name:exporter Image:prom/statsd-exporter}]"
ts=2016-07-22T16:58:31Z caller=middlewares.go:52 method=ServiceImages namespace=default service=consul containers=0 err=null took=1.325900446s

500 when updating all images

Dear computer, fluxctl release --all --update-all-images --dry-run
Error: executing HTTP request: reading HTTP response: 500 Internal Server Error (fetching images for services: fetching containers for default/kubernetes: service has no selector)

fluxd:

ts=2016-08-30T13:07:19Z caller=helper.go:57 method=Release serviceSpec=<all> imageSpec="<all latest>" kind=plan
ts=2016-08-30T13:07:19Z caller=helper.go:57 method=releaseAllToLatest kind=plan
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=default count=7 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=deploy count=4 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=extra count=2 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=fluxy count=1 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=frankenstein count=8 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=kube-system count=3 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=monitoring count=6 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:139 component=platform method=services namespace=scope count=10 err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:128 component=platform method=service namespace=default service=authfe err=null
ts=2016-08-30T13:07:19Z caller=kubernetes.go:342 component=platform method=podControllerFor namespace=default serviceName=authfe result=authfe
ts=2016-08-30T13:07:19Z caller=kubernetes.go:449 component=platform method=ContainersFor namespace=default serviceName=authfe containers="[{Name:authfe Image:quay.io/weaveworks/authfe} {Name:logging Image:quay.io/weaveworks/logging}]"
ts=2016-08-30T13:07:19Z caller=kubernetes.go:128 component=platform method=service namespace=default service=frontend err=null
ts=2016-08-30T13:07:20Z caller=kubernetes.go:342 component=platform method=podControllerFor namespace=default serviceName=frontend result=frontend
ts=2016-08-30T13:07:20Z caller=kubernetes.go:449 component=platform method=ContainersFor namespace=default serviceName=frontend containers="[{Name:frontend Image:quay.io/weaveworks/frontend-mt}]"
ts=2016-08-30T13:07:20Z caller=kubernetes.go:128 component=platform method=service namespace=default service=kubernetes err=null
ts=2016-08-30T13:07:20Z caller=kubernetes.go:340 component=platform method=podControllerFor namespace=default serviceName=kubernetes err="service has no selector"
ts=2016-08-30T13:07:20Z caller=kubernetes.go:447 component=platform method=ContainersFor namespace=default serviceName=kubernetes err="service has no selector"
ts=2016-08-30T13:07:20Z caller=helper.go:57 method=releaseAllToLatest kind=plan res=0 err="fetching images for services: fetching containers for default/kubernetes: service has no selector"
ts=2016-08-30T13:07:20Z caller=helper.go:57 method=Release serviceSpec=<all> imageSpec="<all latest>" kind=plan res=0 err="fetching images for services: fetching containers for default/kubernetes: service has no selector"

Examples in fluxctl release help text are wrong

Dear computer, fluxctl release --service=scope/control
Error: please supply exactly one of --update-image=<image>, --update-all-images, or --no-update

Usage:
  fluxctl release [flags]

Examples:
  fluxctl release --service=default/foo --image=library/hello:v2
  fluxctl release --all --image=library/hello:v2

Flags:
      --all                   release all services
      --dry-run               do not release anything; just report back what would have been done
      --no-update             don't update images; just deploy the service(s) as configured in the git repo
  -s, --service string        service to release
      --update-all-images     update all images to latest versions
  -i, --update-image string   update a specific image

Global Flags:
  -u, --url string   base URL of the fluxd API server; you can also set the environment variable FLUX_URL (default "http://localhost:3030")

Dear computer, fluxctl release --service=scope/control --image=latest --dry-run
Error: unknown flag: --image

First draft of tests

We need some form of testing, ideally something comprehensive and very coarse-grained to start. Details to be discussed.

Putting quotes in repo argument crashes fluxy

e.g., if you have --repo-path="testdata" it crashes with something like

2016-08-19 12:51:28.064642 I | http: panic serving 172.20.0.9:52008: runtime error: invalid memory address or nil pointer dereference
goroutine 60 [running]:
net/http.(*conn).serve.func1(0xc820064600)
           /usr/lib/go-1.6/src/net/http/server.go:1389 +0xc1
panic(0x1a59a20, 0xc820018060)
           /usr/lib/go-1.6/src/runtime/panic.go:443 +0x4e9
github.com/weaveworks/fluxy/git.findFileFor.func1(0xc822640320, 0x41, 0x0, 0x0, 0x7f4b25679180, 0xc82057cea0, 0x0, 0x0)
           /home/mikeb/space/fluxy-gopath/src/github.com/weaveworks/fluxy/git/releasing.go:127 +0x5b
path/filepath.Walk(0xc822640320, 0x41, 0xc823120d00, 0x0, 0x0)
           /usr/lib/go-1.6/src/path/filepath/path.go:394 +0xa5
github.com/weaveworks/fluxy/git.findFileFor(0xc82057cd20, 0x21, 0x7ffda6ffb1fa, 0x1f, 0xc8226400a0, 0x18, 0x0, 0x0, 0x0, 0x0)
           /home/mikeb/space/fluxy-gopath/src/github.com/weaveworks/fluxy/git/releasing.go:139 +0x1b6
github.com/weaveworks/fluxy/git.Repo.Release(0x7ffda6ffb194, 0x2c, 0x7ffda6ffb1cc, 0x21, 0x7ffda6ffb1fa, 0x1f, 0xc82057c1e0, 0xc82042f140, 0xc8217ba337, 0x7, ...)
           /home/mikeb/space/fluxy-gopath/src/github.com/weaveworks/fluxy/git/releasing.go:50 +0x463
github.com/weaveworks/fluxy.(*service).Release.func2(0x0, 0x0)
           /home/mikeb/space/fluxy-gopath/src/github.com/weaveworks/fluxy/service.go:166 +0x146
...

when you try to release something.

Humane errors

We should use some kind of enhanced error package (e.g.) to produce humane, i.e. human-readable, errors at the fluxctl command line. I'm thinking something multi-line and literate: this has happened, here are the likely causes, here are some avenues to pursue for debugging/resolution.

  • if the platform is unavailable, we should explain it might not be connected, or if it is (how can you check?), it could be worth trying the operation again (this includes "nats timeout")
  • if the git repo can't be read, that often means the deploy key is missing, or the git URL is wrong
  • fetching image metadata often fails. In this case, trying again may well work; or check the specific repository exists etc.
  • GetRelease returns a 500 when a job id doesn’t exist: source
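
A minimal sketch of an error type that carries that extra context; the names are illustrative:

package flux

// UserError carries enough context to print a multi-line, human-readable
// message at the fluxctl command line.
type UserError struct {
	Err   error  // what happened
	Cause string // the likely cause, in plain language
	Help  string // avenues to pursue for debugging/resolution
}

func (e *UserError) Error() string {
	return e.Err.Error() + "\n\nLikely cause: " + e.Cause + "\n\nTry: " + e.Help
}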

`updatePeriod` does not apply to deployments

The update-period option to service release only applies to replication controllers, for which it is supplied to the kubectl rolling-update command.

There is no such thing with deployments, which have a more fine-grained definition of the update process, given by minReadySeconds (a new pod must be running for this long before being considered available) and maxUnavailable (the maximum number of pods that can be out of commission at any point during the process). Further, these are supplied in the definition of the deployment, not when initiating the upgrade.

There's a few ways of dealing with this difference:

  • accept that update-period applies only to rolling updates, and ignore it (with a warning?) if it's used with a deployment
  • remove it from the API and fluxctl, and have a hard-wired value instead
  • remove it from the API and fluxctl, and try to calculate a reasonable value from the information known about the replication controller (and possibly other information supplied as annotations)

panic in repo images

$ fluxctl repo images --repo=alpine:3.3

http: panic serving [::1]:57218: interface conversion: error is registry.RegistryError, not *registry.RegistryError

goroutine 68 [running]:
net/http.(*conn).serve.func1(0xc820328280)
        /usr/local/Cellar/go/1.6.2/libexec/src/net/http/server.go:1389 +0xc1
panic(0x12e9520, 0xc820013140)
        /usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:443 +0x4e9
github.com/weaveworks/fluxy/registry.(*Client).GetRepository(0xc820438cc0, 0xc8202a254f, 0xa, 0x13147e, 0x0, 0x0)
        /Users/peter/src/github.com/weaveworks/fluxy/registry/registry.go:97 +0x63f
github.com/weaveworks/fluxy.(*service).Images(0xc820496910, 0xc8202a254f, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
        /Users/peter/src/github.com/weaveworks/fluxy/service.go:49 +0x59
 . . .
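
The crash is a failed single-result type assertion: the interface holds a registry.RegistryError value, but the code asserts to *registry.RegistryError. A self-contained illustration with a stand-in type:

package main

import "fmt"

// RegistryError stands in for the real registry.RegistryError type.
type RegistryError struct{ Status int }

func (e RegistryError) Error() string { return fmt.Sprintf("registry returned %d", e.Status) }

func main() {
	var err error = RegistryError{Status: 404} // dynamic type is the value, not a pointer

	// The panicking code used the single-result form, which panics when the
	// dynamic type doesn't match:
	//   re := err.(*RegistryError)
	// The two-result form (or asserting to the value type) is safe:
	if re, ok := err.(*RegistryError); ok {
		fmt.Println("pointer:", re.Status)
	} else if re, ok := err.(RegistryError); ok {
		fmt.Println("value:", re.Status) // this branch runs
	}
}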

Minor: history EventsForService needs a bit of love

Right now it returns an error if there is no history for the passed service, but that's probably not right: services can have empty histories. That does raise the problem of differentiating "this extant service has no history" from "this requested service doesn't exist"...

Also, Paul suggested it should be named HistoryForService, which probably makes sense but is a bit less precise and involves a bit of stuttering. Up to the implementor...

Refactor the API

After a design discussion we have emerged with a more purpose-driven API.

type Service interface {
    ListServices() ([]ServiceDescription, error)
    ListImages(ServiceSpec) ([]ImageDescription, error)
    Release(ServiceSpec, ImageSpec, ReleaseKind) ([]ReleaseAction, error)
    Automate(ServiceID) error
    Deautomate(ServiceID) error
    History(ServiceSpec) ([]HistoryEntry, error)
}

type ServiceID string   // "default/helloworld"
type ImageID string     // "quay.io/weaveworks/helloworld:v1"
type ServiceSpec string // ServiceID or "<all>"
type ImageSpec string   // ImageID or "<latest>"

type ReleaseKind string

const (
    ReleaseKindPlan    ReleaseKind = "plan"
    ReleaseKindExecute ReleaseKind = "execute"
)

type ReleaseAction struct {
    Description string // change me eventually!
}

type ServiceDescription struct {
    ID         ServiceID
    Containers []Container
}

type Container struct {
    Name      string
    Current   ImageDescription
    Available []ImageDescription
}

type ImageDescription struct {
    ID        ImageID
    CreatedAt time.Time
}

// Ask me for more details.
type HistoryEntry struct {
    Stamp time.Time
    Type  string
    Data  string
}

which will allow us to implement the reduced set of use-cases in the README with the following reduced fluxctl surface area:

fluxctl release {--service=S, --all} {--update-image=I, --update-all-images} [--dry-run]
fluxctl {automate, deautomate} --service=S
fluxctl list-services
fluxctl list-images --service=S
fluxctl history --service=S

Do people really need to specify --repo separately?

I kind of assumed that the repo would be configured on a per-"environment" basis, and there would just be a default so you could run fluxctl repo images. Though that wouldn't really work well for dockerhub, I guess?

multi-tenant/auth

Use the users service to convert a user-supplied SERVICE_TOKEN to an org_id (aka instance_id), and key all data on that.
