azure / draft-classic
A tool for developers to create cloud-native applications on Kubernetes.
Home Page: https://draft.sh
License: MIT License
I was going through the getting started example and ran into a roadblock. My app was deploying just fine (super cool piece of software btw! I've been doing this by helm create), but it wasn't hooking into my ingress controller. Looking in python/chart/ingress.yaml, it looked like the annotation to allow my ingress controller to pick it up was missing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
...
I went ahead and added it and after starting over with the tutorial I was able to get it working. Should I open a PR to add this annotation to the different packs by default?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
I have a working pack (under ~/.draft/packs) and I'd like a convenient way to generate the corresponding defaultpacks/*.go boilerplate (or eliminate the need for them.)
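Not part of the issue, but as a rough illustration of what such a generator could look like, here is a hypothetical gen-defaultpack helper (names, output shape, and the map layout are assumptions, not draft's actual defaultpacks structure) that walks a pack directory and prints Go source embedding each file:

```go
// gen-defaultpack: a hypothetical helper (not part of draft) that walks a pack
// directory under ~/.draft/packs and prints Go source embedding each file as a
// map entry, roughly in the spirit of the defaultpacks/*.go boilerplate.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: gen-defaultpack <pack-dir> <pack-name>")
		os.Exit(1)
	}
	packDir, packName := os.Args[1], os.Args[2]

	fmt.Printf("package defaultpacks\n\n// %s was generated from %s\n", packName, packDir)
	fmt.Printf("var %s = map[string]string{\n", packName)

	err := filepath.Walk(packDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(packDir, path)
		if err != nil {
			return err
		}
		// %q quotes both the relative path and the file contents as Go string literals.
		fmt.Printf("\t%q: %q,\n", rel, string(data))
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("}")
}
```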
Right now the user story to inject registry auth details into draftd is to echo a JSON string, pipe it to base64 and use the output as the registry.authtoken field in the chart.
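For reference, a minimal sketch of producing that value programmatically; the JSON keys below follow the usual Docker auth shape and are an assumption, so check the chart docs for the exact fields draftd expects:

```go
// Builds the same base64 blob you would otherwise produce with echo | base64.
// The JSON keys are an assumption based on the usual Docker auth shape.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

type registryAuth struct {
	Username string `json:"username"`
	Password string `json:"password"`
	Email    string `json:"email"`
}

func main() {
	raw, err := json.Marshal(registryAuth{
		Username: "myuser",
		Password: "s3cret",
		Email:    "me@example.com",
	})
	if err != nil {
		panic(err)
	}
	// This string is what gets set as the registry.authtoken chart value.
	fmt.Println(base64.StdEncoding.EncodeToString(raw))
}
```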
We should test out the following scenarios:
- docker login
- on draft init, somehow asking users for their credentials

That way we can "hide" this process from the end user and do it for them without any terminal hackery.
Of course, this process of attaching your own base64-encoded string as the auth token might still be useful for those that want to do things the hard way or have their own custom setup.
+ make build-cross
CGO_ENABLED=0 gox -output="_dist/{{.OS}}-{{.Arch}}/{{.Dir}}" -osarch='darwin/amd64 linux/amd64 linux/386 linux/arm windows/amd64' -tags 'kqueue' -ldflags ' -X github.com/Azure/draft/pkg/version.Version=v0.1.0 -X github.com/Azure/draft/pkg/version.GitCommit=e1e9b96b0347ca380e3766f423ff3ba74ad6b4c4 -X github.com/Azure/draft/pkg/version.GitTreeState=clean -extldflags "-static"' github.com/Azure/draft/cmd/draft
Number of parallel builds: 1
--> darwin/amd64: github.com/Azure/draft/cmd/draft
--> linux/386: github.com/Azure/draft/cmd/draft
--> linux/amd64: github.com/Azure/draft/cmd/draft
--> linux/arm: github.com/Azure/draft/cmd/draft
--> windows/amd64: github.com/Azure/draft/cmd/draft
1 errors occurred:
--> windows/amd64 error: exit status 2
Stderr: # github.com/Azure/draft/vendor/github.com/docker/docker/pkg/term/windows
vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go:192: cannot use keyEvent.VirtualKeyCode (type winterm.WORD) as type uint16 in argument to formatVirtualKey
vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go:192: cannot use keyEvent.ControlKeyState (type winterm.DWORD) as type uint32 in argument to formatVirtualKey
vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go:195: cannot use keyEvent.ControlKeyState (type winterm.DWORD) as type uint32 in argument to getControlKeys
Firstly, I wonder if it should bail, or at least complain about the default registry being a Microsoft one (to which I have no access), and tell me what to do...
Secondly, there is clearly more guidance needed around some initial configuration parameters. It's probably in the docs, but I think an interactive prompt during draft init would work better.
Requested from twitter: https://twitter.com/elimallon/status/870086003809173504
@sgoings do we have the ability to create something like a public azurecontainers.slack.com to host these channels?
As part of the Deis -> Azure migration, we need to put the docker images under Azure's dockerhub organization.
I have configured draft to use Docker Hub but wanted to try a different registry. I can't figure out how to do that; I just get "Warning: Draftd is already installed in the cluster." when I run the draft init cmd with my new registry config.
It would be neat to also generate a Jenkinsfile to help with builds once out of the draft phase and into the rookie stage. Some tagging scheme would be cool as well to allow for versioned tags (e.g. pushes to the master branch).
Maybe this could be a new command like draft shipit.
It's the 2nd most popular build system in JVM land and often preferred for newer projects.
Right now if you set up draft to talk to an auth-backed registry such as ACR, you will be able to push the image, but not pull, causing the image to never be deployed.
The current idea is to take the creds that you already provide to draft init and push those into a secret in the kube-system namespace (or wherever you installed draftd). On a draft up, draftd will look in the destination namespace (provided as a flag in draft.toml) and determine if the secret exists; if not, it will copy the secret into that namespace and add the imagePullSecret to the default service account in that namespace.
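A rough client-go sketch of that flow, as an illustration of the idea rather than draftd's actual code (names and error handling simplified):

```go
// Sketch of the proposed flow using client-go; not draftd's actual code.
package registryauth

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureImagePullSecret copies the registry secret into the app's namespace
// (if it isn't already there) and attaches it to the default service account
// so the kubelet can pull the freshly pushed image.
func ensureImagePullSecret(ctx context.Context, client kubernetes.Interface, srcNS, dstNS, name string) error {
	secrets := client.CoreV1().Secrets(dstNS)

	// Already present in the destination namespace? Nothing to do.
	if _, err := secrets.Get(ctx, name, metav1.GetOptions{}); err == nil {
		return nil
	} else if !apierrors.IsNotFound(err) {
		return err
	}

	// Copy the secret from wherever draftd was installed (e.g. kube-system).
	src, err := client.CoreV1().Secrets(srcNS).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	clone := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: dstNS},
		Type:       src.Type,
		Data:       src.Data,
	}
	if _, err := secrets.Create(ctx, clone, metav1.CreateOptions{}); err != nil {
		return err
	}

	// Reference it from the default service account as an imagePullSecret.
	sa, err := client.CoreV1().ServiceAccounts(dstNS).Get(ctx, "default", metav1.GetOptions{})
	if err != nil {
		return err
	}
	sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: name})
	_, err = client.CoreV1().ServiceAccounts(dstNS).Update(ctx, sa, metav1.UpdateOptions{})
	return err
}
```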
Configured the repo to use docker hub. Saw image pushed over.
Hit this error:
--> Deploying to Kubernetes
Release "bald-newt" does not exist. Installing it now.
Could not install release: rpc error: code = Unknown desc = error converting YAML to JSON: yaml: did not find expected alphabetic or numeric character
We would like to be able to have custom build hooks in draft up.
Our use case would be to cross-compile our Go application, do some operations, etc. We could then use a scratch container instead of the golang:onbuild image.
Currently draft has a hard requirement on Helm/Tiller v2.3. We should update to the latest version of Helm.
0.3.0 was built with support for Helm 2.3.* so a version bump of draft makes sense now that we're consuming Helm 2.4.* libraries.
Working on this.
https://asciinema.org/a/dd2ydtfj0hdsy39mgnwt7jpu2
This was the second attempt. I had a successful first attempt but nuked the scaffolding files and wanted to try again. Running under github.com/Azure/draft/examples/nodejs
$ draft up
--> Building Dockerfile
Step 1 : FROM node:onbuild
# Executing 5 build triggers...
Step 1 : ARG NODE_ENV
---> Using cache
Step 1 : ENV NODE_ENV $NODE_ENV
---> Using cache
Step 1 : COPY package.json /usr/src/app/
---> Using cache
Step 1 : RUN npm install && npm cache clean
---> Running in 49b52780240f
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~preinstall: [email protected]
npm info linkStuff [email protected]
npm info lifecycle [email protected]~install: [email protected]
npm info lifecycle [email protected]~postinstall: [email protected]
npm info lifecycle [email protected]~prepublish: [email protected]
npm info lifecycle [email protected]~prepare: [email protected]
npm info lifecycle undefined~preshrinkwrap: undefined
npm info lifecycle undefined~shrinkwrap: undefined
npm notice created a lockfile as package-lock.json. You should commit this file.
npm info lifecycle undefined~postshrinkwrap: undefined
npm WARN [email protected] No description
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
up to date in 0.15s
npm info ok
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm ERR! As of npm@5, the npm cache self-heals from corruption issues and data extracted from the cache is guaranteed to be valid. If you want to make sure everything is consistent, use 'npm cache verify' instead.
npm ERR!
npm ERR! If you're sure you want to delete the entire cache, rerun this command with --force.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2017-05-31T22_45_42_521Z-debug.log
Error encountered streaming JSON response: The command '/bin/sh -c npm install && npm cache clean' returned a non-zero code: 1
Watching local files for changes...
On draft up with watch enabled, there are files potentially being autogenerated in the background by dev tooling that should not trigger a draft up when the file changes. For now there are a few hardcoded file extensions that we ignore, but it'd be better to give the power back to the user by writing out a .draftignore to the app directory on draft create, which can be further modified.
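A minimal sketch of how the watcher could honor such a file, assuming a simple format of one glob pattern per line (the real format and matching rules are still up for discussion):

```go
// Sketch of how the watcher could honor a .draftignore file; the real format
// and matching rules are still to be decided.
package watcher

import (
	"os"
	"path/filepath"
	"strings"
)

// loadDraftignore reads glob patterns, one per line, from .draftignore in the
// app directory. Blank lines and lines starting with # are skipped.
func loadDraftignore(appDir string) ([]string, error) {
	data, err := os.ReadFile(filepath.Join(appDir, ".draftignore"))
	if os.IsNotExist(err) {
		return nil, nil // no ignore file means nothing extra is ignored
	}
	if err != nil {
		return nil, err
	}
	var patterns []string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		patterns = append(patterns, line)
	}
	return patterns, nil
}

// ignored reports whether a changed file matches any pattern, in which case
// the change should not trigger a new draft up.
func ignored(patterns []string, path string) bool {
	base := filepath.Base(path)
	for _, p := range patterns {
		if ok, _ := filepath.Match(p, base); ok {
			return true
		}
	}
	return false
}
```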
Our Jenkinsfile currently does not upload the built client binaries and docker images when a release is tagged. It only uploads on a merge to master.
Using a vanilla Go project with only a simple main.go file results in no starter pack being selected.
We currently must have a glide.yaml file.
$ mkdir go
$ cd go
$ touch main.go
$ draft create
Error: Unable to select a starter pack Q_Q
$ touch glide.yaml
$ draft create .
--> Go app detected
--> Ready to sail
With 2 terminals open, I can't seem to get draft up to work for me. It doesn't look like it's watching the local filesystem for changes.
$ touch hi
In the other terminal, nothing happens.
...
Watching local files for changes...
It turned out that my machine was out of inotify watches, so it was a matter of bumping this up to a higher number and it worked once more.
$ sudo apt-get install inotify-tools
$ inotifywait -rme modify,attrib,move,close_write,create,delete,delete_self /tmp/watcher-test/
Setting up watches. Beware: since -r was given, this may take a while!
Failed to watch /tmp/watcher-test/; upper limit on inotify watches reached!
Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify/max_user_watches'.
Bumping from the default 8192 to 16384 fixes it, so the root issue here is that the notify library will not tell you when you're out of watches, as described in rjeczalik/notify#109.
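One way draft could surface this itself is a small pre-flight check of the inotify limit before starting the watcher. This is purely an illustrative sketch (Linux-only, and the threshold value is just a guess), not anything draft currently does:

```go
// Illustrative pre-flight check only; draft does not currently do this.
package watcher

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// warnIfLowInotifyLimit reads the per-user inotify watch limit (Linux only)
// and warns when it is below what a large project tree is likely to need,
// since the notify library itself stays silent when the limit is hit.
func warnIfLowInotifyLimit(min int) {
	data, err := os.ReadFile("/proc/sys/fs/inotify/max_user_watches")
	if err != nil {
		return // not Linux, or /proc unavailable; nothing to check
	}
	limit, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil || limit >= min {
		return
	}
	fmt.Fprintf(os.Stderr,
		"warning: fs.inotify.max_user_watches is %d; file watching may silently stop working.\n"+
			"Consider raising it, e.g. sudo sysctl fs.inotify.max_user_watches=%d\n",
		limit, min)
}
```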
Successfully built ef07ff712900
--> Pushing quay.io/bacongobbler/draft:6df6cff18ec67436fef61408a6109231ddd08f88
The push refers to a repository [quay.io/bacongobbler/draft]
0214640423ab: Preparing
23b9c7b43573: Preparing
23b9c7b43573: Layer already exists
0214640423ab: Pushed
6df6cff18ec67436fef61408a6109231ddd08f88: digest: sha256:fbe62a5e86056da29b0743558dfb780fcf7f879c6b797d3e2fe5a082632b7242 size: 3640
--> Deploying to Kubernetes
Error: there was an error running 'draft up': websocket: close 1006 (abnormal closure): unexpected EOF
I just don't think draft can deploy itself "gracefully" because the pod is replaced immediately after deployment.
Funnily enough, there appears to be an interim moment where there is no pod present at all:
--> Deploying to Kubernetes
Error: there was an error running 'draft up': websocket: close 1006 (abnormal closure): unexpected EOF
><> kk get po
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 4d
kube-dns-v20-8nlbr 3/3 Running 0 4d
kubernetes-dashboard-hf8tk 1/1 Running 0 4d
tiller-deploy-3066893457-dft47 1/1 Running 0 4d
><> helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
example-dockerfile-http 1 Mon Mar 20 11:10:28 2017 DEPLOYED example-dockerfile-http-1.0.0 default
draft 2 Mon Mar 20 11:14:09 2017 DEPLOYED draftd-v0.2.0-rc3 kube-system
Then, about a minute later I try again and it appears!
><> kk get po
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 4d
kube-dns-v20-8nlbr 3/3 Running 0 4d
kubernetes-dashboard-hf8tk 1/1 Running 0 4d
draft-draftd-1721787164-750wr 1/1 Running 0 55s
tiller-deploy-3066893457-dft47 1/1 Running 0 4d
It appears that Kubernetes is having issues with the rolling-update strategy (the default for Deployments); if it were working as intended, it wouldn't be killing the old pod off until the new server is up.
><> go test -cover -run . ./cmd/draft
ok github.com/deis/draft/cmd/draft 0.046s coverage: 13.2% of statements
We will want to make the draft job in Jenkins public once we have made the draft github repository public.
For draft to work in a large organization, there needs to be a mechanism to acquire and update a standard set of packs. Developers need to ensure they are using the same set of possibly team-specific language packs and charts.
This would mean:
- draft init
- a draft-packs plugin to fetch, update and otherwise manage installed packs

Currently we're sitting at 15.9% coverage in pkg api.
><> go test -cover -run . ./api/...
ok github.com/deis/draft/api 0.041s coverage: 15.9% of statements
It's sort of useful in that the information about the chart is displayed, but the actual information is not truly useful outside of debugging the chart. It would be cool if this could be improved to make it loadable via helm install, dumped to a .tar.gz, or something.
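If the chart were packaged rather than just printed, Helm 2's chartutil package could write an installable archive. A minimal sketch, assuming draft already holds the chart in memory as a *chart.Chart:

```go
// Sketch only: package the in-memory chart as a .tgz that helm install accepts.
package chartarchive

import (
	"k8s.io/helm/pkg/chartutil"
	"k8s.io/helm/pkg/proto/hapi/chart"
)

// saveChartArchive writes ch to outDir and returns the archive path,
// e.g. outDir/mychart-1.0.0.tgz.
func saveChartArchive(ch *chart.Chart, outDir string) (string, error) {
	return chartutil.Save(ch, outDir)
}
```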
Last week I tried examples/node and got a weird NPM error... Today I noticed that @kris-nova has fixed the issue with a downgrade of the Node base image, so I ran make bootstrap && make, followed by bin/draft init --upgrade.
Next I did (cd examples/node && rm -rf Dockerfile draft.toml chart/ && draft create), but the Dockerfile didn't reflect the change I was expecting.
I resolved the issue with rm -rf ~/.draft && bin/draft init, but I think the user experience can be improved; I'd expect ~/.draft/packs to be overwritten on draft init (at least for the default packs).
The link at the bottom of this page is broken: https://github.com/Azure/draft/blob/master/docs/ingress.md
Is it possible to add a built-in Docker registry inside the Kubernetes cluster, so that no third-party Docker registry is needed?
For enterprise usage, we are often not allowed to put our images outside or in a third-party Docker registry.
So if we can set up a built-in Docker registry inside our Kubernetes cluster, it will be very useful.
It appears that the -a appName flag is never used in the code. I'm not sure if it is supposed to be used in the TOML, in the Chart.yaml, or both.
First time through installing Draft, I misconfigured the repo.
Subsequent attempts to revise the configuration by repeating draft init (appear to have) failed.
I tried helm list and helm delete draft to no avail.
Recommend: providing (or documenting) a way to delete draft or reconfigure it.
Sometimes, the output from draft up can appear somewhat choppy as opposed to a nice flow.
This may or may not be a limitation of websockets. We need to do more research on how we can improve upon our current workflow, or whether there's a bug in gorilla/websocket we can fix.
><> go test -cover -run . ./cmd/draft/installer
ok github.com/deis/draft/cmd/draft/installer 0.006s coverage: 0.0% of statements
I accidentally ran draft up in draft's root directory, causing an upgrade of draftd. I intended to cancel the operation with CTRL+C to close the connection and stop the deploy, but the server carried on with the operation, disregarding that the client had closed the connection. This caused draftd to be upgraded with unintentional chart values, which broke my local dev cluster.
We should add some handling logic to check if the client closed the connection on their end, and if so we should stop.
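A rough sketch of the kind of server-side handling this would take, using gorilla/websocket plus a cancelable context that the long-running operation checks between steps (illustrative only, not draftd's code):

```go
// Illustrative sketch of server-side cancellation; not draftd's actual code.
package api

import (
	"context"
	"fmt"

	"github.com/gorilla/websocket"
)

// watchClient cancels the build/deploy context as soon as reading from the
// websocket fails, covering both a client CTRL+C and a dropped connection.
func watchClient(conn *websocket.Conn, cancel context.CancelFunc) {
	go func() {
		defer cancel()
		for {
			if _, _, err := conn.ReadMessage(); err != nil {
				return // client is gone; stop the server-side work
			}
		}
	}()
}

// runUpgrade runs the long operation step by step, bailing out between steps
// once the client has disconnected.
func runUpgrade(ctx context.Context, steps []func(context.Context) error) error {
	for _, step := range steps {
		select {
		case <-ctx.Done():
			return fmt.Errorf("client disconnected, aborting: %v", ctx.Err())
		default:
		}
		if err := step(ctx); err != nil {
			return err
		}
	}
	return nil
}
```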
Currently packs are built into the draft binary. It'd be much nicer to download/install packs similar to the UX of helm plugins (helm plugin install <url>).
Externalizing packs would:
A number of folks asked about this after the launch. No idea on the feasibility of this right now...
When using minikube, we can re-use the docker daemon that is running on the VM without having to use an external registry, as stated in the documentation:
When using a single VM of Kubernetes, it’s really handy to reuse the minikube’s built-in Docker daemon; as this means you don’t have to build a docker registry on your host machine and push the image into it - you can just build inside the same docker daemon as minikube which speeds up local experiments.
Would it be possible to replicate this experience with draft and document how to do it?
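For context, the CLI side of this boils down to pointing the Docker client at minikube's daemon via the standard DOCKER_* environment variables, which is what eval $(minikube docker-env) exports. A sketch of how draft could pick those up, assuming it builds through the Docker Engine API client (not how draft is currently wired up):

```go
// Sketch only; not how draft currently builds images.
package build

import "github.com/docker/docker/client"

// newDockerClient honors DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH,
// which is exactly what `eval $(minikube docker-env)` exports. With those set,
// images draft builds land directly in minikube's daemon, so no registry push
// is needed for local experiments.
func newDockerClient() (*client.Client, error) {
	return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
}
```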
I can't tell, but it seems like this only supports a single Dockerfile as a sibling to the chart. Could draft be extended to support pushing multiple tarballs to draftd to build/push to the registry?
Sometimes when I'm making rapid code changes, draft loses connection with draftd.
The client errors with:
E0531 15:44:35.761289 67206 portforward.go:332] an error occurred forwarding 63658 -> 44135: error forwarding port 44135 to pod draftd-2336841885-pfbkf_kube-system, uid : exit status 1: 2017/05/31 21:44:36 socat[25667] E connect(5, AF=2 127.0.0.1:44135, 16): Connection refused
Error: there was an error running 'draft up': there was an error while dialing the server: unexpected EOF
Watching the Kubernetes pods during draft up shows:
default zeroed-ladybird-zeroed-ladybird-1047063997-h5cvw 0/1 Pending 0 0s
default zeroed-ladybird-zeroed-ladybird-1115286856-l7pgp 1/1 Terminating 0 1m
default zeroed-ladybird-zeroed-ladybird-1047063997-h5cvw 0/1 Pending 0 0s
default zeroed-ladybird-zeroed-ladybird-1047063997-h5cvw 0/1 ContainerCreating 0 0s
default zeroed-ladybird-zeroed-ladybird-1047063997-pkj8p 0/1 Pending 0 0s
default zeroed-ladybird-zeroed-ladybird-1047063997-pkj8p 0/1 Pending 0 0s
default zeroed-ladybird-zeroed-ladybird-1047063997-pkj8p 0/1 ContainerCreating 0 0s
default zeroed-ladybird-zeroed-ladybird-1115286856-l7pgp 0/1 Terminating 0 1m
default zeroed-ladybird-zeroed-ladybird-1115286856-l7pgp 0/1 Terminating 0 1m
default zeroed-ladybird-zeroed-ladybird-1115286856-l7pgp 0/1 Terminating 0 1m
kube-system draftd-2336841885-pfbkf 0/1 Error 12 17h
default zeroed-ladybird-zeroed-ladybird-1047063997-h5cvw 0/1 Running 0 1s
kube-system draftd-2336841885-pfbkf 0/1 CrashLoopBackOff 12 17h
default zeroed-ladybird-zeroed-ladybird-1047063997-pkj8p 0/1 Running 0 3s
default zeroed-ladybird-zeroed-ladybird-1047063997-h5cvw 1/1 Running 0 10s
default zeroed-ladybird-zeroed-ladybird-1115286856-3f7mt 1/1 Terminating 0 2m
default zeroed-ladybird-zeroed-ladybird-1047063997-pkj8p 1/1 Running 0 10s
default zeroed-ladybird-zeroed-ladybird-1115286856-3f7mt 0/1 Terminating 0 2m
default zeroed-ladybird-zeroed-ladybird-1115286856-3f7mt 0/1 Terminating 0 2m
default zeroed-ladybird-zeroed-ladybird-1115286856-3f7mt 0/1 Terminating 0 2m
The errored draftd pod's logs were:
kubectl logs draftd-2336841885-pfbkf --previous -n kube-system
time="2017-05-31T21:46:56Z" level=info msg="server is now listening at tcp://0.0.0.0:44135"
time="2017-05-31T21:47:54Z" level=info msg="POST /apps/zeroed-ladybird"
panic: repeated read on failed websocket connection
goroutine 23 [running]:
panic(0x1299040, 0xc42045b0e0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/Azure/draft/vendor/github.com/gorilla/websocket.(*Conn).NextReader(0xc420078000, 0x0, 0xc4206061e0, 0xab79a4, 0x180001, 0x0)
/home/jenkins/go/src/github.com/Azure/draft/vendor/github.com/gorilla/websocket/conn.go:893 +0x16d
github.com/Azure/draft/vendor/github.com/gorilla/websocket.(*Conn).ReadMessage(0xc420078000, 0x8, 0xc4205fdba0, 0x1e, 0x1e, 0xed0c1312a, 0xc418638e62)
/home/jenkins/go/src/github.com/Azure/draft/vendor/github.com/gorilla/websocket/conn.go:954 +0x2f
github.com/Azure/draft/api.pingClient(0xc420078000, 0xc42036b3e0)
/home/jenkins/go/src/github.com/Azure/draft/api/server.go:515 +0x40
created by github.com/Azure/draft/api.serveWs
/home/jenkins/go/src/github.com/Azure/draft/api/server.go:290 +0xa8f
While testing a change to the default python pack I noticed that draft init -c did not re-create the pack if it already exists, although it implies it did:
$ draft init -c
Creating pack python
...
Happy Sailing!
The CLI should inform the user that the pack will not be overwritten.
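A small sketch of what the friendlier behaviour could look like, checking for an existing pack directory before claiming to create it (hypothetical helper, not draft's current code):

```go
// Hypothetical sketch, not draft's current behaviour.
package pack

import (
	"fmt"
	"os"
	"path/filepath"
)

// createPack writes a built-in pack to <packsDir>/<name>, but tells the user
// the truth when the pack already exists instead of pretending to create it.
func createPack(packsDir, name string, write func(dir string) error) error {
	dest := filepath.Join(packsDir, name)
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("Pack %s already exists at %s; not overwriting (delete it to re-create)\n", name, dest)
		return nil
	} else if !os.IsNotExist(err) {
		return err
	}
	fmt.Printf("Creating pack %s\n", name)
	if err := os.MkdirAll(dest, 0755); err != nil {
		return err
	}
	return write(dest)
}
```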
CI broke when we moved from github.com/deis/draft to github.com/Azure/draft. We are in the midst of getting that pipeline working again.
Right now we are manually kicking off all jobs. We need to figure out what's causing issues with CI to be unable to kick off tests when PRs are pushed. Perhaps it's just because this repo has been made private and this will all go away once it's made public.
Any PR that comes in triggers a CI job that updates the canary tag on Docker Hub. This should only happen on a merge to master.
deadwood(~/src/github.com/Azure/draft/examples/python) % draft create
Error: there was an error reading /home/dfc/.draft/packs: open /home/dfc/.draft/packs: no such file or directory
Presumably because I have not run draft init yet.
At the present time, we are using Helm v2.3 as there are some vendoring issues when bumping to v2.4. This should be first priority once the dust settles.