bazelbuild / rules_k8s
This repository contains rules for interacting with Kubernetes configurations / clusters.
License: Apache License 2.0
As primitives, the k8s_defaults and k8s_object rules are quite fine grained. I envisage the current k8s_defaults and k8s_object evolving to provide some templating parameterised by make variables, JSON files, or statically provided dicts -- this will invariably be limiting.
Here are some suggestions:
- Make k8s_defaults more than just defaults for a single type of resource, or a holder for some configuration parameters. A k8s_defaults that holds jsonnet file(s) (libsonnet) can model anything from a simple dict of params -- as it does now -- all the way to a particular deployment environment.
- Allow k8s_object to represent an arbitrary set of k8s resources instead of just a single resource. A single jsonnet file renders to multiple JSON or YAML files. A real-world microservice is going to have a service, a controller, an ingress, secrets, and a configmap. These should likely be deployed in one step.
I'm working on wiring kubecfg into our codebase. I have run rules for my k8s_object equivalent that map to the kubecfg commands (apply, delete, validate, show). I can provide more detail if interested.
... without having to use a registry.
I recently replaced the use of f.short_path all over docker_build with _get_runfile_path, to make it so that the scripts produced by things like :dev.create can be built and run separately.
rules_k8s exhibits the same symptom as the broken "before" picture; we should apply the same fixes here so that higher-level rules can bundle our actions into higher-level actions.
This links to both of my PRs so far fixing this class of issue.
This is related to this issue with k8s_objects.
This manifests when the input to k8s_object is a multi-document YAML file delimited by ---, such as the Istio release.
In the case of Istio, they create the CRDs before instances of the CRDs, since otherwise kubectl create would fail.
However, when you kubectl delete this file, you run into problems because (evaluating in the same order) you delete the CRD definition before its instances, so the former cascades the deletion and the latter results in 404 errors.
k8s_object(kind) is the same as yaml(kind)
k8s_object(namespace) is the same as yaml(metadata/namespace)
Replicating these on the k8s_object rule is IMHO not a good idea. It brings the danger that one specifies a different kind on the rule than in the template.
Even worse, one needs to repeat data that differs from the default (e.g. namespace).
What is the advantage of having those on the rule in the first place?
Can we change the way this works:
1.) Print a warning if the value from the rule differs from the yaml/json
2.) If the value is not given in the rule, take it from the yaml/json, else use the default?
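A minimal sketch of the proposed precedence for fields like kind and namespace (the function name and warning text are hypothetical, not the rules' current behaviour):

```python
import sys

def resolve_field(rule_value, yaml_value, default):
    """Warn on conflicts; prefer the rule, then the template, then the default."""
    if rule_value and yaml_value and rule_value != yaml_value:
        print("WARNING: rule value %r differs from template value %r"
              % (rule_value, yaml_value), file=sys.stderr)
    return rule_value or yaml_value or default
```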
Is it possible to get the generated output without deploying the images to an external registry? For k8s_object, {name}.yaml is also not defined.
We are using Cloud Endpoints to serve an API from a Kubernetes cluster. We can't use rules_k8s to update the service as is, since we need to publish both the container image and the service definition (see the Endpoints docs). The service definition is uploaded with gcloud endpoints services deploy, which generates a service version number that must be edited into the deployment YAML.
The best UX for deploying such a service in development would be to extend :foo.apply to deploy the service definition before resolving the deployment YAML. However, Bazel doesn't model dependencies of run actions on other run actions, and since service deployment is non-hermetic we can't use build-phase templating (like rules_jsonnet).
What do you think about extending rules_k8s in one of the following ways?
If that's out-of-scope, we'd most likely use a script that gets the YAML from rules_k8s, does further templating, and reimplements the create/apply/replace actions.
I'm currently working on k8s CI with Bazel; it would be nice to have a place to chat.
A Bazel-wide Slack presence would be best; there is no response on IRC whenever I ask something.
This is a followup to the discussion in #71.
Let's imagine
BUILD:
k8s_object(
    name = "deployment",
    template = ":deployment.yaml",
    images = {
        "web-frontend": "//src/docker/web_frontend:web_frontend",
    },
)
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: web-frontend
        image: web-frontend
Since the rules use a simple expand_template action and the substitution keys are simple words, this will replace web-frontend in both name and image.
A naive approach would be to make the substitution key something like "image: " + spec, so that it only applies to image tags. But this would not be robust against whitespace and formatting variants (the template can be YAML or JSON).
A better approach would be to mark the substitutions (this is what we do): in the file you write e.g. "${web-frontend}", wrapping the substitution keys with '${' + '}'. This won't be backwards compatible, but I think it would be a much better approach. Maybe we can add a boolean flag to enable substitution_guards. Or, if people write the key in images as '${web-frontend}', you remove the guards to get the actual image name; then we don't need the flag.
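A sketch of what such guarded substitution could look like, assuming a ${key} syntax (the helper name is made up):

```python
import re

def substitute_guarded(template, images):
    """Replace only ${key} occurrences, leaving bare words like a
    container's name untouched. Unknown keys are left as-is."""
    def repl(match):
        return images.get(match.group(1), match.group(0))
    return re.sub(r"\$\{([^}]+)\}", repl, template)
```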
Instead of just a CRD with an async controller, we should provide a sample (in at least Go) that hooks into the API Aggregation layer.
Ideally we'd have this in all K8s client languages.
Maybe following this
When I bazel run a k8s_object target, it fails as shown below. The code should probably handle the exception.
The BUILD file:
k8s_object(
    name = "deployment",
    template = ":deployment.yaml.expanded",
    images = {
        "xxx_image": "//src/docker/web_frontend:web_frontend",
    },
    cluster = "cluster-name",  # actual value omitted
    image_chroot = "eu.gcr.io/project-id/",  # actual value omitted
)
The output
Traceback (most recent call last):
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 201, in <module>
main()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 173, in main
(tag, digest) = Publish(transport, args.image_chroot, **kwargs)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 151, in Publish
with v2_2_session.Push(name_to_publish, creds, transport, threads=_THREADS) as session:
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_session_.py", line 71, in __init__
name, creds, transport, docker_http.PUSH)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_http_.py", line 193, in __init__
self._Refresh()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_http_.py", line 272, in _Refresh
'Authorization': self._basic_creds.Get()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/docker_creds_.py", line 153, in Get
stderr=subprocess.STDOUT)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
ERROR: Non-zero return code '1' from command: Process exited with status 1
Rather than depending on the kubectl that is available on the system, it would be nifty to be hermetic and compile kubectl. Since Kubernetes is using Bazel and Gazelle to manage things, I believe you could just add it as a git_repository (it vendors all its deps, so no need to add more), depend on @com_github_kubernetes_kubernetes//cmd/kubectl as a data dependency, and then shell out to external/com_github_kubernetes_kubernetes/cmd/kubectl/kubectl instead. A bash wrapper could check for the existence of that file, call it, and fall back to the system version if you didn't want to force the dependency.
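In Python rather than bash, the fallback could be sketched like this (the runfiles path is taken from the suggestion above; the helper names are assumptions):

```python
import os
import subprocess

# Where the vendored binary built from
# @com_github_kubernetes_kubernetes//cmd/kubectl would land in runfiles.
VENDORED_KUBECTL = "external/com_github_kubernetes_kubernetes/cmd/kubectl/kubectl"

def find_kubectl():
    """Prefer the hermetic, vendored kubectl; fall back to the system one."""
    if os.path.isfile(VENDORED_KUBECTL) and os.access(VENDORED_KUBECTL, os.X_OK):
        return VENDORED_KUBECTL
    return "kubectl"

def run_kubectl(args):
    """Shell out to whichever kubectl was found."""
    return subprocess.call([find_kubectl()] + list(args))
```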
The 'run' commands for create/apply/delete shell out to kubectl and gcloud. This should at least be mentioned in a README. Ideally the rules would pull the commands in through the WORKSPACE for hermeticity, right?
For background: when running them through dazel, one needs to install them into the dazel docker container.
If I have:
k8s_object(
    name = "foo",
    cluster = "{MISSING_KEY}",
    ....
)
This will result in the painful error:
$ bazel run foo.create
INFO: Analysed target //:foo.create (0 packages loaded).
INFO: Found 1 target...
ERROR: /..../BUILD:68:1: Stamp foo.create.cluster-name failed (Exit 1)
Traceback (most recent call last):
File "/.../execroot/.../bazel-out/host/bin/external/io_bazel_rules_k8s/k8s/stamper.runfiles/..../../io_bazel_rules_k8s/k8s/stamper.py", line 56, in <module>
main()
File "/.../execroot/.../bazel-out/host/bin/external/io_bazel_rules_k8s/k8s/stamper.runfiles/.../../io_bazel_rules_k8s/k8s/stamper.py", line 52, in main
f.write(args.format.format(**format_args))
KeyError: 'MISSING_KEY'
Target //:foo.create failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1.239s, Critical Path: 0.29s
FAILED: Build did NOT complete successfully
ERROR: Build failed. Not running target
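A sketch of how stamper.py could turn the bare KeyError into an actionable message (function and variable names are assumed, not the actual stamper.py code):

```python
def stamp(fmt, format_args):
    """Expand stamp variables, failing with a readable error on unknown keys."""
    try:
        return fmt.format(**format_args)
    except KeyError as err:
        raise SystemExit(
            "error: unknown stamp variable %s in %r; available keys: %s"
            % (err, fmt, ", ".join(sorted(format_args))))
```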
https://github.com/bazelbuild/rules_k8s#k8s_object says for the cluster param:
If this is omitted, the create, replace, delete, describe actions will not exist.
and likewise for the namespace param:
If this is omitted, it will default to "default".
I think these should be moved to k8s_defaults, with a note in k8s_object that the defaults come from k8s_defaults. If this is correct I can send a PR.
These create a non-linear git history, so my
git_repository(
    name = "io_bazel_rules_k8s",
    commit = "055db1f75e00e805762798bbb14afb945955f5c1",
    remote = "https://github.com/bazelbuild/rules_k8s.git",
)
broke this morning now that the referenced commit isn't in the history of the master branch.
You should disable the green merge button, or whatever affordance allows merge commits.
If one does:
k8s_robco_object(
    name = "config",
    kind = "configmap",
    template = ":config.yaml",
)
you will get an error (see subject). One can work around it by using name = "x-config", but that is ugly.
The idea is to enable developers to use a different registry for development with the same template, by chrooting image names.
e.g. deployment.yaml may reference: gcr.io/my-prod/image:prod
However, you don't want to publish there for development, and today we just push to the actual image reference in the template.
So instead, we could enable developers to "chroot" the image reference into a different environment, e.g.
k8s_defaults(
    name = "k8s_dev_deploy",
    kind = "deployment",
    image_chroot = "gcr.io/my-dev/{BUILD_USER}",
)
Then the following:
k8s_dev_deploy(
    name = "dev",
    template = "template.yaml",
    images = {
        "gcr.io/my-prod/image:prod": "//path/to:image",
    },
)
Would substitute references to gcr.io/my-prod/image:prod with:
gcr.io/my-dev/{BUILD_USER}/gcr.io/my-prod/image@sha256:published-digest
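A sketch of the intended rewrite (the helper name is assumed; the tag stripping here is naive and would need care around registry ports and digest references):

```python
def chroot_image(image_chroot, reference, digest):
    """Re-root a tagged reference under a dev chroot, pinned to its digest.
    e.g. gcr.io/my-prod/image:prod
      -> gcr.io/my-dev/alice/gcr.io/my-prod/image@sha256:..."""
    repository = reference.rsplit(":", 1)[0]  # naively drop the :tag
    return "%s/%s@%s" % (image_chroot.rstrip("/"), repository, digest)
```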
Let's assume I have a deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: web-frontend
        image: XXXX
and a build rule:
k8s_object(
    name = "x-deployment",
    kind = "deployment",
    template = ":deployment.yaml",
    images = {
        "XXXX": "//src/docker/web_frontend:web-frontend",
    },
)
I get:
INFO: Running command line: bazel-bin/src/kubernetes/web-frontend/x-deployment
Traceback (most recent call last):
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 180, in <module>
main()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 166, in main
(tag, digest) = Publish(transport, args.image_chroot, **kwargs)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 136, in Publish
name_to_replace = docker_name.Tag(name)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 196, in __init__
_check_tag(self._tag)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 80, in _check_tag
_check_element('tag', tag, _TAG_CHARS, 1, 127)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 64, in _check_element
% (name, element, min_len))
containerregistry.client.docker_name_.BadNameException: Invalid tag: , must be at least 1 characters
ERROR: Non-zero return code '1' from command: Process exited with status 1
The docs need to explain what format the key in the images param in the BUILD file needs to be, and how to specify this in the deployment.yaml.
I also tried $(GCP_REGISTRY)/$(GCP_PROJECT)/web-frontend, where GCP_REGISTRY and GCP_PROJECT are --define values in my .bazelrc, but I don't want to hardcode this in the template (deployment.yaml).
It looks like it tries to parse the key; e.g. when I change "XXXX" to "XXXX:latest" I get
"""
containerregistry.client.docker_name_.BadNameException: A Docker registry domain must be specified.
"""
If that is the case, I don't understand how I can avoid hard-coding the registry domains into the templates. I think it would be nicer to specify this as a param in k8s_defaults().
If I put gcr.io/<project-id>/web-frontend:latest into the deployment.yaml and "gcr.io/<project-id>/web-frontend:latest": "//src/docker/web_frontend:web-frontend" into the BUILD file, the rule runs, but it apparently interacts with the cloud (I need to have a valid registry login); I thought it would only resolve (aka substitute) the template. Also, the produced output template is not expanded; it only prints the resolved template to stdout.
The current config on ci.bazel.io is a dumb one (see https://github.com/bazelbuild/continuous-integration/blob/master/jenkins/jobs/configs/empty.json); it would be better to add a non-dumb config to the repo so it can be tested on ci.bazel.io.
This is basically an umbrella item to consider how these rules will interact with Istio. Perhaps what we have is enough, and we just need samples. Perhaps Istio is special enough that we want rules_istio.
Consider:
k8s_objects(
    name = "foo",
    objects = [
        ":namespace",
        ":deployment",  # in the namespace
    ],
)
This is ordered properly for :foo.create, but :foo.delete will always fail deleting :deployment because :namespace was already cleaned up.
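A plausible fix is for the delete action to walk the constituent objects in the reverse of create order (a sketch, not the rules' current behaviour):

```python
def delete_sequence(objects):
    """Delete contained resources (e.g. :deployment) before their
    container (e.g. :namespace) by reversing the create order."""
    return list(reversed(objects))
```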
I'm trying to deploy a go_image (rules_docker) from my mac (High Sierra) to a Kubernetes cluster and am seeing the following unexpected behavior.
bazel run --cpu k8 //risk/jeopardy_ui:image -- --norun # A go_image rule
...
Loaded image ID: c9f1edf07b2e
bazel run --cpu k8 //risk/jeopardy_ui:deployment.create # A k8s_object rule
...
# jeopardy_ui id deployed -> 67bc10da402d
bazel run --cpu k8 //risk/jeopardy_ui:full_deploy.create # A k8s_objects rule
...
# jeopardy_ui id deployed -> 26d62db52b54
Running the container output by the first command works as expected; the other two error out with the following:
standard_init_linux.go:185: exec user process caused "exec format error"
In other words, I get different digests when deploying a go_image with k8s_object, as well as when said rule is triggered by a k8s_objects collection.
Not sure what other details I could provide, but I would be happy if someone could shed some light on this.
Reading through the "build" docs it says
Build builds all of the constituent elements, and makes the template available as {name}.yaml
But no such yaml file is created when running bazel build //path/to:k8s_object_thing; instead, some shell files are created.
Is there a way to see what YAML will be used when writing back to k8s, without accessing container registries?
Investigate factoring WORKSPACE into .bzl files we can put alongside each sample, so that they may be more self-describing.
Since bazelbuild/bazel@a6ad2d5, use of the set constructor instead of depset is an error.
https://github.com/pubref/rules_protobuf/releases has fixed this in their latest release. Please upgrade rules_k8s to the latest release, or the next Bazel release will break your build.
Error message:
$ bazel-dev build //examples/hellogrpc/proto:all
...
ERROR: /usr/local/google/home/jcater/repos/rules_k8s/examples/hellogrpc/proto/BUILD:5:1: Traceback (most recent call last):
File "/usr/local/google/home/jcater/repos/rules_k8s/examples/hellogrpc/proto/BUILD", line 5
cc_proto_library(name = "cc", protos = ["simple.pro..."])
File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 97, in cc_proto_library
native.cc_library(name = name, srcs = (srcs + [(name...")]), <2 more arguments>)
File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 100, in native.cc_library
list(set(((deps + proto_deps) + compi...)))
File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 100, in list
set(((deps + proto_deps) + compile_d...))
The `set` constructor for depsets is deprecated and will be removed. Please use the `depset` constructor instead. You can temporarily enable the deprecated `set` constructor by passing the flag --incompatible_disallow_set_constructor=false
...
Since k8s_object is already running substitutions to apply project settings, it would be nice if one could substitute other variables too.
1.) One option could be to specify an extra dict of key:value pairs that would be added to the substitutions it sets up for the images:
k8s_object(
    name = "x-deployment",
    template = ":deployment.yaml",
    images = { ... },
    substitutions = {
        "${KEY}": "value",
    },
)
Issues:
- one might want to set up some global substitutions in k8s_defaults() and add more in the aliased rule
2.) Just add ctx.var to the list of substitutions. All keys are changed to `${key}` or something similar.
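A sketch of option 2.), exposing each variable under a ${KEY} spelling alongside the image substitutions (the helper name is assumed):

```python
def build_substitutions(image_subs, ctx_vars):
    """Merge image substitutions with make-variable style entries,
    wrapping every variable key as ${KEY}."""
    subs = dict(image_subs)
    for key, value in ctx_vars.items():
        subs["${%s}" % key] = value
    return subs
```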
This is from a good while ago when I first played with Bazel + gRPC. There is surely a newer / better version.
If you run:
bazel run :k8s.create -- --save-config
the --save-config is silently ignored. You won't find out until you try to apply a change later and it doesn't work. For this case you can just use apply instead of create, but the current behaviour masks the mistake, and passing command-line arguments through to kubectl could be useful in other cases.
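Forwarding the trailing bazel run arguments instead of dropping them could look like this (a sketch; the real command construction in the rules may differ):

```python
def build_kubectl_command(action, template_path, extra_args):
    """Build e.g. ['kubectl', 'create', '-f', tmpl, '--save-config'],
    appending whatever the user passed after 'bazel run ... --'."""
    return ["kubectl", action, "-f", template_path] + list(extra_args)
```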
I'd suggest passing the extra arguments through to kubectl.
There are several logically separable legs of our Travis build:
buildifier
bazel build/test
container_push() (https://github.com/bazelbuild/rules_docker/blob/master/container/push.bzl#L62) lets you use make variables. The registry and repository parameters are straightforward to use.
When using k8s_object() I would need to customize image_chroot. This only supports stamp variables, which are not documented by Bazel, and customizing them is not straightforward (and not documented either).
I have this in my WORKSPACE:
k8s_defaults(
    name = "k8s_my_object",
    cluster = "gke_$(GCP_PROJECT)_$(GCP_ZONE)_xxx",
    image_chroot = "$(GCP_REGISTRY)/$(GCP_PROJECT)/",
)
and in my .bazelrc:
build --define GCP_REGISTRY=eu.gcr.io
...
But those are not getting substituted. I'll probably try the workspace_status_command, but it makes things more complicated :/