rules_k8s's Introduction

(ARCHIVED) Bazel Kubernetes Rules

This repository is no longer maintained.

Overview

This repository contains rules for interacting with Kubernetes configurations / clusters.

Setup

Add the following to your WORKSPACE file to add the necessary external dependencies:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# https://github.com/bazelbuild/rules_docker/#setup
# http_archive("io_bazel_rules_docker", ...)

http_archive(
    name = "io_bazel_rules_k8s",
    strip_prefix = "rules_k8s-0.5",
    urls = ["https://github.com/bazelbuild/rules_k8s/archive/v0.5.tar.gz"],
    sha256 = "773aa45f2421a66c8aa651b8cecb8ea51db91799a405bd7b913d77052ac7261a",
)

load("@io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")

k8s_repositories()

load("@io_bazel_rules_k8s//k8s:k8s_go_deps.bzl", k8s_go_deps = "deps")

k8s_go_deps()

Kubernetes Authentication

As is somewhat standard for Bazel, the expectation is that the kubectl toolchain is preconfigured to authenticate with any clusters you might interact with.

For more information on how to configure kubectl authentication, see the Kubernetes documentation.

NOTE: we are currently experimenting with toolchain features in these rules, so there will be upcoming changes to how this configuration is performed.

Container Engine Authentication

For Google Container Engine (GKE), the gcloud CLI provides a simple command for setting up authentication:

gcloud container clusters get-credentials <CLUSTER NAME>

Dependencies

New: Starting with commit https://github.com/bazelbuild/rules_k8s/commit/ff2cbf09ae1f0a9c7ebdfc1fa337044158a7f57b, these rules can either use a pre-installed kubectl tool (the default) or build the kubectl tool from source.

The kubectl tool is used when executing the run action from bazel.

The kubectl tool is configured via a toolchain rule. Read more about the kubectl toolchain here.

If GKE is used, the gcloud SDK must also be installed.

Examples

Basic "deployment" objects

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
  name = "dev",
  kind = "deployment",

  # A template of a Kubernetes Deployment object yaml.
  template = ":deployment.yaml",

  # An optional collection of docker_build images to publish
  # when this target is bazel run.  The digest of the published
  # image is substituted as a part of the resolution process.
  images = {
    "gcr.io/rules_k8s/server:dev": "//server:image"
  },
)

Aliasing (e.g. k8s_deploy)

In your WORKSPACE you can set up aliases for a more readable short-hand:

load("@io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_defaults")

k8s_defaults(
  # This becomes the name of the @repository and the rule
  # you will import in your BUILD files.
  name = "k8s_deploy",
  kind = "deployment",
  # This is the name of the cluster as it appears in:
  #   kubectl config view --minify -o=jsonpath='{.contexts[0].context.cluster}'
  cluster = "my-gke-cluster",
)

Then in place of the above, you can use the following in your BUILD file:

load("@k8s_deploy//:defaults.bzl", "k8s_deploy")

k8s_deploy(
  name = "dev",
  template = ":deployment.yaml",
  images = {
    "gcr.io/rules_k8s/server:dev": "//server:image"
  },
)

Note that in load("@k8s_deploy//:defaults.bzl", "k8s_deploy"), both occurrences of k8s_deploy refer to the name parameter passed to k8s_defaults. If you change name = "k8s_deploy" to something else, you will need to change the load statement in both places.

Multi-Object Actions

It is common practice in the Kubernetes world to have multiple objects that comprise an application. There are two main ways that we support interacting with these kinds of objects.

The first is to simply use a template file that contains your N objects delimited with ---, and omitting kind="...".
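To make the first approach concrete, here is a minimal Python sketch (illustrative only, not the rules' actual resolver) of how a ----delimited template decomposes into its individual objects:

```python
# Minimal sketch, NOT the rules' actual implementation: splitting a
# template that holds several Kubernetes objects separated by the
# standard YAML document delimiter "---".
def split_objects(template_text):
    """Return the non-empty documents of a ----delimited template."""
    return [doc.strip()
            for doc in template_text.split("\n---\n")
            if doc.strip()]

template = """\
kind: Deployment
metadata:
  name: server
---
kind: Service
metadata:
  name: server
"""

for doc in split_objects(template):
    print(doc.splitlines()[0])
# kind: Deployment
# kind: Service
```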

The second is through the use of k8s_objects, which aggregates N k8s_object rules:

# Note the plurality of "objects" here.
load("@io_bazel_rules_k8s//k8s:objects.bzl", "k8s_objects")

k8s_objects(
   name = "deployments",
   objects = [
      ":foo-deployment",
      ":bar-deployment",
      ":baz-deployment",
   ]
)

k8s_objects(
   name = "services",
   objects = [
      ":foo-service",
      ":bar-service",
      ":baz-service",
   ]
)

# These rules can be nested
k8s_objects(
   name = "everything",
   objects = [
      ":deployments",
      ":services",
      ":configmaps",
      ":ingress",
   ]
)

This can be useful when you want to be able to stand up a full environment, which includes resources that are expensive to recreate (e.g. LoadBalancer), but still want to be able to quickly iterate on parts of your application.

Developer Environments

A common practice to avoid clobbering other users is to do your development against an isolated environment. Two practices are fairly common-place.

  1. Individual development clusters
  2. Development "namespaces"

To support these scenarios, the rules support using "stamping" variables to customize these arguments to k8s_defaults or k8s_object.

For per-developer clusters, you might use:

k8s_defaults(
  name = "k8s_dev_deploy",
  kind = "deployment",
  cluster = "gke_dev-proj_us-central5-z_{BUILD_USER}",
)

For per-developer namespaces, you might use:

k8s_defaults(
  name = "k8s_dev_deploy",
  kind = "deployment",
  cluster = "shared-cluster",
  namespace = "{BUILD_USER}",
)

You can customize the stamp variables that are available at a repository level by leveraging --workspace_status_command. One pattern for this is to check in the following:

$ cat .bazelrc
build --workspace_status_command="bash ./print-workspace-status.sh"

$ cat print-workspace-status.sh
cat <<EOF
VAR1 value1
# This can be overriden by users if they "export VAR2_OVERRIDE"
VAR2 ${VAR2_OVERRIDE:-default-value2}
EOF

For more information on "stamping", you can see also the rules_docker documentation on stamping here.
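The workspace-status output above follows a simple convention: one "KEY value" pair per line. As a hedged illustration (this is not Bazel's actual parser, and the variable names are made up), stamp variables can be read like so:

```python
# Illustrative only: parse workspace-status output of the form
# "KEY value" (one pair per line) into a dict of stamp variables.
def parse_status(output):
    variables = {}
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        # Everything before the first space is the key; the rest is the value.
        key, _, value = line.partition(" ")
        variables[key] = value
    return variables

status = "VAR1 value1\nVAR2 default-value2\nBUILD_USER alice"
print(parse_status(status)["BUILD_USER"])  # alice
```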

Don't tread on my tags

Another ugly problem remains, which is that image references are still shared across developers, and while our resolution to digests avoids races, we may not want them trampling on the same tag, or on production tags if shared templates are being used.

Moreover, developers may not have access to push to the images referenced in a particular template, or the development cluster to which they are deploying may not be able to pull them (e.g. clusters in different GCP projects).

To resolve this, we enable developers to "chroot" the image references, publishing them instead to that reference under another repository.

Consider the following, where developers use GCP projects named company-{BUILD_USER}:

k8s_defaults(
  name = "k8s_dev_deploy",
  kind = "deployment",
  cluster = "gke_company-{BUILD_USER}_us-central5-z_da-cluster",
  image_chroot = "us.gcr.io/company-{BUILD_USER}/dev",
)

In this example, the k8s_dev_deploy rules will target the developer's cluster in their project, and images will all be published under the image_chroot.

For example, if the BUILD file contains:

k8s_deploy(
  name = "dev",
  template = ":deployment.yaml",
  images = {
    "gcr.io/rules_k8s/server:dev": "//server:image"
  },
)

Then the references to gcr.io/rules_k8s/server:dev will be replaced with one to: us.gcr.io/company-{BUILD_USER}/dev/gcr.io/rules_k8s/server@sha256:....
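That rewriting can be sketched as follows (a simplification of the behavior described above, not the rules' actual resolver; the digest is a placeholder):

```python
# Illustrative sketch of image_chroot rewriting: the published image
# keeps its original repository path, prefixed with the chroot, and is
# referenced by digest rather than tag.
def chroot_reference(original, image_chroot, digest):
    # Drop the tag (e.g. ":dev") from the original reference, then
    # prefix with the chroot and append the content digest.
    # (Simplified: assumes the last ":" separates the tag.)
    repository = original.rsplit(":", 1)[0]
    return "{}/{}@{}".format(image_chroot, repository, digest)

ref = chroot_reference(
    "gcr.io/rules_k8s/server:dev",
    "us.gcr.io/company-alice/dev",
    "sha256:deadbeef",  # placeholder digest
)
print(ref)
# us.gcr.io/company-alice/dev/gcr.io/rules_k8s/server@sha256:deadbeef
```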

Custom resolvers

Sometimes, you need to replace additional runtime parameters in the YAML file. While you can use expand_template for parameters known to the build system, you'll need a custom resolver if the parameter is determined at deploy time. A common example is Google Cloud Endpoints service versions, which are determined by the server.

You can pass a custom resolver executable as the resolver argument of all rules:

sh_binary(
  name = "my_script",
  ...
)

k8s_deploy(
  name = "dev",
  template = ":deployment.yaml",
  images = {
    "gcr.io/rules_k8s/server:dev": "//server:image"
  },
  resolver = ":my_script",
)

This script may need to invoke the default resolver (//k8s/go/cmd/resolver) with all its arguments. It may capture the default resolver's output and apply additional modifications to the YAML, printing the final YAML to stdout.
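One possible shape for such a wrapper, sketched in Python (the resolver path, the %{service_version} placeholder, and the service-version value are all assumptions made for illustration, not part of the rules):

```python
import subprocess
import sys

# Hypothetical deploy-time value; a real script might obtain this from
# e.g. "gcloud endpoints services describe ...".
SERVICE_VERSION = "2023-01-01r0"

def postprocess(yaml_text):
    """Replace a deploy-time placeholder in the already-resolved YAML."""
    return yaml_text.replace("%{service_version}", SERVICE_VERSION)

def main():
    # Invoke the default resolver with the arguments we were given and
    # capture the resolved YAML it prints to stdout. The binary path
    # here is an assumption about the runfiles layout.
    resolved = subprocess.run(
        ["k8s/go/cmd/resolver/resolver"] + sys.argv[1:],
        check=True, capture_output=True, text=True,
    ).stdout
    # Print the final YAML to stdout, as the rules expect.
    sys.stdout.write(postprocess(resolved))

# main() would run when Bazel executes this script as the resolver.
```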

Usage

The k8s_object[s] rules expose a collection of actions. We will follow the :dev target from the example above.

Build

Build builds all of the constituent elements, and makes the template available as {name}.yaml. If template is a generated input, it will be built. Likewise, any docker_build images referenced from the images={} attribute will be built.

bazel build :dev

Resolve

Deploying with tags, especially in production, is a bad practice because they are mutable. If a tag changes, it can lead to inconsistent versions of your app running after auto-scaling or auto-healing events. Thankfully in v2 of the Docker Registry, digests were introduced. Deploying by digest provides cryptographic guarantees of consistency across the replicas of a deployment.

You can "resolve" your resource template by running:

bazel run :dev

The resolved template will be printed to STDOUT.

This command will publish any images = {} present in your rule, substituting those exact digests into the yaml template, and for other images resolving the tags to digests by reaching out to the appropriate registry. Any images that cannot be found or accessed are left unresolved.

This process only supports fully-qualified tag names. This means you must always specify tag and registry domain names (no implicit :latest).
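As a rough illustration of that constraint (a simplified heuristic, not the rules' actual validation logic):

```python
# Simplified heuristic, NOT the rules' actual check: a "fully
# qualified" reference names a registry domain and an explicit tag.
def is_fully_qualified(reference):
    repo, sep, tag = reference.rpartition(":")
    # No ":" at all, or a "/" after the last ":" (meaning it was a
    # registry port, not a tag), means there is no explicit tag.
    if not sep or "/" in tag:
        return False
    domain = repo.split("/")[0]
    # A registry domain contains a dot or a port, e.g. "gcr.io" or
    # "localhost:5000".
    return "." in domain or ":" in domain or domain == "localhost"

print(is_fully_qualified("gcr.io/rules_k8s/server:dev"))  # True
print(is_fully_qualified("server"))                       # False (implicit :latest)
print(is_fully_qualified("library/ubuntu:latest"))        # False (no registry domain)
```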

Create

Users can create an environment by running:

bazel run :dev.create

This deploys the resolved template, which includes publishing images.

Update

Users can update (replace) their environment by running:

bazel run :dev.replace

Like .create this deploys the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).

Apply

Users can "apply" a configuration by running:

bazel run :dev.apply

:dev.apply maps to kubectl apply, which will create or replace an existing configuration. For more information see the kubectl documentation.

This applies the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).

Delete

Users can tear down their environment by running:

bazel run :dev.delete

It is notable that despite deleting the deployment, this will NOT delete any services currently load balancing over the deployment; this is intentional as creating load balancers can be slow.

Describe (k8s_object-only)

Users can "describe" their environment by running:

bazel run :dev.describe

Diff

Users can "diff" a configuration by running:

bazel run :dev.diff

:dev.diff maps to kubectl diff, which diffs the live configuration against the would-be-applied version. For more information see the kubectl documentation.

This diffs the resolved template, but does not include republishing images.

k8s_object

k8s_object(name, kind, template)

A rule for interacting with Kubernetes objects.

Attributes
name

Name, required

Unique name for this rule.

kind

Kind, required

The kind of the Kubernetes object in the yaml.

cluster

string, optional

The name of the cluster to which create, replace, delete, describe should speak. Subject to "Make" variable substitution.

If this is omitted, the apply, create, replace, delete, describe actions will not exist.

context

string, optional

The name of a kubeconfig context to use. Subject to "Make" variable substitution.

If this is omitted, the current context will be used.

namespace

string, optional

The namespace on the cluster within which the actions are performed. Subject to "Make" variable substitution.

If this is omitted, it will default to the value specified in the template or if also unspecified there, to the value "default".

user

string, optional

The user to authenticate to the cluster as configured with kubectl. Subject to "Make" variable substitution.

If this is omitted, kubectl will authenticate as the user from the current context.

kubeconfig

kubeconfig file, optional

The kubeconfig file to pass to the `kubectl` tool via the `--kubeconfig` option. Can be useful if the `kubeconfig` is generated by another target.

substitutions

string_dict, optional

Substitutions to make when expanding the template.

Follows the same rules as expand_template. Values are "make variable" substituted. You can also use the Bazel command-line option --define to define your own custom variables.

  # Example
  k8s_object(
    name = "my_ingress",
    kind = "ingress",

    # A template of a Kubernetes ingress object yaml.
    template = ":ingress.yaml",

    # An optional collection of substitutions to perform on
    # the template prior to resolution.
    substitutions = {
      "%{expand_template_variable}": "$(make_expanded_variable)",
    },
  )

This is then invoked with bazel run --define make_expanded_variable=value :target, and will replace any occurrence of the literal token %{expand_template_variable} in your template with the value "value", by way of make variable substitution followed by string replacement.
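The two-phase order can be sketched in Python (purely illustrative; not the rules' actual implementation):

```python
# Sketch of the two-phase substitution described above: the value side
# is "make variable"-expanded first, then each key is replaced in the
# template as a literal token.
def apply_substitutions(template, substitutions, make_vars):
    for token, value in substitutions.items():
        # Phase 1: expand $(var) using the --define'd make variables.
        for var, var_value in make_vars.items():
            value = value.replace("$(%s)" % var, var_value)
        # Phase 2: literal replacement of the token in the template.
        template = template.replace(token, value)
    return template

out = apply_substitutions(
    "host: %{expand_template_variable}",
    {"%{expand_template_variable}": "$(make_expanded_variable)"},
    {"make_expanded_variable": "value"},
)
print(out)  # host: value
```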

Any stamp variables are also replaced with their values. This is done after make variable substitution.

template

yaml or json file; required

The yaml or json for a Kubernetes object.

images

string to label dictionary

When this target is bazel run, the images referenced by label are published to the registry/tag specified by the key.

The published digests of these images are substituted directly into the template, avoiding a race in the resolution process.

Subject to "Make" variable substitution.

image_chroot

string, optional

The repository under which to actually publish Docker images.

resolver

target, optional

A build target for the binary that is called to resolve references inside the Kubernetes YAML files.

args

string_list, optional

Additional arguments to pass to the kubectl command at execution.

NOTE: You can also pass args via the CLI by running something like: bazel run some_target -- some_args

NOTE: Not all options are available for all kubectl commands. To view the list of global options run: kubectl options

resolver_args

string_list, optional

Additional arguments to pass to the resolver directly.

NOTE: This option is to pass the specific arguments to the resolver directly, such as --allow_unused_images.

k8s_objects

k8s_objects(name, objects)

A rule for interacting with multiple Kubernetes objects.

Attributes
name

Name, required

Unique name for this rule.

objects

Label list or dict; required

The list of objects on which actions are taken.

When this target is bazel run, it resolves each of the object targets (which includes publishing their associated images) and prints a ----delimited yaml.

If a dict is provided it will be converted to a select statement.

k8s_defaults

k8s_defaults(name, kind)

A repository rule that allows users to alias k8s_object with default values.

Attributes
name

Name, required

The name of the repository that this rule will create.

Also the name of the rule imported from @name//:defaults.bzl.

kind

Kind, optional

The kind of objects the alias of k8s_object handles.

cluster

string, optional

The name of the cluster to which create, replace, delete, describe should speak.

This should match the cluster name as it would appear in kubectl config view --minify -o=jsonpath='{.contexts[0].context.cluster}'

context

string, optional

The name of a kubeconfig context to use.

namespace

string, optional

The namespace on the cluster within which the actions are performed.

user

string, optional

The user to authenticate to the cluster as configured with kubectl.

image_chroot

string, optional

The repository under which to actually publish Docker images.

resolver

target, optional

A build target for the binary that is called to resolve references inside the Kubernetes YAML files.

Testing

To test rules_k8s, you can run the provided e2e tests locally on Linux by following these instructions.

Support

Users can find support on Stack Overflow, Slack, and the Google Group mailing list.

Stack Overflow

Stack Overflow is a great place for developers to help each other.

Search through existing questions to see if someone else has had the same issue as you.

If you have a new question, please ask the Stack Overflow community. Include rules_k8s in the title and add [bazel] and [kubernetes] tags.

Google group mailing list

The general Bazel support options page links to the official bazel-discuss Google group mailing list.

Slack and IRC

Slack and IRC are great places for developers to chat with each other.

There is a #bazel channel in the kubernetes slack. Visit the kubernetes community page to find the slack.k8s.io invitation link.

There is also a #bazel channel on Freenode IRC, although we have found the slack channel more engaging.

Adopters

Here's a (non-exhaustive) list of companies that use rules_k8s in production. Don't see yours? You can add it in a PR!

rules_k8s's People

Contributors

alex1545, benley, chases2, codesuki, ensonic, erain, fejta, globegitter, goodspark, hlopko, kevingessner, laurentlb, mariusgrigoriu, mattmoor, meteorcloudy, michelle192837, mishas, mmikitka, nlopezgi, pcj, philwo, pierreis, renovate-bot, rohansingh, samschlegel, smukherj1, vladmos, xingao267, ymotongpoo, zegl

rules_k8s's Issues

Expand more data in the template

Since k8s_object is already running substitution to apply project settings, it would be nice if one could substitute other variables too.

1.) One option could be to specify an extra dict of key:value pairs that it would add to the substitutions it sets up for the images:

k8s_object(
  name = "x-deployment",
  template = ":deployment.yaml",
  images = { ... },
  substitutions = {
    "${KEY}": "value",
  },
)

Issues:
- one might want to setup some global substitution in k8s_defaults() and add more in the aliased rule

2.) Just add ctx.vars to the list of substitutions. All keys are changed to `${key}` or something similar.

rules are not hermetic at all

The 'run' commands for create/apply/delete shell out to kubectl and gcloud. This should maybe be mentioned at least in a README. Ideally the rules will pull the commands through the WORKSPACE for hermeticity, right?

For background: when running them through dazel, one will need to install them into the dazel docker container.

Hermetically include kubectl in rules_k8s

Rather than depending on the kubectl that is available on the system, it would be nifty to be hermetic and compile kubectl. Since kubernetes is using Bazel and Gazelle to manage things, I believe you could just add it as a git_repository (it vendors all its deps, so no need to add more), depend on @com_github_kubernetes_kubernetes//cmd/kubectl as a data dependency, and then shell out to external/com_github_kubernetes_kubernetes/cmd/kubectl/kubectl instead. A bash wrapper could check for the existence of that file and call it, falling back to the system version if you didn't want to force the dependency.

Better errors when stamp variables are missing.

If I have:

k8s_object(
    name = "foo",
    cluster = "{MISSING_KEY}",
    ....
)

This will result in the painful error:

$ bazel run foo.create
INFO: Analysed target //:foo.create (0 packages loaded).
INFO: Found 1 target...
ERROR: /..../BUILD:68:1: Stamp foo.create.cluster-name failed (Exit 1)
Traceback (most recent call last):
File "/.../execroot/.../bazel-out/host/bin/external/io_bazel_rules_k8s/k8s/stamper.runfiles/..../../io_bazel_rules_k8s/k8s/stamper.py", line 56, in <module>
main()
File "/.../execroot/.../bazel-out/host/bin/external/io_bazel_rules_k8s/k8s/stamper.runfiles/.../../io_bazel_rules_k8s/k8s/stamper.py", line 52, in main
f.write(args.format.format(**format_args))
KeyError: 'MISSING_KEY'
Target //:foo.create failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1.239s, Critical Path: 0.29s
FAILED: Build did NOT complete successfully
ERROR: Build failed. Not running target

Consider using Jsonnet (and maybe Kubecfg) as a foundation for the rules.

As primitives, k8s_defaults and k8s_object are quite fine-grained. I envisage the current k8s_defaults and k8s_object evolving to provide some templating parameterised by make variables, json files, and statically provided dicts -- this will invariably be limiting.

  1. I don't think they will scale to express multiple build / CI environments without a lot of skylark macro code.
  2. Even if a production environment can be faithfully modelled using skylark macros, all of that work will be specific to the build / CI environment. To produce manifests for production, a separate manifest management tool would still need to be maintained.

Here are some suggestions:

  • Build upon Jsonnet. It is super lightweight as a templating language and easy to include as an external dependency (no external dependencies with an existing bazel build).
  • Expand the scope of k8s_defaults beyond just defaults for a single type of resource, or a holder for some configuration parameters. A k8s_defaults that holds jsonnet file(s) (libsonnet) can model anything from a simple dict of params -- as it does now -- to a particular deployment environment.
  • Expand the scope of k8s_object to represent an arbitrary set of k8s resources instead of just a single resource. A single jsonnet file renders to multiple Json or Yaml files. A real world micro service is going to have a service, a controller, ingress, secrets, configmap. These should likely be deployed in one step.
  • Consider the use of kubecfg -- the kubecfg tool takes care of some of the problems that would need to be solved to use jsonnet in the manner described above. Kubecfg has image resolution, simple multi-yaml-file rendering and CI steps that would likely mean a dependency on kubectl is not necessary.

I am working on wiring kubecfg into our codebase. I have run rules for my k8s_object equivalent that map to kubecfg's (apply, delete, validate, show) commands. I can provide more detail if interested.

k8s_defaults(image_chroot) does not support makevars

I have this in my WORKSPACE:

k8s_defaults(
  name = "k8s_my_object",
  cluster = "gke_$(GCP_PROJECT)_$(GCP_ZONE)_xxx",
  image_chroot = "$(GCP_REGISTRY)/$(GCP_PROJECT)/"
)

and in my .bazelrc:

build --define GCP_REGISTRY=eu.gcr.io
...

But those are not getting substituted. I'll probably try the workspace_status_command, but it makes things more complicated :/

Service deployment when publishing images

We are using Cloud Endpoints to serve an API from a Kubernetes cluster. We can't use rules_k8s to update the service as is, since we need to publish both the container image and the service definition (see the Endpoints docs). The service definition is uploaded with gcloud endpoints services deploy, which generates a service version number that must be edited into the deployment YAML.

The best UX for deploying such a service in development would be to extend :foo.apply to deploy the service definition before resolving the deployment YAML. However, Bazel doesn't model dependencies of run actions on other run actions, and since service deployment is non-hermetic we can't use build-phase templating (like rules_jsonnet).

What do you think about extending rules_k8s in one of the following ways?

  • to take a service definition and handle deployment for Cloud Endpoints
  • to run arbitrary executables, parse key-value pairs from the output, and use this for templating

If that's out-of-scope, we'd most likely use a script that gets the YAML from rules_k8s, does further templating, and reimplements the create/apply/replace actions.

Support chrooting image references for images we publish.

The idea is to enable developers to use a different registry for development with the same template by chrooting image names.

e.g. deployment.yaml may reference: gcr.io/my-prod/image:prod

However, you don't want to publish there for development and today we just push to the actual image reference in the template.

So instead, we could enable developers to "chroot" the image reference into a different environment. e.g.

k8s_defaults(
    name = "k8s_dev_deploy",
    kind = "deployment",
    image_chroot = "gcr.io/my-dev/{BUILD_USER}",
)

Then the following:

k8s_dev_deploy(
    name = "dev",
    template = "template.yaml",
    images = {
       "gcr.io/my-prod/image:prod": "//path/to:image",
    }
)

Would substitute references to gcr.io/my-prod/image:prod with:
gcr.io/my-dev/{BUILD_USER}/gcr.io/my-prod/image@sha256:published-digest.

some rule attributes shadow the template data

k8s_object(kind) is the same as yaml(kind)
k8s_object(namespace) is the same as yaml(metadata/namespace)

Replicating these on the k8s_object rule is IMHO not a good idea. It brings the danger that one specifies a different kind on the rule than in the template.
Even worse, one needs to repeat data if it differs from the default (e.g. namespace).

What is the advantage of having those on the rule in the first place?

Can we change the way this works:
1.) Print a warning if the value from the rule differs from the yaml/json
2.) If the value is not given in the rule, take it from the yaml/json, else use the default?

Better docs for image substitutions

Lets assume I have a deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 1
  template:
    spec:
      - name: web-frontend
        image: XXXX

and a build rule:

k8s_object(
  name = "x-deployment",
  kind = "deployment",
  template = ":deployment.yaml",
  images = {
    "XXXX": "//src/docker/web_frontend:web-frontend",
  }
)

I get:

INFO: Running command line: bazel-bin/src/kubernetes/web-frontend/x-deployment
Traceback (most recent call last):
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 180, in <module>
    main()
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 166, in main
    (tag, digest) = Publish(transport, args.image_chroot, **kwargs)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 136, in Publish
    name_to_replace = docker_name.Tag(name)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 196, in __init__
    _check_tag(self._tag)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 80, in _check_tag
    _check_element('tag', tag, _TAG_CHARS, 1, 127)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 64, in _check_element
    % (name, element, min_len))
containerregistry.client.docker_name_.BadNameException: Invalid tag: , must be at least 1 characters
ERROR: Non-zero return code '1' from command: Process exited with status 1

The docs need to explain what format the key in the images param in the BUILD file needs to be and how to specify this in the deployment.yaml.

I also tried $(GCP_REGISTRY)/$(GCP_PROJECT)/web-frontend where GCP_REGISTRY and GCP_PROJECT are defines in my bazelrc, but I don't want to hardcode this in the template (deployment.yaml).

It looks like it tries to parse the key. E.g. when I change "XXXX" to "XXXX:latest" I get
"""
containerregistry.client.docker_name_.BadNameException: A Docker registry domain must be specified.
"""

If that is the case, I don't understand how I can avoid hard-coding the registry domains into the templates. I think it would be nicer to specify this in k8s_defaults() as a param.

If I put gcr.io/<project-id>/web-frontend:latest into the deployment.yaml and "gcr.io/<project-id>/web-frontend:latest": "//src/docker/web_frontend:web-frontend" into the BUILD file, the rule runs, but it apparently interacts with the cloud (I need to have a valid registry login); I thought it would only resolve (aka substitute) the template. Also, the produced output template is not expanded; it only prints the resolved template to stdout.

Remove unchanged/useless output from k8s_object

If one does:

k8s_robco_object(
  name = "config",
  kind = "configmap",
  template = ":config.yaml",
)

you will get an error (see subject). One can work around it by using name = "x-config", but that is ugly.

Istio Samples

This is basically an umbrella item to consider how these rules will interact with Istio. Perhaps what we have is enough, and we just need samples. Perhaps Istio is special enough that we want rules_istio.

This repo allows merge commits

These create a non-linear git history, so my

git_repository(
    name = "io_bazel_rules_k8s",
    commit = "055db1f75e00e805762798bbb14afb945955f5c1",
    remote = "https://github.com/bazelbuild/rules_k8s.git",
)

broke this morning now that the referenced commit isn't in the history of the master branch.

You should disable the green merge button, or whatever affordance allows merge commits.

unhelpful error when running a k8s_object

When I bazel run a k8s_object target it fails like below. The code should probably handle the exception.

The BUILD file:

k8s_object(
    name = "deployment",
    template = ":deployment.yaml.expanded",
    images = {
      "xxx_image": "//src/docker/web_frontend:web_frontend",
    },
    cluster = "cluster-name",   # actual value omitted
    image_chroot = "eu.gcr.io/project-id/",       # actual value omitted
  )

The output

Traceback (most recent call last):
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 201, in <module>
    main()
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 173, in main
    (tag, digest) = Publish(transport, args.image_chroot, **kwargs)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 151, in Publish
    with v2_2_session.Push(name_to_publish, creds, transport, threads=_THREADS) as session:
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_session_.py", line 71, in __init__
    name, creds, transport, docker_http.PUSH)
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_http_.py", line 193, in __init__
    self._Refresh()
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/v2_2/docker_http_.py", line 272, in _Refresh
    'Authorization': self._basic_creds.Get()
  File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-opt/bin/src/kubernetes/web-frontend/deployment.runfiles/containerregistry/client/docker_creds_.py", line 153, in Get
    stderr=subprocess.STDOUT)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
ERROR: Non-zero return code '1' from command: Process exited with status 1
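The OSError at the bottom means the resolver tried to exec a Docker credential helper binary that isn't installed. A hedged sketch of friendlier handling, assuming a helper invocation roughly like the resolver's (the function name and error message here are illustrative, not the actual resolver.py code):

```python
import subprocess
import sys

def run_credential_helper(helper, args):
    """Run a docker credential helper, surfacing a readable error if the
    binary is missing instead of dying with a bare OSError traceback."""
    try:
        return subprocess.check_output([helper] + list(args),
                                       stderr=subprocess.STDOUT)
    except OSError as e:
        sys.exit("Credential helper %r could not be executed (%s). "
                 "Is it installed and on your PATH?" % (helper, e))
```

With something like this, a missing `docker-credential-gcloud` would produce an actionable one-line message rather than the traceback above.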

`build` docs claim a yaml file will be generated but none is

Reading through the "build" docs it says

Build builds all of the constituent elements, and makes the template available as {name}.yaml

But no such yaml file is created when running bazel build //path/to:k8s_object_thing. Instead, some shell files are created.

Is there a way to see what yaml will be used when written back to the k8s without accessing container registries?

APIServer Samples

Instead of just a CRD with an async controller, we should provide a sample (in at least Go) that hooks into the API Aggregation layer.

Ideally we'd have this in all K8s client languages.

Parallelize Travis builds

There are several logically separable legs of our Travis build:

  • buildifier
  • bazel build/test
  • foreach example
    • examples/{example}/e2e-test.py (supported languages need to be serial)

Remove `.short_path` references

I recently replaced the use of f.short_path all over docker_build with _get_runfile_path to make it so that the scripts produced by things like :dev.create can be built and run separately.

rules_k8s exhibits the same symptom as the broken "before" picture; we should apply the same fixes here so that higher-level rules can bundle our actions into higher-level actions.

This links to both of my PRs so far fixing this class of issue.

Command-line arguments are ignored

If you run:

bazel run :k8s.create -- --save-config

the --save-config is silently ignored. You won't find out until you try to apply a change later and it doesn't work. For this case you can just use apply instead of create, but the current behaviour masks the mistake, and passing command-line arguments through to kubectl could be useful in other cases.

I'd suggest:

  • either generated scripts should print a warning/error if they get unexpected arguments,
  • or they should pass them through to kubectl.
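The pass-through option could look like the following sketch (the real generated scripts are shell; the function name and structure here are purely illustrative, not rules_k8s code):

```python
def build_create_command(resolved_yaml, extra_args=()):
    """Build the kubectl command for a .create target, forwarding any extra
    arguments the user passed after `--` instead of silently dropping them.
    In the generated script, extra_args would come from sys.argv[1:], e.g.
    ["--save-config"] from `bazel run :k8s.create -- --save-config`."""
    return ["kubectl", "create", "-f", resolved_yaml] + list(extra_args)
```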

Update use of org_pubref_rules_protobuf to fix issue with set constructor

Since bazelbuild/bazel@a6ad2d5, use of the set constructor instead of depset is an error.

https://github.com/pubref/rules_protobuf/releases has fixed this in their latest release. Please upgrade rules_k8s to the latest release or the next Bazel release will break your build.

Error message:

$ bazel-dev build //examples/hellogrpc/proto:all
...
ERROR: /usr/local/google/home/jcater/repos/rules_k8s/examples/hellogrpc/proto/BUILD:5:1: Traceback (most recent call last):
	File "/usr/local/google/home/jcater/repos/rules_k8s/examples/hellogrpc/proto/BUILD", line 5
		cc_proto_library(name = "cc", protos = ["simple.pro..."])
	File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 97, in cc_proto_library
		native.cc_library(name = name, srcs = (srcs + [(name...")]), <2 more arguments>)
	File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 100, in native.cc_library
		list(set(((deps + proto_deps) + compi...)))
	File "/usr/local/google/home/jcater/.cache/bazel/_bazel_jcater/5c3b0c3e069ecb95f72b7ff62a3e5298/external/org_pubref_rules_protobuf/cpp/rules.bzl", line 100, in list
		set(((deps + proto_deps) + compile_d...))
The `set` constructor for depsets is deprecated and will be removed. Please use the `depset` constructor instead. You can temporarily enable the deprecated `set` constructor by passing the flag --incompatible_disallow_set_constructor=false
...

k8s_objects should delete in reverse order.

Consider:

k8s_objects(
    name = "foo",
    objects = [
        ":namespace",
        ":deployment",  # in the namespace
    ],
)

This is ordered properly for :foo.create, but :foo.delete will always fail deleting :deployment because :namespace was already cleaned up.
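A minimal sketch of the suggested fix (illustrative only, not the actual rule implementation): keep the declared order for create, reverse it for delete.

```python
def ordered_objects(objects, action):
    """Return the order to act on objects: declared order for create,
    reversed for delete, so dependents are removed before their namespace."""
    return list(reversed(objects)) if action == "delete" else list(objects)
```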

Reverse prebuilt multi-object yamls in k8s_object's delete

This is related to this issue with k8s_objects.

This manifests when the input to k8s_object is a multi-document yaml file delimited by ---, such as the Istio release.

In the case of Istio, the CRDs are created before instances of those CRDs; otherwise kubectl create would fail.

However, when you kubectl delete this file, you run into problems because (evaluating in the same order) you delete the CRD definition before its instances: the former cascades the deletion, and the latter then result in 404 errors.
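The same idea applies inside a single multi-document file. A naive Python sketch of reversing the documents before deletion (a real implementation should use a proper YAML stream parser rather than string splitting):

```python
def reverse_documents(yaml_stream):
    """Naively split a multi-document YAML stream on its '---' separators
    and reverse the document order for deletion, so CRD instances come
    before the CRD definitions that created them. Sketch only: the string
    split here is not a substitute for real YAML parsing."""
    docs = [d for d in yaml_stream.split("\n---\n") if d.strip()]
    return "\n---\n".join(reversed(docs))
```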

Avoid undesired image expansion in the template

This is a followup for the discussion in #71

Let's imagine
BUILD:

k8s_object(
    name = "deployment",
    template = ":deployment.yaml",
    images = {
      "web-frontend": "//src/docker/web_frontend:web_frontend",
    },
  )

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: web-frontend
        image: web-frontend 

Since the rules are using a simple expand_template action and the substitution keys are simple words, this will replace web-frontend in both name and image.

A naive approach would be to make the key in the substitution something like "image: " + spec so that it only applies to image tags. But this would not be robust against whitespace and formatting variants (the template can be YAML or JSON).

A better approach would be to mark the substitutions (this is what we do). That is, in the file you write e.g. "${web-frontend}", wrapping the substitution keys with '${' + '}'. This won't be backwards compatible, but I think it would be a much better approach. Maybe we can add a boolean flag to enable substitution guards. Or, if people write the key in images as '${web-frontend}', you remove the guards to get the actual image name; then we don't need the flag.
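A minimal sketch of guarded substitution, assuming keys are written as ${key} in the template (illustrative only, not the current resolver behavior):

```python
import re

def resolve(template, images):
    """Replace only guarded ${key} occurrences, leaving bare words alone.
    `images` maps key -> fully resolved image reference; unknown keys are
    left untouched so unrelated ${...} strings survive."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: images.get(m.group(1), m.group(0)),
                  template)
```

With this scheme, a container `name: web-frontend` is never rewritten; only the `image: ${web-frontend}` line is.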

go_image + k8s_object on macOS with --cpu k8

I'm trying to deploy a go_image (rules_docker) from my mac (High Sierra) to a Kubernetes cluster and am seeing the following unexpected behavior.

bazel run --cpu k8 //risk/jeopardy_ui:image -- --norun  # A go_image rule
...
Loaded image ID: c9f1edf07b2e


bazel run --cpu k8 //risk/jeopardy_ui:deployment.create  # A k8s_object rule
...
# jeopardy_ui is deployed -> 67bc10da402d


bazel run --cpu k8 //risk/jeopardy_ui:full_deploy.create  # A k8s_objects rule
...
# jeopardy_ui is deployed -> 26d62db52b54

Running the container output by the first command works as expected; the other two error out with the following:

standard_init_linux.go:185: exec user process caused "exec format error"

In other words, I get different digests when deploying a go_image with k8s_object, as well as when said rule is triggered by a k8s_objects collection.

I'm not sure what other details I could provide, but I would be happy if someone could shed some light on this.
