
kustomize's Introduction

kustomize

kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

kustomize targets kubernetes; it understands and can patch kubernetes style API objects. It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.

This tool is sponsored by sig-cli (KEP).


kubectl integration

To find the kustomize version embedded in recent versions of kubectl, run kubectl version:

> kubectl version --short --client
Client Version: v1.26.0
Kustomize Version: v4.5.7

The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which updated it to v4.0.5. It will be updated on a regular basis going forward, and such updates will be reflected in the Kubernetes release notes.

Kubectl version Kustomize version
< v1.14 n/a
v1.14-v1.20 v2.0.3
v1.21 v4.0.5
v1.22 v4.2.0
v1.23 v4.4.1
v1.24 v4.5.4
v1.25 v4.5.7
v1.26 v4.5.7
v1.27 v5.0.1

For examples and guides for using the kubectl integration please see the kubernetes documentation.
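
For a quick orientation, the two kubectl entry points look like this (a minimal sketch, assuming a kubectl recent enough to embed kustomize):

# Render the kustomization in ./someApp without applying it
kubectl kustomize ./someApp

# Build and apply in one step
kubectl apply -k ./someApp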

Usage

1) Make a kustomization file

In some directory containing your YAML resource files (deployments, services, configmaps, etc.), create a kustomization file.

This file should declare those resources, and any customization to apply to them, e.g. add a common label.


base: kustomization + resources

kustomization.yaml                                      deployment.yaml                                                 service.yaml
+---------------------------------------------+         +-------------------------------------------------------+       +-----------------------------------+
| apiVersion: kustomize.config.k8s.io/v1beta1 |         | apiVersion: apps/v1                                   |       | apiVersion: v1                    |
| kind: Kustomization                         |         | kind: Deployment                                      |       | kind: Service                     |
| commonLabels:                               |         | metadata:                                             |       | metadata:                         |
|   app: myapp                                |         |   name: myapp                                         |       |   name: myapp                     |
| resources:                                  |         | spec:                                                 |       | spec:                             |
|   - deployment.yaml                         |         |   selector:                                           |       |   selector:                       |
|   - service.yaml                            |         |     matchLabels:                                      |       |     app: myapp                    |
| configMapGenerator:                         |         |       app: myapp                                      |       |   ports:                          |
|   - name: myapp-map                         |         |   template:                                           |       |     - port: 6060                  |
|     literals:                               |         |     metadata:                                         |       |       targetPort: 6060            |
|       - KEY=value                           |         |       labels:                                         |       +-----------------------------------+
+---------------------------------------------+         |         app: myapp                                    |
                                                        |     spec:                                             |
                                                        |       containers:                                     |
                                                        |         - name: myapp                                 |
                                                        |           image: myapp                                |
                                                        |           resources:                                  |
                                                        |             limits:                                   |
                                                        |               memory: "128Mi"                         |
                                                        |               cpu: "500m"                             |
                                                        |           ports:                                      |
                                                        |             - containerPort: 6060                     |
                                                        +-------------------------------------------------------+
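
For copy-paste convenience, here is the kustomization.yaml from the diagram above as plain text:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: myapp
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: myapp-map
    literals:
      - KEY=value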

File structure:

~/someApp
├── deployment.yaml
├── kustomization.yaml
└── service.yaml

The resources in this directory could be a fork of someone else's configuration. If so, you can easily rebase from the source material to capture improvements, because you don't modify the resources directly.

Generate customized YAML with:

kustomize build ~/someApp

The YAML can be directly applied to a cluster:

kustomize build ~/someApp | kubectl apply -f -

2) Create variants using overlays

Manage traditional variants of a configuration - like development, staging and production - using overlays that modify a common base.


overlay: kustomization + patches

kustomization.yaml                                      replica_count.yaml                      cpu_count.yaml
+-----------------------------------------------+       +-------------------------------+       +------------------------------------------+
| apiVersion: kustomize.config.k8s.io/v1beta1   |       | apiVersion: apps/v1           |       | apiVersion: apps/v1                      |
| kind: Kustomization                           |       | kind: Deployment              |       | kind: Deployment                         |
| commonLabels:                                 |       | metadata:                     |       | metadata:                                |  
|   variant: prod                               |       |   name: myapp                 |       |   name: myapp                            |
| resources:                                    |       | spec:                         |       | spec:                                    |
|   - ../../base                                |       |   replicas: 80                |       |  template:                               |
| patches:                                      |       +-------------------------------+       |     spec:                                |
|   - path: replica_count.yaml                  |                                               |       containers:                        |
|   - path: cpu_count.yaml                      |                                               |         - name: myapp                    |  
+-----------------------------------------------+                                               |           resources:                     |
                                                                                                |             limits:                      |
                                                                                                |               memory: "128Mi"            |
                                                                                                |               cpu: "7000m"               |
                                                                                                +------------------------------------------+

File structure:

~/someApp
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml

Take the work from step (1) above, move it into a someApp subdirectory called base, then place overlays in a sibling directory.

An overlay is just another kustomization, referring to the base, and referring to patches to apply to that base.
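
For copy-paste convenience, the production overlay's kustomization.yaml and one of its patches from the diagram above:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  variant: prod
resources:
  - ../../base
patches:
  - path: replica_count.yaml
  - path: cpu_count.yaml

# overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 80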

This arrangement makes it easy to manage your configuration with git. The base could have files from an upstream repository managed by someone else. The overlays could be in a repository you own. Arranging the repo clones as siblings on disk avoids the need for git submodules (though that works fine, if you are a submodule fan).

Generate YAML with

kustomize build ~/someApp/overlays/production

The YAML can be directly applied to a cluster:

kustomize build ~/someApp/overlays/production | kubectl apply -f -

Community

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


kustomize's Issues

Improve error messages on Go errors.

It's a lot of work, but it would be helpful if errors carried some notion of the YAML file that caused the error, so you don't have to dig through the source code. For example, today I saw:

Error: <nil> is not expected to be a primitive type

When you look at the source, it could be one of two errors, and there is no context provided, so I couldn't have guessed which (other than by diffing the configuration changes):

return fmt.Errorf("%#v is not expected to be a primitive type", typedV)

return "", fmt.Errorf("%#v is not expected to be a primitive type", typedV)

Name substitutions done for containers but not initContainers

I have a secret generated with the "secretGenerator", and I'd like to use a value in that secret in the definition of an init container. Something like:

spec:
  initContainers:
  - name: my-init-container
    env:
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: passwords
            key: my_password

The name of the secret should be substituted by kustomize with the real generated name of the secret. This works for normal containers, but not for init containers.

Missing nameReferences

There may be more but so far I noticed that Ingress doesn't get updated with the Service name and HorizontalPodAutoscaler doesn't get updated with the Deployment name.

edit should preserve kustomization field order and comments

The edit command unmarshals kustomization.yaml into an unstructured object, edits it, and marshals it back without retaining field ordering or comments. It also introduces zero-valued fields that weren't in the original content (perhaps harmless, but ugly).

Support for rewriting API versions

Given a base containing a resource with a particular Kubernetes API version (let's say a StatefulSet with API version apps/v1beta1), there seems to be no way to patch this API version within an overlay.

I cannot use a patch to rewrite the API version because patches already use the API version to match a resource in the first place.

To make migrating between Kubernetes versions easier, it could be useful to have another common section within kustomization.yaml that redefines resource API versions:

namePrefix: test-
namespace: test
commonAPIVersions:
  statefulset: apps/v1
  service: v1
  deployment: v1

This would replace the existing API versions with new ones.

The reason for requesting this feature is that I have started creating a directory tree with bases and overlays. It seems necessary to group the "base" directory structure by system/version/subsystem/app, like druid/0.9.2/platform/zookeeper.

For the use case of migrating from one Kubernetes version to another, or from an outdated manifest to a newer one, I would either have to completely rewrite the base tree, which would be disruptive for the overlays, or organize the base tree by Kubernetes version as well as application version, which makes these directory trees deep and complex.

Add nameref for openshift routes?

Hi,

I understand that this is likely going to be rejected, but I'll try anyway :) Something that is really needed to make the namePrefix mechanism work in an OpenShift context is replacing the name of the target service in Route objects (apiVersion: route.openshift.io/v1). Would you maybe consider doing the nameref substitution for that too?

A typical route definition looks as follows:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  tls:
    termination: edge
  to:
    kind: Service
    name: frontend

See: https://docs.openshift.org/latest/dev_guide/routes.html
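
For context, a hedged sketch of how a custom nameReference transformer configuration might teach kustomize about Routes (the configurations mechanism and field layout here reflect my understanding of the transformer config format; file names are illustrative, so verify against your kustomize version):

# kustomization.yaml
configurations:
  - route-name-reference.yaml

# route-name-reference.yaml
nameReference:
  - kind: Service
    fieldSpecs:
      - path: spec/to/name
        kind: Route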

Respect ordering in kustomization resources array

This can be extremely useful for namespace creation: a Namespace has to be created before any of the objects you want to place inside it. Example:

$ cat kustomization.yaml 
commonLabels:
  app: hello
resources:
- namespace.yaml
- cm.yaml

$ kustomize build .

apiVersion: v1
data:
  hello.txt: |
    "hello"
kind: ConfigMap
metadata:
  labels:
    app: hello
  name: hello-config-ck54t29m9k
  namespace: hello
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: hello
  name: hello

$ kustomize build . | kubectl apply -f -      
namespace "hello" created
Error from server (NotFound): error when creating "STDIN": namespaces "hello" not found

The Namespace manifest is emitted at the end of the output, which causes the error above.

diff doesn't compare uncustomized resources to customized resources

RawResources purports to return uncustomized resources so they can be diffed against customized resources.

However, applicationImpl.RawResources calls a.rawResources, which calls a.subAppResources(), which loops over the bases and calls subapp.SemiResources() for each base. That function performs transformation.

So RawResources only skips transforming the resources in the topmost kustomization file.

See TODO in TestRawResources2 - that test shouldn't pass
https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/app/application_test.go#L289

Improper use of os.Stat

os.Stat is called directly in the bowels of the code, despite there being a means to mock it, meaning it's impossible to inject a fakefs there.

find ./ -name "*.go" | grep -v vendor/| xargs grep os.Stat\(

Likewise, there's an if statement that ignores an error check on the result of os.Stat here:

https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/resmap/secret.go#L63

If the error isn't ignored (and it should not be), then application_test fails (in secret generation).

I noticed this when investigating #85

So this issue is to route those direct os.Stat calls through the mockable filesystem and to stop ignoring that error.

It would be even better to completely divorce the transformation/generation stage from the filesystem by loading everything into memory first, retaining an FS-like tree as needed, but that's a bigger issue to maybe attempt later.

Improve secret generator error messages

When secret generator commands fail, they currently print out the process status code and no other information (which secret/key failed, for example), making it difficult to debug the issue.

e.g.

[bran@dupont dev (master)]$ kustomize build .
Error: exit status 127

Proposal: Add the secret name and secret key name to the error message.

Request to isolate core yaml-altering code from kubernetes-specific code

Hi. Quick fly-by idea for you while the project is early on:

Could you aim to (or continue to) separate the YAML-altering code from the Kubernetes-specific code? i.e. either as a separate repo or, failing that, as a separate go-gettable library.

Why?

I've been thinking about infrastructure-as-code recently and really like this project's approach. It's possible that the OSS community might find other uses for the approach and could reuse and also contribute to the core non-kubernetes-specific code if it were an isolated go library.

Examples

As an example, take HashiCorp's memberlist package, which implements the SWIM gossip protocol. It is a separate repo from their Consul repo and receives lots of contributions and use in other projects that I suspect it wouldn't have received if it were inside the Consul repo.

What other possible uses for the core approach are there? Probably a few; YAML manifests are popular! Take Google Cloud's Deployment Manager, for example. Very similar to Helm, it does declarative deploys based on YAML config files. I was wondering if there was a nice way to add some kustomize magic to it, instead of having to use Python or Jinja templating just to change a few params between dev/prod environments, which led me to post this.

PS Ignore me if this doesn't really make sense. I haven't gone through the codebase to determine if there's much non-kubernetes-specific code to isolate. Thought I'd just suggest the idea.

support creating secrets from file contents

For example:

secretGenerator:
- name: secret-in-base
  behavior: merge
  commands:
    - type: env
      command: "blackbox_cat secret.env"    
    - type: yaml
      command: "sops -d secret.yaml"    
    - type: json
      command: "cat secret.json"    
    - type: inline
      hello: "echo world"    

Secrets are not namespaced with base.namespace when generated with secretGenerator

In order to reproduce this:

  • Create a kustomization.yaml file in the base directory with namespace: foo-bar:

namespace: foo-bar
resources:
- foo.yaml

  • Create a kustomization.yaml file in the overlay directory with a secretGenerator:

bases:
- "../../base"
secretGenerator:
- name: my-secret
  commands:
    config.secret: "uname -a"
  type: "Opaque"

Result: the generated secret is not namespaced in foo-bar.

Workaround: also set the namespace in the overlay's kustomization.yaml.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Allow using kustomize as HELM plugin

We have a use case with many production environments (managed with Helm) and one development environment that requires an overlay (opening a debug port, for example) and strictly must not be used in production. Therefore I suggest allowing kustomize to be used as a Helm downloader plugin, so it will be possible to pass a chart through kustomize before it is used in the development environment.

Improve error message on YAML format errors

Currently if you have a syntax error in one of the yaml files, you get an error like (kustomize 1.0.2):

$ kustomize build deploy/prod
Error: NewResourceSliceFromPatches: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context

Usage:
  kustomize build [path] [flags]

Examples:
Use the file somedir/kustomization.yaml to generate a set of api resources:
build somedir/

Proposed improvements:

  • Include the name of the failing yaml file (the line number doesn't help much otherwise :))
  • Do not output usage information. That just dilutes the message.

Support variable substitution using downward API syntax

The original issue is about supporting container arguments that depend on kustomize-generated names. We have a POC done with downward-API syntax; we need to productionize it.

Here is a concrete example I ran into while migrating cockroachdb's example to work with kinflate:

For more info on the cockroachdb manifest YAML, refer to this link: https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html

The "secure" version of cockroachdb uses an init container that initiates a request-signing process which depends on the cockroachdb service name; that name is passed as an argument to the container. Using kinflate, if we apply a name prefix, the Service name gets prefixed, but the init container's argument still uses the old name, which breaks.

The second case in the same example is that cluster nodes use a hostname derived from the StatefulSet's name in container args. Name prefixing changes the StatefulSet's name, breaking the container args.

So this completely blocked migration of cockroachdb manifests to work with Kustomize.
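
A hedged sketch of the var-style substitution this asks for, using the vars field and $(VAR) syntax as I understand them (the object, variable, and argument names here are illustrative):

# kustomization.yaml
vars:
  - name: COCKROACHDB_SERVICE_NAME
    objref:
      kind: Service
      name: cockroachdb
      apiVersion: v1
    fieldref:
      fieldpath: metadata.name

# in the StatefulSet's init container spec
args:
  - "--host=$(COCKROACHDB_SERVICE_NAME)"

With a namePrefix in place, $(COCKROACHDB_SERVICE_NAME) should resolve to the prefixed Service name in the rendered output.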

Jobs should not add spec.selector

Kustomize adds spec.selector to Job definitions, which you normally don't want.

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migration
        image: "core:latest"
        imagePullPolicy: IfNotPresent
        command: ["python3", "manage.py", "initializedb"]
        envFrom:
          - configMapRef:
              name: settings-map
          - secretRef:
              name: secrets-map
        env:
          - name: SECRET_KEY
            valueFrom:
              secretKeyRef:
                name: secrets-map
                key:  django-secret-key

Results in

apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: xylok
  name: migration
  namespace: default
spec:
  backoffLimit: 4
  selector:
    matchLabels:
      app: xylok
  template:
    metadata:
      labels:
        app: xylok
    spec:
      containers:
      - command:
        - python3
        - manage.py
        - initializedb
        env:
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: django-secret-key
              name: secrets-map-hmh762fg95
        envFrom:
        - configMapRef:
            name: settings-map-5hhf57bc2d
        - secretRef:
            name: secrets-map-hmh762fg95
        image: core:latest
        imagePullPolicy: IfNotPresent
        name: migration
      restartPolicy: Never

Applying it results in:

The Job "migration" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"xylok", "controller-uid":"a2126cd5-7316-11e8-86df-08002746865b"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` not auto-generated

Commenting out the spec.selector portion allows the Job to be correctly applied.

Is there a reason Kustomize should add matchLabels?

I assume removing lines 85-89 of the transformations is all that's needed. I can submit a PR if it's confirmed this is an issue.

Variable Docker image tags

My application has a number of services which use the same Docker image (tagged with the application repository's Git SHA). I have a CI workflow like:

  1. Build my app's docker image and push it to a registry with ${APP}:${GIT_COMMIT_SHA}
  2. helm install --upgrade --set GIT_COMMIT_SHA=${GIT_COMMIT_SHA} charts/${APP}

Can you suggest how to make this workflow work with kustomize? It seems like I would have to add a step that substitutes the image (with sed?) in all the templates prior to kustomize build, or generate a deployment patch for each service.
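
One way to fit this workflow, sketched with kustomize's images field and kustomize edit set image (the registry and image names below are placeholders):

# kustomization.yaml
images:
  - name: myapp
    newTag: latest   # overwritten by CI

# CI step, run in the overlay directory
kustomize edit set image myapp=registry.example.com/myapp:${GIT_COMMIT_SHA}
kustomize build . | kubectl apply -f -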

Provide for omission of resources.

I'm new to kustomize, so let me know if this even makes sense.

Use case: I'm transforming an all-in-one YAML (--- delimited). This works well, except I want to skip one of the resources in the upstream YAML stream.

Something like this:

omissionPatches:
- svc.some-public-service.yaml

Where svc.some-public-service.yaml contains:

apiVersion: v1
kind: Service
metadata:
  name: some-public-service

In the example scenario, kustomize should not include svc/some-public-service in the output stream.
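
For comparison, a hedged sketch of how a strategic-merge delete patch can express this omission (assuming $patch: delete support in your kustomize version):

# kustomization.yaml
patchesStrategicMerge:
  - drop-public-service.yaml

# drop-public-service.yaml
$patch: delete
apiVersion: v1
kind: Service
metadata:
  name: some-public-service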

Convention-Based Defaults for kustomization.yaml

Has there been any discussion on a system for convention-based defaults? It seems like many of the fields in a kustomization.yaml could be omitted if the user follows some conventional directory structure. For example, the structure outlined in the tutorials:

someApp
├── base
│   ├── manifest1.yaml
│   ├── manifest2.yaml
│   └── ...
└── overlays
    ├── overlay1
    │   ├── manifest1.yaml
    │   ├── manifest2.yaml

use namePrefix in every reference/direct link


copied from kubernetes/kubectl#293

I have the following resources:
one Ingress, one Service

---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: webapp
  name: webapp
spec:
  ports:
  - name: http
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    k8s-app: webapp
  type: ClusterIP

and

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp
spec:
  rules:
  - host: webapp-example.com
    http:
      paths:
      - backend:
          serviceName: webapp
          servicePort: 5000

In this case, using namePrefix isn't practical, because I have to manually edit the Ingress:

      - backend:
          serviceName: $namePrefix-webapp
          servicePort: 5000

I would expect a replacement in the serviceName too

Reference to location of root kustomization.yaml

Scripts executing in a secret generator have the working directory of the kustomization.yaml file that defined them. To create a reusable secret generator, I would like to use a secret generator as a base with paths relative to the kustomization.yaml file I'm building. It would be useful if we had some variable or built-in environment variable referencing that file.

I realize it may be more "kustomizeable" to use an overlay secret generator that merges into a base, so that one does not have to reason so much about what context a base will be used in, or to open up using bases with arguments/variables in general. I do think this could simplify repetitive configuration, however.

Example

In this example, I have .pgpass sitting in the same directory as the secret generator pg. It will generate a secret from that file, and I can use it as a base in my foobar kustomization. However, I would like to put .pgpass with the foobar file, or an overlay using it.

I can replace the relative path with an environment variable (such as $PGPASS) and make sure I pass an absolute path to kustomize build (e.g. PGPASS=$PWD/.pgpass kustomize build).

tree -a:

.
├── bases
│   └── pg
│       ├── .pgpass
│       └── kustomization.yaml
└── foobar
    ├── deploy.yaml
    └── kustomization.yaml

3 directories, 4 files

bases/pg/kustomization.yaml:

secretGenerator:
- name: pg
  commands:
    # will reference files in `bases/pg`, not directory of root `kustomization.yaml` file
    # replace with `$PGPASS` and pass that to make it work
    PGDATABASE: "cut -d: -f3 .pgpass"
    PGUSER: "cut -d: -f4 .pgpass"
    PGPASSWORD: "cut -d: -f5 .pgpass"
  type: Opaque

bases/pg/.pgpass:

hostname:port:database:username:password

foobar/deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
  labels:
    app: foobar
spec:
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
    spec:
      containers:
      - name: foobar
        image: foobar
        env:
        - name: PGDATABASE
          valueFrom: { secretKeyRef: { name: pg, key: PGDATABASE }}
        - name: PGUSER
          valueFrom: { secretKeyRef: { name: pg, key: PGUSER }}
        - name: PGPASSWORD
          valueFrom: { secretKeyRef: { name: pg, key: PGPASSWORD }}

foobar/kustomization.yaml:

namePrefix: foobar-
bases:
- ../bases/pg
resources:
- deploy.yaml

Easy way to update the container image name

In order to reduce copy and paste in large applications, it would be nice to be able to patch multiple resources with one patch via some kind of selector other than name. For instance, in my Kubernetes application, I have separate web and job pods that share the same Docker image and most of the same pod values, which I currently have to copy and paste into each. If one could write a patch that matched on a label or some other kind of selector, these shared values could be factored out.
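
A hedged sketch of what a selector-targeted patch could look like, using the target stanza of the patches field (labelSelector support in your version, the label, and the patched field are assumptions):

patches:
  - target:
      kind: Deployment
      labelSelector: app=myapp
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/image
        value: myapp:v2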

Support to writing back the yaml to files

Hi, thanks for this great project, I was really looking for something like that and I am excited for the future developments!

I was wondering whether it would make sense, from your point of view, to have the option of writing the output YAML to files following the original names instead of STDOUT.
My use case would be to use them in a deploy script that knows about some conventions; i.e. I don't want to apply a deployment and thereby replace 100% of my instances, but rather take a blue/green approach. Such logic should not be part of kustomize, and having the files written back to disk would be enough for me to build logic around the generated files.

Do you mind sharing your opinion on that proposal?
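
For reference, a hedged sketch of the output flag that later kustomize versions offer (behavior as I understand it; check kustomize build --help for your version):

# Write one YAML file per rendered resource into ./rendered instead of STDOUT
kustomize build ~/someApp -o ./rendered/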

Soft-coding transformable resource structs and paths

In kustomize transformation, we rely on a so-called pathConfig to determine the proper transformation: adding a name prefix, namespace, name reference, hash, or variable reference. The current pathConfig contains hard-coded types and field paths. Every time we want to support an extra type or field, we need to add it to the hard-coded part. In addition, we found that some types/fields need to be skipped in certain transformations, as in PRs #70 and #81.
We want to design a more generic solution for the transformation config, so that:

  • it is easy to set certain transformations on certain types with straightforward logic, such as skip or ignore
  • we don't need to handle a lot of hard-coded pathConfig

Make generated configmaps composable

Using generated config maps (via configMapGenerator) seems to work only in simple base-overlay setups.
However, referencing a base that contains a generated config map fails in several scenarios:

given the following arrangement:

- base
  +- myapp
     +- mycomponent
        - kustomization.yaml
          - namePrefix "myapp-"
          - configMapGenerator constructs "mycomponent-config" with mode "create"
     +- mycomponent2
        ...
- overlay
  +- dev
     +- myapp
        +- mycomponent
           - kustomization.yaml
             - bases:
               - ../../../../base/myapp/mycomponent
             - namePrefix "dev-"
             - configMapGenerator merges additional config-key into "mycomponent-config" with mode "merge"
        +- mycomponent2
           ...
        - kustomization.yaml
          - bases:
            - mycomponent.yaml
            - mycomponent2.yaml

If I run "kustomize build dev/myapp/mycomponent", all is good.
Same for "kustomize build dev/myapp/mycomponent2" - all good.
When I run "kustomize build dev/myapp", which points to the kustomization file that only declares the two subcomponents and does nothing else, I get the following error:

Error: No merge or replace is allowed for non existing gvkn types.GroupVersionKindName{GVK:schema.GroupVersionKind{Group:"", Version:"v1", Kind:"ConfigMap"}, Name:"mycomponent-config"}

The same error happens when trying to merge/replace a configmap from within an overlay, if that configmap has been declared in a base as a merged configmap.
To explain it more simply: only a single merge/replace is possible via a configMapGenerator; after that, no other kustomization can refer to it without causing an error.

Cron jobs aren't supported

kustomize creates an invalid metadata block in the CronJob Job spec and kubectl correctly complains about an invalid spec.

Steps to reproduce:

  1. Unzip cronjobs.zip to create the directory structure below:

cronjobs
├── base
│   ├── cronjob.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        └── kustomization.yaml

  2. Under cronjobs, run kustomize build overlays/dev | kubectl apply --dry-run -f -.

Output:

error: error validating "STDIN": error validating data: ValidationError(CronJob.spec.jobTemplate.spec): unknown field "metadata" in io.k8s.api.batch.v1.JobSpec; if you choose to ignore these errors, turn validation off with --validate=false
