crds-catalog's Issues

Why an invalid schema error with ingressclassparams_v1beta1.json?

I am trying to validate a resource against the CRDs-catalog/elbv2.k8s.aws schema with kubectl-datree.
When I enter the following command, an invalid schema error occurs.
Is there something I'm missing?

input command

kubectl datree test --schema-location ingressclassparams_v1beta1.json -- --all

output

(screenshot: invalid schema error output)

CRD Extractor utility: inspect the API group and organize schemas into subdirectories

I think it would be helpful if the tool could look at the value of the spec.group attribute, create a subdirectory for it under $HOME/.datree/crdSchemas, and organize the corresponding schemas underneath it. This would avoid having to manually organize the schemas before opening a pull request against this repository.
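The proposed behavior could be sketched roughly like this. All paths, file names, and the manifest layout are illustrative stand-ins; a real implementation would read the CRD manifests the extractor already pulls from the cluster.

```shell
# Sketch of grouping converted schemas by their CRD's spec.group.
# Inputs are simulated; a real run would use the extractor's output.
schema_dir=$(mktemp -d)
crd_dir=$(mktemp -d)

# Stand-ins for one converted schema and its source CRD manifest:
printf '{}' > "$schema_dir/ingressclassparams_v1beta1.json"
cat > "$crd_dir/ingressclassparams.yaml" <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  group: elbv2.k8s.aws
EOF

# For each CRD manifest, read spec.group and file the matching
# schemas under a subdirectory named after the group.
for crd in "$crd_dir"/*.yaml; do
  group=$(sed -n 's/^ *group: *//p' "$crd" | head -n 1)
  kind=$(basename "$crd" .yaml)
  mkdir -p "$schema_dir/$group"
  mv "$schema_dir/$kind"_*.json "$schema_dir/$group/"
done

ls "$schema_dir/elbv2.k8s.aws"
```

The resulting layout would then already match this repository's `{group}/{kind}_{version}.json` convention.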

Update schema of Traefik

Hi,

There is a schema for Traefik: https://github.com/datreeio/CRDs-catalog/tree/main/traefik.containo.us

However, kubeconform fails to find it, because Traefik now publishes these CRDs as v1:

$ helm kubeconform ../. --values ./preview.yaml
ERROR: Testing failed: kubeconform failed: failed to run kubeconform: rc=1
stdin - IngressRoute release-name-traefik-sso-chart-auth-tls failed validation: could not find schema for IngressRoute
stdin - IngressRoute release-name-traefik-dashboard failed validation: could not find schema for IngressRoute
stdin - Middleware release-name-traefik-sso-chart-traefik-forward-auth failed validation: could not find schema for Middleware
stdin - ServersTransport release-name-traefik-sso-chart-insecure failed validation: could not find schema for ServersTransport
Error: plugin "kubeconform" exited with error
$ cat ../.kubeconform
schema-location:
  - default
  - https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json

See https://doc.traefik.io/traefik/reference/dynamic-configuration/kubernetes-crd/ for their documentation.

I'm happy to help work on this, but I could use some tips on the best way to do it (see #118).

alternate filenames / structures possible?

Would it be possible to add symlinks for alternate file formats like "{kind}-{group}-{version}" (e.g., frontendconfig-networking-v1beta1.json for frontendconfig_v1beta1.json)? This would allow kubeval / kubeconform to use these schemas more seamlessly by pointing at a single directory, which I don't think is possible with the current structure / layout.
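Such a flat symlink directory could be generated from the existing layout with a few lines of shell. This is a sketch under the naming proposed above; the group and file names below are illustrative:

```shell
# Generate a flat directory of "{kind}-{group}-{version}.json" symlinks
# pointing back into the catalog's "{group}/{kind}_{version}.json" files.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p catalog/networking.gke.io flat
printf '{}' > catalog/networking.gke.io/frontendconfig_v1beta1.json

for f in catalog/*/*.json; do
  group=$(basename "$(dirname "$f")")  # e.g. networking.gke.io
  short=${group%%.*}                   # e.g. networking
  base=$(basename "$f" .json)          # e.g. frontendconfig_v1beta1
  kind=${base%_*}
  version=${base##*_}
  ln -sf "../$f" "flat/$kind-$short-$version.json"
done

ls flat
```

Whether the short group prefix ("networking") or the full group ("networking.gke.io") should appear in the name is exactly the kind of convention this issue would need to settle.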

monitoring.coreos.com/servicemonitor_v1.json - missing support for the enableHttp2 field

In https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-operator-crds/templates/crd-servicemonitors.yaml there is a field called enableHttp2.
The servicemonitor schema (https://github.com/datreeio/CRDs-catalog/blob/main/monitoring.coreos.com/servicemonitor_v1.json) seems to be unaware of this field, and I get the following error from kubeconform:

"For field spec.endpoints.0: Additional property enableHttp2 is not allowed"

I think the schema may simply need updating, but my attempt with the crd-extractor tool gives me the same output that we currently have, and the enableHttp2 field isn't mentioned anywhere.
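If the fix really is just the missing property, a fragment along these lines might be all that's needed under the endpoints item properties. The exact description text and type are my assumption, mirroring the CRD's other boolean endpoint fields:

```json
"enableHttp2": {
  "description": "Whether to enable HTTP2 (assumed shape; verify against the upstream CRD).",
  "type": "boolean"
}
```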

Missing 'ingressClassName' in cert-manager.io/issuer_v1

Hi,

In our project we use the ingressClassName field to specify the Ingress class in a cert-manager Issuer. When schema validation runs, it fails with the error: additionalProperties 'ingressClassName' not allowed
I noticed that this field was removed from the catalog in this PR. Is there a particular reason for that?
In the official cert-manager repo, the ingressClassName field is present.

Regards,
Ada

Issue with importing as custom policy

Hello again,

When I tried to import the JSON schema as a policy-as-code custom rule, it failed with:

Publish failed:
customRules.0.jsonSchema: strict mode: unknown keyword: "x-kubernetes-preserve-unknown-fields"

Is this related to the ajv/dist/jtd import issue?

I'd also like to know how to import multiple CRD policies. Can policies serve the purpose of checking whether the Kubernetes cluster has a CRD installed, and of validating specific configuration scenarios?

Thanks a lot!

Policies.yaml:

apiVersion: v1
customRules:
  - identifier: ARGOCD_APPLICATIONSET_REQUIRED
    name: Argo
    defaultMessageOnFailure: Please check if the applicationset is installed
    jsonSchema: |
      {
        "properties": {
          "apiVersion": { "type": "string" },
          "kind": { "type": "string" },
          "metadata": { "type": "object" },
          "spec": {
            "properties": {
              "generators": {
                "items": {
                  "properties": {
                    "clusterDecisionResource": {
                      "properties": {
                        "configMapRef": { "type": "string" }

./crd-extractor.sh fails with two CRDs of the same short name (externalsecrets)

My cluster has CRDs for two different resources, both with the short name externalsecrets:

$ kubectl get crd | grep externalsecret
externalsecrets.external-secrets.io                        2022-05-03T00:44:59Z
externalsecrets.kubernetes-client.io                       2020-12-30T18:45:17Z

With one type installed, kubectl get externalsecrets returns its resources. However, since both CRDs claim the short name externalsecrets, that command only resolves to the first kind (not the second); the full CRD name is required to pick the correct one.

Is it possible to have crd-extractor extract both kinds of external secrets? Right now it only extracts the first kind.
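One way the extractor could sidestep the collision is to iterate over full CRD names (as returned by something like `kubectl get crds -o name`) instead of short names, since the full name always resolves unambiguously. A sketch, with the kubectl output simulated so it is self-contained:

```shell
# Stand-in for the full CRD names a cluster query would return:
crds="externalsecrets.external-secrets.io
externalsecrets.kubernetes-client.io"

targets=""
for crd in $crds; do
  # A real extractor would fetch each definition by its full name, e.g.:
  #   kubectl get crd "$crd" -o yaml
  # which can never be shadowed by a colliding short name.
  targets="$targets $crd"
done
echo "$targets"
```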

GKE issues

crd-extractor.sh fails on GKE.

 datree-demo git:(main) ./crd-extractor.sh

The output:

./crd-extractor.sh: line 41: /Users/vfarcic/.datree/crds/To learn more, consult https://cloud.yaml: No such file or directory
Traceback (most recent call last):
  File "/Users/vfarcic/.datree/crds/openapi2jsonschema.py", line 134, in <module>
    for y in yaml.load_all(f, Loader=yaml.SafeLoader):
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/__init__.py", line 93, in load_all
    yield loader.get_data()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/constructor.py", line 45, in get_data
    return self.construct_document(self.get_node())
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/composer.py", line 27, in get_node
    return self.compose_document()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/composer.py", line 58, in compose_document
    self.get_event()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/parser.py", line 118, in get_event
    self.current_event = self.state()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/parser.py", line 193, in parse_document_end
    token = self.peek_token()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/scanner.py", line 129, in peek_token
    self.fetch_more_tokens()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/scanner.py", line 223, in fetch_more_tokens
    return self.fetch_value()
  File "/opt/homebrew/lib/python3.10/site-packages/yaml/scanner.py", line 577, in fetch_value
    raise ScannerError(None, None,
yaml.scanner.ScannerError: mapping values are not allowed here
  in "/Users/vfarcic/.datree/crds/NAME                                                       CREATED AT.yaml", line 2, column 50
cp: /Users/vfarcic/.datree/crdSchemas/*.json: No such file or directory
Successfully converted 44 CRDs to JSON schema

To validate a CR using various tools, run the relevant command:

- datree:
$ datree test --schema-location '/Users/vfarcic/.datree/crdSchemas/{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' /path/to/file


- kubeconform:
$ kubeconform -summary -output json -schema-location default -schema-location '/Users/vfarcic/.datree/crdSchemas/{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' /path/to/file

- kubeval:
$ kubeval --additional-schema-locations file:"/Users/vfarcic/.datree/crdSchemas" /path/to/file

The same script works on Rancher Desktop. I have not yet tested it on other Kubernetes distributions.
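A guess at the root cause, based on the paths in the error above: on GKE, the kubectl invocation (or a gcloud credential helper) appears to emit advisory lines such as "To learn more, consult https://cloud.google.com/...", and the script then treats each output line as a CRD name, producing files like "To learn more, consult https://cloud.yaml". A defensive filter that keeps only tokens shaped like CRD names might help; a self-contained sketch with simulated output:

```shell
# Simulated `kubectl get crds` output with a GKE advisory line mixed in:
raw='NAME                  CREATED AT
backups.velero.io     2021-03-23T22:37:55Z
To learn more, consult https://cloud.google.com/kubernetes-engine/docs'

# Skip the header row and keep only first fields containing a dot,
# i.e. things that look like <plural>.<group> CRD names:
names=$(printf '%s\n' "$raw" | awk 'NR > 1 && $1 ~ /\./ { print $1 }')
echo "$names"
```

The pattern here is deliberately loose; the point is only that the script should validate each line before using it as a file name.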

Add all velero CRDs

Currently the catalog only contains the Schedule CRD of velero.io, but Velero comes with many more CRDs:

kubectl get crd | grep velero
backups.velero.io                                                 2021-03-23T22:37:55Z
backupstoragelocations.velero.io                                  2021-03-23T22:37:54Z
deletebackuprequests.velero.io                                    2021-03-23T22:37:54Z
downloadrequests.velero.io                                        2021-03-23T22:37:54Z
podvolumebackups.velero.io                                        2021-03-23T22:37:55Z
podvolumerestores.velero.io                                       2021-03-23T22:37:55Z
resticrepositories.velero.io                                      2021-03-23T22:37:55Z
restores.velero.io                                                2021-03-23T22:37:55Z
schedules.velero.io                                               2021-03-23T22:37:55Z
serverstatusrequests.velero.io                                    2021-03-23T22:37:54Z
volumesnapshotlocations.velero.io                                 2021-03-23T22:37:55Z

It would be great if we could add those as well.

Kyverno CRDs not working

~ wget https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml
~ cat install.yaml | kubeconform -schema-location default -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' --summary --kubernetes-version 1.26.2


stdin - CustomResourceDefinition cleanuppolicies.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition clusteradmissionreports.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition admissionreports.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition backgroundscanreports.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition clusterbackgroundscanreports.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition clustercleanuppolicies.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition clusterpolicies.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition policyexceptions.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition updaterequests.kyverno.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition clusterpolicyreports.wgpolicyk8s.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition policyreports.wgpolicyk8s.io failed validation: could not find schema for CustomResourceDefinition
stdin - CustomResourceDefinition policies.kyverno.io failed validation: could not find schema for CustomResourceDefinition
Summary: 62 resources found parsing stdin - Valid: 50, Invalid: 0, Errors: 12, Skipped: 0

Problematic CRD imported from bitnami sealed secrets

Hi,

The Bitnami sealed-secrets CRD is problematic and will poison other schema validations. By design, it is an extensible CRD of the following form:


{
  "type": "object",
  "properties": {
    "spec": {
      "type": "object",
      "x-kubernetes-preserve-unknown-fields": true
    },
    "status": {
      "x-kubernetes-preserve-unknown-fields": true
    }
  }
}

Importing this schema causes otherwise-failing validations to succeed regardless of their contents:

- against correct schema -
✗ kubeconform -summary -output json -strict -schema-location default -schema-location ../schemastuff/CRDs-catalog/keda.sh/scaledobject_v1alpha1.json -verbose apps clusters infrastructure | more
<snip>
    {
      "filename": "redacted/keda.yaml",
      "kind": "ScaledObject",
      "name": "redacted",
      "version": "keda.sh/v1alpha1",
      "status": "statusInvalid",
      "msg": "For field spec.triggers.0.metadata.value: Invalid type. Expected: string, given: integer"
    },

- against sealedsecrets schema -
✗ kubeconform -summary -output json -strict -schema-location default -schema-location ../schemastuff/CRDs-catalog/bitnami.com/sealedsecret_v1alpha1.json -verbose apps clusters infrastructure | less
    {
      "filename": "redacted/keda.yaml",
      "kind": "ScaledObject",
      "name": "redacted",
      "version": "keda.sh/v1alpha1",
      "status": "statusValid",
      "msg": ""
    },

I believe this schema should be limited in scope somehow, or otherwise not imported into the catalog until it can be contained properly.
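One possible containment, sketched here as an untested assumption rather than a vetted change: pin apiVersion and kind in the published schema so it only ever matches SealedSecret documents, while staying permissive about spec:

```json
{
  "type": "object",
  "properties": {
    "apiVersion": { "type": "string", "enum": ["bitnami.com/v1alpha1"] },
    "kind": { "type": "string", "enum": ["SealedSecret"] },
    "spec": {
      "type": "object",
      "x-kubernetes-preserve-unknown-fields": true
    },
    "status": {
      "x-kubernetes-preserve-unknown-fields": true
    }
  },
  "required": ["apiVersion", "kind"]
}
```

With the enums in place, a ScaledObject accidentally validated against this file would fail on the apiVersion check instead of silently passing.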

`kafkaconnector_v1beta2.json` missing `autoRestart.maxRestarts` configuration parameter

👋

I've been using kubeconform with the Strimzi Kafka Connect catalog files here for validation, and noticed that kafkaconnector_v1beta2.json is missing the autoRestart.maxRestarts configuration parameter, which causes valid connector manifests to fail validation.

In both the Strimzi documentation for autoRestart, here, and the source code for the strimzi-kafka-operator AutoRestart class, here, maxRestarts is defined as an optional parameter. The default behavior is to restart failed connectors/tasks an unlimited number of times, so we set an upper bound for this. I'd like to point at the JSON catalog validation file here instead of manually pulling it down and editing it or ignoring the property, but that requires an update to kafkaconnector_v1beta2.json.

With the current configuration, a valid connector manifest fails validation.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: s3-sink-connector-aiven
  labels:
    strimzi.io/cluster: kafka-connect-cluster
spec:
  class: io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector
  tasksMax: 12

  autoRestart:
    enabled: true
    maxRestarts: 10
    
    ...
$ kubeconform --verbose -summary -schema-location default -schema-location https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/kafka.strimzi.io/kafkaconnector_v1beta2.json deployment/templates/s3-sink-connector.yaml

deployment/templates/s3-sink-connector.yaml - KafkaConnector s3-sink-connector-aiven is invalid: problem validating schema. Check JSON formatting: jsonschema: '/spec/autoRestart' does not validate with https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/kafka.strimzi.io/kafkaconnector_v1beta2.json#/properties/spec/properties/autoRestart/additionalProperties: additionalProperties 'maxRestarts' not allowed
Summary: 1 resource found in 1 file - Valid: 0, Invalid: 1, Errors: 0, Skipped: 0

With the following update in kafkaconnector_v1beta2.json, the same connector passes validation.

"maxRestarts": {
    "description": "# of restart attempts when a connector or task fails",
    "type": "integer"
}
$ kubeconform --verbose -summary -schema-location default -schema-location ./manifest-schemas/kafkaconnector_v1beta2.json deployment/templates/s3-sink-connector.yaml

deployment/templates/s3-sink-connector.yaml - KafkaConnector s3-sink-connector-aiven is valid
Summary: 1 resource found in 1 file - Valid: 1, Invalid: 0, Errors: 0, Skipped: 0

Feel free to ping me with any questions!

Automation

Hello,

I stumbled upon this repo and it is a very cool idea.
I wanted to add some CRDs for openshift operators, since openshift is the k8s distribution we use at my company.

But I am not sure how the CRDs are generated. Is there some kind of automation, or is it copy-pasted by hand?
Thank you,
Amanti

Outdated schema of rollout.argoproj.io/v1alpha1

problem validating schema. Check JSON formatting: jsonschema: '/spec/strategy/canary/trafficRouting' does not validate with https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/argoproj.io/rollout_v1alpha1.json#/properties/spec/properties/strategy/properties/canary/properties/trafficRouting/additionalProperties: additionalProperties 'managedRoutes' not allowed

Sooo... what's happening to this repo?

Hey datree 👋 I heard you unfortunately had to close shop, and I'm really sorry for everyone who lost their job as a consequence 😞 Hope everyone finds a new place really soon.

On your main repo you mention that "All the archived open source repositories under datreeio org will no longer be maintained and accept any new code changes, including any security patches." However, this repo has not been archived, and there have been a few PRs merged since 🤔 Is there a community effort to keep it up to date, or at least alive?

Many thanks for all the work and collaboration through the years 👋

Add CloudNative-PG CRDs

CloudNative-PG is a project building a PostgreSQL operator.
It would be worthwhile to add its CRDs.

kustomize resource (non-custom) definitions

I am guessing this project isn't the right place to host kustomize.config.k8s.io resource definitions, since they're built-in, not custom. However, they're also not hosted on the project we typically use for the main schemas (see context in yannh/kubernetes-json-schema#13).

Is it reasonable to assume that, since they're not "custom", they shouldn't be added here?

Support for strict / non-strict validation

An interesting discussion started over here: yannh/kubeconform#171
A user is pointing out (correctly, I believe) that we don't really make the distinction between strict and non-strict validation for CRDs, whereas kubernetes-json-schema provides schemas for both a "strict" and a "non-strict" mode. Strict schemas complain when a resource uses keys that are not present in the schema, while non-strict mode does not count this as an error.

In the user's case, they use a self-generated JSON schema, but I was wondering what the viewpoint is here: whether CRDs-catalog should also provide a strict and a non-strict JSON schema for each CRD, and what the default should really be (I would be tempted to say, default to strict?).
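For illustration, the difference between the two variants typically comes down to additionalProperties. This is a hypothetical fragment, not an actual catalog schema:

```json
{
  "type": "object",
  "properties": {
    "replicas": { "type": "integer" }
  },
  "additionalProperties": false
}
```

With "additionalProperties": false (the strict variant), a resource containing any key other than replicas fails validation; omitting the keyword (the non-strict variant) lets unknown keys pass silently.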

Best,
Yann

Missing contribution guidelines

Hi there,

Firstly, thank you for sharing your work with the community. I believe that your project has the potential to be a valuable resource for many developers.

However, I noticed that there are currently no contributing guidelines available for this project. As someone who is interested in contributing to open-source projects, I find this lack of guidance a bit concerning.

Without contributing guidelines, it's unclear to me whether your project is open-source and open-contribution, or whether it is open-source but not open-contribution, like SQLite. I do not know whether I am the only person who is unsure about this, and it might be discouraging for potential contributors who are unclear about how they can contribute to your project.

Therefore, I kindly ask that you consider adding contributing guidelines to your repository. I believe that this will benefit both you and potential contributors. Here are some suggestions on what information should be included in such guidelines:

  • Purpose and Goals: Clearly state the purpose and implementation goals of your project and why contributions are welcomed. Do we prefer upstream projects to self-publish the schema in the appropriate format, or do we want to collect all schemas here, even when upstream already publishes them properly?
  • Review Process: Explain how contributions are reviewed and what the process looks like.
  • Coding Standards: Provide clear coding standards to ensure consistency and readability of the codebase.
  • Testing Guidelines: Explain how to run the project tests and provide instructions for writing new tests.
  • Contact Information: Provide contact information for the project maintainers or community members who can help with questions and issues.

By adding these guidelines, you can make it easier for potential contributors to understand how they can contribute to your project, and ensure that contributions are made in a consistent and constructive manner.

I know that I am asking for a lot, but these are only suggestions for issues that could be addressed. At a minimum, they can be noted briefly at the beginning and developed as more questions come up.

Thank you for your time and consideration, and I look forward to hearing your thoughts on this matter.

Best regards,
Adam Dobrawy
