starboard-exporter

Exposes Prometheus metrics from Trivy Operator's VulnerabilityReport, ConfigAuditReport, and other custom resources (CRs).

Metrics

This exporter exposes several types of metrics:

CIS Benchmarks

Report Summary

A report summary series exposes the count of checks of each status reported in a given CISKubeBenchReport. For example:

starboard_exporter_ciskubebenchreport_report_summary_count{
    node_name="bj56o-master-bj56o-000000",
    status="FAIL"
    } 31
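
Because the status is exposed as a label, these summaries aggregate naturally in PromQL. For example, a cluster-wide total of failed checks (a sketch using the series above):

sum(starboard_exporter_ciskubebenchreport_report_summary_count{status="FAIL"})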

Section Summary

For slightly more granular reporting, a section summary series exposes the count of checks of each status reported in a given CISKubeBenchSection. For example:

starboard_exporter_ciskubebenchreport_section_summary_count{
    node_name="bj56o-master-bj56o-000000",
    node_type="controlplane",
    section_name="Control Plane Configuration",
    status="WARN"
    } 4

Result Detail

A CIS benchmark result info series exposes fields from each instance of an Aqua CISKubeBenchResult. For example:

starboard_exporter_ciskubebenchreport_result_info{
    node_name="bj56o-master-bj56o-000000",
    node_type="controlplane",
    pod="starboard-exporter-859955f485-cwkj6",
    section_name="Control Plane Configuration",
    test_desc="Client certificate authentication should not be used for users (Manual)",
    test_number="3.1.1",
    test_status="WARN"
    } 1
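
The detail series makes it easy to list individual failing checks. A sketch, assuming the detail series has been enabled via --cis-detail-report-labels (see Customization below):

starboard_exporter_ciskubebenchreport_result_info{test_status="FAIL"}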

Vulnerability Reports

Report Summary

A summary series exposes the count of CVEs of each severity reported in a given VulnerabilityReport. For example:

starboard_exporter_vulnerabilityreport_image_vulnerability_severity_count{
    image_digest="",
    image_namespace="demo",
    image_registry="quay.io",
    image_repository="giantswarm/starboard-operator",
    image_tag="0.11.0",
    report_name="replicaset-starboard-app-6894945788-starboard-app",
    severity="MEDIUM"
    } 4

This indicates that the giantswarm/starboard-operator image in the demo namespace contains 4 medium-severity vulnerabilities.
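
This summary series is the natural basis for dashboard queries. For example, a query like the following (a sketch using the labels shown above) lists the ten images with the most critical or high findings:

topk(10, sum by (image_repository, image_tag) (starboard_exporter_vulnerabilityreport_image_vulnerability_severity_count{severity=~"CRITICAL|HIGH"}))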

Vulnerability Details

A "detail" or "vulnerability" series exposes fields from each instance of an Aqua Vulnerability. The value of the metric is the Score for the vulnerability. For example:

starboard_exporter_vulnerabilityreport_image_vulnerability{
    fixed_resource_version="1.1.1l-r0",
    image_digest="",
    image_namespace="demo",
    image_registry="quay.io",
    image_repository="giantswarm/starboard-operator",
    image_tag="0.11.0",
    installed_resource_version="1.1.1k-r0",
    report_name="replicaset-starboard-app-6894945788-starboard-app",
    severity="HIGH",
    vulnerability_id="CVE-2021-3712",
    vulnerability_link="https://avd.aquasec.com/nvd/cve-2021-3712",
    vulnerability_title="openssl: Read buffer overruns processing ASN.1 strings",
    vulnerable_resource_name="libssl1.1"
    } 7.4

This indicates that the vulnerability with the id CVE-2021-3712 was found in the giantswarm/starboard-operator image in the demo namespace, and it has a CVSS 3.x score of 7.4.

An additional series would be exposed for every combination of those labels.
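
With the detail series enabled (see Customization below), a single CVE can be traced across images. A sketch, assuming image_repository, image_tag, and vulnerability_id are among the enabled target labels:

max by (image_repository, image_tag) (starboard_exporter_vulnerabilityreport_image_vulnerability{vulnerability_id="CVE-2021-3712"})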

Config Audit Reports

Report Summary

A summary series exposes the count of checks of each severity reported in a given ConfigAuditReport. For example:

starboard_exporter_configauditreport_resource_checks_summary_count{
  resource_name="replicaset-chart-operator-748f756847",
  resource_namespace="giantswarm",
  severity="LOW"
  } 7
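
As with the other summaries, these counts aggregate cleanly. For example, to total config audit findings per namespace (a sketch using the series above):

sum by (resource_namespace) (starboard_exporter_configauditreport_resource_checks_summary_count)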

A Note on Cardinality

For some use cases, it is helpful to export additional fields from VulnerabilityReport CRs. However, because many fields contain unbounded arbitrary data, including them in Prometheus metrics can lead to extremely high cardinality. This can drastically impact Prometheus performance. For this reason, we only expose summary data by default and allow users to opt-in to higher-cardinality fields.

Sharding Reports

In large clusters or environments with many reports and/or vulnerabilities, a single exporter can consume a large amount of memory, and Prometheus may need a long time to scrape the exporter, leading to scrape timeouts. To help spread resource consumption and scrape effort, starboard-exporter watches its own service endpoints and will shard metrics for all report types across the available endpoints. In other words, if there are 3 exporter instances, each instance will serve roughly 1/3 of the metrics. This behavior is enabled by default and does not require any additional configuration. To use it, simply change the number of replicas in the Deployment. However, you should read the section on cardinality and be aware that consuming large amounts of high-cardinality data can have performance impacts on Prometheus.
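
For example, with a Helm-based install (the Deployment name and namespace here are assumptions; adjust them to match your install):

kubectl scale deployment starboard-exporter --replicas=3 --namespace <trivy operator namespace>

Because the exporter watches its own service endpoints, the instances rebalance report ownership across the new replica count automatically.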

Customization

Summary metrics of the format described above are always enabled.

To enable an additional detail series per Vulnerability, use the --target-labels flag to specify which labels should be exposed. For example:

# Expose only select image and CVE fields.
--target-labels=image_namespace,image_repository,image_tag,vulnerability_id

# Run with (almost) all fields exposed as labels, if you're feeling really wild.
--target-labels=all

Target labels can also be set via Helm values:

exporter:
  vulnerabilityReports:
    targetLabels:
      - image_namespace
      - image_repository
      - image_tag
      - vulnerability_id
      - ...

The same can be done for CIS Benchmark Results. To enable an additional detail series per CIS Benchmark Result, use the --cis-detail-report-labels flag to specify which labels should be exposed. For example:

# Expose only section_name, test_name and test_status
--cis-detail-report-labels=section_name,test_name,test_status

# Run with (almost) all fields exposed as labels.
--cis-detail-report-labels=all

CIS detail target labels can also be set via Helm values:

exporter:
  CISKubeBenchReports:
    targetLabels:
      - node_name
      - node_type
      - section_name
      - test_name
      - test_status
      - ...

Helm

How to install starboard-exporter using Helm:

helm repo add giantswarm https://giantswarm.github.io/giantswarm-catalog
helm repo update
helm upgrade -i starboard-exporter --namespace <trivy operator namespace> giantswarm/starboard-exporter
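
To check that metrics are being served, port-forwarding to the exporter's service is a quick test (the service name and port here are assumptions and may differ in your install):

kubectl --namespace <trivy operator namespace> port-forward svc/starboard-exporter 8080:8080
curl -s http://localhost:8080/metrics | grep starboard_exporter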

Scaling for Prometheus scrape timeouts

When exporting a large volume of metrics, Prometheus might time out before retrieving them all from a single exporter instance. It is possible to automatically scale the number of exporters to keep the scrape time below the configured timeout, using HPA scaling based on Prometheus metrics; the Helm chart includes a HorizontalPodAutoscaler template (customMetricsHpa.yaml) for this purpose.
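
As a rough sketch of the kind of object involved — the metric name below is hypothetical and would need to be exposed to the HPA via a metrics adapter such as prometheus-adapter; prefer the chart's own template and values — an autoscaling/v2 HPA keyed to scrape duration could look like:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: starboard-exporter
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: starboard-exporter
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          # Hypothetical metric; requires an adapter exposing Prometheus data to the HPA.
          name: scrape_duration_seconds
        target:
          type: Value
          value: "30"

This keeps adding replicas until the observed scrape duration stays below the target value.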


starboard-exporter's Issues

Enhancement: only store metrics from the latest VulnerabilityReports

Today, when we gather metrics, data is generated from all VulnerabilityReports, and there is one VulnerabilityReport per ReplicaSet.
This makes it look like we have many more CVEs in our cluster than we actually do.

Personally, I would have loved to see this solved in starboard, following discussions like aquasecurity/starboard#668 or aquasecurity/starboard#17.
But I don't think it's reasonable to get this solved upstream in the short term.

Would you be interested in having a feature that only checks the latest VulnerabilityReport?

I have given this some thought, and the first problem I see is: what happens if a user performs a rollback of a Deployment?
In that case there would still be a new ReplicaSet, and I assume the latest VulnerabilityReport points to that ReplicaSet and not to the old, actually active one. This could of course become a problem.
I'm not 100% sure it actually works like this, but it's something we would have to verify.

What do you think?

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

circleci
.circleci/config.yml
  • architect 5.2.1
dockerfile
Dockerfile
  • golang 1.22.4
gomod
go.mod
  • go 1.22.3
  • go 1.22.4
  • github.com/aquasecurity/trivy-operator v0.21.3
  • github.com/buraksezer/consistent v0.10.0
  • github.com/cespare/xxhash/v2 v2.3.0
  • github.com/go-logr/logr v1.4.2
  • github.com/google/go-cmp v0.6.0
  • github.com/pkg/errors v0.9.1
  • github.com/prometheus/client_golang v1.19.1
  • k8s.io/api v0.30.1
  • k8s.io/apimachinery v0.30.1
  • k8s.io/client-go v0.30.1
  • sigs.k8s.io/controller-runtime v0.18.3
kubernetes
config/default/manager_auth_proxy_patch.yaml
  • gcr.io/kubebuilder/kube-rbac-proxy v0.16.0
  • Deployment apps/v1
config/default/manager_config_patch.yaml
  • Deployment apps/v1
config/manager/manager.yaml
  • Deployment apps/v1
config/rbac/auth_proxy_client_clusterrole.yaml
  • ClusterRole rbac.authorization.k8s.io/v1
config/rbac/auth_proxy_role.yaml
  • ClusterRole rbac.authorization.k8s.io/v1
config/rbac/auth_proxy_role_binding.yaml
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
config/rbac/leader_election_role.yaml
  • Role rbac.authorization.k8s.io/v1
config/rbac/leader_election_role_binding.yaml
  • RoleBinding rbac.authorization.k8s.io/v1
config/rbac/role_binding.yaml
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
helm/starboard-exporter/templates/customMetricsHpa.yaml
  • HorizontalPodAutoscaler autoscaling/v2
helm/starboard-exporter/templates/deployment.yaml
  • Deployment apps/v1
helm/starboard-exporter/templates/networkpolicy.yaml
  • NetworkPolicy networking.k8s.io/v1
helm/starboard-exporter/templates/psp.yaml
  • PodSecurityPolicy policy/v1beta1
helm/starboard-exporter/templates/rbac.yaml
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • Role rbac.authorization.k8s.io/v1
  • RoleBinding rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
pipenv
tests/ats/Pipfile
  • pytest-helm-charts >=0.6.0
  • pytest >=6.2.5
  • pykube-ng >=21.10.0
  • pytest-rerunfailures >=10.2


Enable 'starboard_exporter_vulnerabilityreport_image_vulnerability'

By default I do not see this metric (starboard_exporter_vulnerabilityreport_image_vulnerability) exposed. Is there a way to enable it?

I only have these two:

starboard_exporter_configauditreport_resource_checks_summary_count
starboard_exporter_vulnerabilityreport_image_vulnerability_severity_count

Grafana dashboard YAML contains invalid JSON

When trying to import the Grafana JSON manually into my Grafana 7 instance, I get an error saying the JSON is invalid.
I get the same error when putting the raw JSON into a JSON validator.

I will see if I have time to track it down on my own, but if you already have a working version in Grafana, it's probably enough to just export it again.

ImagePullSecrets not working

Hi,

I guess the imagePullSecrets in the Helm chart are not working.

I think there are two issues:

  1. The type is currently object; it has to be an array.

Fix for values.schema.json

      ...
      "imagePullSecrets": {
        "type": "array"
      },
      ...

Fix for values.yaml

...
imagePullSecrets: []
...
  2. The indentation of the imagePullSecrets property in deployment.yaml is off.

Fix for deployment.yaml

      ...
      securityContext:
        runAsUser: {{ .Values.pod.user.id }}
        runAsGroup: {{ .Values.pod.group.id }}
        {{- with .Values.podSecurityContext }}
          {{- . | toYaml | nindent 8 }}
        {{- end }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
      ...

Maybe there is also another solution to fix this, but this is how I got it working locally.

Add NameOverride

Hi,

Would it be possible to add a NameOverride? It would be very useful.

Thanks!

Helm release v0.3.2 seems to be broken

Sorry for crossposting. I opened this issue at the giantswarm-catalog repository but I do not know if this was the correct place: giantswarm/giantswarm-catalog#22

It seems to me that the Helm release v0.3.2 is broken because the values for the project's branch name and the commit hash are missing in the bundled release file.

Wrong metrics name in helm charts grafana dashboard

The Grafana dashboard provided with the Helm chart uses the wrong metric name here:

          "targets": [
            {
              "exemplar": true,
              "expr": "topk(10, sum(starboard_exporter_vulnerabilityreport_image_vulnerability{severity=~\"CRITICAL|HIGH\"}) by (image_repository, image_tag))",
              "format": "table",
              "instant": true,
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],

It should instead be:

          "targets": [
            {
              "exemplar": true,
              "expr": "topk(10, sum(starboard_exporter_vulnerabilityreport_image_vulnerability_severity_count{severity=~\"CRITICAL|HIGH\"}) by (image_repository, image_tag))",
              "format": "table",
              "instant": true,
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],

This breaks the "Pods with Critical/High CVEs" panel.

releaseRevision collides with bug in helm-diff

Question: Could you remove the releaseRevision annotation in the starboard-exporter Deployment? (https://github.com/giantswarm/starboard-exporter/blob/main/helm/starboard-exporter/templates/deployment.yaml#L19)

There is a bug in helm-diff (databus23/helm-diff#253) not being able to properly cope with this.
We are currently using a diff-based deployment on our end, and this chart is thus always shown as a Helm chart with changes to be deployed.
We had not seen this issue until now, and we deploy a whole bunch of Helm charts; we haven't seen this annotation being actively added anywhere else. Is there any point in adding the releaseRevision as an annotation?

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called private vulnerability reporting, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

Old metrics still visible

I'm using the starboard feature described here: https://github.com/giantswarm/starboard-exporter#one-vulnerabilityreport-per-deployment. Even though I no longer see old reports with the kubectl CLI:

kubectl get vulnerabilityreport -n gradle-enterprise
NAME                                                      REPOSITORY                                               TAG        SCANNER   AGE
replicaset-5c8b5d8449                                     gradleenterprise/gradle-enterprise-operator-image        2021.4.1   Trivy     82m
replicaset-5cf45f8fd7                                     gradleenterprise/gradle-build-cache-node-image           2021.4.1   Trivy     82m
replicaset-764c4bd49c                                     gradleenterprise/gradle-test-distribution-broker-image   2021.4.1   Trivy     82m
replicaset-gradle-database-5b89d7b595-database            gradleenterprise/gradle-database-image                   2021.4.1   Trivy     82m
replicaset-gradle-database-5b89d7b595-database-tasks      gradleenterprise/gradle-database-image                   2021.4.1   Trivy     82m
replicaset-gradle-metrics-64c7565799-gradle-metrics       gradleenterprise/gradle-metrics-image                    2021.4.1   Trivy     82m
statefulset-gradle-enterprise-app-gradle-enterprise-app   gradleenterprise/gradle-enterprise-app-image             2021.4.1   Trivy     148m
statefulset-gradle-keycloak-gradle-keycloak               gradleenterprise/gradle-keycloak-image                   2021.4.1   Trivy     144m
statefulset-gradle-proxy-gradle-proxy                     gradleenterprise/gradle-proxy-image                      2021.4.1   Trivy     150m

when I go to the metrics endpoint on starboard-exporter, I still see metrics like the following (notice the image tag version):

starboard_exporter_vulnerabilityreport_image_vulnerability{image_namespace="gradle-enterprise",image_repository="gradleenterprise/gradle-keycloak-image",image_tag="2021.4",report_name="statefulset-gradle-keycloak-gradle-keycloak",vulnerability_id="CVE-2021-30129"} 6.5

I guess this is because the report name is not unique in this case, unlike with ReplicaSets?

starboard-exporter crashing because of missing permissions

Hi there,
using the Helm chart, starboard-exporter crashes with the following error:

E0213 12:21:46.781018       1 reflector.go:140] pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:169: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: endpoints "starboard-exporter" is forbidden: User "system:serviceaccount:security:starboard-exporter" cannot list resource "endpoints" in API group "" in the namespace "security"

I guess this may be easily fixed by moving the following code from Role starboard-exporter-psp to ClusterRole starboard-exporter (for example).

...
    - apiGroups:       
        - ""           
      resources:       
        - endpoints    
      verbs:           
        - get          
        - list         
        - watch        
...

ARM images

Hi, I like your exporter. Could you build and provide images for ARM too, please?

Or enable affinities via values.yaml?

E.g:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64

We have clusters with arm64-based worker nodes and amd64-based worker nodes.

Errors with CIS Benchmark CRD on install

Hello, I installed this exporter via the Helm instructions and I see these errors in the logs. It looks like the pod restarts periodically.

1.656450534997074e+09   ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "CISKubeBenchReport.aquasecurity.github.io", "error": "no matches for kind \"CISKubeBenchReport\" in version \"aquasecurity.github.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
        /go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/source/source.go:139
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
        /go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:233
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext
        /go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:660
k8s.io/apimachinery/pkg/util/wait.poll
        /go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:594
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
        /go/pkg/mod/k8s.io/apimachinery@<version>/pkg/util/wait/wait.go:545
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
        /go/pkg/mod/sigs.k8s.io/controller-runtime@<version>/pkg/source/source.go:132


<repeats every 10sec>


1.6564502101352346e+09  INFO    Stopping and waiting for non leader election runnables
1.6564502101352448e+09  INFO    Stopping and waiting for leader election runnables
1.6564502101354437e+09  INFO    Shutdown signal received, waiting for all workers to finish     {"controller": "ciskubebenchreport", "controllerGroup": "aquasecurity.github.io", "controllerKind": "CISKubeBenchReport"}
1.6564502101354504e+09  INFO    Shutdown signal received, waiting for all workers to finish     {"controller": "ciskubebenchreport", "controllerGroup": "aquasecurity.github.io", "controllerKind": "CISKubeBenchReport"}
1.6564502101355612e+09  INFO    All workers finished    {"controller": "ciskubebenchreport", "controllerGroup": "aquasecurity.github.io", "controllerKind": "CISKubeBenchReport"}
1.656450210135572e+09   INFO    All workers finished    {"controller": "ciskubebenchreport", "controllerGroup": "aquasecurity.github.io", "controllerKind": "CISKubeBenchReport"}
1.6564502101356056e+09  INFO    Stopping and waiting for caches
1.6564502101357338e+09  INFO    Stopping and waiting for webhooks
1.656450210135766e+09   INFO    Wait completed, proceeding to shutdown the manager
1.656450210136341e+09   ERROR   setup   problem running manager {"error": "failed to wait for ciskubebenchreport caches to sync: timed out waiting for cache to be synced"}
main.main
        /workspace/main.go:287
runtime.main
        /usr/local/go/src/runtime/proc.go:250

Image Repository does not include registry

The image repository field does not include the registry path. It cuts off the repository to only include the project/image name. It would be very helpful to include the registry in this field to help track the exact image down.

Expected:
image_repository=custom.repo.com/library/image

Actual result:
image_repository=library/image

Export fields `Target` and `Class` as labels

Hello,
In vulnerability reports created by trivy-operator there are optional report fields, like Target or Class:

  - class: lang-pkgs
    fixedVersion: "1.32"
    installedVersion: "1.25"
    links: []
    primaryLink: https://avd.aquasec.com/nvd/cve-2022-38752
    resource: org.yaml:snakeyaml
    score: 6.5
    severity: MEDIUM
    target: Java
    title: 'snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode'
    vulnerabilityID: CVE-2022-38752

It would be really cool to see a summary in Grafana dashboards of what percentage of vulnerabilities come from the OS vs. the application. It seems like adding these fields to the exporter would be as simple as adding two more variables here and here, since these types are defined in the Vulnerability struct (although my Golang knowledge is very, very limited).

Export publishedDate as a label

Trivy Operator 0.16.0 added a publishedDate field to the vulnerability in the VulnerabilityReport CRD, and that is a piece of information we're interested in getting into our dashboards. I wanted to see if it could be added as one of the published labels. Thanks!

Vulnerabilities to fix (1 HIGH incl.)

Hey,

The image "quay.io/giantswarm/starboard-exporter:0.7.1" has a bunch of vulnerabilities which can be fixed.
Most of them are linked to the Go version used; it seems there is already a PR open to upgrade it.
Another is linked to Trivy itself, which should be updated.

Vulnerability report from Trivy:

➜ trivy image quay.io/giantswarm/starboard-exporter:0.7.10
2024-06-11T15:17:26+02:00       INFO    Vulnerability scanning is enabled
2024-06-11T15:17:26+02:00       INFO    Secret scanning is enabled
2024-06-11T15:17:26+02:00       INFO    If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2024-06-11T15:17:26+02:00       INFO    Please see also https://aquasecurity.github.io/trivy/v0.51/docs/scanner/secret/#recommendation for faster secret detection
2024-06-11T15:17:27+02:00       INFO    Detected OS     family="debian" version="12.5"
2024-06-11T15:17:27+02:00       INFO    [debian] Detecting vulnerabilities...   os_version="12" pkg_num=3
2024-06-11T15:17:27+02:00       INFO    Number of language-specific files       num=1
2024-06-11T15:17:27+02:00       INFO    [gobinary] Detecting vulnerabilities...

quay.io/giantswarm/starboard-exporter:0.7.10 (debian 12.5)

Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)


manager (gobinary)

Total: 5 (UNKNOWN: 2, LOW: 0, MEDIUM: 2, HIGH: 1, CRITICAL: 0)

┌───────────────────────────────┬────────────────┬──────────┬────────┬───────────────────┬─────────────────┬─────────────────────────────────────────────────────────────┐
│            Library            │ Vulnerability  │ Severity │ Status │ Installed Version │  Fixed Version  │                            Title                            │
├───────────────────────────────┼────────────────┼──────────┼────────┼───────────────────┼─────────────────┼─────────────────────────────────────────────────────────────┤
│ github.com/aquasecurity/trivy │ CVE-2024-35192 │ MEDIUM   │ fixed  │ v0.50.1           │ 0.51.2          │ Trivy possibly leaks registry credential when scanning      │
│                               │                │          │        │                   │                 │ images from malicious registries                            │
│                               │                │          │        │                   │                 │ https://avd.aquasec.com/nvd/cve-2024-35192                  │
├───────────────────────────────┼────────────────┤          │        ├───────────────────┼─────────────────┼─────────────────────────────────────────────────────────────┤
│ golang.org/x/net              │ CVE-2023-45288 │          │        │ v0.22.0           │ 0.23.0          │ golang: net/http, x/net/http2: unlimited number of          │
│                               │                │          │        │                   │                 │ CONTINUATION frames causes DoS                              │
│                               │                │          │        │                   │                 │ https://avd.aquasec.com/nvd/cve-2023-45288                  │
├───────────────────────────────┼────────────────┼──────────┤        ├───────────────────┼─────────────────┼─────────────────────────────────────────────────────────────┤
│ stdlib                        │ CVE-2024-24788 │ HIGH     │        │ 1.22.2            │ 1.22.3          │ golang: net: malformed DNS message can cause infinite loop  │
│                               │                │          │        │                   │                 │ https://avd.aquasec.com/nvd/cve-2024-24788                  │
│                               ├────────────────┼──────────┤        │                   ├─────────────────┼─────────────────────────────────────────────────────────────┤
│                               │ CVE-2024-24789 │ UNKNOWN  │        │                   │ 1.21.11, 1.22.4 │ The archive/zip package's handling of certain types of      │
│                               │                │          │        │                   │                 │ invalid zip fil ......                                      │
│                               │                │          │        │                   │                 │ https://avd.aquasec.com/nvd/cve-2024-24789                  │
│                               ├────────────────┤          │        │                   │                 ├─────────────────────────────────────────────────────────────┤
│                               │ CVE-2024-24790 │          │        │                   │                 │ The various Is methods (IsPrivate, IsLoopback, etc) did not │
│                               │                │          │        │                   │                 │ work as ex...                                               │
│                               │                │          │        │                   │                 │ https://avd.aquasec.com/nvd/cve-2024-24790                  │
└───────────────────────────────┴────────────────┴──────────┴────────┴───────────────────┴─────────────────┴─────────────────────────────────────────────────────────────┘

Thank you,

"ensure CRDs are installed first" while installing via HELM Chart

Hi,

I am trying to install the exporter, but I think I am doing something wrong because I get an error:

$ helm repo add aqua https://aquasecurity.github.io/helm-charts/
$ helm repo add giantswarm https://giantswarm.github.io/giantswarm-catalog
$ helm repo update
$ helm install trivy-operator oci://ghcr.io/aquasecurity/helm-charts/trivy-operator \
   --namespace trivy-system \
   --create-namespace \
   --version 0.21.3

$ helm upgrade -i starboard-exporter --namespace trivy-system  giantswarm/starboard-exporter

Release "starboard-exporter" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: resource mapping not found for name: "starboard-exporter-psp" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first

Trivy is installed:

$ kubectl get all -n trivy-system
NAME                                            READY   STATUS    RESTARTS      AGE
pod/scan-vulnerabilityreport-7877b49d8c-s8lts   3/3     Running   0             36s
pod/trivy-operator-7cc7457867-gwwt7             1/1     Running   1 (45s ago)   5m51s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/trivy-operator   ClusterIP   None         <none>        80/TCP    5m51s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/trivy-operator   1/1     1            1           5m51s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/trivy-operator-7cc7457867   1         1         1       5m51s

NAME                                            COMPLETIONS   DURATION   AGE
job.batch/scan-vulnerabilityreport-7877b49d8c   0/1           36s        36s
$ kubectl get crds | grep -i aquasecurity
clustercompliancereports.aquasecurity.github.io        2024-04-03T17:48:51Z
clusterconfigauditreports.aquasecurity.github.io       2024-04-03T17:48:51Z
clusterinfraassessmentreports.aquasecurity.github.io   2024-04-03T17:48:51Z
clusterrbacassessmentreports.aquasecurity.github.io    2024-04-03T17:48:51Z
clustersbomreports.aquasecurity.github.io              2024-04-03T17:48:51Z
clustervulnerabilityreports.aquasecurity.github.io     2024-04-03T17:48:51Z
configauditreports.aquasecurity.github.io              2024-04-03T17:48:51Z
exposedsecretreports.aquasecurity.github.io            2024-04-03T17:48:51Z
infraassessmentreports.aquasecurity.github.io          2024-04-03T17:48:51Z
rbacassessmentreports.aquasecurity.github.io           2024-04-03T17:48:51Z
sbomreports.aquasecurity.github.io                     2024-04-03T17:48:51Z
vulnerabilityreports.aquasecurity.github.io            2024-04-03T17:48:51Z

Can you please help me figure out what is missing?

Thank you!

Tolerations parameter type is invalid

With the added schema validation, deployment fails when tolerations are configured.

Used config:

...
tolerations:
  # allow scheduling on any node
  - operator: Exists
...

Error message:

Helm upgrade failed: values don't meet the specifications of the schema(s) in the following chart(s):
starboard-exporter:
- tolerations: Invalid type. Expected: object, given: array
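
Presumably the fix mirrors the imagePullSecrets issue above: declare tolerations as an array in values.schema.json. A sketch, with the property name assumed from the error message:

...
"tolerations": {
  "type": "array"
},
...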

Details of ConfigAudit

Hello,

Thanks for this exporter.

I need to use ConfigAudit details, but it is not possible via the Helm chart (unlike vulnerabilityReports):
configAuditReports:
  enabled: true

vulnerabilityReports:
  enabled: true
  targetLabels:
    # - image_namespace
    # - image_repository
    # - image_tag
    # - vulnerability_id

I saw in the controllers that the details are not exposed:

var metricLabels = []string{
	"report_name",
	"resource_name",
	"resource_namespace",
	"severity",
}

It would be very interesting to add this possibility.

