
nxrm3-helm-repository's Introduction

⚠️ Archive Notice

As of October 24, 2023, we will no longer update or support the Single-Instance OSS/Pro Helm Chart.

Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.

Helm Charts for Sonatype Nexus Repository Manager 3

We now provide one HA/Resiliency Helm Chart that supports both high availability and resilient deployments in AWS, Azure, or on-premises in a Kubernetes cluster. This is our only supported Helm chart for deploying Sonatype Nexus Repository; it requires a PostgreSQL database.

nxrm3-helm-repository's People

Contributors

andresort28, blacktiger, bobotimi, brzozova, dawidsawa, jflinchbaugh, lisadurant, mpiggott, mykyta, oleksiirudyk, pittolive, sonatype-zion, userseprid

nxrm3-helm-repository's Issues

How to set logging level

Hi, I'm trying to set the logging level of Nexus Repository.

In values.yaml, I set:

nexus:
  properties:
    override: true
    data:
      root.level: ERROR

But when I install Nexus through Helm, it doesn't work.

After the pod runs, root.level=ERROR is written to /nexus-data/etc/nexus.properties.

How can I fix it?
Thanks.

Additional boilerplate options missing from chart

Following up on /issues/49, I'd like to add that helm create generates boilerplate for a chart with a number of useful options that this chart is missing. I hope they will be included, as their absence makes it difficult to use this chart; a sketch of what these values might look like follows the list below.

  • templates/_helpers.tpl includes a lot of helpful functions, chief among them resource-name length controls.
  • resources: In my experience this is a critical setting for resource-hungry applications like Nexus. Declaring resources allows the scheduler to guarantee quality of service.
  • imagePullSecrets: docker.io has very restrictive pull rate limits for anonymous users. This is a critical option for avoiding rate limiting.
  • name overrides: This is a nice option to offer users. There's no need to force users to live with hard-to-remember Helm release names.
  • podAnnotations: Crucial ad-hoc configuration for metrics and logs.
  • affinity: Crucial for pod colocation. Cluster operators should be able to attract or repel pods.
  • tolerations: Crucial for scheduling.
  • nodeSelector: Crucial for scheduling. We use Karpenter, which offers a rich set of node selector options, freeing us from node groups.
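
To make the request concrete, here is a minimal sketch of what these standard helm create values could look like in this chart's values.yaml. The key names and example values are assumptions based on common chart conventions, not options the chart currently exposes.

nameOverride: ""
fullnameOverride: ""
imagePullSecrets:
  - name: dockerhub-pull-secret     # hypothetical, pre-created registry secret
podAnnotations:
  prometheus.io/scrape: "true"      # example ad-hoc metrics annotation
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    memory: 4Gi
nodeSelector: {}
tolerations: []
affinity: {}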

Make externalDns + Fluent Bit optional

Some orgs may centrally provision these tools in their cluster and then layer tooling charts on top. It would be good for these to be gated by an if clause so end users can pick and choose whether they want them bundled into the NXRM deployment.
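
As a rough sketch of the requested gating (the externaldns.enabled and fluentbit.enabled keys below are hypothetical, not existing chart values):

# values.yaml
externaldns:
  enabled: false
fluentbit:
  enabled: false

# templates/external-dns.yaml: wrap the existing manifest in a guard
{{- if .Values.externaldns.enabled }}
# ... existing external-dns resources, unchanged ...
{{- end }}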

HTTP 404 When Using NEXUS_CONTEXT env variable and Ingress Path

Chart: nexus-repository-manager
Chart version: 45.0.0

Problem:
When trying to configure Nexus behind ingress-nginx with a base path (/nexus), I always get a 404 when hitting the UI:


I set the NEXUS_CONTEXT env variable to:

nexus:
  docker:
    enabled: true
    # registries:
    #   - host: homelab
    #     port: 5000
    #     secretName: registry-secret
  env:
    # minimum recommended memory settings for a small, personal instance from
    # https://help.sonatype.com/repomanager3/product-information/system-requirements
    - name: INSTALL4J_ADD_VM_PARAMS
      value: |-
        -Xms2703M -Xmx2703M
        -XX:MaxDirectMemorySize=2703M
        -XX:+UnlockExperimentalVMOptions
        -XX:+UseCGroupMemoryLimitForHeap
        -Djava.util.prefs.userRoot=/nexus-data/javaprefs
    - name: NEXUS_SECURITY_RANDOMPASSWORD
      value: "true"
    - name: NEXUS_CONTEXT
      value: nexus

I also set the liveness and readiness probes to match:

  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    timeoutSeconds: 10
    path: /nexus
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    timeoutSeconds: 10
    path: /nexus

Ingress Rule:

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: "homelab-selfsigned-issuer"
    # kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true" 
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  hostPath: /nexus(/|$)(.*)
  hostRepo: my-domain-name.home

But when accessing the UI, I only see a 404 response. It appears I am missing some configuration or option, probably something simple I overlooked; does anyone have an idea? This is usually straightforward with other apps and involves similar steps, like setting a ROOT_URL or path and then using ingress rules to add the path.

Any help is appreciated, thanks!
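
One possible explanation, offered as an untested sketch: with NEXUS_CONTEXT=nexus, the application itself already serves the UI under /nexus, so the rewrite-target: /$2 annotation strips exactly the prefix Nexus expects, requests reach it with /nexus removed, and it returns 404. Passing the path through unchanged (the probes above already use /nexus) may be enough:

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: "homelab-selfsigned-issuer"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  # no rewrite-target annotation: Nexus expects to receive the /nexus prefix
  hostPath: /nexus
  hostRepo: my-domain-name.home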

Helm charts successor?

I just read that the Helm charts for NXRM are to be decommissioned. What is the successor solution? Will there be new charts or a Kubernetes operator?

nexus repository manager template proxy-route.yaml uses wrong targetPort

  • The port on the service is hard coded as 'nexus-ui'
  • The proxy-route.yaml uses the template 'nexus.fullname'.

These two never match! 😢
(unless fullNameOverride is set to 'nexus-ui' as well, which results in spreading that string across all of its resources)

A possible solution would be to either hard-code both as 'nexus-ui' or introduce a variable 'nexusProxyRoute.portName' with a default value of 'nexus-ui'.
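
A sketch of the second option; the surrounding structure of templates/proxy-route.yaml is assumed rather than copied from the chart:

# values.yaml
nexusProxyRoute:
  portName: nexus-ui

# templates/proxy-route.yaml (port reference only)
spec:
  port:
    targetPort: {{ .Values.nexusProxyRoute.portName | default "nexus-ui" }}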

Nexus namespacing

I would like to suggest that nxrm-aws-resiliency stop managing the namespace nexusrepo. The side effect of the nxrm chart managing the namespace is that the helm release and the nexus-nxrm-aws-resiliency pod are in separate namespaces.

$ helm ls -n default
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
nexus           default         5               2023-11-03 17:19:07.787659544 -0400 EDT deployed        nxrm-aws-resiliency-61.1.0      3.61.0
$ kubectl --namespace default -l app=nxrm get po
No resources found in default namespace.
$ kubectl --namespace nexusrepo -l app=nxrm get po
NAME                                         READY   STATUS    RESTARTS      AGE
nexus-nxrm-aws-resiliency-54d698879b-6fk6t   4/4     Running   6 (24m ago)   29m

If these lines were removed, Kubernetes admins could install the Helm release and Kubernetes resources in any namespace, and they'd be colocated in the same namespace, which would help cut down on confusion.

Various concerns with the new `nxrm-aws-resiliency` chart

Hello! We recently started looking at shifting from the existing nexus-repository-manager helm chart, to the new nxrm-aws-resiliency chart, and we're noticing a lot of problems that I wanted to make sure to raise, as I feel others will likely raise the same concerns. Some of them have also been reported in other issues as well.

  1. Helm charts should rarely, if ever, be specific to a single cloud provider. The implication is that Nexus Helm-based deployments are not portable across cloud providers and will require a lot of additional work to port if needed (we're currently on AWS, but this limits our options). The whole point of Kubernetes abstracting everything via pluggable resources is to support portability.
  2. Usage of the AWS secrets-store-csi-driver should be optional, and honestly should not be recommended unless teams are already running the CSI driver. It adds quite a bit of additional complexity that only makes sense if operators are already running it.
    • Related issues: #38.
  3. Usage of external-dns controller should be optional, even if using docker functionality. For example, we use wildcard CNAME records through route53, so I can't see a need to add additional complexity in the DNS process just for Nexus.
    • Related issues: #18
    • Related PRs: #46
  4. Usage of fluentbit should be optional. I would also say that Prometheus should be supported at some point in the future as another optional extension of the helm chart. We actually started working on a Prometheus exporter that includes various metadata/metrics/etc (written in Go, 50+ different metrics atm, could be a sidecar).
    • Related issues: #18
    • Related PRs: #46
  5. Usage of the AWS load-balancer-controller should be optional, even if using Docker functionality. There are various security risks with dynamic load balancer creation, and it also prevents us from using IaC to manage our load balancers. We also apply a lot of customizations to our load balancers, and this would prevent us from doing so.
  6. values.yaml doesn't appear to follow common best practices/standards; the old chart aligned more closely with them (see the sketch after this list).
    • Naming for some keys (e.g. pv, pvc, nexusNs, cloudwatchNs, and externaldnsNs).
    • Environment variables aren't passed through directly; there is no way to add extra environment variables that actually reach the containers/pods.
    • No option to add additional volumes/volume mounts (e.g. if a license or certificates need to be mounted another way).
    • No option to add additional init containers or regular containers (e.g. sidecars).
    • No option to add annotations to the PVC, so annotations needed for a storage provider or for backups can't be added.
  7. The way database credentials are passed in, they are visible in the System Information tab in the Nexus UI, which is a huge security risk. Anything that includes password, token, etc. in its name should be trimmed/masked in the system-information calls and not returned in the UI.
  8. Misc other chart-level issues:
    • Why is a storageclass resource being created?
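
For illustration, here is a hypothetical values.yaml shape covering the gaps listed in point 6. None of these keys exist in the current chart; the names are only conventions borrowed from other charts.

# hypothetical keys, not supported by the current chart
extraEnvVars:
  - name: NEXUS_SECURITY_RANDOMPASSWORD
    value: "false"
extraVolumes:
  - name: nexus-license
    secret:
      secretName: nexus-license       # e.g. a license mounted from a Secret
extraVolumeMounts:
  - name: nexus-license
    mountPath: /etc/nexus-license
    readOnly: true
extraInitContainers: []
extraContainers: []                   # e.g. a metrics-exporter sidecar
pvcAnnotations:
  backup.example.com/enabled: "true"  # hypothetical backup annotation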

From just my initial review of the new chart, and after having dealt with 20+ charts from other vendors/platform solutions over the past two years, the new changes seem like a step backwards in almost every way and don't include the functionality needed for a proper enterprise deployment. Looking to see if any of the above improvements can be made.

Thanks!

[nexus-repository-manager] Add StatefulSet

Background

We use the free version of Nexus on EKS, with the DB stored in EFS.
The free version has a lock file that prevents Nexus from running on two pods.

The issue: when a node dies and is handled "properly" (i.e., with the node termination handler), a new pod gets created but fails to start because of the lock file, which prevents the node termination handler from completing its work.

Solution

We need an "at most one pod" guarantee, which StatefulSets provide.

References

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment
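
The linked page describes the Recreate strategy, which at least forces the old pod to terminate before a replacement is created; a StatefulSet would give the stronger at-most-one-pod semantics requested above. A rough sketch of the relevant Deployment spec fragment (whether this chart exposes a value for it is an open question):

spec:
  strategy:
    type: Recreate   # existing pods are killed before new ones are created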

Subdomains: Need a way to specify additional hosts in Ingress

For the recently added subdomain feature of Sonatype's Docker repositories, it is likely necessary to add additional host entries to the Ingress that correspond to the subdomains.

For example -

  1. Create a Kubernetes cluster with Sonatype in it.
  2. Go to Sonatype config and create a docker proxy registry. Give it a subdomain name of docker-proxy.
  3. Add DNS records using the method provided by your hosting provider. (For example, use external-dns and list docker-proxy.your.domain.com in the list of hostnames.)
  4. Try to use this docker-proxy URL. You might get 404 no service mapping, because while your.domain.com maps to the sonatype-nexus-repository-manager service, the subdomain docker-proxy.your.domain.com does not map to it. In my case on GCP, this will result in the subdomain getting routed to the default no service mapping backend.

There is a nexus.docker.registries option that would work in this case, except it is tied up with other logic associated with connector ports. In particular, trying to add the same sonatype-nexus-repository-manager service to multiple hostnames this way will error with this service name -- notice the insertion of docker:

no Service with the name "sonatype-nexus-repository-manager-docker-8081" found

It seems the Helm chart ingress template needs to be modified to support the scenario.
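
For illustration, this is the kind of extra rule the rendered Ingress would need. The service name and port below are taken from the error above and may differ depending on the release name:

rules:
  - host: docker-proxy.your.domain.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: sonatype-nexus-repository-manager
              port:
                number: 8081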

Ingress in nxrm-aws-resiliency

Hi,
nxrm3-helm-repository/nxrm-aws-resiliency/templates/ingress.yaml creates an Ingress without an ingressClassName.
Could a {{- if .Values.ingress.ingressClassName }} guard be added, or could this Ingress be made conditional?
Thanks,
Miguel
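
A sketch of the requested guard inside templates/ingress.yaml, assuming the value is named ingress.ingressClassName as above:

spec:
  {{- if .Values.ingress.ingressClassName }}
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  {{- end }}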
