
helm-charts's Introduction

Nautobot

Nautobot is a Network Source of Truth and Network Automation Platform built as a web application atop the Django Python framework with a PostgreSQL or MySQL database.

Key Use Cases

1. Flexible Source of Truth for Networking - Nautobot's core data models define the intended state of network infrastructure, enabling it to serve as a Source of Truth. While a baseline set of models is provided (such as IP networks and addresses, devices and racks, circuits and cables, etc.), Nautobot's goal is to offer maximum data-model flexibility. This is enabled through features such as user-defined relationships, custom fields on any model, and data validation that lets users codify everything from naming standards to automated tests that must pass before data can be populated into Nautobot.

2. Extensible Data Platform for Automation - Nautobot has a rich feature set to seamlessly integrate with network automation solutions. Nautobot offers GraphQL and native Git integration along with REST APIs and webhooks. Git integration dynamically loads YAML data files as Nautobot config contexts. Nautobot also has an evolving plugin system that enables users to create custom models, APIs, and UI elements. The plugin system is also used to unify and aggregate disparate data sources, creating a Single Source of Truth to streamline data management for network automation.

3. Platform for Network Automation Apps - The Nautobot plugin system enables users to create Network Automation Apps. Apps can be as lightweight or robust as needed. Using Nautobot to create custom applications saves up to 70% of development time by reusing features such as authentication, permissions, webhooks, GraphQL, and change logging, all while having access to the data already stored in Nautobot. Several production-ready applications are available in the Nautobot Apps ecosystem.

The complete documentation for Nautobot can be found at Read the Docs.

Questions? Comments? Start by perusing our GitHub discussions for the topic you have in mind, or join the #nautobot channel on Network to Code's Slack community!

Build Status

  • main — Build Status
  • develop — Build Status
  • next — Build Status

Screenshots

Gif of main page


Gif of config contexts


Gif of prefix hierarchy


Gif of GraphQL


Gif of Modes

Installation

Please see the documentation for instructions on installing Nautobot.

Application Stack

Below is a simplified overview of the Nautobot application stack for reference:

Application stack diagram

Plugins and Extensibility

Nautobot can be customized to better align with your specific business needs through a variety of plugins developed for network automation.

There are many plugins available within the Nautobot Apps ecosystem. The screenshots below show a few of the popular ones currently available.

Plugin Screenshots

Golden Config Plugin

Gif of golden config

ChatOps Plugin

Gif of chatops

Device Lifecycle Management Plugin

Gif of DLM

Providing Feedback

The best platform for general feedback, assistance, and other discussion is our GitHub discussions. To report a bug or request a specific feature, please open a GitHub issue using the appropriate template.

If you are interested in contributing to the development of Nautobot, please read our contributing guide prior to beginning any work.

Related projects

Please check out the GitHub nautobot topic for a list of relevant community projects.

Notices

Nautobot was initially developed as a fork of NetBox (v2.10.4). NetBox was originally developed by Jeremy Stretch at DigitalOcean and the NetBox Community.

helm-charts's People

Contributors

asergeant01, cardoe, chipn, gertzakis, jeffkala, m4rg4sh, nautobot-bot, nniehoff, renovate[bot], schwittmann, summed, ubajze, whitej6


helm-charts's Issues

Add support for Redis DB Index

Hello,

When pointing Nautobot at a Redis instance that already contains databases used by other applications, configuring a DB index would allow them to coexist.

All other software I use supports this ... 🤪
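
A minimal sketch of what this could look like in the chart values, assuming a hypothetical nautobot.redis.database key (not an existing chart value):

nautobot:
  redis:
    # Hypothetical: select a non-default Redis logical database (index)
    # so Nautobot can share the Redis instance with other applications.
    database: 2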


LDAP Support

Provide a method in the helm charts to configure LDAP with Nautobot.

Cleanup Documentation

The docs are getting a little long for a single README at this point. We should probably break them up into multiple pages while making sure they still render well on Artifact Hub.

FAQ section for README documentation

Proposed addition to README:

  • FAQ subsection
  • Some FAQ items that might be useful to have

FAQ

Q. How do I list available helm chart releases for the repo?
A. Update the helm repo `$ helm repo update`, then run `$ helm search repo nautobot --versions`

Q. Why don't I see any pre-release helm charts from helm search repo?
A. Add the `--devel` parameter, e.g. `$ helm search repo nautobot --versions --devel`

Thoughts:

  • A table might work better
  • Needs more items

Allow additionalParameters

When deploying with Skaffold's Helm integration, secrets encrypted via sops are rejected because the chart's values schema sets additionalProperties: false.

ERRO[0014] Error: values don't meet the specifications of the schema(s) in the following chart(s):
nautobot:
- (root): Additional property sops is not allowed
  subtask=0 task=Render
std out err: %!(EXTRA *errors.errorString=Error: values don't meet the specifications of the schema(s) in the following chart(s):
nautobot:
- (root): Additional property sops is not allowed

)

To solve this, please allow additional properties: https://github.com/nautobot/helm-charts/blob/v1.3.13/charts/nautobot/values.schema.json#L363
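
For illustration, a minimal sketch of the kind of values file that triggers this, assuming values encrypted with sops (which adds its own top-level sops metadata key):

# values-secrets.yaml, encrypted with sops; the top-level "sops" key is
# added by sops itself and is what the chart's values schema rejects.
postgresql:
  postgresqlPassword: ENC[AES256_GCM,data:...,type:str]
sops:
  lastmodified: "2022-01-01T00:00:00Z"
  version: 3.7.1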

Schema does not allow array of initContainers

When trying to use initContainers as described in the values file, the chart cannot be deployed.

Values comments:

## @param worker.initContainers Add additional init containers to the nautobot pod(s)
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  ## e.g:
  ## initContainers:
  ##  - name: your-image-name
  ##    image: your-image
  ##    imagePullPolicy: Always
  ##    command: ['sh', '-c', 'echo "hello world"']
  ##

Error:

coalesce.go:199: warning: cannot overwrite table with non table for initContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for initContainers (map[])
Error: INSTALLATION FAILED: values don't meet the specifications of the schema(s) in the following chart(s):
nautobot:
- nautobot.initContainers: Invalid type. Expected: object, given: array
- nautobot: Must validate all the schemas (allOf)
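
For reference, a minimal sketch of the array form suggested by those values comments (placed under nautobot.initContainers to match the key named in the error), which the current schema rejects because it expects an object rather than an array:

nautobot:
  initContainers:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      command: ['sh', '-c', 'echo "hello world"']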

Investigate initcontainers for the Nautobot entrypoint steps

The Nautobot entrypoint runs nautobot-server post_upgrade along with some other steps. There is no need to do this on every start. Investigate using an init container to perform these steps and skip them in the running container; this may save time on container restarts.
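
A rough sketch of the idea; the image tag and env sources are taken from elsewhere in this document, while the initContainers key and its exact shape are assumptions rather than guaranteed chart behavior:

nautobot:
  initContainers:
    - name: nautobot-post-upgrade
      image: ghcr.io/nautobot/nautobot:1.2.4-py3.9
      # Run migrations and the other post_upgrade steps once, before the main
      # container starts, so the entrypoint can skip them on restarts.
      command: ['nautobot-server', 'post_upgrade']
      envFrom:
        - configMapRef:
            name: nautobot-env
        - secretRef:
            name: nautobot-env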

Nautobot resources request/limit switched in values.yaml (chart version: 1.0.0)

Trying to install the vanilla chart, I get the following:
Error: Deployment.apps "nautobot" is invalid: [spec.template.spec.containers[0].resources.requests: Invalid value: "8704M": must be less than or equal to memory limit, spec.template.spec.containers[0].resources.requests: Invalid value: "600m": must be less than or equal to cpu limit]
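
Until the chart defaults are fixed, a values override along these lines keeps every request at or below its limit (numbers mirror the defaults seen in later chart versions; the assumption here is that the resources block sits under the nautobot key):

nautobot:
  resources:
    requests:
      cpu: 100m
      memory: 1280M
    limits:
      cpu: 600m
      memory: 8704M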

Add observability for celery workers

Environment

Nautobot version: ^1.3

Proposed Functionality

Nautobot has changed to use Celery workers instead of RQ workers, so we need to enable task monitoring with the -E option when starting the Celery worker in order to observe tasks in the Redis queue that the worker is using. The following is one way to do this in docker-compose.yml:

  celery:
    entrypoint: "nautobot-server celery worker -l INFO -E"
    depends_on:
      - "nautobot"
    healthcheck:
      disable: true
    <<: *nautobot-base

Then in the celery worker logs you will observe that task events are on:

-- ******* ---- Linux-4.19.0-18-amd64-x86_64-with-debian-11.3 2022-06-30 12:48:17
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         nautobot:0x7f7d359bb6d0
- ** ---------- .> transport:   redis://:**@redis:6379/0
- ** ---------- .> results:     redis://:**@redis:6379/0
- *** --- * --- .> concurrency: 24 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
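
The chart equivalent would be appending -E to the Celery worker command. A sketch, assuming a way to override that command in the chart values (the celeryWorker.command key is an assumption, not necessarily what the chart exposes today); the command itself mirrors the one shown in the pod description later in this document:

celeryWorker:
  # Hypothetical override: append -E so the worker emits task events
  # that can be observed in the Redis queue and by monitoring tools.
  command:
    - nautobot-server
    - celery
    - worker
    - --loglevel
    - $(NAUTOBOT_LOG_LEVEL)
    - -E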

Worker pods restarting due to failing liveness probes

Issue

I am trying to run this chart with my macOS/Docker/K3s setup and I'm facing a problem where all the worker pods (both celery and nautobot) constantly restart due to failed liveness/readiness probes. I have tried both v1.2.0 and v1.1.15 (the following outputs are from v1.2.0). I have read #55 and #57 but I don't think they apply, as I'm running a very default configuration and have also set nautobot.redis.ssl to false. I also don't think this is an OOM scenario, as from the many times I ran kubectl describe pod I never saw the OOMKilled messages mentioned in #55, and there is still 2GB of RAM headroom on the cluster.

Configuration

postgresql:
  postgresqlPassword: "password"
redis:
  auth:
    password: "password"
nautobot:
  redis:
    ssl: false

Debug output

Celery worker output:

k logs nautobot-celery-worker-d65589fbc-swk7t -f
 
 -------------- celery@nautobot-celery-worker-d65589fbc-swk7t v5.2.3 (dawn-chorus)
--- ***** ----- 
-- ******* ---- Linux-5.10.76-linuxkit-x86_64-with-glibc2.31 2022-02-01 12:05:33
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         nautobot:0x4002dd5670
- ** ---------- .> transport:   redis://:**@nautobot-redis-headless:6379/0
- ** ---------- .> results:     redis://:**@nautobot-redis-headless:6379/0
- *** --- * --- .> concurrency: 5 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . nautobot.extras.datasources.git.pull_git_repository_and_refresh_data
  . nautobot.extras.jobs.run_job
  . nautobot.extras.jobs.scheduled_job_handler
  . nautobot.extras.tasks.delete_custom_field_data
  . nautobot.extras.tasks.process_webhook
  . nautobot.extras.tasks.provision_field
  . nautobot.extras.tasks.update_custom_field_choice_data
  . nautobot.utilities.tasks.get_releases

[2022-02-01 12:05:35,156: INFO/MainProcess] Connected to redis://:**@nautobot-redis-headless:6379/0
[2022-02-01 12:05:35,176: INFO/MainProcess] mingle: searching for neighbors
[2022-02-01 12:05:36,216: INFO/MainProcess] mingle: all alone
[2022-02-01 12:05:36,270: INFO/MainProcess] celery@nautobot-celery-worker-d65589fbc-swk7t ready.
[2022-02-01 12:05:39,746: INFO/MainProcess] sync with celery@nautobot-celery-worker-d65589fbc-dpjk2

worker: Warm shutdown (MainProcess)

Nautobot worker output:

k logs nautobot-68bcb95f6b-5tdtw -f
Performing database migrations...
Operations to perform:
  Apply all migrations: admin, auth, circuits, contenttypes, database, dcim, django_celery_beat, extras, ipam, sessions, social_django, taggit, tenancy, users, virtualization
Running migrations:
  No migrations to apply.

Generating cable paths...
Found no missing circuit termination paths; skipping
Found no missing console port paths; skipping
Found no missing console server port paths; skipping
Found no missing interface paths; skipping
Found no missing power feed paths; skipping
Found no missing power outlet paths; skipping
Found no missing power port paths; skipping
Finished.

Collecting static files...

974 static files copied to '/opt/nautobot/static'.

Removing stale content types...

Removing expired sessions...

⏳ Running initial systems check...
System check identified some issues:

WARNINGS:
?: (security.W004) You have not set a value for the SECURE_HSTS_SECONDS setting. If your entire site is served only over SSL, you may want to consider setting a value and enabling HTTP Strict Transport Security. Be sure to read the documentation first; enabling HSTS carelessly can cause serious, irreversible problems.
?: (security.W008) Your SECURE_SSL_REDIRECT setting is not set to True. Unless your site should be available over both SSL and non-SSL connections, you may want to either set this setting True or configure a load balancer or reverse-proxy server to redirect all connections to HTTPS.
?: (security.W012) SESSION_COOKIE_SECURE is not set to True. Using a secure-only session cookie makes it more difficult for network traffic sniffers to hijack user sessions.
?: (security.W016) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE, but you have not set CSRF_COOKIE_SECURE to True. Using a secure-only CSRF cookie makes it more difficult for network traffic sniffers to steal the CSRF token.

System check identified 4 issues (0 silenced).

kubectl describe pod output for a nautobot worker pod:

k describe pod nautobot-68bcb95f6b-5tdtw 
Name:         nautobot-68bcb95f6b-5tdtw
Namespace:    default
Priority:     0
Node:         k3d-k3d-rancher-server-0/172.20.0.2
Start Time:   Tue, 01 Feb 2022 12:58:56 +0100
Labels:       app.kubernetes.io/component=nautobot
              app.kubernetes.io/instance=nautobot
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=nautobot
              helm.sh/chart=nautobot-1.2.0
              pod-template-hash=68bcb95f6b
Annotations:  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           10.42.0.49
IPs:
  IP:           10.42.0.49
Controlled By:  ReplicaSet/nautobot-68bcb95f6b
Containers:
  nautobot:
    Container ID:   containerd://e7d1478996a8d51fa23572314ff4d6b7b350f8c7a590cc5e48d4cdc9f62a2b20
    Image:          ghcr.io/nautobot/nautobot:1.2.4-py3.9
    Image ID:       ghcr.io/nautobot/nautobot@sha256:372e6e638c62ca388cfce42e96b8d9450c00210562c1ce2de04af49d4ed83536
    Ports:          8443/TCP, 8080/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Tue, 01 Feb 2022 13:36:17 +0100
      Finished:     Tue, 01 Feb 2022 13:37:46 +0100
    Ready:          False
    Restart Count:  13
    Limits:
      cpu:     600m
      memory:  8704M
    Requests:
      cpu:      100m
      memory:   1280M
    Liveness:   http-get http://:http/health/ delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/health/ delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment Variables from:
      nautobot-env  ConfigMap  Optional: false
      nautobot-env  Secret     Optional: false
    Environment:    <none>
    Mounts:
      /opt/nautobot/git from git-repos (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-295rr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  git-repos:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-295rr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  39m                   default-scheduler  Successfully assigned default/nautobot-68bcb95f6b-5tdtw to k3d-k3d-rancher-server-0
  Normal   Pulled     39m                   kubelet            Successfully pulled image "ghcr.io/nautobot/nautobot:1.2.4-py3.9" in 518.130084ms
  Normal   Pulling    38m (x2 over 39m)     kubelet            Pulling image "ghcr.io/nautobot/nautobot:1.2.4-py3.9"
  Normal   Pulled     38m                   kubelet            Successfully pulled image "ghcr.io/nautobot/nautobot:1.2.4-py3.9" in 534.001792ms
  Normal   Created    38m (x2 over 39m)     kubelet            Created container nautobot
  Normal   Started    38m (x2 over 39m)     kubelet            Started container nautobot
  Normal   Killing    37m (x2 over 38m)     kubelet            Container nautobot failed liveness probe, will be restarted
  Warning  Unhealthy  19m (x50 over 39m)    kubelet            Readiness probe failed: Get "http://10.42.0.49:8080/health/": dial tcp 10.42.0.49:8080: connect: connection refused
  Warning  Unhealthy  9m34s (x36 over 39m)  kubelet            Liveness probe failed: Get "http://10.42.0.49:8080/health/": dial tcp 10.42.0.49:8080: connect: connection refused
  Warning  BackOff    4m29s (x90 over 29m)  kubelet            Back-off restarting failed container

kubectl describe pod output for a celery worker pod:

k describe pod nautobot-celery-worker-d65589fbc-dpjk2 
Name:         nautobot-celery-worker-d65589fbc-dpjk2
Namespace:    default
Priority:     0
Node:         k3d-k3d-rancher-server-0/172.20.0.2
Start Time:   Tue, 01 Feb 2022 12:59:08 +0100
Labels:       app.kubernetes.io/component=nautobot-celery-worker
              app.kubernetes.io/instance=nautobot
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=nautobot
              helm.sh/chart=nautobot-1.2.0
              pod-template-hash=d65589fbc
Annotations:  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           10.42.0.55
IPs:
  IP:           10.42.0.55
Controlled By:  ReplicaSet/nautobot-celery-worker-d65589fbc
Containers:
  nautobot-celery:
    Container ID:  containerd://1fc62cd4e4ca2daaed517bd19543983508acbd1631fc0c3e33bdda4d69d4e318
    Image:         ghcr.io/nautobot/nautobot:1.2.4-py3.9
    Image ID:      ghcr.io/nautobot/nautobot@sha256:372e6e638c62ca388cfce42e96b8d9450c00210562c1ce2de04af49d4ed83536
    Port:          <none>
    Host Port:     <none>
    Command:
      nautobot-server
      celery
      worker
      --loglevel
      $(NAUTOBOT_LOG_LEVEL)
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 01 Feb 2022 13:31:21 +0100
      Finished:     Tue, 01 Feb 2022 13:34:20 +0100
    Ready:          False
    Restart Count:  9
    Limits:
      cpu:     3328m
      memory:  6656M
    Requests:
      cpu:      400m
      memory:   1G
    Liveness:   exec [bash -c nautobot-server celery inspect ping --destination celery@$HOSTNAME] delay=30s timeout=10s period=60s #success=1 #failure=3
    Readiness:  exec [bash -c nautobot-server celery inspect ping --destination celery@$HOSTNAME] delay=30s timeout=10s period=60s #success=1 #failure=3
    Environment Variables from:
      nautobot-env  ConfigMap  Optional: false
      nautobot-env  Secret     Optional: false
    Environment:    <none>
    Mounts:
      /opt/nautobot/git from git-repos (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjq2w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  git-repos:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-cjq2w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  40m                  default-scheduler  0/1 nodes are available: 1 Insufficient memory.
  Warning  FailedScheduling  40m (x1 over 40m)    default-scheduler  0/1 nodes are available: 1 Insufficient memory.
  Normal   Scheduled         40m                  default-scheduler  Successfully assigned default/nautobot-celery-worker-d65589fbc-dpjk2 to k3d-k3d-rancher-server-0
  Normal   Pulled            40m                  kubelet            Successfully pulled image "ghcr.io/nautobot/nautobot:1.2.4-py3.9" in 514.728042ms
  Normal   Pulled            37m                  kubelet            Successfully pulled image "ghcr.io/nautobot/nautobot:1.2.4-py3.9" in 513.247459ms
  Normal   Killing           34m (x2 over 37m)    kubelet            Container nautobot-celery failed liveness probe, will be restarted
  Normal   Pulling           34m (x3 over 40m)    kubelet            Pulling image "ghcr.io/nautobot/nautobot:1.2.4-py3.9"
  Normal   Pulled            34m                  kubelet            Successfully pulled image "ghcr.io/nautobot/nautobot:1.2.4-py3.9" in 514.411292ms
  Normal   Created           34m (x3 over 40m)    kubelet            Created container nautobot-celery
  Normal   Started           34m (x3 over 40m)    kubelet            Started container nautobot-celery
  Warning  Unhealthy         10m (x25 over 39m)   kubelet            Liveness probe failed:
  Warning  Unhealthy         5m7s (x30 over 39m)  kubelet            Readiness probe failed:
  Warning  BackOff           12s (x49 over 16m)   kubelet            Back-off restarting failed container

Liveness/Readiness probes failing on Celery worker pod

After setting nautobot.redis.ssl=true and, in nautobot_config.py, CELERY_REDIS_BACKEND_USE_SSL = {"ssl_cert_reqs": "none"}, the Celery worker fails the liveness/readiness probes without reporting an error.
This prevents the pod from reaching the "Running 1/1" state.

Add Support for HA Postgres/Redis

Bitnami provides a PostgreSQL HA chart, and Redis can also be configured for HA; we should provide a way to enable this, disabled by default.
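
A rough sketch of what an opt-in could look like (the key names are illustrative assumptions, apart from the standard Bitnami Redis chart values):

# Hypothetical opt-in for HA backing services, disabled by default
postgresql:
  enabled: false
postgresqlha:
  enabled: true          # assumed key for a Bitnami postgresql-ha subchart
redis:
  architecture: replication
  sentinel:
    enabled: true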

Document Unsupported Redis Read Replicas/Cluster (non-Sentinel)?

The Bitnami Redis chart's default architecture is replication (without Sentinel). This chart, however, overrides that to set it to standalone. This chart sets the Redis service name to the headless service, which can't differentiate between write and read replicas. The result is that when not using Sentinel but still wanting to have read replicas, you will get the following error:

redis.exceptions.ResponseError: Error running script (call to f_f6b93ec749d56aefa37c49bbc5e7a6eedd0d4ec3): @user_script:15: @user_script: 15: -READONLY You can't write against a read only replica.

The fix would be to use the *-redis-master service for writes and the *-redis-replicas service for reads; however, it seems that Nautobot doesn't currently support this split due to an upstream issue. For those who may not be familiar with the implementation details of Nautobot, but who are familiar with the Bitnami Redis chart (or Redis in general), it would be helpful to have a notice that clustering/read replicas aren't supported by Nautobot (and therefore that functionality of the Redis chart is unsupported).
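
Until that changes, pointing Nautobot at the write-capable master service avoids the READONLY error if the replication architecture is enabled anyway. A sketch, assuming a host override exists under nautobot.redis (the exact key name is an assumption):

redis:
  architecture: replication    # Bitnami Redis chart value
nautobot:
  redis:
    # Point Nautobot at the master service rather than the headless service,
    # so writes never land on a read-only replica.
    hostname: nautobot-redis-master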

helm template command failing

Hey everyone, I have a question about an issue I am having with the nautobot helm chart v1.3.13 and v1.3.14.
When I run it with the helm template command I get the following error:

nikolaos.a@MBP-176136 [15:44:52] [~/testing/Projects/nautobot-helm-config] [master *]
-> % helm template nautobot nautobot/nautobot -n nautobot --values helm-config-prd.yml --debug
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/nikolaos.a/Library/Caches/helm/repository/nautobot-1.3.14.tgz


Error: execution error at (nautobot/templates/secret.yaml:17:27): Existing secret postgres-secret not found!
helm.go:84: [debug] execution error at (nautobot/templates/secret.yaml:17:27): Existing secret postgres-secret not found!
nikolaos.a@MBP-176136 [15:58:16] [~/testing/Projects/nautobot-helm-config] [master *]

This happens both in the CLI and via our deployment tool. helm install and helm upgrade work fine, and the secret that supposedly does not exist does in fact exist. I am a bit confused about why this is happening. Any ideas?

Common Annotations are not applied to pods

I added the value for commonAnnotations, but the value is not applied to my pods.

apiVersion: v1
data:
  values: |
    ---
    commonAnnotations:
      reloader.stakater.com/auto: "true"
$ k get pods nautobot-5478658bc6-m8l8l -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2022-01-05T09:40:58Z"
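
As a possible workaround until commonAnnotations propagates to the pod templates, a per-workload pod annotation value could carry the same annotation; a sketch assuming such values exist in this chart (hypothetical keys):

nautobot:
  podAnnotations:
    reloader.stakater.com/auto: "true"
celeryWorker:
  podAnnotations:
    reloader.stakater.com/auto: "true"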

MySQL Support

Enable the option for using MySQL support via the helm chart.

Plugin Support

In order to support plugins within containerised deployments, a user should have 2 options:

  1. Use their own infrastructure to rebuild the nautobot container including their plugins
  2. Have their plugin loaded into the default nautobot container before execution

Option 1 is currently available to users and requires no further development from this side; however, option 2 is not currently available and requires further discussion on viability and implementation.

You can consider a Pod to be a self-contained, isolated "logical host" that contains the systemic needs of the application it serves. You describe a desired state in a Deployment which will orchestrate pods/containers to meet that state declaration. Deployments should be idempotent and therefore installing dependencies at runtime is not recommended.

I believe a possible solution is to build containers containing only the plugin code and its associated dependencies. This would produce a versioned, idempotent container that could be run to pre-populate a volume shared with the nautobot container.

To give an example, consider a plugin built using the following Dockerfile and the nautobot chart deployed using the following values:

FROM my_repo/alpine:latest

# Install Pip to be able to resolve dependencies
RUN apk add py3-pip

# Make Directories
RUN mkdir /my_plugin/
RUN mkdir /my_plugin/app
RUN mkdir /my_plugin/dependencies

WORKDIR /my_plugin/

# Copy App from local filesystem to container
COPY . /my_plugin/app

# Download pip requirements into container for offline install
RUN pip download /my_plugin/app -d /my_plugin/dependencies

# Assume /plugins will be a PV mounted to this container & make dirs within it
RUN mkdir /plugins
RUN mkdir /plugins/nautobot-plugin-awx-jobs
RUN mkdir /plugins/nautobot-plugin-awx-jobs-dependencies

# Copy this plugin's versioned code and dependencies to the PV
CMD cp -R /my_plugin/app /plugins/nautobot-plugin-awx-jobs && cp -R /my_plugin/dependencies /plugins/nautobot-plugin-awx-jobs-dependencies

And the chart values:

nautobot:
  extraVolumes:
    - name: nautobot-plugins
      emptyDir: {}

  extraVolumeMounts:
    - name: nautobot-plugins
      mountPath: /plugins

  initContainers:
    - name: nautobot-plugin-awx-jobs
      image: artifactory/nautobot-plugin-awx-jobs:latest
      imagePullPolicy: Always
      volumeMounts:
        - name: nautobot-plugins
          mountPath: /plugins

When the nautobot container starts, the plugins directory is available to it:

nautobot@nautobot-97c7fc76c-4ff7d:/plugins$ ls
nautobot-plugin-awx-jobs  nautobot-plugin-awx-jobs-dependencies

What this gives is two folders within the main nautobot container that the docker-entrypoint.sh script could look into and install from.

README feedback

I am a complete n00b to Helm charts.

I’d like to see the following in the README:

  • What is a Helm chart?
  • What is the use case for using Helm charts with Nautobot?
  • What problem does this solution solve?
  • Once this solution is installed, what then? How is it used?
  • Screen shots of this solution in action
  • The README says This repo is intended to house helm charts for the Nautobot project. (Yes, today that is only the Nautobot chart, but we are ready for more.) Could there be multiple Helm charts for the Nautobot project? Why would there need to be more than one? What is the use case for more?

Having this info in the README will help a user who may be unfamiliar with Helm charts to determine if this solution can help them. Otherwise, this solution will only appeal to people who already understand the use case and already understand how a Helm chart adds value.

Enable Kubescape Security Scanning

We should scan the built templates for security concerns using a tool such as trivy or kubescape to make sure we have good security practices enabled in this chart.

Add NetworkPolicy Support

Build and configure a NetworkPolicy object with this helm chart. This is going to be significant work to support Postgres + Postgres HA, Redis + Redis Sentinel, and MySQL...

Docs Update: Add How to Update Passwords

When following the docs front to back, I get the following:

❯ helm install nautobot nautobot/nautobot
Error: INSTALLATION FAILED: execution error at (nautobot/templates/secret.yaml:17:27): A Postgres Password is required!
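
The docs could show setting the required passwords up front. The keys below match the configuration used elsewhere in this document's issue reports; a values file like this (or the equivalent --set flags) avoids the error:

postgresql:
  postgresqlPassword: "change-me"
redis:
  auth:
    password: "change-me"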

Nautobot core/celery pods can't connect to Redis with SSL enabled

After setting in the values.yaml nautobot.redis.ssl=true and in the nautobot_config.py CELERY_REDIS_BACKEND_USE_SSL = {"ssl_cert_reqs": "none"}, the pods still can't connect to Redis and have the following error:
Cannot connect to rediss://:**@nautobot-redis-master:6379/0: Error 111 connecting to nautobot-redis-master:6379. Connection refused

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/ci.yaml
  • actions/checkout v4
  • actions/setup-go v5
  • azure/setup-helm v4
  • dorny/paths-filter v3
  • pre-commit/action v3.0.1
  • pre-commit/action v3.0.1
  • actions/checkout v4
  • azure/setup-helm v4
  • github/codeql-action v3
  • actions/checkout v4
  • azure/setup-helm v4
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/release-chart.yaml
  • actions/checkout v4
  • stefanprodan/helm-gh-pages v1.7.0
  • appany/helm-oci-chart-releaser v0.4.1
  • ubuntu 22.04
  • ubuntu 22.04
helm-values
charts/nautobot/values.yaml
  • ghcr.io/nautobot/nautobot 2.2.4-py3.11
  • docker.io/nginxinc/nginx-unprivileged 1.26
  • docker.io/nginx/nginx-prometheus-exporter 1.1.0
  • docker.io/timonwong/uwsgi-exporter v1.3.0
helmv3
charts/nautobot/Chart.yaml
pep621
pyproject.toml
pip_requirements
docs/requirements.txt
  • mkdocs ==1.6.0
  • mkdocs-material ==9.5.22
  • mkdocs-version-annotations ==1.0.0
poetry
pyproject.toml
  • python ^3.8
  • invoke *
  • mkdocs ~1.6.0
  • mkdocs-material 9.5.22
  • mkdocs-version-annotations 1.0.0
pre-commit
.pre-commit-config.yaml
  • pre-commit/pre-commit-hooks v4.6.0
  • Lucas-C/pre-commit-hooks v1.5.5
  • jumanjihouse/pre-commit-hooks 3.0.0
  • streetsidesoftware/cspell-cli v8.8.2
  • gruntwork-io/pre-commit v0.1.23
  • adrienverge/yamllint v1.35.1
  • norwoodj/helm-docs v1.13.1
  • norwoodj/helm-docs v1.13.1

  • Check this box to trigger a request for Renovate to run again on this repository

Allow specify ingressClassName in the chart

Hi, I have different IngressClass objects on our k8s cluster for different environments:

$ kubectl get ingressClass 
NAME           CONTROLLER             PARAMETERS   AGE
nginx-pre      k8s.io/ingress-nginx   <none>       119d
nginx-qa       k8s.io/ingress-nginx   <none>       119d

The version of k8s is v1.20.11 and Helm is 3.8.
If I deploy nautobot using the provided helm chart, the Ingress deploys but doesn't work as expected due to the missing ingressClassName in its spec.
If I edit the Ingress object and add it, then it works.

Would it be possible to add something like this to the ingress.yaml template?

{{- if .Values.ingress.enabled }}
apiVersion: {{ include "common.capabilities.ingress.apiVersion" . }}
kind: Ingress
metadata:
  [...]
spec:
  {{- if .Values.ingress.ingressClassName }}
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  {{end}}
  rules:
    [...]
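
With a template change along these lines, selecting a class would just be a values entry, e.g.:

ingress:
  enabled: true
  ingressClassName: nginx-pre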
