
microk8s-community-addons's Introduction

microk8s-addons

This repository contains the community addons that can be enabled along with MicroK8s.

Directory structure

addons.yaml         Authoritative list of addons included in this repository. See format below.
addons/
    <addon1>/
        enable      Executable script that runs when enabling the addon
        disable     Executable script that runs when disabling the addon
    <addon2>/
        enable
        disable
    ...
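
For orientation, a minimal pair of hooks might look like the following sketch (the addon name, manifest file, and use of the microk8s-kubectl.wrapper helper are illustrative assumptions; real addons in this repository also handle arguments and status markers):

#!/usr/bin/env bash
# addons/myaddon/enable -- hypothetical skeleton
set -eu

echo "Enabling myaddon"
# Apply a manifest bundled next to the script.
"$SNAP/microk8s-kubectl.wrapper" apply -f "$(dirname "$0")/myaddon.yaml"
echo "myaddon is enabled"

#!/usr/bin/env bash
# addons/myaddon/disable -- hypothetical skeleton
set -eu

"$SNAP/microk8s-kubectl.wrapper" delete -f "$(dirname "$0")/myaddon.yaml" --ignore-not-found
echo "myaddon is disabled"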

addons.yaml format

microk8s-addons:
  # A short description for the addons in this repository.
  description: Core addons of the MicroK8s project

  # Revision number. Increment when there are important changes.
  revision: 1

  # List of addons.
  addons:
    - name: addon1
      description: My awesome addon

      # Addon version.
      version: "1.0.0"

      # Test to check that addon has been enabled. This may be:
      # - A path to a file. For example, "${SNAP_DATA}/var/lock/myaddon.enabled"
      # - A Kubernetes resource, in the form `resourceType/resourceName`, just
      #   as it would appear in the output of the `kubectl get all -A` command.
      #   For example, "deployment.apps/registry".
      #
      # The addon is assumed to be enabled when the specified file or Kubernetes
      # resource exists.
      check_status: "deployment.apps/addon1"

      # List of architectures supported by this addon.
      # MicroK8s supports "amd64", "arm64" and "s390x".
      supported_architectures:
        - amd64
        - arm64
        - s390x

    - name: addon2
      description: My second awesome addon, supported for amd64 only
      version: "1.0.0"
      check_status: "pod/addon2"
      supported_architectures:
        - amd64

Adding new addons

See HACKING.md for instructions on how to develop custom MicroK8s addons.

microk8s-community-addons's People

Contributors

0xe282b0, aalonsolopez, anaisurlichs, azuna1, balasu, balchua, berkayoz, bschimke95, byjg, csantanapr, danarlowski, dependabot[bot], dud225, eliaskoromilas, evilnick, gayathri-bluemeric, gopiak, hubvu, jasonumiker, jkosik, joedborg, ktsakalozos, naqvis, neoaggelos, sachinkumarsingh092, sxd, testa113, thesykan, udit-uniyal, yada


microk8s-community-addons's Issues

Adding support for Cilium 1.14.1 and 1.14.2

Summary

In Cilium versions 1.14.1 and 1.14.2, the value ipam.operator.clusterPoolIPv4PodCIDR has been removed; use ipam.operator.clusterPoolIPv4PodCIDRList instead.
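
For Helm-based installs, the new list-valued setting can be passed like this (a hedged example; the CIDR value and release details are placeholders):

helm upgrade cilium cilium/cilium --namespace kube-system \
  --reuse-values \
  --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.1.0.0/16}'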

Why is this important?

To support Cilium versions 1.14.1 and 1.14.2.

Are you interested in contributing to this feature?

yes

Istio PodDisruptionBudgets use an API version that is unavailable in v1.25+

Summary

When enabling Istio, I get errors like:

Istiod encountered an error: failed to update resource with server-side apply for obj PodDisruptionBudget/istio-system/istiod: no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"

Note that the version is the deprecated policy/v1beta1 and not policy/v1.
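
For context, policy/v1beta1 was removed in Kubernetes 1.25, which can be verified against the running cluster:

microk8s kubectl api-versions | grep '^policy/'
# On v1.25+ this prints only policy/v1.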

What Should Happen Instead?

I should not get this error. MicroK8s should install Istio with PodDisruptionBudgets using policy/v1.

Reproduction Steps

  1. Use MicroK8s v1.25.2 revision 4055 (or probably just anything v1.25+)
  2. Run microk8s enable istio

Introspection Report

Will supply if actually needed.

Can you suggest a fix?

Use Istio version v1.14+ instead of v1.10.3

Are you interested in contributing with a fix?

Perhaps in a few months.

Traefik not fully deployed and returns 404 page not found

Summary

Traefik returns 404 page not found

What Should Happen Instead?

I expect Traefik to be fully deployed and serve the dashboard

Reproduction Steps

  1. microk8s enable metallb:192.168.0.100-192.168.0.150
  2. microk8s enable traefik

Infer repository community for addon traefik
Enabling traefik ingress controller
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "traefik" does not exist. Installing it now.
NAME: traefik
LAST DEPLOYED: Mon Feb 27 22:41:29 2023
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Traefik Proxy v2.9.6 has been deployed successfully
on traefik namespace !

  3. At this point I had no access to the dashboard despite the external IP being assigned, and I was seeing only one pod.
ubuntu@k8s-master:~$ microk8s kubectl get pods,svc,deployment -n traefik -o 'wide'
NAME                           READY   STATUS    RESTARTS      AGE   IP           NODE         NOMINATED NODE   READINESS GATES
pod/traefik-86777c947c-v8drn   1/1     Running   1 (17m ago)   21m   10.1.46.19   k8s-master   <none>           <none>

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
service/traefik   LoadBalancer   10.152.183.189   192.168.0.100   80:31509/TCP,443:31430/TCP   29m   app.kubernetes.io/instance=traefik-traefik,app.kubernetes.io/name=traefik

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/traefik   1/1     1            1           29m   traefik      traefik:v2.9.6   app.kubernetes.io/instance=traefik-traefik,app.kubernetes.io/name=traefik
  4. So I tried to curl the only pod:
ubuntu@k8s-master:~$ curl http://10.1.46.19:8000
404 page not found
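
Note that a 404 from Traefik on an unmatched path is its normal behavior; as a diagnostic, the dashboard can usually be reached through the chart's internal entrypoint (a hedged sketch assuming the chart default of port 9000):

microk8s kubectl -n traefik port-forward deploy/traefik 9000:9000
curl http://127.0.0.1:9000/dashboard/   # the trailing slash matters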

gopaddle community addon installation hangs when -s (--storageclass) is specified

Summary

When the gopaddle community addon is provided with -s or --storageclass, the enable script hangs, as shown below:

ubuntu@host:~$ microk8s enable gopaddle --storageclass nfs
Infer repository community for addon gopaddle 
(hangs indefinitely until Ctrl-C) -- strace shows nothing is happening 

What Should Happen Instead?

ubuntu@host:~$ microk8s enable gopaddle --storageclass nfs
Infer repository community for addon gopaddle
<continues to enable gopaddle using supplied SC> 

Reproduction Steps

  1. microk8s enable gopaddle -s nfs
  2. microk8s enable gopaddle --storageclass nfs

Introspection Report

Not needed, as the bug is in the gopaddle enable script, not MicroK8s itself.

Can you suggest a fix?

The enable script is missing a few lines in addons/gopaddle/enable.
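
A hypothetical sketch of the kind of option handling the script needs, so the flag's value is consumed and the loop terminates instead of blocking (variable names are illustrative):

STORAGE_CLASS=""
while [ "$#" -gt 0 ]; do
  case "$1" in
    -s|--storageclass)
      STORAGE_CLASS="$2"
      shift 2   # consume the flag and its value
      ;;
    *)
      shift
      ;;
  esac
done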

Are you interested in contributing with a fix?

yes

Whereabouts with Multus for multi-node clusters

Summary

Multus is very useful, but without Whereabouts it is only usable in single-node clusters.

Why is this important?

To enable secondary networks in multi-node clusters.

Are you interested in contributing to this feature?

No, not really skilled in microk8s

new microk8s addon to support gopaddle-lite

Summary

gopaddle-lite is a community edition of gopaddle, a modernization and automation solution on kubernetes clusters.
gopaddle-lite is being made available as an 'addon' on microk8s.

Why is this important?

gopaddle Lite is a lifetime-free, zero-cost, end-to-end modernization and automation solution for Kubernetes clusters, aimed at developers, startups, learners, and evaluation; it can be run on desktops, VMs, and host machines.

To know more about gopaddle's capabilities, please visit the official gopaddle website at gopaddle.io.

Are you interested in contributing to this feature?

Yes

mayastor addon issue

Hi
I installed MicroK8s 1.24 on a single-node Ubuntu 21.10 VM. The microk8s enable mayastor command enables the storage classes mayastor and mayastor-3; however, the default storage class annotation is not set on either of them, so I set it on mayastor with:

root@localhost:~# microk8s kubectl patch storageclass mayastor -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/mayastor patched

however mayastor pod is still in pending state

root@localhost:~# microk8s kubectl get po -n mayastor
NAME                                      READY   STATUS    RESTARTS   AGE
mayastor-csi-cwxpv                        0/2     Pending   0          31m
etcd-operator-mayastor-6b97bdb8dc-76qd6   1/1     Running   0          31m
etcd-4f8hh964ck                           1/1     Running   0          31m
core-agents-d6487674f-5qjsk               1/1     Running   0          31m
msp-operator-d47945d87-z2t7h              1/1     Running   0          31m
rest-754bb6c7f8-9c5tz                     1/1     Running   0          31m
mayastor-qvx8q                            1/1     Running   0          31m
csi-controller-5dd6f68f45-xm24g           3/3     Running   0          31m

Please suggest a fix (there is no error reported), and please consider setting at least the mayastor storage class as the default.
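
As a first diagnostic step, the scheduler's reason for leaving the pod Pending is visible in its events (pod name taken from the listing above):

microk8s kubectl describe pod -n mayastor mayastor-csi-cwxpv
# The Events section shows why the pod is unschedulable,
# e.g. unmet resource or node requirements.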

portainer + kube-ovn result in portainer not being able to connect to the local cluster

The latest MicroK8s, installed on three physical Ubuntu 24.04 nodes as follows:

On all nodes:

sudo snap install microk8s --classic --channel=1.30
sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
chmod 0700 ~/.kube
su - $USER
alias kubectl='microk8s kubectl'
sudo snap alias microk8s.kubectl kubectl

#sudo microk8s enable kube-ovn                   # uncomment this to make portainer unusable
#microk8s enable kube-ovn --force                # uncomment this to make portainer unusable

sudo microk8s enable dns
sudo microk8s enable ingress
sudo microk8s enable hostpath-storage
sudo microk8s enable rbac
sudo microk8s enable community
sudo microk8s enable nfs

On main:

microk8s add-node
sudo microk8s enable portainer

Using Calico, Portainer connects to this environment without issues.

With kube-ovn, I wasn't able to get Portainer to work. The problem should be reproducible; I tried completely uninstalling and reinstalling MicroK8s on all nodes.

Unable to test custom added addons on MacOS

Summary

I am unable to test local dev addons on MicroK8s on a Darwin-based system. I installed MicroK8s using Multipass, and on exec'ing into the VM I get an empty common folder at the /var/snap/microk8s/common path.

Since snap is not available on macOS, after getting MicroK8s from Homebrew and running sudo snap install microk8s --classic --channel latest/edge/addons, I get this:

Interacting with snapd is not yet supported on darwin.
This command has been left available for documentation purposes only.

Some alternatives I tried were pushing my enable and disable scripts to a GitHub repo and adding that repo to MicroK8s, but upon doing that I get this:

enable hook for litmus addon needs execute permissions
Removing /var/snap/microk8s/common/addons/litmus
An error occurred when trying to execute 'sudo microk8s.addons repo add litmus https://github.com/S-ayanide/microk8s-community-addons.git' with 'multipass': returned exit code 1.
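
One hedged workaround for the execute-permissions error is to commit the executable bit to the repository before adding it (paths taken from the error above):

chmod +x addons/litmus/enable addons/litmus/disable
git update-index --chmod=+x addons/litmus/enable addons/litmus/disable
git commit -m "Mark addon hooks executable"
git push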

What Should Happen Instead?

It should let me try local addons which have execute permissions, or there should be an alternative to the /var/snap swapping approach for Darwin systems.

Reproduction Steps

To reproduce the above, try the following on a darwin system.

  1. Install MicroK8s using Multipass
  2. Try adding your repository containing a new addon with an enable script: microk8s addons repo add <REPO_NAME> <REPO_URL>

Are you interested in contributing with a fix?

No

OpenEBS: NFS provisioner

It would be wonderful to have the NFS provisioner also enabled with OpenEBS, defaulting to Jiva, even though it would not normally be of interest for single-node setups.

How to add an addon command?

I'm working on a Knative addon and would like to add the kn CLI command, e.g. microk8s kn.

I'm able to download the CLI binary to the snap bin directory, but the command doesn't show as available.

I followed the examples of cilium and linkerd, but had no luck getting it to work.

Update cloudnative-pg addon to version 1.22.0 and add addon arg to specify version

Summary

I would like to use cloudnative-pg in the latest version, 1.22.0. Unfortunately, version 1.21.1 is hard-coded in the currently provided MicroK8s addon. Could the maintainer of the addon (or somebody else) please update the variable CNPG_VERSION in the enable file to 1.22.0?

Furthermore, I would suggest that the version be passed as an addon argument, as I have seen in the core addon dns.

Why is this important?

Not just for security reasons, it would be very helpful to use the latest version of a third-party package. Furthermore, the ability to choose the best-fitting version individually would make it easier to test new versions or skip buggy ones.

Are you interested in contributing to this feature?

I would suggest changing line 4 of microk8s-community-addons/addons/cloudnative-pg/enable to:

CNPG_VERSION="$1"
if [ -z "$1" ]; then
        CNPG_VERSION="1.22.0"
fi

(not tested yet)

Thanks, I appreciate your effort!

OpenEBS: why is cstor enabled?

I could not find any reason for having cstor installed and enabled with OpenEBS. Jiva doesn't depend on it, and localpath certainly doesn't either. It only consumes extra resources, which matters in smaller environments.

When openfaas addon is enabled, gopaddle-lite shows as enabled - and vice versa

Summary

When you enable the openfaas addon, if you check what addons are enabled, the gopaddle-lite addon shows as enabled, which is not the case. This is because openfaas creates a gateway deployment which gopaddle-lite uses to check whether it is enabled. The same situation occurs in reverse; if gopaddle-lite is enabled, openfaas will report as enabled as well.

What Should Happen Instead?

The 'enabled' checks for openfaas and gopaddle-lite should be granular enough to distinguish which addon is actually enabled, so that microk8s status reports only the one that is.

Reproduction Steps

Easily reproducible:

  1. In a MicroK8s cluster, perform microk8s enable openfaas
  2. Then: microk8s status

Introspection Report

N/A

Can you suggest a fix?

The 'enabled' checks for openfaas and gopaddle-lite should be more granular, checking for something specific to each addon that the other does not have.
If it is difficult to find something specific per addon, it might be worth implementing the regex check code that Ali suggested in #166 and using a regex instead.
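
Until such a field exists, a more specific shell check along these lines could distinguish the two (a hedged sketch; it assumes the openfaas gateway lives in the openfaas namespace):

if microk8s kubectl get deployment gateway -n openfaas >/dev/null 2>&1; then
  echo "openfaas is enabled"
fi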

Are you interested in contributing with a fix?

no

Can't enable some addons on Mac OS 12.1, Apple M1 Pro

Summary

Can't enable some community addons on MicroK8s 1.24 running on Mac OS 12.1 (21C52), Apple M1 Pro. The error message is the following:

Addon fluent was not found in any repository
An error occurred when trying to execute 'sudo microk8s.enable fluent' with 'multipass': returned exit code 1.

I was able to reproduce this for fluentd and jaeger.

What Should Happen Instead?

The addons should be enabled, as happens on Ubuntu.

Reproduction Steps

  1. brew install ubuntu/microk8s/microk8s
  2. microk8s install --channel=1.24/stable
  3. microk8s enable community
  4. microk8s enable fluentd

Can you suggest a fix?

I think there is a pattern: addons implemented in Python (e.g. portainer) can be enabled, while those implemented as a bash script cannot.

Are you interested in contributing with a fix?

No

Traefik addon deploys duplicate configurations (helm duplicates the deployment by kubectl)

Summary

Pull Request #25 appears to have included unrelated changes to the traefik addon. These changes appear to duplicate existing code, causing traefik to be deployed twice: once via traefik.yaml and once via Helm. The PR added a Helm deployment for traefik but failed to remove the kubectl apply deployment. The PR should not have been merged, IMO, because the traefik changes are unrelated to enabling Helm deployment of portainer.

Reproduction Steps

  1. microk8s enable community
  2. microk8s enable traefik
  3. microk8s.kubectl get deploy,daemonset --namespace traefik

The output from that last command shows the two separately configured deployments (one as a daemonset by kubectl and the other as a deployment by helm):

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           8m41s

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/traefik-ingress-controller   1         1         1       1            1           <none>          8m43s

Introspection Report

inspection-report-20221202_154717.tar.gz

Can you suggest a fix?

Drop one method in favor of the other: either remove the kubectl apply steps in enable and remove traefik.yaml, OR revert the changes to the traefik addon added by PR #25.

Are you interested in contributing with a fix?

Maybe. Depends on my availability.

Exceptions when enabling portainer

Summary

When enabling the portainer addon in our tests (https://github.com/canonical/microk8s-community-addons/actions/runs/5820285792/job/15881038615), the following error appears:

2023-08-14T16:22:29.8461520Z Infer repository community for addon portainer
2023-08-14T16:22:40.3430883Z Traceback (most recent call last):
2023-08-14T16:22:40.3431757Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 114, in <module>
2023-08-14T16:22:40.3432938Z     main()
2023-08-14T16:22:40.3433756Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
2023-08-14T16:22:40.3434379Z     return self.main(*args, **kwargs)
2023-08-14T16:22:40.3435016Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 717, in main
2023-08-14T16:22:40.3435503Z     rv = self.invoke(ctx)
2023-08-14T16:22:40.3436245Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
2023-08-14T16:22:40.3436824Z     return ctx.invoke(self.callback, **ctx.params)
2023-08-14T16:22:40.3438162Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
2023-08-14T16:22:40.3438765Z     return callback(*args, **kwargs)
2023-08-14T16:22:40.3439256Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 80, in main
2023-08-14T16:22:40.3440072Z     ensure_addon(metric_server_addon)
2023-08-14T16:22:40.3441007Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 35, in ensure_addon
2023-08-14T16:22:40.3441537Z     output = subprocess.check_output(
2023-08-14T16:22:40.3442066Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 415, in check_output
2023-08-14T16:22:40.3442630Z     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
2023-08-14T16:22:40.3443204Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 516, in run
2023-08-14T16:22:40.3444124Z     raise CalledProcessError(retcode, process.args,
2023-08-14T16:22:40.3445039Z subprocess.CalledProcessError: Command '['/snap/microk8s/5764/microk8s-status.wrapper', '-a', 'core/metrics-server']' returned non-zero exit status 1.
2023-08-14T16:22:44.1902749Z Infer repository community for addon portainer
2023-08-14T16:22:44.9644588Z Traceback (most recent call last):
2023-08-14T16:22:44.9645122Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 114, in <module>
2023-08-14T16:22:44.9645467Z     main()
2023-08-14T16:22:44.9646166Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
2023-08-14T16:22:44.9647252Z     return self.main(*args, **kwargs)
2023-08-14T16:22:44.9647806Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 717, in main
2023-08-14T16:22:44.9648198Z     rv = self.invoke(ctx)
2023-08-14T16:22:44.9648728Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
2023-08-14T16:22:44.9649208Z     return ctx.invoke(self.callback, **ctx.params)
2023-08-14T16:22:44.9649759Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
2023-08-14T16:22:44.9650171Z     return callback(*args, **kwargs)
2023-08-14T16:22:44.9650602Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 68, in main
2023-08-14T16:22:44.9650980Z     ensure_addon(dns_addon)
2023-08-14T16:22:44.9651416Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 35, in ensure_addon
2023-08-14T16:22:44.9651855Z     output = subprocess.check_output(
2023-08-14T16:22:44.9652291Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 415, in check_output
2023-08-14T16:22:44.9652743Z     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
2023-08-14T16:22:44.9653206Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 516, in run
2023-08-14T16:22:44.9653649Z     raise CalledProcessError(retcode, process.args,
2023-08-14T16:22:44.9654393Z subprocess.CalledProcessError: Command '['/snap/microk8s/5764/microk8s-status.wrapper', '-a', 'core/dns']' returned non-zero exit status 1.
2023-08-14T16:22:48.7068857Z Infer repository community for addon portainer
2023-08-14T16:22:49.4905494Z Traceback (most recent call last):
2023-08-14T16:22:49.4907078Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 114, in <module>
2023-08-14T16:22:49.4907784Z     main()
2023-08-14T16:22:49.4909007Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
2023-08-14T16:22:49.4909774Z     return self.main(*args, **kwargs)
2023-08-14T16:22:49.4910681Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 717, in main
2023-08-14T16:22:49.4911758Z     rv = self.invoke(ctx)
2023-08-14T16:22:49.4913884Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
2023-08-14T16:22:49.4914563Z     return ctx.invoke(self.callback, **ctx.params)
2023-08-14T16:22:49.4915380Z   File "/snap/microk8s/5764/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
2023-08-14T16:22:49.4915976Z     return callback(*args, **kwargs)
2023-08-14T16:22:49.4917101Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 68, in main
2023-08-14T16:22:49.4917654Z     ensure_addon(dns_addon)
2023-08-14T16:22:49.4918283Z   File "/var/snap/microk8s/common/addons/community/addons/portainer/enable", line 35, in ensure_addon
2023-08-14T16:22:49.4918927Z     output = subprocess.check_output(
2023-08-14T16:22:49.4919556Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 415, in check_output
2023-08-14T16:22:49.4920221Z     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
2023-08-14T16:22:49.4920890Z   File "/snap/microk8s/5764/usr/lib/python3.8/subprocess.py", line 516, in run
2023-08-14T16:22:49.4922086Z     raise CalledProcessError(retcode, process.args,
2023-08-14T16:22:49.4923201Z subprocess.CalledProcessError: Command '['/snap/microk8s/5764/microk8s-status.wrapper', '-a', 'core/dns']' returned non-zero exit status 1.

This is probably because the prerequisite addons (e.g. dns) cause the snap services to be restarted. We probably need a silent retry when enabling the addons.
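
A hedged sketch of such a retry around the failing status check (the wrapper path is taken from the traceback; the interval and attempt count are illustrative):

for attempt in $(seq 1 10); do
  if /snap/microk8s/current/microk8s-status.wrapper -a core/dns; then
    break
  fi
  echo "status check failed (attempt $attempt), retrying..."
  sleep 5
done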

storageclass:microk8s-hostpath reclaim policy change

Hi
Currently the microk8s-hostpath StorageClass has a reclaim policy of Delete. If any addon is using it and someone accidentally disables and re-enables the storage addon, PVCs connected to the StorageClass will lose their data. Likewise, if an addon explicitly calls the storage addon from its script, disabling and re-enabling that addon loses the PVCs.

root@localhost:~# microk8s kubectl describe sc
Name:            microk8s-hostpath
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           microk8s.io/hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Kindly change this to ReclaimPolicy: Retain.
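
Note that reclaimPolicy is immutable on an existing StorageClass, so changing the default would only affect newly provisioned volumes; existing volumes can be protected individually (the PV name is a placeholder):

microk8s kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'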

NFS support for multi-node cluster

Summary

The hostpath-storage addon has limitations in multi-node clusters when it comes to sharing data between Pods distributed across multiple nodes.

Why is this important?

It would enable much broader use cases for workloads running in MicroK8s, especially since MicroK8s is mostly operated with local storage only.

Are you interested in contributing to this feature?

yes

Can't enable linkerd addon in 1.25

Summary

Can't enable linkerd addon in 1.25

What Should Happen Instead?

Should be able to enable linkerd in 1.25 from latest community addons.
Older addon worked in 1.24 and earlier.

Reproduction Steps

  1. microk8s cluster at version 1.25.2 (one worker node is at 1.25.0)
  2. community 20 https://github.com/canonical/microk8s-addons.git@b4334a
$ microk8s enable linkerd
Infer repository community for addon linkerd
Enabling Linkerd2
Infer repository core for addon dns
Addon core/dns is already enabled
Error: unknown flag: --crds
Usage:
  linkerd install [flags]
  linkerd install [command]

Examples:
  # Default install.
  linkerd install | kubectl apply -f -

  # Install Linkerd into a non-default namespace.
  linkerd install -l linkerdtest | kubectl apply -f -

  # Installation may also be broken up into two stages by user privilege, via
  # subcommands.

The installation can be configured by using the --set, --values, --set-string and --set-file flags.
A full list of configurable values can be found at https://www.github.com/linkerd/linkerd2/tree/main/charts/linkerd2/README.md

Available Commands:
  config        Output Kubernetes cluster-wide resources to install Linkerd
  control-plane Output Kubernetes control plane resources to install Linkerd

Flags:
      --admin-port uint                           Proxy port to serve metrics on (default 4191)
      --cluster-domain string                     Set custom cluster domain (default "cluster.local")
      --control-port uint                         Proxy port to use for control (default 4190)
      --controller-log-level string               Log level for the controller and web components (default "info")
      --controller-replicas uint                  Replicas of the controller to deploy (default 1)
      --controller-uid int                        Run the control plane components under this user ID (default 2103)
      --disable-h2-upgrade                        Prevents the controller from instructing proxies to perform transparent HTTP/2 upgrading (default false)
      --disable-heartbeat                         Disables the heartbeat cronjob (default false)
      --enable-endpoint-slices                    Enables the usage of EndpointSlice informers and resources for destination service
      --enable-external-profiles                  Enable service profiles for non-Kubernetes services
      --ha                                        Enable HA deployment config for the control plane (default false)
  -h, --help                                      help for install
      --identity-clock-skew-allowance duration    The amount of time to allow for clock skew within a Linkerd cluster (default 20s)
      --identity-external-ca                      Whether to use an external CA provider (default false)
      --identity-external-issuer                  Whether to use an external identity issuer (default false)
      --identity-issuance-lifetime duration       The amount of time for which the Identity issuer should certify identity (default 24h0m0s)
      --identity-issuer-certificate-file string   A path to a PEM-encoded file containing the Linkerd Identity issuer certificate (generated by default)
      --identity-issuer-key-file string           A path to a PEM-encoded file containing the Linkerd Identity issuer private key (generated by default)
      --identity-trust-anchors-file string        A path to a PEM-encoded file containing Linkerd Identity trust anchors (generated by default)
      --identity-trust-domain string              Configures the name suffix used for identities.
      --ignore-cluster                            Ignore the current Kubernetes cluster when checking for existing cluster configuration (default false)
      --inbound-port uint                         Proxy port to use for inbound traffic (default 4143)
      --linkerd-cni-enabled                       Omit the NET_ADMIN capability in the PSP and the proxy-init container when injecting the proxy; requires the linkerd-cni plugin to already be installed
      --outbound-port uint                        Proxy port to use for outbound traffic (default 4140)
      --proxy-cpu-limit string                    Maximum amount of CPU units that the proxy sidecar can use
      --proxy-cpu-request string                  Amount of CPU units that the proxy sidecar requests
      --proxy-log-level string                    Log level for the proxy (default "warn,linkerd=info")
      --proxy-memory-limit string                 Maximum amount of Memory that the proxy sidecar can use
      --proxy-memory-request string               Amount of Memory that the proxy sidecar requests
      --proxy-uid int                             Run the proxy under this user ID (default 2102)
      --registry string                           Docker registry to pull images from ($LINKERD_DOCKER_REGISTRY) (default "cr.l5d.io/linkerd")
      --set stringArray                           set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --set-file stringArray                      set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
      --set-string stringArray                    set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --skip-inbound-ports strings                Ports and/or port ranges (inclusive) that should skip the proxy and send directly to the application
      --skip-outbound-ports strings               Outbound ports and/or port ranges (inclusive) that should skip the proxy
  -f, --values strings                            specify values in a YAML file or a URL (can specify multiple)

Global Flags:
      --api-addr string            Override kubeconfig and communicate directly with the control plane at host:port (mostly for testing)
      --as string                  Username to impersonate for Kubernetes operations
      --as-group stringArray       Group to impersonate for Kubernetes operations
      --cni-namespace string       Namespace in which the Linkerd CNI plugin is installed (default "linkerd-cni")
      --context string             Name of the kubeconfig context to use
      --kubeconfig string          Path to the kubeconfig file to use for CLI requests
  -L, --linkerd-namespace string   Namespace in which Linkerd is installed ($LINKERD_NAMESPACE) (default "linkerd")
      --verbose                    Turn on debug logging

Use "linkerd install [command] --help" for more information about a command.

error: no objects passed to apply

Introspection Report

inspection-report-20221001_121753.tar.gz

Can you suggest a fix?

I don't know the solution.
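
For reference, Linkerd 2.12+ splits installation into a CRDs step and a control-plane step, which is presumably what the addon's --crds flag targets; with a new enough CLI the sequence is:

linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -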

Are you interested in contributing with a fix?

no

add arm64 support for argocd addon

Summary

Add arm64 support for the argocd addon.

Why is this important?

One of the main target groups of Microk8s are arm64 based systems and argocd supports arm64 images since at least v2.4

Are you interested in contributing to this feature?

Yes!

Support arm64 for Istio

Summary

Support arm64 / M1, M2 architecture for the Istio addon which is supported since 1.15

Why is this important?

For users of Istio using a different CPU architecture than amd64.

Are you interested in contributing to this feature?

yes

Add Klipper ServiceLB addon

Summary

MicroK8s is advertised as supporting multi-region HA clusters and running on bare-metal instances, but it is missing an addon load balancer suitable for this scenario. I stumbled upon Klipper ServiceLB, which from reading the docs seems like a decent solution if there isn't much traffic; see:

Why is this important?

Regarding load balancers, MicroK8s only has a MetalLB addon, which is unsuitable for a multi-region cluster of bare-metal instances (it requires a shared LAN/VPC).

Are you interested in contributing to this feature?

No. It seems to require integration with MicroK8s itself?

NFS addons fail the second time they get executed

Summary

The NFS addon tests fail the second time you run them.

default       pod/busybox-pvc-nfs1                           0/1     ContainerCreating   0             124m
default       pod/busybox-pvc-nfs2                           0/1     ContainerCreating   0             124m

With the description:

  Warning  FailedMount  2m (x9 over 35m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[volume], unattached volumes=[volume kube-api-access-nf8j6]: timed out waiting for the condition

This may indicate that the addon cannot be cleanly disabled.

What Should Happen Instead?

NFS tests should not have side effects.

Reproduction Steps

Run the addon test of the nfs operator.

Cilium CNI Config

Summary

Cilium CNI config files are not propagated to attached nodes by default. Instead, only the node on which the addon install command is run is updated.

What Should Happen Instead?

Either we should update the CNI configuration on each node on addon install, or allow the CNI plugin to manage the CNI configuration directory instead of updating it only on addon install.

Reproduction Steps

  1. Create fresh multi-node setup,
  2. Enable cilium plugin
  3. Deploy sample app onto an attached node, not the node that the install took place in.
  4. The Pod will fail to create; the PodSandbox fails to initialize, with an error showing Calico isn't available.

Can you suggest a fix?

Allow the CNI installation to write the correct config files onto each node on startup. In the Helm chart, setting the cni.customConf flag to false will allow Cilium to write to the already-configured config path.
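
A hedged sketch of that Helm change (the release name and namespace are assumptions):

microk8s helm3 upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set cni.customConf=false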

Are you interested in contributing with a fix?

Yes, I can push an additional PR for this update, in addition to or separately from #116.

Fluentd persistence of logs in Elasticsearch and of config of Kibana

Summary

Adding parameters to the enable script to define persistence settings for Elasticsearch and Kibana data.

Why is this important?

Losing all logs in case of rescheduling is not acceptable in production.

Are you interested in contributing to this feature?

Maybe, if I have time.

Question:

What is the best way to configure persistent storage for now?

openEBS version bump for microk8s 1.25 cluster

Summary

The OpenEBS Jiva storage class fails with a v1beta1 error when used on Kubernetes 1.25; I was able to fix this by bumping the version up to 3.2.x. It would also be nice if PVC replicas were scaled to the number of nodes in the cluster.

Why is this important?

I believe this is important for smaller clusters. In a two-node development cluster, the third replica would never establish, preventing the service that requested the PVC from binding the PVC to the PV. I was able to fix this by scaling the StatefulSet to 2. My proposed fix: in the enable script, set the number of replicas of the StatefulSet to the number of nodes in the cluster. Also, if the number of nodes is greater than 1, run an additional command that patches the Jiva storage class to be the default StorageClass.
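
A hedged sketch of the proposed enable-script change (the StatefulSet and StorageClass names are placeholders that depend on the Jiva deployment):

NODES="$(microk8s kubectl get nodes --no-headers | wc -l)"
microk8s kubectl -n openebs scale statefulset <jiva-replica-statefulset> --replicas="$NODES"
if [ "$NODES" -gt 1 ]; then
  microk8s kubectl patch storageclass <jiva-storageclass> \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
fi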

Are you interested in contributing to this feature?

yes, please let me know if I can help.

kwasm dropping crun causes tests to fail

Summary

In the latest kwasm-node-installer, crun support was removed (https://github.com/KWasm/kwasm-node-installer/releases). The kwasm operator uses the latest kwasm-node-installer, and as a result the validation tests started failing.

What Should Happen Instead?

The addon should deploy a specific version of the operator and by extension use a specific version of the kwasm-node-installer. This way stable MicroK8s releases are not affected by similar changes breaking backwards compatibility.

Furthermore, when a user deploys a stable MicroK8s branch, or has automation that includes the kwasm addon, they do not want to suddenly get different behavior.

Can you suggest a fix?

As an immediate mitigation I am going to disable the kwasm validation and testing in all affected branches so our CI is not blocked.

@0xE282B0, would you please be able to address this issue by pinning the version of kwasm shipped in the addon on each affected branch?
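
Assuming the addon installs the operator via Helm, pinning could look like this sketch (the chart reference and version are placeholders, not the addon's actual values):

KWASM_VERSION="<pinned-chart-version>"
microk8s helm3 install kwasm-operator kwasm/kwasm-operator \
  -n kwasm --create-namespace \
  --version "$KWASM_VERSION"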

The Portainer addon is detected when only the portainer agent is installed

Summary

I'm working at Portainer, and we came across a bug relating to the way our addon is detected.
Given I have community addons enabled,
when I install the portainer-agent (not the Portainer server) in my MicroK8s cluster using a manifest file like this,
and I run microk8s status, then Portainer currently shows up as an enabled addon.

What Should Happen Instead?

Portainer shouldn't show as an enabled addon, unless the Portainer server is installed.

Reproduction Steps

  1. Enable community addons microk8s enable community
  2. Install the portainer agent from a manifest microk8s kubectl apply -f https://downloads.portainer.io/ee2-18/portainer-agent-k8s-nodeport.yaml
  3. Run microk8s status to check the enabled addons

Can you suggest a fix?

The current Portainer addon check_status, pod/portainer, matches both pod/portainer-agent-xxx and pod/portainer-xxx, causing Portainer to be detected as enabled when only the agent is installed, not just when the server is.
I propose introducing an optional regex_check_status field that, when specified, performs an exact regex search for matching files or kube resources, and updating the portainer section of addons.yaml to use this new regex_check_status.
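
Until such a field exists, the difference is visible with a shell check along these lines (pod name patterns are assumptions):

microk8s kubectl get pods -A -o name \
  | grep '^pod/portainer-' \
  | grep -v '^pod/portainer-agent-'
# Non-empty output only when server pods (not just agent pods) exist.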

Are you interested in contributing with a fix?

yes

Addon Request: WasmEdge

With crun and WasmEdge it is possible to execute runtimeless WebAssembly container images, as described here.

WasmEdge

crun can be compiled with WebAssembly support to run *.wasm files. That takes the distroless approach one step further and moves even the runtime out of the container image, which makes images smaller and reduces the attack surface even more.
Since crun is an alternative runtime, it can be added to containerd in the same way as the kata runtime; the addon installs the runtime as a snap and sets it as the default in the enable script.
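
A hedged sketch of registering crun as an extra containerd runtime in MicroK8s (the template path is the MicroK8s default; the exact TOML layout should be checked against the shipped template):

cat <<'EOF' | sudo tee -a /var/snap/microk8s/current/args/containerd-template.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "crun"
EOF
sudo snap restart microk8s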

Next steps:

Sosivio 1.6.0 breaks on MicroK8s 1.27

Summary

The latest 1.6.0 release of sosivio seems to be failing on MicroK8s.

When enabling the addon, the communicator pod crash-loops with the following error:

ubuntu@ip-172-31-3-156:~$ sudo microk8s.kubectl logs -n sosivio       pod/communicator-8588f8b5b6-cbb2d
{
        "name": "communicator",
        "nodeName": "ip-172-31-3-156",
        "podName": "communicator-8588f8b5b6-cbb2d",
        "podNamespace": "sosivio",
        "podIP": "10.1.118.200",
        "podUUID": "058d54a6-c63e-4ecf-ac28-d98721638efb",
        "host": "communicator-8588f8b5b6-cbb2d",
        "port": "8082",
        "startedAt": "2023-07-14T12:18:17.199926044Z",
        "serverTime": "2023-07-14T12:18:17.498859370Z",
        "build": {
                "branch": "release-1.6.0",
                "commit": "7d4513b",
                "builtAt": "2023-07-12T13:15:19.945550195Z"
        }
}
panic: GetNSQDNodes() failed: error HttpCall() failed: error performHttpCall() failed: Get "http://nsqlookupd:4161/nodes": dial tcp: lookup nsqlookupd on 10.152.183.10:53: no such host

goroutine 1 [running]:
main.main()
        /app/internal/cmd/app/main.go:27 +0x7fb

What Should Happen Instead?

The addon should deploy without problems.

Reproduction Steps

Install microk8s and follow the sosivio steps to deploy the 1.6.0 release.

Introspection Report

inspection-report-20230714_122628.tar.gz

microk8s enable cilium has warnings and causes the node to become inaccessible.

I'm not sure if canonical/microk8s is the right place for the issue, so opening an issue here as well. Duplicate of canonical/microk8s#3108

After enabling Cilium with microk8s enable cilium, inbound communication to the node does not work.

ICMP -> node = timeout
Node ICMP -> anything = fine
SSH -> node = timeout
Node ssh -> anything = fine

There were also warnings about deprecated/non-functional sections from the Helm chart.

Enabled addons:

  • cilium
  • dns
  • ha-cluster
  • helm3
  • ingress
  • metallb
  • metrics-server
  • storage

The node remains inaccessible after disabling cilium.

After a bit of work to re-run the command from the console session:

:~$ microk8s enable cilium
Addon helm3 is already enabled.
Restarting kube-apiserver
[sudo] password for user:
Restarting kubelet
Enabling Cilium
Fetching cilium version v1.10.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   131  100   131    0     0    592      0 --:--:-- --:--:-- --:--:--   592
100 32.2M  100 32.2M    0     0  4466k      0  0:00:07  0:00:07 --:--:-- 4848k
Deploying /var/snap/microk8s/3052/actions/cilium.yaml. This may take several minutes.
serviceaccount/cilium created
serviceaccount/cilium-operator created
configmap/cilium-config created
clusterrole.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium-operator created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[1].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
daemonset.apps/cilium created
deployment.apps/cilium-operator created
Waiting for daemon set spec update to be observed...
Waiting for daemon set "cilium" rollout to finish: 0 out of 1 new pods have been updated...
Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
daemon set "cilium" successfully rolled out
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": configmaps "calico-config" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "ipamconfigs.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "ippools.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "kubecontrollersconfigurations.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": customresourcedefinitions.apiextensions.k8s.io "networksets.crd.projectcalico.org" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": clusterroles.rbac.authorization.k8s.io "calico-kube-controllers" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": clusterrolebindings.rbac.authorization.k8s.io "calico-kube-controllers" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": clusterroles.rbac.authorization.k8s.io "calico-node" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": clusterrolebindings.rbac.authorization.k8s.io "calico-node" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": daemonsets.apps "calico-node" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": serviceaccounts "calico-node" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": deployments.apps "calico-kube-controllers" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": serviceaccounts "calico-kube-controllers" not found
Error from server (NotFound): error when deleting "/var/snap/microk8s/3052/args/cni-network/cni.yaml": poddisruptionbudgets.policy "calico-kube-controllers" not found

inspection-report-20220428_154811.tar.gz

ArgoCD addon should include the argocd CLI

When enabling argocd, the argocd CLI is not included. Typically with other addons, the CLI is included and can be called with
microk8s <add-on-name>, so I would expect microk8s argocd to call the argocd CLI.

Adding Falco as a new community addon

I have raised a Pull Request to add Falco (https://falco.org/ and https://github.com/falcosecurity/falco) as a new community addon.

Falco is a CNCF incubating project focused on real-time runtime threat detection for Linux and Kubernetes. It is deployed via a Helm chart, and I believe I have added it in a way consistent with the other Helm-based addons I saw in the repo.

#194

Please let me know if there are any changes/improvements you'd like in order to merge this...
