billimek / billimek-charts
DEPRECATED - new home is https://github.com/k8s-at-home/charts
License: Apache License 2.0
See https://github.com/helm/charts/blob/master/stable/metallb/templates/servicemonitor.yaml and associated files as an example implementation
Hi,
I'm trying to install the teslamate chart but I can't get the websockets to work. Are there any extra steps needed to make this work?
WebSocket connection to 'ws://tesla.xx.xxx/live/websocket?baseUrl=http%3A%2F%2Ftesla.xx.xxx&referrer=&vsn=2.0.0' failed: Error during WebSocket handshake: Unexpected response code: 426
I'm using the nginx ingress controller with no special changes and forwarding all paths, including /ws/ to port 4000 in the pod. I'm seeing the panel, but the websocket is giving errors.
Is there a special configuration needed in the NGINX ingress config? Or am I missing something else?
Thank you!
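For reference, a minimal ingress values sketch that often helps with long-lived WebSocket connections behind ingress-nginx (the host is a placeholder, and key layout varies by chart version; ingress-nginx proxies the Upgrade header by default, so the usual tunable is the idle timeout):

```yaml
# Hypothetical values fragment, not the chart's documented interface.
ingress:
  enabled: true
  annotations:
    # ingress-nginx closes idle proxied connections after 60s by default;
    # raise the timeouts so an open websocket isn't dropped.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  hosts:
    - tesla.example.com
```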
Can you share the syntax or a sample .yaml showing how to configure Plex to use a host mount pointing at a CIFS share with content for Plex to play? Thanks!
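Not the chart's documented interface, but a sketch of one common approach, assuming the CIFS share is already mounted on the node (paths and key names are placeholders):

```yaml
# Hypothetical values fragment: expose a node-local CIFS mount to the Plex pod.
persistence:
  movies:
    enabled: true
    hostPath: /mnt/cifs/media   # node path where the CIFS share is mounted
```

The trade-off of a hostPath is that the pod must land on a node where the share is mounted; a CIFS-backed PersistentVolume avoids that, at the cost of more setup.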
@billimek what do you think about migrating the MetalLB chart here, since the developers have no interest in maintaining one themselves?
Create a standalone chart for just rtorrent. See the StayPirate/alpine-rtorrent and looselyrigorous/docker-rtorrent repos for Docker examples to base this on.
Probes aren't enabled yet. The blocky Dockerfile references a healthcheck of: dig @127.0.0.1 -p 53 healthcheck.blocky +tcp || exit 1. However, due to bugs in containerd, I'd like to avoid exec-based healthchecks if possible.
See #204 for some context.
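One way to avoid exec probes entirely is a tcpSocket (or httpGet against the metrics endpoint) probe; a sketch, assuming blocky's defaults of DNS on 53/TCP and HTTP on port 4000:

```yaml
# Sketch only; port numbers assume blocky's default configuration.
livenessProbe:
  tcpSocket:
    port: 53          # DNS over TCP; no exec into the container needed
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /metrics
    port: 4000
  periodSeconds: 10
```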
I currently run radarr-exporter as a standard k8s deployment (see here)
We could actually implement this in the chart (possibly as a sidecar?) and update the values to allow toggling it on or off, maybe like the following:
metrics:
  enabled: true
  port: 9811
  apiKey: <secret>
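Wired up as a sidecar in the deployment template, that could look roughly like the following (the image name and env var names are illustrative, not the exporter's documented interface):

```yaml
# Hypothetical sidecar fragment for the deployment's containers: list.
{{- if .Values.metrics.enabled }}
- name: metrics-exporter
  image: example/radarr-exporter:latest   # placeholder image
  ports:
    - name: metrics
      containerPort: {{ .Values.metrics.port }}
  env:
    - name: RADARR_APIKEY
      value: {{ .Values.metrics.apiKey | quote }}
{{- end }}
```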
I do maintain the following projects:
I have Blocky scaled to 3 replicas, and once in a while one of the Pods becomes unhealthy. This is the second time it has happened within 2 weeks. The workaround is scaling Blocky down to 0 and then back up, but that's not a fix, as it will happen again.
@billimek have you noticed anything like this?
Maybe it could be related to this issue: 0xERR0R/blocky#20
My Blocky Pods
devin@Gaming-PC ~/C/k3s-gitops> k get po | grep blocky
blocky-558f8966b6-88jgx 1/1 Running 2 7d10h
blocky-558f8966b6-rp49r 1/1 Running 1 7d10h
blocky-558f8966b6-5glnc 0/1 CrashLoopBackOff 139 7d10h
Description of failed Pod
devin@Gaming-PC ~/C/k3s-gitops> k describe pod/blocky-558f8966b6-5glnc
Name: blocky-558f8966b6-5glnc
Namespace: default
Priority: 0
Node: k3s-worker-b/192.168.42.13
Start Time: Sat, 21 Mar 2020 22:22:39 -0400
Labels: app.kubernetes.io/instance=blocky
app.kubernetes.io/name=blocky
pod-template-hash=558f8966b6
Annotations: prometheus.io/port: monitoring
prometheus.io/scrape: true
Status: Running
IP: 10.42.2.160
IPs:
IP: 10.42.2.160
Controlled By: ReplicaSet/blocky-558f8966b6
Containers:
blocky:
Container ID: docker://b6bd9e81565ff6ec9b0c2f35d36e17668918f6232c60807e4af91acdbcdb79ad
Image: spx01/blocky:v0.5
Image ID: docker-pullable://spx01/blocky@sha256:51bb1df868cb5ace0a275abda6e1856681749df92910ce5af71d6c187d2ec755
Ports: 4000/TCP, 53/TCP, 53/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449:
container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/f001ba3a-afe4-491a-bd47-88f86e9f1362/volume-subpa
ths/config/blocky/0\\\" to rootfs \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/
merged\\\" at \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/merged/app/config.ym
l\\\" caused \\\"no such file or directory\\\"\"": unknown
Exit Code: 128
Started: Sun, 29 Mar 2020 08:53:09 -0400
Finished: Sun, 29 Mar 2020 08:53:09 -0400
Ready: False
Restart Count: 139
Limits:
cpu: 1
memory: 500Mi
Requests:
cpu: 50m
memory: 275Mi
Liveness: http-get http://:monitoring/metrics delay=0s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:monitoring/metrics delay=0s timeout=1s period=10s #success=1 #failure=5
Environment:
TZ: America/New_York
Mounts:
/app/config.yml from config (rw,path="config.yml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-777n4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: blocky
Optional: false
default-token-777n4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-777n4
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 47m (x129 over 11h) kubelet, k3s-worker-b Container image "spx01/blocky:v0.5" already present on machine
Warning BackOff 2m50s (x3151 over 11h) kubelet, k3s-worker-b Back-off restarting failed container
From that, it looks like this is the error:
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449:
container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/f001ba3a-afe4-491a-bd47-88f86e9f1362/volume-subpa
ths/config/blocky/0\\\" to rootfs \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/
merged\\\" at \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/merged/app/config.ym
By default, radarr and sonarr are set to rename movies and create hardlinks. Hardlinks can only be created when both locations are on the same filesystem. Even though the /downloads and /movies PVs could come from the same share, the container has two mount points and will not recognize that they are the same filesystem. Instead of creating hardlinks, radarr and sonarr will copy the file.
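A common workaround is to give both apps one shared volume and point their paths at subdirectories of it, so the container sees a single filesystem (key names are illustrative, not the chart's documented interface):

```yaml
# Sketch: one PV mounted once, with downloads/ and movies/ as subdirectories.
persistence:
  media:
    enabled: true
    mountPath: /media
# In radarr/sonarr, set the download client path to /media/downloads and the
# library root to /media/movies; hardlinks then succeed instead of copying.
```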
Looking to move my interdir off of the NFS share and onto a block device out of Ceph. This should allow much faster processing of small files, with only the completed files being copied over to the NFS share that sonarr/radarr pick up.
Do you think it makes more sense to add this as a separate stanza in the nzbget chart, or to use the extraVolumeMounts: [] block?
Hi @billimek, just a couple of questions:
Have you had trouble with endless loops of re-provisioning with the unifi controller when deployed in Kubernetes? I tried your chart but ended up in an endless loop that kept re-provisioning all devices until I stopped the controller.
How do you handle STUN communication? What kind of ingress controller do you use? Is it necessary for the unifi controller to have working STUN communication? I'm using Traefik and it doesn't support UDP traffic, so I can't expose the STUN port.
Given the deprecation of the helm/charts stable repo, it stands to reason that a new home should be found for some of the charts.
Charts I'm currently an owner (or co-owner) of in the helm/charts repo:
unifi
home-assistant
node-red
minecraft
nextcloud
unifi, home-assistant, and node-red originated in this repo, so it shouldn't be a big deal to move them back. I'll need to check with the other co-owners of the minecraft and nextcloud charts to see what they want to do.
I don't have more time to work on this tonight, so I'll create this issue to remind me.
nzbhydra2 is a Java Spring application, and I might need to set the timeout much higher as it runs database migrations on start. On a Raspberry Pi this currently takes a while.
Jackett also seems to take longer than a minute to start up. The logs show the container being killed.
@billimek do you have any suggestions, or do you think just adjusting the probes should do it, or maybe removing them?
Edit: I should have mentioned the probe PR didn't fix the issue, I'll need to figure out the best values...
Currently the deployments don't appear to set a history limit (the default is 10), so old ReplicaSets are starting to accumulate:
replicaset.apps "radarr-55d94f7557" deleted
replicaset.apps "radarr-589cbccb4b" deleted
replicaset.apps "radarr-5b9c6c569b" deleted
replicaset.apps "radarr-6dfb689658" deleted
replicaset.apps "radarr-6fb4c9fc9d" deleted
replicaset.apps "radarr-74bfc48555" deleted
replicaset.apps "radarr-757cc79b67" deleted
replicaset.apps "radarr-85b85fbfc9" deleted
replicaset.apps "radarr-8d6c78bbf" deleted
replicaset.apps "radarr-dd6fc4bf" deleted
replicaset.apps "sonarr-54b697b96f" deleted
replicaset.apps "sonarr-5847d5b49c" deleted
replicaset.apps "sonarr-5bd56dcff4" deleted
replicaset.apps "sonarr-6796958fb" deleted
replicaset.apps "sonarr-6bfc8b4b86" deleted
replicaset.apps "sonarr-77d565879b" deleted
replicaset.apps "sonarr-78cd6f4b46" deleted
replicaset.apps "sonarr-7d4ff87c5d" deleted
According to https://www.weave.works/blog/how-many-kubernetes-replicasets-are-in-your-cluster- we can limit this...
I suggest 3 is good enough:
revisionHistoryLimit: 3
We likely don't need to increase chart version or make this option configurable.
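In the chart's deployment template this would be a one-line addition (sketch, trimmed to the relevant field):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets
```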
I'm not happy with the GitHub Actions workflow and will instead adopt the approach outlined in these posts:
Leverage prometheus and modern speedtest CLI for collecting data. See the following for reference:
Need to add pod annotations to the following charts where they are missing:
I'm very new to Helm and charts but would love to deploy Nextcloud 15. I see you are at 13. Would you be willing to update to 15 (or show me how)? :)
Create a new chart that is only the flood UI component. See the 'official' Dockerfile or this repo for ideas.
It still shows 2.1.1 here https://hub.helm.sh/charts/billimek/jackett
When hard-deleting a chart, the following considerations must be made:
The chart-releaser-action doesn't currently support or handle a chart deletion. The reason for this is that it looks for all 'changed' charts since the last successful release (based on tags in the repo). When a chart is removed, the releaser sees that it was 'changed' and tries to release it. It can't, of course, because there is nothing to release.
A workaround for this could be to not hard-delete a chart but instead mark it as deprecated. See this slack thread for some more context.
If it is necessary to actually delete a chart, the following actions can be taken:
For a given commit (e.g. 67167f7), tag it:
git tag -a remove_cloudflare-ddns 67167f7 -m "removing cloudflare-ddns chart"
Next, push the tag to the remote repo:
git push --tags
... now subsequent invocations of the chart releaser should work as expected.
@billimek what do you think about adding RollingUpdate as the deployment strategy, for zero-downtime updates?
Maybe something like this could be added to the deployment:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 1
https://gitlab.com/onedr0p/qbittorrent-prune
Script to delete torrents from qBittorrent that have a tracker error like Torrent not Registered or Unregistered torrent. This script currently only supports monitoring up to 3 categories in qBittorrent to check for tracker errors.
Opening this issue for me to work on.
Why not automate all the things? :)
The GitHub Actions workflow fires two 'push' events when a pull request is merged, which causes one of the two duplicate events to fail. There shouldn't be two simultaneous push events when a pull request is merged.
https://hub.docker.com/r/linuxserver/tautulli
Same as for Ombi, would you be open to a PR adding this to your repo?
PR 6435 submitted
Hello
I am testing out your helm chart with a new deployment, and I need to use annotations on the services; however, they are only included in the template files for the GUI and the Ingress. There is a configuration option for them, and they are included in values.yaml, but they never get rendered into the services from the templates. Please let me know if you need any additional information.
Thanks for your work on this project!
time="2020-03-12T15:13:55-04:00" level=fatal msg="wrong file structure: yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `upstrea...` into config.Config"
My deployment:
https://github.com/onedr0p/k3s-gitops/blob/master/deployments/default/blocky/blocky.yaml
I don't see any big differences in our deployments.
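For comparison, the error ("cannot unmarshal !!str `upstrea...` into config.Config") means blocky received the config file as one big string rather than a YAML mapping. A sketch of the expected top-level shape, based on blocky's documented format at the time (values are placeholders; verify against the version in use):

```yaml
# Sketch of a minimal blocky config as a mapping, not a quoted string.
upstream:
  externalResolvers:
    - 1.1.1.1
    - 8.8.8.8
port: 53
httpPort: 4000
```

A common cause of this is the ConfigMap value being indented or quoted in a way that turns the whole document into a scalar string.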
With the re-launch of GitHub Actions, it may be time to try using it again instead of CircleCI.
This is necessary in order to allow clean upgrades of the chart when it uses a StorageClass that only allows a single accessor (e.g. ReadWriteOnce). See this article explaining the reasoning and necessity.
See helm/charts#9677 as an example for a similar chart.
Still shows 3.0.1 here https://hub.helm.sh/charts/billimek/qbittorrent
Still shows latest as 2.0.3 here https://hub.helm.sh/charts/billimek/ombi
Sorry I am still a bit of a newbie working with Kubernetes and Charts. I am having an issue deploying this chart.
# cloudflare-dyndns-helm-values.txt
cloudflare:
  user: "$CLOUDFLARE_USER"
  token: "$CLOUDFLARE_APIKEY"
  zones: "$CLOUDFLARE_ZONES"
  hosts: "$CLOUDFLARE_HOSTS"
  record_types: "$CLOUDFLARE_RECORDTYPES"
# cloudflare-dyndns.yaml
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: cloudflare-dyndns
  namespace: default
  annotations:
    fluxcd.io/automated: "false"
spec:
  releaseName: cloudflare-dyndns
  chart:
    repository: https://billimek.com/billimek-charts/
    name: cloudflare-dyndns
    version: 1.0.0
  values:
    image:
      repository: hotio/cloudflare-ddns
      tag: stable-47b759b
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
  valueFileSecrets:
    - name: "cloudflare-dyndns-helm-values"
The chart has defined the env variables like so:
...
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: CF_USER
          valueFrom:
            secretKeyRef:
              name: {{ template "cloudflare-dyndns.fullname" . }}
              key: cloudflare-dyndns-user
        - name: CF_APIKEY
          valueFrom:
            secretKeyRef:
              name: {{ template "cloudflare-dyndns.fullname" . }}
              key: cloudflare-dyndns-token
        - name: CF_ZONES
          valueFrom:
            secretKeyRef:
              name: {{ template "cloudflare-dyndns.fullname" . }}
              key: cloudflare-dyndns-zones
        - name: CF_HOSTS
          valueFrom:
            secretKeyRef:
              name: {{ template "cloudflare-dyndns.fullname" . }}
              key: cloudflare-dyndns-hosts
        - name: CF_RECORDTYPES
          value: "{{ .Values.cloudflare.record_types }}"
        - name: DETECTION_MODE
          value: "{{ .Values.cloudflare.detection_mode }}"
        - name: LOG_LEVEL
          value: "{{ .Values.cloudflare.log_level }}"
        - name: SLEEP_INTERVAL
          value: "{{ .Values.cloudflare.sleep_interval }}"
...
Do you think I should update the chart to not use valueFrom.secretKeyRef? It seems they could all just be regular values (see below), since the valueFileSecrets will just drop them in.
env:
  - name: CF_USER
    value: "{{ .Values.cloudflare.user }}"
  - name: CF_APIKEY
    value: "{{ .Values.cloudflare.token }}"
  - name: CF_ZONES
    value: "{{ .Values.cloudflare.zones }}"
  - name: CF_HOSTS
    value: "{{ .Values.cloudflare.hosts }}"
  - name: CF_RECORDTYPES
    value: "{{ .Values.cloudflare.record_types }}"
  - name: DETECTION_MODE
    value: "{{ .Values.cloudflare.detection_mode }}"
  - name: LOG_LEVEL
    value: "{{ .Values.cloudflare.log_level }}"
  - name: SLEEP_INTERVAL
    value: "{{ .Values.cloudflare.sleep_interval }}"
Again sorry and thanks for your help!
Frigate release 0.4.0 is almost done and has some changes which will impact the chart:
See the beta release & associated docker image
Thank you for these charts. They make for a great opportunity to learn with a huge payoff at the end.
I had to jump through a few hoops to get modem-stats to install and run.
Only after following those steps did modem-stats install successfully. It took me a while to realize that modem-stats was looking for influxdb under the name "influxdb-influxdb". Then it failed because a username was required. Finally it reported that the table didn't exist and that it would try to create one.
So, not sure if you'd like to beef up the docs a bit, but these are the hoops I had to jump through to get modem-stats to install.
Thanks!
The kube-plex chart is hosted here because there is no chart registry currently hosting the excellent kube-plex chart.
Currently it's a submodule and not handled all that well in this repo.
Personally, using flux I can just reference the chart via a git URL and don't need it packaged into a chart repo.
I need to figure out a better way to host this, or convince upstream to host a chart registry instead; that would be the better outcome. There is an existing issue, almost a year old, attempting to support this. Perhaps I can make the necessary changes as a PR (using GitHub Actions) to have munnerz package it as a chart repo.
In Kubernetes v1.16, extensions/v1beta1 will be removed. All of the charts need to be updated to use the apps/v1 spec instead.
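The apiVersion change itself is one line, but apps/v1 also makes spec.selector required, so the sketch below includes it (the label keys and helper name follow common chart conventions and may differ per chart):

```yaml
# Before (removed in Kubernetes v1.16): apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
spec:
  selector:            # required under apps/v1, must match the pod labels
    matchLabels:
      app.kubernetes.io/name: {{ include "chart.name" . }}
```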
When deleting a chart, maybe the created PVC should not be deleted. That way the data would remain intact even if the chart is deleted. It could be possible to use the "helm.sh/resource-policy": keep annotation to achieve this. What do you think, should we use this annotation?
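A sketch of how the annotation would sit on the PVC template (the template helper name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "chart.fullname" . }}   # placeholder helper name
  annotations:
    # Tells Helm to leave this resource in place when the release is deleted.
    "helm.sh/resource-policy": keep
```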
Create a chart for ser2sock and behave very much like the frigate chart whereby it will only schedule to a node with a particular label indicating which node has the special USB device connected to it.
Example of running this with Docker:
docker run \
-d \
--name $NAME \
--restart always \
-p 8091:8091 \
--device=/dev/ttyACM0 \
-v $(pwd)/zwave2mqtt:/usr/src/app/store \
robertslando/zwave2mqtt:latest
The music fork of Sonarr / Radarr
https://hub.docker.com/r/linuxserver/ombi/
Would you be open to a PR adding this to your repo?
It appears Jackett and NZBHydra2 are still culprits of slow startup times. From researching the k8s docs, we can implement a startupProbe, which might solve my issue once and for all.
I'll just implement this on Jackett and NZBHydra2 as the others seem to be pretty quick at starting.
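A sketch of what that could look like (startupProbe landed in Kubernetes v1.16 as alpha and v1.18 as beta; the port name is a placeholder):

```yaml
# Give the app up to 30 x 10s = 5 minutes to start before the liveness
# probe takes over; no liveness failures are counted until startup succeeds.
startupProbe:
  tcpSocket:
    port: http
  failureThreshold: 30
  periodSeconds: 10
```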
https://gitlab.com/onedr0p/sonarr-episode-prune
Delete episodes from specified series in Sonarr. Useful for shows that air daily.
Opening this issue for me to work on.
Why not automate all the things? :)
Now that this is on helm hub, some of the links and images are relative-path and not working properly.
Update the markdown files to properly resolve URLs.
Looks like we're missing the value keys and description for everything.