christianlempa / boilerplates
This is my personal template collection. Here you'll find templates and configurations for various tools and technologies.
License: MIT License
Seems to be related to MariaDB 10.6.
Add "--innodb-read-only-compressed=OFF" to the command line:
nextcloud-db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-read-only-compressed=OFF
In boilerplates/docker-compose/traefik/config/traefik.yml
there is a typo: # redirectons: instead of # redirections.
When trying to enable the redirection, this typo causes an error.
entryPoints:
web:
address: :80
# (Optional) Redirect to HTTPS
# ---
# http:
# redirectons:
# redirections:
# entryPoint:
# to: websecure
# scheme: https
Please update your docker-compose.yml to reflect that Google no longer uses Docker Hub for cAdvisor.
https://hub.docker.com/r/google/cadvisor
https://github.com/google/cadvisor
New location:
image: gcr.io/cadvisor/cadvisor:latest
Hi!
First of all, I love your tutorials on YouTube and I'm glad I can learn a lot from the source.
I have a problem with Packer. I changed everything step by step following your tutorial, but Packer didn't recognize the template and vars.
My system is Ubuntu 20.04.4 with Packer 1.8.0.
❯ ls
credentials.pkr.hcl files http ubuntu-server-focal.pkr.hcl
❯ packer build var-file='./credentials.pkr.hcl' ./ubuntu-server-focal.pkr.hcl
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-processors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask|run-cleanup-provisioner] If the build fails do: clean up (default), abort, ask, or run-cleanup-provisioner.
-parallel-builds=1 Number of builds to run in parallel. 1 disables parallelization. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON or HCL2 file containing user variables.
❯ cd ..
❯ packer build var-file='./credentials.pkr.hcl' ./ubuntu-server-focal-test/ubuntu-server-focal.pkr.hcl
(same usage output as above)
Something is wrong, but I can't figure out what.
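Judging from the usage text Packer printed (which lists the option as `-var-file=path`), the flag in both invocations is missing its leading dash, so Packer sees two positional arguments and falls back to printing the usage. A corrected invocation would look like this (paths taken from the listing above):

```shell
# Without the leading dash, Packer treats "var-file=..." as a positional
# argument and prints the usage text instead of building.
packer build -var-file=./credentials.pkr.hcl ./ubuntu-server-focal.pkr.hcl
```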
Hello, I am an intermediate Docker user and I looked at the compose file in this repo for Grafana/Prometheus from a YouTube video. In the comment section, I saw this:
Over time, updates like this will be harder to find in the comments section. I was lucky to find it just when I was wondering why the video shows a combined docker-compose.yaml but the actual repo has them separated. Maybe update the main description section with a dated note about the repo? You know, just like a code description update :-) Anyway,
I agree that it isn't obvious for new people getting into self-hosting why the containers are separate. A short explanation might help here.
Can you show an implementation of some NVIDIA GPU exporter for Prometheus/Grafana?
something like this maybe?
https://golangrepo.com/repo/utkuozdemir-nvidia_gpu_exporter
or this
https://docs.nvidia.com/datacenter/cloud-native/#setting-up-dcgm
I have a hard time making it work.
edit: found a way to make it work.
Thanks for the great content
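For anyone else looking at the linked utkuozdemir/nvidia_gpu_exporter, a minimal compose sketch might look like the following. The image name and port 9835 are taken from that project; the library path is an assumption for a typical Debian/Ubuntu host, so verify everything against the exporter's README:

```yaml
services:
  nvidia_gpu_exporter:
    image: utkuozdemir/nvidia_gpu_exporter:latest  # image name from the linked repo
    restart: unless-stopped
    ports:
      - "9835:9835"  # the exporter's default metrics port
    volumes:
      # the exporter shells out to nvidia-smi, so pass the host binary and
      # NVML library through (paths assumed, adjust to your distribution)
      - /usr/bin/nvidia-smi:/usr/bin/nvidia-smi:ro
      - /usr/lib/x86_64-linux-gnu/libnvidia-ml.so:/usr/lib/x86_64-linux-gnu/libnvidia-ml.so:ro
    devices:
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia0:/dev/nvidia0
```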
TODO: Insert Pi-hole readme
The Kubernetes section mentioned in the "Free SSL Certs in Kubernetes! Cert Manager Tutorial" video has gone missing. I can't find any configs for cert-manager. Is the information/video deprecated?
Thanks.
I've followed your video and boilerplate files, but when I check targets I get the following error:
Get "http://localhost:8080/metrics": dial tcp 127.0.0.1:8080: connect: connection refused
for both cadvisor and node_exporter.
cAdvisor is using the host network but does not bind any ports; it should be using the bridge network. As it stands, Prometheus is unable to connect to it.
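When the exporters and Prometheus share the same compose bridge network, the scrape targets should use the compose service names rather than localhost. A sketch of the relevant prometheus.yml section (service names assumed to match the compose file):

```yaml
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']  # resolved via the shared bridge network
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']       # not localhost, which points at the Prometheus container itself
```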
It seems that since I changed the ISO to Ubuntu 20.04.5, my build fails, and the problem seems to be qemu-guest-agent. During the subiquity install, when it gets to that step, it just fails with exit code 100 and returns a prompt. Nothing really helpful in the logs.
I tried with 22.04.1 and had the same issue until I found this post: https://askubuntu.com/questions/1427461/server-22-04-1-subiquity-autoinstall-system-install-command-fails-after-changi
After adding the network part with the correct NIC naming for Proxmox (ens18 in my case), it works perfectly.
Unfortunately, the same fix does not work for 20.04.5. I am not sure what has changed there, but maybe others can try with this ISO and confirm whether you are seeing the same issue. I also opened a thread here: https://askubuntu.com/questions/1440007/qemu-guest-agent-fails-at-autoinstall
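For reference, the fix described in the linked askubuntu post amounts to adding a network section to the autoinstall user-data with the NIC name the VM sees in Proxmox (ens18 here, as mentioned above; adjust to your VM):

```yaml
autoinstall:
  network:
    version: 2
    ethernets:
      ens18:        # NIC name as seen inside the Proxmox VM
        dhcp4: true
```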
Thanks,
Andrei
I can't seem to get Ombi to work with a subdomain and Proxy Manager. I have set up all my other self-hosted apps like Radarr, Sonarr, Tautulli, etc., but I am trying to set up Ombi with a reverse proxy so that friends and family can go to the subdomain for Plex requests. I can access Ombi locally with no problem. I am fairly new to all this; thank you for all your help and videos on YouTube!
We need to review the Nextcloud deployment using Docker / docker-compose: stop using the latest tag and pin specific image tags.
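A sketch of what pinning could look like; the version numbers below are placeholders only, pick currently supported releases when applying this:

```yaml
services:
  nextcloud-app:
    image: nextcloud:27.1  # placeholder version, pin to a tested release instead of latest
  nextcloud-db:
    image: mariadb:10.11   # placeholder version
```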
To ensure consistency and adherence to standards in Markdown files, markdownlint rules are applied.
The Markdown markup language was designed to be easy to read, write, and understand. This goal is achieved, but the flexibility of Markdown is both an advantage and a drawback. There are many possible styles, which can result in inconsistent formatting.
TODO: Insert Duplicati Readme and backup scripts
undefined
Hi, I've been trying to get your monitoring stack working, but kept running into an issue with cAdvisor. I kept getting the error:
Error response from daemon: error while creating mount source path '/var/lib/docker': mkdir /var/lib/docker: read-only file system.
A quick search showed that this is because of how Docker was installed on my system (via snap). The docker root directory is different when installed from snap vs a manual install.
More info here
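For a snap-installed Docker, the data root typically lives under /var/snap/docker rather than /var/lib/docker, so the cAdvisor bind mount would need to point there. The path below assumes a default snap install; check the "Docker Root Dir" line in `docker info` to confirm yours:

```yaml
services:
  cadvisor:
    volumes:
      # snap installs keep the Docker root here instead of /var/lib/docker
      - /var/snap/docker/common/var-lib-docker:/var/lib/docker:ro
```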
update teleport docker-compose deployment to the latest version 13
Hi,
first, many thanks for your super helpful videos.
In the config, IMO, privileged: true is missing.
Without it I am getting this in the logs:
Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: operation not permitted
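In compose terms, the suggestion above amounts to:

```yaml
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    privileged: true  # lets cAdvisor open /dev/kmsg for OOM event detection
```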
Hi Chris @xcad2k,
I just saw you are also interested in provisioning resources in Proxmox with Terraform. I have a dedicated repository for this purpose: https://github.com/TechProber/cloud-estate. I am currently using Terraform to manage all my Proxmox resources, and the experience is far better than I expected. Feel free to check out the links below, as they may be helpful for your future videos xD
Add Home Assistant Docker Compose template, for integration
A restart policy (restart: unless-stopped) is missing in the boilerplates/docker-compose/prometheus/exporters/node_exporter/docker-compose.yml file.
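i.e. the node_exporter service would gain one line (image tag shown for context):

```yaml
services:
  node_exporter:
    image: prom/node-exporter:latest
    restart: unless-stopped  # the missing restart policy
```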
TODO: Insert teleport readme
The volumes section should include the config file:
volumes:
prometheus-data:
driver: local
config:
driver: local
grafana-data:
driver: local
Get "http://cadvisor:8080/metrics": dial tcp: lookup cadvisor on 127.0.0.11:53: server misbehaving
Can anyone tell me what's going on here? I am using the same stack for Grafana, Prometheus, cAdvisor and node_exporter.
Hey!
Just watched your Traefik video on YouTube, thanks!
I'm now switching from nginx to Traefik, but I'm not quite sure where I can add my self-signed TLS keys in the ingress.yaml. Could you let me know where to put them in the ingress.yaml? =)
tls:
certificates:
- secretName: tls-cert?
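In a standard networking.k8s.io/v1 Ingress, the self-signed cert goes into a kubernetes.io/tls Secret that is referenced from spec.tls; the host name below is a placeholder:

```yaml
spec:
  tls:
    - hosts:
        - example.com        # placeholder host, use your domain
      secretName: tls-cert   # Secret of type kubernetes.io/tls holding cert and key
```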
TODO: Test Docker-Compose for InfluxDB2
Heya,
I'm attempting to use Packer to create a VM template, and I'm getting the above message.
Packer can successfully access my Proxmox instance, create a VM, and it looks like it attempts to start.
Unsure if you or anyone else has come across this. Google isn't helping much.
(The password is randomly generated for this test instance, so not too fussed about it being shared here.)
packer build -var-file=../credentials.pkr.hcl ubuntu-server-jammy.pkr.hcl
modem7@Packer:~/packer-proxmox-template/ubuntu-server-jammy$ packer build -var-file=../credentials.pkr.hcl ubuntu-server-jammy.pkr.hcl
ubuntu-server-jammy.proxmox.ubuntu-server-jammy: output will be in this color.
==> ubuntu-server-jammy.proxmox.ubuntu-server-jammy: Creating VM
==> ubuntu-server-jammy.proxmox.ubuntu-server-jammy: Starting VM
==> ubuntu-server-jammy.proxmox.ubuntu-server-jammy: Error starting VM: start failed: QEMU exited with code 1
==> ubuntu-server-jammy.proxmox.ubuntu-server-jammy: Stopping VM
==> ubuntu-server-jammy.proxmox.ubuntu-server-jammy: Deleting VM
Build 'ubuntu-server-jammy.proxmox.ubuntu-server-jammy' errored after 16 seconds 326 milliseconds: Error starting VM: start failed: QEMU exited with code 1
Proxmox logs:
Nov 07 18:11:12 proxmox pvedaemon[2069083]: start failed: QEMU exited with code 1
Nov 07 18:11:12 proxmox pvedaemon[2060576]: <packer@pve!packer> end task UPID:proxmox:001F925B:0922C8A5:63694A3F:qmstart:100:packer@pve!packer: start failed: QEMU exited with code 1
Nov 07 18:11:16 proxmox pvestatd[2060612]: VM 100 qmp command failed - VM 100 not running
Nov 07 18:11:16 proxmox pvedaemon[2069119]: start failed: QEMU exited with code 1
Nov 07 18:11:16 proxmox pvedaemon[2060576]: <packer@pve!packer> end task UPID:proxmox:001F927F:0922CA39:63694A43:qmstart:100:packer@pve!packer: start failed: QEMU exited with code 1
Nov 07 18:11:20 proxmox pvedaemon[2069162]: start failed: QEMU exited with code 1
Nov 07 18:11:20 proxmox pvedaemon[2060578]: <packer@pve!packer> end task UPID:proxmox:001F92AA:0922CBCD:63694A47:qmstart:100:packer@pve!packer: start failed: QEMU exited with code 1
# Ubuntu Server jammy
# ---
# Packer Template to create an Ubuntu Server (jammy) on Proxmox
# Variable Definitions
variable "proxmox_api_url" {
type = string
}
variable "proxmox_api_token_id" {
type = string
}
variable "proxmox_api_token_secret" {
type = string
sensitive = true
}
# Resource Definition for the VM Template
source "proxmox" "ubuntu-server-jammy" {
# Proxmox Connection Settings
proxmox_url = "${var.proxmox_api_url}"
username = "${var.proxmox_api_token_id}"
token = "${var.proxmox_api_token_secret}"
# (Optional) Skip TLS Verification
insecure_skip_tls_verify = true
# VM General Settings
node = "proxmox" # add your proxmox node
vm_id = "100"
vm_name = "ubuntu-server-jammy"
template_description = "Ubuntu Server jammy Image"
# VM OS Settings
# (Option 1) Local ISO File - Download Ubuntu ISO and Upload To Proxmox Server
iso_file = "Proxmox:iso/ubuntu-22.04.1-live-server-amd64.iso"
# - or -
# (Option 2) Download ISO
#iso_url = "https://releases.ubuntu.com/20.04/ubuntu-20.04.5-live-server-amd64.iso"
#iso_checksum = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74aa75c3468d4"
iso_storage_pool = "Proxmox"
unmount_iso = true
# VM System Settings
qemu_agent = true
# VM Hard Disk Settings
scsi_controller = "virtio-scsi-pci"
disks {
disk_size = "15G"
format = "qcow2"
storage_pool = "Proxmox"
storage_pool_type = "directory"
type = "virtio"
}
# VM CPU Settings
cores = "1"
# VM Memory Settings
memory = "2048"
# VM Network Settings
network_adapters {
model = "virtio"
bridge = "vmbr0"
firewall = "false"
}
# VM Cloud-Init Settings
cloud_init = true
cloud_init_storage_pool = "Proxmox"
# PACKER Boot Commands
boot_command = [
"<esc><wait>",
"e<wait>",
"<down><down><down><end>",
"<bs><bs><bs><bs><wait>",
"autoinstall ds=nocloud-net\\;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ---<wait>",
"<f10><wait>"
]
boot = "c"
boot_wait = "30s"
# PACKER Autoinstall Settings
http_directory = "http"
# (Optional) Bind IP Address and Port
http_bind_address = "192.168.50.100"
http_port_min = 8802
http_port_max = 8802
ssh_username = "root"
# (Option 1) Add your Password here
ssh_password = "Xm2Y6vZVcViPnhFm"
# - or -
# (Option 2) Add your Private SSH KEY file here
# ssh_private_key_file = "~/.ssh/id_rsa"
# Raise the timeout when installation takes longer
ssh_timeout = "20m"
}
# Build Definition to create the VM Template
build {
name = "ubuntu-server-jammy"
sources = ["source.proxmox.ubuntu-server-jammy"]
# Provisioning the VM Template for Cloud-Init Integration in Proxmox #1
provisioner "shell" {
inline = [
"while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
"sudo rm /etc/ssh/ssh_host_*",
"sudo truncate -s 0 /etc/machine-id",
"sudo apt -y autoremove --purge",
"sudo apt -y clean",
"sudo apt -y autoclean",
"sudo cloud-init clean",
"sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg",
"sudo sync"
]
}
# Provisioning the VM Template for Cloud-Init Integration in Proxmox #2
provisioner "file" {
source = "files/99-pve.cfg"
destination = "/tmp/99-pve.cfg"
}
# Provisioning the VM Template for Cloud-Init Integration in Proxmox #3
provisioner "shell" {
inline = [ "sudo cp /tmp/99-pve.cfg /etc/cloud/cloud.cfg.d/99-pve.cfg" ]
}
# Add additional provisioning scripts here
# ...
}
#cloud-config
autoinstall:
version: 1
locale: en_GB
keyboard:
layout: gb
ssh:
install-server: true
allow-pw: true
disable_root: false
ssh_quiet_keygen: true
allow_public_ssh_keys: true
packages:
- qemu-guest-agent
- sudo
storage:
layout:
name: direct
swap:
size: 0
user-data:
package_upgrade: false
timezone: Europe/London
users:
- name: root
groups: [adm, sudo]
lock-passwd: false
sudo: ALL=(ALL) NOPASSWD:ALL
shell: /bin/bash
passwd: Xm2Y6vZVcViPnhFm
# - or -
# ssh_authorized_keys:
# - your-ssh-key
Heya,
Thanks for all the vids and content! Helped me out loads so far!
I'm sure you were already aware, but the apt_key module in Ansible is deprecated.
I've had a look for better "official" methods, and whilst the Ansible team are working on it, I've done a slightly more complicated version which is a good workaround currently:
Might be useful, might not be!
Please feel free to close this issue once you've decided either way!
Thanks again
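The poster's snippet isn't included above. For context, the pattern commonly used in place of apt_key downloads the key into a keyring file and references it with signed-by; the repository URL and paths below are illustrative only:

```yaml
- name: Download the repository signing key   # replaces the deprecated apt_key
  ansible.builtin.get_url:
    url: https://example.com/repo/gpg.key     # illustrative URL
    dest: /etc/apt/keyrings/example.asc
    mode: '0644'

- name: Add the repository, pinned to the key file
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/example.asc] https://example.com/repo stable main"
    state: present
```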
# TODO: Insert Authelia Readme
Hey man, how are you doing? I followed your tutorial; I'm trying to use HTTPS in one project with Let's Encrypt, but I keep getting MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT.
These are my values:
additionalArguments:
- --certificatesresolvers.generic.acme.email=my-email-here
- --certificatesresolvers.generic.acme.caServer=https://acme-v02.api.letsencrypt.org/directory
- --certificatesresolvers.generic.acme.httpChallenge.entryPoint=web
- --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json
logs:
general:
level: INFO
ports:
web:
redirectTo: websecure
websecure:
tls:
enabled: true
ingressRoute:
dashboard:
enabled: false
persistence:
enabled: true
name: ssl-certs
size: 1Gi
path: /ssl-certs
deployment:
initContainers:
- name: volume-permissions
image: busybox:1.31.1
command: ['sh', '-c', 'chmod -Rv 600 /ssl-certs/*']
volumeMounts:
- name: ssl-certs
mountPath: /ssl-certs
ingressClass:
enabled: true
isDefaultClass: true
and this is my Ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik
# annotations:
# (Optional): Annotations for the Ingress Controller
# -- ingress class is needed when traefik is not the default
# kubernetes.io/ingress.class: traefik
# ---
# -- entrypoint and tls configurations
# traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
# traefik.ingress.kubernetes.io/router.tls: "true"
# ---
# -- optional middlewares
# traefik.ingress.kubernetes.io/router.middlewares:your-middleware@kubernetescrd
# ---
spec:
rules:
- host: 'my-domain.com'
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: profitor-web-service
port:
number: 80
What am I doing wrong? I couldn't figure it out.
Hi,
I'm following your setup for Traefik on K8s with a Persistent Volume and an initContainer for permission issues.
My implementation has Traefik being installed in the traefik namespace, with a glusterfs-based volume and claim:
ubuntu@k8cp1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
traefik-ssl-volume 128Mi RWX Retain Bound traefik/traefik-ssl-claim 2d11h
ubuntu@k8cp1:~$ kubectl get pvc -n traefik
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
traefik-ssl-claim Bound traefik-ssl-volume 128Mi RWX 2d11h
And here's my persistence and initContainer sections of Traefik values:
deployment:
additionalContainers: []
additionalVolumes: []
annotations: {}
enabled: true
imagePullSecrets: []
initContainers:
- name: volume-permissions
image: busybox
securityContext:
runAsUser: 0
command: ["sh", "-c", "chmod -Rv 600 /ssl-certs/*"]
volumeMounts:
- name: ssl-certs
mountPath: /ssl-certs
kind: Deployment
labels: {}
minReadySeconds: 0
podAnnotations: {}
podLabels: {}
replicas: 1
shareProcessNamespace: true
terminationGracePeriodSeconds: 60
persistence:
accessMode: ReadWriteMany
annotations: {}
enabled: true
name: ssl-certs
path: /ssl-certs
size: 128Mi
existingClaim: traefik-ssl-claim
What can the problem be? Is the initContainer in the wrong namespace?
Traefik works without the initContainer, but I get "permission errors" on the ssl-certs contents, so I need the initContainer.
Thanks,
Hello all,
I am trying to follow the Traefik tutorial to spin it up on my VPS at Hetzner, and I am running into some problems. I was able to follow the tutorial to the point where I can deploy the stack in Portainer,
and it appears to be running without issue, but when I attempt to navigate to the web GUIs I am unable to access them.
First I thought it was DNS, so I tried navigating straight to the Hetzner-provided IP with the ports, but no dice. I double-checked the VPS to ensure the appropriate ports are open, and unless I am misconstruing UFW,
it should be working. Here is the output of my firewall rules as it stands, so perhaps someone can double-check my work:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
2230 ALLOW IN Anywhere
9090/tcp ALLOW IN Anywhere
80/tcp ALLOW IN Anywhere
443/tcp ALLOW IN Anywhere
8080/tcp ALLOW IN Anywhere
Below is the output of my docker-compose.yml file:
version: '3'
services:
traefik:
image: "traefik:v2.5"
container_name: "traefik"
ports:
- "80:80"
- "443:443"
# (Optional) Expose Dashboard
#- "8080:8080" # Don't do this in production!
volumes:
- /etc/traefik:/etc/traefik
- /var/run/docker.sock:/var/run/docker.sock:ro
And finally, here is the traefik configuration file:
global:
checkNewVersion: true
sendAnonymousUsage: false # true by default
# (Optional) Log information
# ---
# log:
# level: ERROR # DEBUG, INFO, WARNING, ERROR, CRITICAL
# format: common # common, json, logfmt
# filePath: /var/log/traefik/traefik.log
# (Optional) Accesslog
# ---
# accesslog:
# format: common # common, json, logfmt
# filePath: /var/log/traefik/access.log
# (Optional) Enable API and Dashboard
# ---
# api:
# dashboard: true # true by default
# insecure: true # Don't do this in production!
# Entry Points configuration
# ---
entryPoints:
web:
address: :80
# (Optional) Redirect to HTTPS
# ---
# http:
# redirections:
# entryPoint:
# to: websecure
# scheme: https
websecure:
address: :443
# Configure your CertificateResolver here...
# ---
certificatesResolvers:
staging:
acme:
email: [email protected]
storage: /etc/traefik/certs/acme.json
caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
httpChallenge:
entryPoint: web
production:
acme:
email: [email protected]
storage: /etc/traefik/certs/acme.json
caServer: "https://acme-v02.api.letsencrypt.org/directory"
httpChallenge:
entryPoint: web
# (Optional) Overwrite Default Certificates
# tls:
# stores:
# default:
# defaultCertificate:
# certFile: /etc/traefik/certs/cert.pem
# keyFile: /etc/traefik/certs/cert-key.pem
# (Optional) Disable TLS version 1.0 and 1.1
# options:
# default:
# minVersion: VersionTLS12
providers:
docker:
exposedByDefault: false # Default is true
file:
# watch for dynamic configuration changes
directory: /etc/traefik
watch: true
Any ideas on what might be going on? This is a head scratcher, not a huge concern as this is just for labbing.
The grafana-prometheus docker-compose.yml that you mention in your YouTube video is missing.
It may be difficult for readers with no Docker knowledge to build the file on their own.
It is now possible to send notifications with a community module, so you could update your playbook like this:
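The snippet referenced here wasn't captured. As a purely hypothetical illustration, one community module that can send notifications from a playbook is community.general.mail; the SMTP server and addresses below are placeholders:

```yaml
- name: Send a notification when the play finishes   # hypothetical example task
  community.general.mail:
    host: smtp.example.com      # illustrative SMTP server
    port: 587
    to: admin@example.com       # placeholder recipient
    subject: "Playbook finished on {{ inventory_hostname }}"
    body: "All tasks completed successfully."
```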
TODO: Insert nextcloud readme
update traefik to the latest version and add configuration examples
traefik | 2022/02/05 17:11:43 command traefik error: yaml: line 19: did not find expected key
traefik exited with code 1
traefik.txt
I have followed your YouTube tutorial and keep getting stuck at this point; for some reason it can't find the key. I was hoping to try this, as your tutorial made it look like an amazing tool. Hopefully you can find the issue in the file included (it is a .yaml file, but GitHub wouldn't let me upload it); otherwise I'll just go back to trying to get nginx to work.
Hi,
I'm trying to docker-compose up the Prometheus yaml file without the portainer.io service:
boilerplates/docker-compose/prometheus/docker-compose.yaml
Lines 2 to 15 in 012cca7
However, I am still getting this error:
prometheus exited with code 2
prometheus | ts=2023-07-06T22:43:15.733Z caller=main.go:482 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="open /etc/prometheus/prometheus.yml: no such file or directory"
The config yaml file actually exists in /etc/prometheus/prometheus.yml, as follows:
(base) Ubuntu:~/Desktop/prometheus$ cat /etc/prometheus/prometheus.yml
global:
scrape_interval: 15s # By default, scrape targets every 15 seconds.
# Attach these labels to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).
# external_labels:
# monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
# Example job for node_exporter
# - job_name: 'node_exporter'
# static_configs:
# - targets: ['node_exporter:9100']
# Example job for cadvisor
# - job_name: 'cadvisor'
# static_configs:
# - targets: ['cadvisor:8080']
I appreciate your help.
I was following your video on creating an Ubuntu template with cloud-init on Proxmox using Packer, but ran into this issue:
The VM doesn't proceed further than this, and the script is still waiting for SSH to be accessible.
Have you faced this issue, or do you possibly know of any fix? Any help is appreciated.
Using the boilerplate without NPM (I already have that installed), I followed along with only minor adjustments to the yaml: container_name entries, paths to the volumes, and passwords as required. I can set up NPM to use the nextcloud-app container name for the host when setting up the reverse proxy, and can then access the Nextcloud start screen. Even though the boilerplate includes the mariadb option, the first screen shows just the admin username and password fields and an install button. This fails, and only then says SQLite was selected and couldn't be written to, so I select mariadb and add the passwords for the admin user and SQL user again, but I still get the error "unable to create the admin user".
I've attached my dc.yaml file for reference in case my user error is obvious to a better-trained eye.