
k3os's Issues

ulimits are too low to run some applications

When deploying the Elasticsearch Helm chart, I got this error:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

No matter how I configure ulimits inside the container, the error persists. I suspect the host itself has low limits.
Is there any way to configure /etc/security/limits.conf?
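A possible workaround (an untested sketch; it assumes k3os applies write_files at boot and that PAM on the host actually reads /etc/security/limits.conf) would be to ship the limits file from config.yaml:

```yaml
# Untested sketch: raise the open-file limit for all users via config.yaml.
# Assumes write_files is honored and PAM reads /etc/security/limits.conf.
write_files:
- encoding: ""
  content: |
    *  soft  nofile  65535
    *  hard  nofile  65535
  owner: root
  path: /etc/security/limits.conf
  permissions: '0644'
```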

kubectl throws strange error

When running a deployment with Helm or kubectl, I sometimes get strange errors:
error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties

for example,

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties

apk add failed

$ apk add vim
ERROR: Unable to lock database: Read-only file system
ERROR: Failed to open apk database: Read-only file system

However, mount shows that the / mount point is mounted rw.

What ARM boards would the community like supported?

I'd like to use this issue so people can comment on which ARM boards should be supported by k3OS. It would work best if one comment per board is created and then others can just 👍 that comment so we can get a count.

As a bonus, if you are already running k3s on some existing Linux distro on these boards, please share which distros those are. We don't have the resources to maintain kernels for tons of boards, so if we can use kernels and boot loaders from other community images and just replace the user space with k3OS, we should be able to support a good number of boards.

K3OS image for LXC

I’ve been using k3s on LXC for local experiments with Kubernetes (cluster in a box). Having an LXC image with k3os would make setting up a cluster in a box a lot easier / faster.

Boot_cmd running after write_files

Version - v0.2.1-rc2

Steps:

  1. Create a config.yaml with
write_files:
- encoding: ""
  content: |
    Here is a line.
    Another line is here
  owner: root
  path: test2.txt
  permissions: '0777'
run_cmd:
- "echo 'run hello' >> test2.txt"
boot_cmd:
- "echo 'boot hello' >> test2.txt"
k3os:
  password: asdf
  2. During the boot process, press e to get into GNU GRUB
  3. On the linux cmdline, add k3os.mode=install k3os.install.config_url="config yaml url"
  4. Log in and cat /test2.txt

results:

Here is a line
Another line is here
run hello

Expected: The documentation says that the boot, run, and init commands run after write_files, so I would expect to see the "boot hello" text in there as well. If I point boot_cmd at a different file, the text shows up.

Requesting for resources using kubectl throws an error

I'm running the latest release of k3OS on VirtualBox 6.0.6 r130049. Running kubectl get nodes throws the following error:

The connection to the server localhost:6443 was refused - did you specify the right host or port?

I tried other resources - I get the same result. At first I thought there may be an issue with the VM's networking - but on further reflection, I no longer think so. Probably a configuration issue of some sort.

Attaching a screenshot.

[screenshot]

k3os.install.silent=true still asks final Configuration question

Version - v0.2.0

Steps:

  1. Set up a VMware machine
  2. During the boot process, press e to get into GNU GRUB
  3. On the linux cmdline, add k3os.mode=install and k3os.install.silent=true
  4. Ctrl-x to save and continue

Results: The install runs silently all the way through, but the final Configuration question is still asked and requires input.
[screenshot]

[feature request] support for regular cloud-init network config

It would be great if it were possible to configure base networking settings (IP, gateway, DNS domain, DNS servers, hostname, search domains) through the regular cloud-init interfaces (instead of having to create a connman service file).

Why is this important (in my opinion)? There are virtualization systems, like Proxmox VE, that let you configure a limited set of cloud-init parameters for a VM's network config (but don't support full cloud-init scripts). This is usually enough to run any standard cloud image (Debian, CentOS, etc.). It would be great if the k3os ISO behaved similarly.

[screenshot]

https://pve.proxmox.com/wiki/Cloud-Init_Support
https://pve.proxmox.com/wiki/Cloud-Init_FAQ
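For reference, the network data these interfaces pass is a standard cloud-init network-config document; below is a minimal static-IP sketch in the version 1 format (this is generic cloud-init, not something k3os consumes today, and all addresses are examples):

```yaml
# Generic cloud-init network-config (version 1); addresses are examples.
version: 1
config:
- type: physical
  name: eth0
  subnets:
  - type: static
    address: 192.168.1.10/24
    gateway: 192.168.1.1
    dns_nameservers:
    - 192.168.1.2
    dns_search:
    - mynetwork.local
```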

Kernel panic on boot under KVM

Greetings,

I'm trying out k3os, but I can't seem to get it to boot. I get a kernel panic; screenshot attached below. This is v0.2.0-rc2.

[screenshot]

Building ISO with baked in custom config.yaml

Can we get instructions on where to place a customized config.yaml so that it is baked into the ISO/image when building?

Ideally, if this doesn't exist, it would be awesome to have a .gitignore'd location where I can drop my config, run make, and get an ISO that I can use for a completely unattended install.

routing conflict because of duplicate routes on container interfaces set by connman

I stumbled upon this unexpected behaviour:

I have created a k3os cluster with static network config for the nodes (i.e. no DHCP). I'm setting all of the interface config in a connman service file, including the default gateway and the DNS servers. The network config in /k3os/system/config.yaml looks like:

[...]
write_files:
- encoding: ""
  content: |-
    [service_default-interface]
    Type=ethernet
    IPv4=192.168.200.2/24/192.168.200.1
    IPv6=off
    Nameservers=192.168.100.2,192.168.100.3
    SearchDomains=mynetwork.local
    Timeserver=192.168.100.2,192.168.100.3
    Domain=mynetwork.local
  owner: root
  path: /var/lib/connman/default-interface.config
  permissions: '0644'
[...]

This seems to work fine for the host's main interface, but it has an unexpected side effect on the virtual container interfaces that get dynamically created for every running container (veth########). For every container interface, a static route to the default gateway and the IPs of the DNS servers gets created (which is just wrong and can't work).

$ route
[...]
192.168.200.0    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.200.1    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.100.2    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.100.3    *               255.255.255.0 U     0      0        0 vethfb2114ac
[...]

This leads to a routing conflict: The default gw and the DNS servers become unreachable both from the host and also from inside the containers. (Routing still continues to work though). The major issue this leads to is that the CoreDNS pod can't reach the internal DNS server. As CoreDNS is set up to fall back to Internet root dns servers it can still query public domains, but can't resolve any local dns zones only known by the Intranet dns server (i.e. zone mynetwork.local on DNS server 192.168.100.2).

I believe this is a connman issue, related to the fact that connman creates a config file for every container interface (/var/lib/connman/ethernet_<id>_cable/settings) with settings derived from the main interface.

The workaround for me is to not have connman set routes and DNS servers. When I use the following network config in /k3os/system/config.yaml no duplicate routes appear on the container interfaces:

in /k3os/system/config.yaml:

[...]
run_cmd:
- "route add default gw 192.168.200.1"
write_files:
- encoding: ""
  content: |-
    [service_default-interface]
    Type=ethernet
    IPv4=192.168.200.2/24
    IPv6=off
  owner: root
  path: /var/lib/connman/default-interface.config
  permissions: '0644'
- encoding: ""
  content: |-
    nameserver 192.168.100.2
    nameserver 192.168.100.3
    search mynetwork.local
  owner: root
  path: /etc/resolv.conf
  permissions: '0644'
[...]

With this the routes created for a container interface look like:

192.168.200.0    *               255.255.255.0 U     0      0        0 vethfb2114ac

and everything works as expected (i.e. CoreDNS can reach the internal DNS server and resolve the zone mynetwork.local).

Get version

Request: it would be nice to have some sort of command to get the version of k3os.
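Until a dedicated command exists, here is a hedged workaround sketch; it assumes k3os ships a standard /etc/os-release file, which I have not verified:

```shell
#!/bin/sh
# Sketch: read the OS name/version from an os-release style file.
# Assumption: k3os populates /etc/os-release the way most distros do.
get_os_version() (
  # parenthesized body: run in a subshell so sourced variables don't leak
  . "${1:-/etc/os-release}"
  echo "${PRETTY_NAME:-$NAME $VERSION_ID}"
)
```

On a node this would be called as get_os_version with no argument.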

Option to set proxy settings for the containerd daemon

To download container images, proxy settings are required for running containerd.

Typically, if the containerd service is run with the following settings, it works for Kubernetes + containerd:

[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
Environment="HTTP_PROXY=http://example.com:8080"
Environment="HTTPS_PROXY=http://example.com:8080"

I cannot find a systemd service file for containerd. Proxy info could be asked for during initial setup and passed when starting containerd.
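Until there is a first-class option, one unverified possibility would be to export the proxy variables for the processes k3os launches from config.yaml; this sketch assumes config.yaml accepts an environment map under the k3os key, which I have not confirmed for this release:

```yaml
# Unverified sketch: proxy environment for the k3s/containerd processes.
k3os:
  environment:
    http_proxy: http://example.com:8080
    https_proxy: http://example.com:8080
```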

--no-deploy traefik servicelb not respected

I'm having trouble disabling the auto deployment of traefik.
I have added:

k3os-8393 [/var/lib/rancher/k3os]$ cat config.yaml
k3os:
  k3s_args:
  - server
  - "--no-deploy traefik servicelb"

and deleted all the jobs/pods/services related to traefik, but still, after 120 minutes, everything is recreated.

ps ax gives the following:

 2007 ?        S      0:00 supervise-daemon k3s-service --start --pidfile /var/run/k3s-service.pid --respawn-delay 5 /sbin/k3s -- server --no-deploy traefik
 2009 ?        Ssl   96:56 /sbin/k3s server --no-deploy traefik

Any thoughts on what I can do to stop this from happening? I have to deploy my own LB and ingress server to handle TLS, ACME, etc.
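One thing worth ruling out: k3s generally expects each flag and each value as its own argument, so the single quoted string "--no-deploy traefik servicelb" may reach k3s as one malformed token (the ps output above shows only "--no-deploy traefik" surviving). A sketch of the split form, offered as an assumption rather than a verified fix:

```yaml
# Sketch: one list item per flag/value so k3s sees
# "--no-deploy traefik --no-deploy servicelb".
k3os:
  k3s_args:
  - server
  - --no-deploy
  - traefik
  - --no-deploy
  - servicelb
```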

Installing agent to disk, reboot, not showing in cluster

Hi,

After installing k3os to disk, agents don't show up in the cluster when I use the command kubectl get nodes.

However if I manually add them to the cluster with the following steps it will work:
pkill -9 k3s
sudo k3s agent --server https://10.0.0.196:6443 --token $TOKENCODE

Then it will connect to the cluster. But this isn't usable in a live environment that way.
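Since the manual k3s agent command works, it looks like the server URL and token were never persisted at install time. A hedged config.yaml sketch (it assumes the k3os key accepts server_url and token fields, as the early docs describe; <token> stands in for the real cluster token):

```yaml
# Sketch: persist the server address and token so the agent joins on boot.
# <token> is a placeholder for the actual cluster token.
k3os:
  server_url: https://10.0.0.196:6443
  token: <token>
```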

Issues with disk space on new disk install to 4GB disk

I think I have enough free space in general, but this seems to be complaining specifically about the tmpmounts when loading up images. This is on VirtualBox right now and I can't get very far yet.

k3os-21574 [/]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       545M  495M   18M  97% /
/dev/loop1       50M   50M     0 100% /usr
none            997M  1.4M  996M   1% /etc
tmpfs           200M  224K  200M   1% /run
tmpfs           200M  296K  200M   1% /tmp
dev              10M     0   10M   0% /dev
shm             997M     0  997M   0% /dev/shm
cgroup_root      10M     0   10M   0% /sys/fs/cgroup
/dev/loop2      200M  200M     0 100% /usr/lib/modules
/dev/sda1       476M   92M  384M  20% /boot

containerd.log

time="2019-04-27T16:58:24.800670127Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown" key="extract-247511086-BP4B sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda"
time="2019-04-27T16:58:24.845060165Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:ed9e1db7c0e6e5aeee786920b0c9db919cee1f5d237c792e4fd07038da0ae597: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739/coredns: no space left on device: unknown" key="extract-319977195-Yeie sha256:a46466f6a77a598059d6aaddfc99ab668459e5175a397ec422472d6912791396"
time="2019-04-27T16:58:24.845103665Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:d635f458a6f8a4f3dd57a597591ab8977588a5a477e0a68027d18612a248906f: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount641878561: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount641878561/usr/lib/bash/realpath: no space left on device: unknown" key="extract-558504132-lzQd sha256:a95a173483558b1cfb499b6827a64cad3729dc2e7fa6ff8f8e1320b4d61d3600"
time="2019-04-27T16:58:24.845523052Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown"
time="2019-04-27T16:58:24.856313793Z" level=error msg="PullImage \"coredns/coredns:1.3.0\" failed" error="failed to pull and unpack image \"docker.io/coredns/coredns:1.3.0\": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:ed9e1db7c0e6e5aeee786920b0c9db919cee1f5d237c792e4fd07038da0ae597: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739/coredns: no space left on device: unknown"

And on a deployment, similar errors pulling those images (not surprisingly), from k3s kubectl describe for any of the pods in this test:

Name:               nginx-7db9fccd9b-w4sgk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k3os-21574/
Start Time:         Sat, 27 Apr 2019 16:58:15 +0000
Labels:             pod-template-hash=7db9fccd9b
                    run=nginx
Annotations:        <none>
Status:             Failed
Reason:             Evicted
Message:            The node was low on resource: ephemeral-storage. 
IP:                 
Controlled By:      ReplicaSet/nginx-7db9fccd9b
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rkz48 (ro)
Volumes:
  default-token-rkz48:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rkz48
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                 Message
  ----     ------            ----               ----                 -------
  Warning  FailedScheduling  20m (x3 over 21m)  default-scheduler    0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  14m (x5 over 19m)  default-scheduler    0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         14m                default-scheduler    Successfully assigned default/nginx-7db9fccd9b-w4sgk to k3os-21574
  Normal   Pulling           14m                kubelet, k3os-21574  Pulling image "nginx"
  Warning  Failed            14m                kubelet, k3os-21574  Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown
  Warning  Failed            14m                kubelet, k3os-21574  Error: ErrImagePull
  Normal   BackOff           14m                kubelet, k3os-21574  Back-off pulling image "nginx"
  Warning  Failed            14m                kubelet, k3os-21574  Error: ImagePullBackOff
  Warning  Evicted           14m                kubelet, k3os-21574  The node was low on resource: ephemeral-storage.

Boot k3os iso from usb stick fails

Hi,
I tried to boot k3os from a USB stick (Dell Vostro notebook) and it fails with:

cat /proc/cmdline
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
'[' -z '' ']'
blkid -L K3OS_STATE
'[' -n '' ']'
'[' -n '' ']'
'[' -z '' ']'
pfatal Failed to determine boot mode
echo '[FATAL] Failed to determine boot mode'
[FATAL] Failed to determine boot mode
exit 1
rescue
...

It then dropped to a bash rescue shell.

config init_cmd is not working

Version - v0.2.1-rc2

Steps:

  1. Create a config.yaml with:
ssh_authorized_keys:
- github:<username>
init_cmd:
- "echo 'init command' && sleep 120"
k3os:
  password: asdf
  2. During the boot process, press e to get into GNU GRUB
  3. On the linux cmdline, add k3os.mode=install k3os.install.config_url=<raw path of config.yml>

Results: The init command never gets run; the terminal never sleeps or prints the echo.

If agent, don't show add agent to server text

Version - v0.2.0-rc3

  1. Create a server
  2. Create and install an agent

Results: Text about the node token, how to get it, and how to add agents shows up. It would be nice not to show this text, but instead something indicating that the node is an agent.

traefik pending forever.

I'm using VirtualBox. It has a host-only adapter on vboxnet0, with eth1 = 192.168.99.102.
How do I configure traefik correctly? Also, how do I access the traefik UI?

[screenshot]

Thanks!

Cheers.

auto install using password in config.yaml causes first time login to hang

Version - v0.2.0

Steps:

  1. Set up a VMware machine
  2. During the boot process, press e to get into GNU GRUB
  3. In GitHub, create a gist file that contains:
k3os:
  password: asdf
  4. On the linux cmdline, add k3os.mode=install and k3os.install.config_url="url of gist file"
  5. Ctrl-x to save and continue
  6. After reboot, type in rancher as the user name
  7. Try to type in the password "asdf"

Results: Nothing happens; you can't type in a password and are basically hung. If you hit Ctrl-C a bunch of times it resets, and you can then type in both username and password. This happens every time you restart the machine and have to log in.

docker-machine create/deploy friendly

I'm not expecting this to be a drop-in replacement for RancherOS; however, RancherOS has features and tools that make deploying with docker-machine trivial. I can deploy to Azure, DigitalOcean, or VMware, and I can expand and contract the cluster from the console. The ros util is also very helpful in deploying clusters.

Set locales and keyboard

Hello, perhaps I'm missing it, but I don't see how to set the locale, timezone, and keyboard layout.

Thanks

powertop

This looks like a great project. Are you also going to optimize the system for low power usage? In a typical Ubuntu setup you can save a lot using powertop.

VMware deployment

When using VMware to fire one of these up, which Guest Operating System should I be using on ESXi 6.5.0?

I spun one up with the v0.1.0 iso and did the auto login, but neither kubectl nor sudo os-config were found.

I'm using the latest ISO now.

thanks

K3OS VMs querying their own DNS like crazy

Hi Everyone,

I created two k3os one-node cluster VMs in Proxmox lately, and when I checked the PiHole DNS blackhole service on the LAN (which filters out ads and other unwanted sites), I saw that the k3os VMs were querying themselves to an extreme degree, like 30,000+ times a day. After I created the first VM, it ran for some two days, then I destroyed it for experimentation purposes, then created the current one yesterday, which does the same.

Both VM's init was the following:

# get latest iso once (it was 0.2.0-rc6 previously this week)
cd /var/lib/vz/template/iso
GITHUB_REPO="rancher/k3os"
GITHUB_LATEST_RELEASE=$(curl -L -s -H 'Accept: application/json' https://github.com/${GITHUB_REPO}/releases/latest)
GITHUB_LATEST_VERSION=$(echo $GITHUB_LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
GITHUB_ORIG_FILE="k3os-amd64.iso"
GITHUB_DOWN_FILE="k3os-amd64-${GITHUB_LATEST_VERSION//v/}.iso"
GITHUB_URL="https://github.com/${GITHUB_REPO}/releases/download/${GITHUB_LATEST_VERSION}/${GITHUB_ORIG_FILE}"
wget -O $GITHUB_DOWN_FILE $GITHUB_URL

qm create 200 --agent 1 --cores 2 --ide2 local:iso/${GITHUB_DOWN_FILE},media=cdrom --memory 3072 --name k-node-1 --net0 virtio,bridge=vmbr0,firewall=1 --numa 0 --onboot 1 --ostype l26 --scsi0 local-lvm:8 --scsihw virtio-scsi-pci --sockets 1
qm start 200

# login in VNC as rancher
sudo passwd rancher
# type rancher
# exit VNC

# ssh login using a DHCP allocated address
ssh rancher@10.0.0.88
# type rancher

sudo os-config
# Install to disk: 1
# Cloud-init: N
# Github SSH: N
# Type new pass 2 times
# Wifi: N
# Server: 1
# Cluster secret: generate new with `openssl rand -hex 20` in a new tab
# Continue: y
# reboot

# host info changes, so remove it
ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.88
ssh rancher@10.0.0.88
# using new pass

Nothing else was set up, I just looked around inside the VMs. The process could be easier on the setup side (no VNC interaction, automated network settings, installed image by default), but there are other issues open for cloud-init, etc. so I won't bother with it.

My only problem now is the VM going berserk querying its own DNS; it amounts to ~90% of the local DNS traffic these days. Can it be stopped somehow? I would like to use fixed IPs, but I just haven't had the time to set that up yet with connman (I also need to look into connman's config options, which are new to me). Would that stop the flood?
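For the fixed-IP part, the connman service-file pattern reported elsewhere in this tracker might help; below is a minimal sketch of a file to drop in via config.yaml's write_files, with every address being an example to adjust:

```ini
# Sketch of /var/lib/connman/static.config: static IPv4 with explicit DNS.
# Format is IPv4=<address>/<prefix-length>/<gateway>; values are examples.
[service_static-eth0]
Type=ethernet
IPv4=10.0.0.88/24/10.0.0.1
IPv6=off
Nameservers=10.0.0.1
```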

k3s not starting on disk install

Version: v0.2.0-rc3
Virtualisation: VirtualBox

I installed k3os to disk (server mode) and rebooted. After the reboot I logged in, waited a couple of minutes, and tried some kubectl commands. The only output I get is:

The connection to the server localhost:6443 was refused - did you specify the right host or port?

Did I miss something to make it work?

Configuring a VPN service

I'm wondering how to connect k3OS nodes over a VPN (this might not be possible yet...). On RancherOS I deploy a VPN service in my cloud-config (see below).

Is it possible to achieve something similar on k3OS?

  services:
    zerotier:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: always
      net: host
      devices:
        - /dev/net/tun:/dev/net/tun
      cap_add:
        - NET_ADMIN
        - SYS_ADMIN
      volumes_from:
        - system-volumes
      entrypoint: /zerotier-one
    zerotier-join:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: on-failure
      net: host
      entrypoint: /zerotier-cli join $NETWORK_ID
      depends_on:
        - zerotier
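k3OS has no system-docker, but a comparable result could plausibly be had by running the same ZeroTier container on every node as a Kubernetes DaemonSet. This is a hedged sketch: the image name and host paths are carried over from the RancherOS config above, while the hostNetwork/capability settings are my assumption about what ZeroTier needs under Kubernetes:

```yaml
# Sketch: ZeroTier on every node via a DaemonSet instead of a system service.
# hostNetwork, NET_ADMIN/SYS_ADMIN and /dev/net/tun mirror the original config.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zerotier
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: zerotier
  template:
    metadata:
      labels:
        app: zerotier
    spec:
      hostNetwork: true
      containers:
      - name: zerotier
        image: dwitzig/zerotier:1.2.12
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "SYS_ADMIN"]
        volumeMounts:
        - name: zt-state
          mountPath: /var/lib/zerotier-one
        - name: tun
          mountPath: /dev/net/tun
      volumes:
      - name: zt-state
        hostPath:
          path: /opt/zerotier-one
      - name: tun
        hostPath:
          path: /dev/net/tun
```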

VM doesn't reboot if I set k3os.mode=install

Version - v0.2.0

Steps:

  1. Set up a VMware machine
  2. During the boot process, press e to get into GNU GRUB
  3. On the linux cmdline, add k3os.mode=install
  4. Ctrl-x to save, answer the questions, and wait for the reboot

Result: Machine goes to shutdown state instead of rebooting.
[screenshot]

services appear twice

Except for k3s-service, the other services appear twice in the output of the rc-update command.
Is this normal?

[screenshot]

README has duplicate information in the first two sections.

Looks like some edited copy was left in the README. The first and second sections of the README are nearly identical, apart from some grammar edits.
"k3OS

k3OS is a linux distribution designed to remove as much as possible OS maintaince in a Kubernetes cluster. It is specifically designed to only have what is need to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by k3OS.
Quick Start

Download the ISO from the latest release and run in VMware, VirtualBox, or KVM. The server will automatically start a single node kubernetes cluster. Log in with the user rancher and run kubectl. This is a "live install" running from the ISO media and changes will not persist after reboot.

To copy k3os to local disk, after logging in as rancher run sudo os-config. Then remove the ISO from the virtual machine and reboot.

Live install (boot from ISO) requires at least 1GB of RAM. Local install requires 512MB RAM."

----- Is the same as ----

"k3OS

k3OS is a Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster. It is specifically designed to only have what is needed to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by k3OS.
Quick Start

Download the ISO from the latest release and boot it on VMware, VirtualBox, or KVM. The server will automatically start a single node cluster. Log in with the user rancher to run kubectl.
Configuration

All configuration is done through a single cloud-init style config file that is either packaged in the image, downloaded though cloud-init or managed by Kubernetes.

More docs to come"

Building ISO and MacOS

Hi! When I build the ISO file on macOS (without any modifications, just cloning the repo and running make), it initially works to run the live environment, but when I try to install it to disk through os-config, I get the error unknown filesystem type 'hfsplus'. This happens when it tries to mount the k3OS ISO into /run/k3os/iso during the installation process.

Is this because I created the USB stick on macOS?
I used dd bs=4m if=dist/artifacts/k3os-amd64.iso of=/dev/disk3 for example.
