Comments (91)
@ibuildthecloud this would be great to see for ARM64 and ARMHF for use with k3s.
from longhorn.
Is there anyone who can comment on what is happening with this? Between Raspberry Pis running K3s/k3OS and AWS a1.* instances there is a real need for this support. This issue has been open for 3 years and no one has mentioned anything about a timeline, let alone set real expectations.
from longhorn.
The ARM64 build is now running at https://ci.longhorn.io/job/public/job/longhorn-tests-arm64/ . Consider this done.
Additional bug fixes for the integration test are tracked at #1990
from longhorn.
Same, here. My raspi 4 cluster is waiting for a great distributed storage :)
from longhorn.
I'm desperately searching for a decent k3s FS for Raspi4 cluster... so pls pls pls continue the good work here!
from longhorn.
For anyone who wants to install Longhorn master-arm on a Raspberry Pi 4 with Helm.
By the way, I used k3s v1.19.4+k3s1 and Ubuntu 20.10 on the Raspberry Pi.
Prerequisites
Install open-iscsi
apt install -y open-iscsi
Pre-pull Docker images (optional)
docker pull longhornio/longhorn-engine:master-arm64
docker pull longhornio/longhorn-manager:master-arm64
docker pull longhornio/longhorn-ui:master-arm64
docker pull longhornio/longhorn-instance-manager:master-arm64
docker pull longhornio/longhorn-share-manager:master-arm64
docker pull longhornio/csi-attacher:v2.2.1-lh1-arm64
docker pull longhornio/csi-provisioner:v1.6.0-lh1-arm64
docker pull longhornio/csi-node-driver-registrar:v1.2.0-lh1-arm64
docker pull longhornio/csi-resizer:v0.5.1-lh1-arm64
docker pull longhornio/csi-snapshotter:v2.1.1-lh1-arm64
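The individual pulls above can be collapsed into a loop. A minimal sketch that prints the pull commands (pipe the output to `sh` to actually execute them); the image list simply mirrors the tags used in this comment:

```shell
# Print one "docker pull" command per image/tag pair used above.
for ref in \
  longhorn-engine:master-arm64 \
  longhorn-manager:master-arm64 \
  longhorn-ui:master-arm64 \
  longhorn-instance-manager:master-arm64 \
  longhorn-share-manager:master-arm64 \
  csi-attacher:v2.2.1-lh1-arm64 \
  csi-provisioner:v1.6.0-lh1-arm64 \
  csi-node-driver-registrar:v1.2.0-lh1-arm64 \
  csi-resizer:v0.5.1-lh1-arm64 \
  csi-snapshotter:v2.1.1-lh1-arm64; do
  echo "docker pull longhornio/${ref}"
done
```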
Installation
Create the namespace
kubectl create namespace longhorn-system
Add the longhorn Helm repo
helm repo add longhorn https://charts.longhorn.io
helm repo update
Install the Longhorn Helm chart
helm upgrade -i longhorn longhorn/longhorn -n longhorn-system \
--set image.longhorn.engineTag=master-arm64 \
--set image.longhorn.managerTag=master-arm64 \
--set image.longhorn.uiTag=master-arm64 \
--set image.longhorn.instanceManagerTag=master-arm64 \
--set image.longhorn.shareManager.tag=master-arm64 \
--set csi.attacherImageTag=v2.2.1-lh1-arm64 \
--set csi.provisionerImageTag=v1.6.0-lh1-arm64 \
--set csi.nodeDriverRegistrarImageTag=v1.2.0-lh1-arm64 \
--set csi.resizerImageTag=v0.5.1-lh1-arm64 \
--set csi.snapshotterImageTag=v2.1.1-lh1-arm64
I encountered one error:
Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:longhorn-system:longhorn-service-account" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Fix this by adding the following to the ClusterRole longhorn-role:
- verbs:
    - '*'
  apiGroups:
    - policy
  resources:
    - poddisruptionbudgets
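The same rule can be appended without hand-editing the ClusterRole. A sketch using a kubectl JSON patch, shown as a dry run (the `echo` is left in; drop it to apply against a live cluster):

```shell
# JSON patch that appends the poddisruptionbudgets rule to the existing rules list.
patch='[{"op":"add","path":"/rules/-","value":{"apiGroups":["policy"],"resources":["poddisruptionbudgets"],"verbs":["*"]}}]'
# Dry run: print the command instead of executing it.
echo kubectl patch clusterrole longhorn-role --type=json -p "$patch"
```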
from longhorn.
Just writing in support - I appreciate all of your work on this project!
ARM64v8 support for Longhorn is the thing that will make Kubernetes (k3s specifically) usable on Raspi clusters in a production environment. Having tried other solutions (Rook, NFS), there simply isn't anything else that's really suitable for the use case.
The fact ARM support has been given priority within the longhorn project is, IMO, a mark of sustainability and team receptiveness with the community.
Keep up the good work, folks!
from longhorn.
With the lack of arm/arm64 images, I found it extremely difficult to run a multiarch cluster with defaults.
What has worked well is to taint the arm nodes and use tolerations to allow scheduling of workloads with known good multi-arch container images.
from longhorn.
Any news on this - would be great to be able to use Raspis for demonstrations.
from longhorn.
Pending on the ARM64 infrastructure setup.
from longhorn.
The CSI build infrastructure is currently in the process of being updated to support building ARM CSI sidecar images, tracked here:
kubernetes-csi/csi-release-tools#86 (comment)
https://github.com/orgs/kubernetes-csi/projects/29#card-42709613
To support ARM in the meantime, we would need to look into building our own CSI sidecar images.
There are also some community-hosted multi-arch CSI sidecar images here:
https://github.com/raspbernetes/multi-arch-images
from longhorn.
Uploaded a modified version of longhorn.yaml with arm support. Using the images from vsellier.
Might help someone while playing around with it.
EDIT:
Oops, I missed removing the KUBELET_ROOT_DIR env var. See k3s-io/k3s#840 for more details about it; just disable it when using the above YAMLs.
from longhorn.
Just curious, is it something we as community can help with? Would love to use it with my RPi 4 K3S cluster :)
from longhorn.
I've made some progress on this. @yasker If you have early comments, let me know. I submitted 2 PRs on related projects. Tests are failing with undefined symbol: aligned_allocate. I'm not too sure why at this point.
Also, I would need some help for setting up a Drone pipeline with ARM.
from longhorn.
Hi, will you also include support for ARMv7 (32-bit) in the v1.1.0 milestone?
If not, does supporting this architecture need a lot of work? I can propose some changes if it is not complicated to support it.
I have some Odroid HC-2 and would like to deploy Longhorn on them alongside my K3S cluster on RPi :)
from longhorn.
@dvdbot We haven't put this on the schedule yet, though I just noticed that this issue has 14 upvotes... I put it on the v0.7.1 release now, so we can start evaluating it.
from longhorn.
@VladoPortos a couple of days unless it slips, https://github.com/longhorn/longhorn/milestone/17.
from longhorn.
Just a small note: I have spent the last couple of days checking Longhorn alternatives (storage as pods) and pretty much nobody supports arm64 or 32-bit ARM. I mean, yes, there is one ARM version of OpenEBS which is 3 years old... Rook + Ceph has unofficial arm64 support, and the last push was one month ago, but who knows when the guy making it gives up on it... So having Longhorn come up with arm64 support would be huge for us single-board computer users :D (for example, I use a 9-node RPi4 cluster to model production examples and test random stuff on it)
from longhorn.
@kiddouk In fact, just installing the chart from https://github.com/longhorn/longhorn/tree/v1.1.0 (currently hosting v1.1.0-rc1) should work, since our image builds are multi-arch; you shouldn't need to change the images specifically for the ARM64 environment.
from longhorn.
+1
from longhorn.
@fuhrmannb The problem with ARMv7 (32 bits) is that we don't have the dev/test environment in our infrastructure. We cannot find reputable cloud providers with ARMv7 machines, and nightly testing is a requirement for us to claim it's officially supported. So ARMv7 is not supported in the v1.1.0 release.
Also, it's hard for us to even accept PRs for ARMv7 since we cannot validate it. Regarding ARM, ARM64 is our focus right now.
from longhorn.
Awesome job @yasker. Do you think we can open another issue for ARMv7 support?
from longhorn.
@VladoPortos Kubernetes PVCs are namespaced, and a pod can only use a PVC from the same namespace. From your explanation it sounds like you created the longhorn-docker-registry-pvc in the default namespace but are now trying to use it from the docker-registry namespace.
from longhorn.
Hi, I’m just writing in support as well. I really appreciate all the hard work put into arm64 support. It’s been working amazingly on my cluster of 5x Rock Pi 4A and 1x Rock Pi X (mixed arm64/amd64 cluster) so far. I’ve tried Rook Ceph and GlusterFS, but none fit the use case the way Longhorn does. I've been searching for a highly-available block storage solution to replace my failure-prone NFS shared folders but had not found any until now.
Huge kudos to the team for pioneering not just k3s but also Longhorn on bare-metal self-hosted setups, essentially democratizing the storage space away from just cloud providers and giving people a shot at hosting web services on their own hardware at the edge.
I’ll be writing something positive up shortly on my blog and medium to help spread the word. Thanks again for putting priority on this.
from longhorn.
@VladoPortos nice write-up, just leaving some notes below.
One can also use default disk annotations on the nodes to have additional disks set up automatically.
https://longhorn.io/docs/1.1.0/advanced-resources/default-disk-and-node-config/
Also one can change the default data path during installation, then the disks should be picked up automatically.
https://longhorn.io/docs/1.1.0/advanced-resources/deploy/customizing-default-settings/
from longhorn.
@boknowswiki you're absolutely right - armhf is armv7 and it's 32bit CPU arch.
from longhorn.
@yasker I imagine several of us here would be happy to help test this, once it's ready for that. Please let us know.
from longhorn.
@unixfox Yes, you can open another issue for ARMv7 support.
from longhorn.
Using K3s 1.18.12 on a Raspberry Pi 4 cluster on Ubuntu 20.04, I had to expand the above poddisruptionbudgets permission, as my requests were stuck in a forever-attaching state. I found error messages about not being able to create or watch poddisruptionbudgets.
I guess it can be reduced, but at the moment I have this:
- verbs: ["*"]
apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
from longhorn.
@cjyar we sync the content from the longhorn-manager repo to the longhorn repo before releases. So in the case where you want to run the current master branch it's best to use the deployments from the longhorn-manager repo.
from longhorn.
Thanks to @luckyluc74 for his how-to. 👏
As of a few days ago, you also need the shareManager that has been included in the chart for v1.1.0. For this, make sure to clone https://github.com/longhorn/longhorn/tree/v1.1.0 and then issue (this installs the local chart):
helm upgrade -i longhorn . -n longhorn-system \
--set image.longhorn.engine.tag=master-arm64 \
--set image.longhorn.manager.tag=master-arm64 \
--set image.longhorn.ui.tag=master-arm64 \
--set image.longhorn.instanceManager.tag=master-arm64 \
--set image.longhorn.shareManager.tag=master-arm64 \
--set csi.attacher.tag=v2.2.1-lh1-arm64 \
--set csi.provisioner.tag=v1.6.0-lh1-arm64 \
--set csi.nodeDriverRegistrar.tag=v1.2.0-lh1-arm64 \
--set csi.resizer.tag=v0.5.1-lh1-arm64 \
--set csi.snapshotter.tag=v2.1.1-lh1-arm64
from longhorn.
@ikaruswill you are right, wrong branch... :) It's up now. It took some time to spin up and one cluster rebuild :D Looking forward to some testing.
from longhorn.
@opticlear From the error message, it looks like you have IPv6 disabled on your cluster.
You can check whether IPv6 is enabled by following this guide:
https://www.golinuxcloud.com/linux-check-ipv6-enabled/
You can use sysctl as shown here to enable IPv6:
https://linuxconfig.org/how-to-disable-ipv6-address-on-ubuntu-20-04-lts-focal-fossa
Even if your network is not IPv6, with IPv6 disabled nginx cannot bind to the local IPv6 address. By default we try to bind to both IPv4 and IPv6, see:
https://github.com/longhorn/longhorn-ui/blob/master/nginx.conf.template#L13
@meldafrawi you should be able to replicate this by disabling IPv6 on the test nodes.
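A quick way to check the condition described above directly on a node: whether IPv6 is disabled system-wide (a minimal sketch, assuming the standard Linux sysctl path; 0 = enabled, 1 = disabled):

```shell
# Read the system-wide IPv6 disable flag from /proc, if the kernel exposes it.
f=/proc/sys/net/ipv6/conf/all/disable_ipv6
if [ -r "$f" ]; then
  state=$(cat "$f")
  echo "disable_ipv6=$state"
else
  state=absent
  echo "IPv6 support not present in this kernel"
fi
```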
from longhorn.
Any ETA?
from longhorn.
@JorritPosthuma Sure! Contributions are always welcome!
As the first step, we need to compile and run Longhorn Engine (the Longhorn data plane) on ARM. This step doesn't require Kubernetes. Longhorn currently uses the tgt iSCSI framework to expose a block device as an iSCSI target. We need to compile Longhorn Engine for ARM and get it running like this: https://github.com/longhorn/longhorn-engine#with-tgt-frontend . This should expose a block device with one engine (controller) and one replica.
If anyone wants to give it a try, I would be glad to help along the way.
from longhorn.
@legege Thanks for looking into this!
I've merged your PR for liblonghorn and tgt, and tried to update them (without introducing anything extra for ARM) for engine in PR longhorn/longhorn-engine#439 , but it failed due to the same reason.
https://drone-pr.rancher.io/longhorn/longhorn-engine/450/1/2
E ImportError: /go/src/github.com/longhorn/longhorn-engine/integration/.tox/py27/local/lib/python2.7/site-packages/directio.so: undefined symbol: aligned_allocate
I haven't seen this kind of error before, and I doubt it is caused by your changes - since this is the integration test for the engine, which is written in python.
The function the error refers to is an inline function in the directio Python package we're using:
inline void *
aligned_allocate (size_t size)
{
  return mmap(NULL, size, PROT_READ | PROT_WRITE,
              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
}
I haven't been able to figure out why this happened yet. Likely it's due to some implicit dependency changes. I will need to take a look at it later.
from longhorn.
Filed #968 for the directio.so: undefined symbol: aligned_allocate issue. Worked around it in the engine master branch.
from longhorn.
Thanks for your quick help today. I've rebased on master with your changes.
On my dev branch, tests are passing on AMD64, but are failing on ARM architecture. I haven't been able to find the root cause yet... sadly, it looks a bit random.
https://drone-pr.rancher.io/longhorn/longhorn-engine/462/2/2
from longhorn.
@legege Does it pass in your local environment? You can try e.g.
dapper ./scripts/integration-test -- -s -x -k test_backup
to test individual test cases if that helps. It looks like something was wrong with gRPC.
from longhorn.
@yasker I have the same issue locally (AWS A1 instance). The problem only manifests if I run more than one test. I will troubleshoot more later tonight.
from longhorn.
Any news on this? Will it be in v0.8.0?
from longhorn.
Any news on this? Will it be in v0.8.0?
Also curious: do any docs or project boards for help needed on this topic exist that I may have overlooked?
from longhorn.
@Xnyle @shaneutt We haven't gotten much further since last December. ARM support will not be in v0.8.0.
@legege from the community was working on it last time but seems blocked by some test failures in the engine.
from longhorn.
In the meantime, is it possible to at least exclude every non-beta.kubernetes.io/arch: amd64 node in the DaemonSets for 0.8.0?
I'm running bare-metal multi-arch clusters and don't really need the provisioner on the ARM SBC nodes, but those crash-looping Longhorn containers drag the SBCs down.
from longhorn.
@Xnyle Sure, but it's very late in the release cycle right now and we don't have an ARM environment for testing. So is it possible for you to submit a PR for that? YAMLs are in https://github.com/longhorn/longhorn-manager , the chart is in https://github.com/longhorn/longhorn . We can test it against AMD64 and make sure it will not break our test environment (which is pure AMD64).
from longhorn.
I don't have a multiarch cluster configured right now. When I did, I edited the Longhorn DaemonSets in the Rancher GUI to add a node scheduling rule with beta.kubernetes.io/arch = amd64, which added the following to the YAML:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/arch
            operator: In
            values:
            - amd64
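The same amd64-only scheduling can also be expressed with a simpler nodeSelector instead of the nodeAffinity rule the GUI generates. A dry-run sketch (the DaemonSet name is just an example; drop the `echo` to apply it for real):

```shell
# Merge patch adding a nodeSelector so the pods schedule only on amd64 nodes.
sel='{"spec":{"template":{"spec":{"nodeSelector":{"beta.kubernetes.io/arch":"amd64"}}}}}'
# Dry run: print the command instead of executing it.
echo kubectl -n longhorn-system patch daemonset longhorn-manager --type merge -p "$sel"
```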
I am looking forward to running Longhorn on my Rock Pi 4 cluster with NVMe SSD's. Keep up the good work.
from longhorn.
Thanks, that is working, BUT how do I do that during deployment?
There are only 3 "static" Deployments: longhorn-manager, longhorn-ui and longhorn-driver-deployer. But the instance-manager, csi-attacher and csi-provisioner are generated dynamically by one of those static deployments' pods. How do I tell them not to deploy on ARM nodes / change the engine-image and longhorn-csi-plugin DaemonSets?
from longhorn.
I'm desperately searching for a decent k3s FS for Raspi4 cluster... so pls pls pls continue the good work here!
Same, I'm looking to this for Rpi4 storage as well as storage for ARM EC2. ➕
from longhorn.
+1
from longhorn.
Might help someone while playing around with it.
Thanks @mgerhardy - it definitely did! Here's a kustomized version of the same:
https://gitlab.com/b-k8s/longhorn (sha 80b530b71279f2d7b82ab243871b6d51007307ae is identical to the tarball)
in case anyone prefers to consume it that way.
from longhorn.
Hi @mgerhardy, do vsellier's CSI images work on your ARM hosts? I have tried on an ARM host and they don't run. Based on the docker manifest information, it looks like they only have amd64 and don't support ARM. Is there something I missed or did wrong? Thanks. Below is the docker manifest information I got:
docker manifest inspect --verbose vsellier/csi-attacher:v2.0.0
{
  "Ref": "docker.io/vsellier/csi-attacher:v2.0.0",
  "Descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:6f4076e0e0ee6c2130a9ba1dab697ec9cfb415c2f12a3e0d5f6cad9f49f35d58",
    "size": 739,
    "platform": {
      "architecture": "amd64",
      "os": "linux"
    }
  },
  "SchemaV2Manifest": {
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 2202,
      "digest": "sha256:e2cbb7a8427d68ae8cc98277e82137c1f5fdbc3d12c28c1876015298fdf957cb"
    },
    "layers": [
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 655317,
        "digest": "sha256:9ff2acc3204b4093126adab3fed72de8f7bbfe332255b199c30b8b185fcf6923"
      },
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 14994036,
        "digest": "sha256:8ac74a7aa51e199dd90db8aff56c69020813bda8645c39c33531d51c90764808"
      }
    ]
  }
}
I have also tried the CSI images from this link, based on this comment. From the docker manifest it looks like they have ARM images, but once I run them on an ARM host they always give me a "format error", which means wrong architecture. I then tried them on an amd64 host, and they work... I am wondering, did I miss any parameter? In the end I found some images that work for me, but couldn't find a working resizer, so I built one. I will try to build all of them and update here later. Thanks.
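When checking whether an image actually ships ARM variants, grepping the inspect output for the platform fields is a quick filter. A minimal sketch using a saved manifest file (the filename and its content here are illustrative stand-ins for output captured as above):

```shell
# Save a (stand-in) manifest and pull out every architecture field from it.
cat > manifest.json <<'EOF'
{
  "platform": { "architecture": "amd64", "os": "linux" }
}
EOF
archs=$(grep -o '"architecture": "[^"]*"' manifest.json)
echo "$archs"
```

With a real multi-arch manifest list, the same grep prints one line per supported platform, so a missing arm64 entry is immediately visible.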
from longhorn.
Aside from the bug at #1847, my build of longhorn using the vsellier images works perfectly well for me and has been for a good month+. See link to repo posted above, or the full manifest at #1847
from longhorn.
Created a new issue #1857 for tracking down the root cause of the replica client and engine controller client behavior differences on amd64 and arm64.
from longhorn.
So, just a small clarification: will Longhorn work on armhf in the near future? If not, can you suggest any other storage options besides NFS for k3s? My original intention was to use GlusterFS, and I missed the point that the glusterfs plugin is not included in k3s. Is there any help I could provide for bringing Longhorn onto armhf?
from longhorn.
@ypyly, please correct me if I am wrong. Based on the ARM wiki page, armhf is ARMv7, which is a 32-bit CPU architecture. So it is arm, not arm64 (64-bit ARM like ARMv8). Currently we are focusing on arm64, but with this multi-arch support it will probably work for arm as well, though with no guarantee. I think we would like to support both arm64 and arm. @yasker, please advise on the ARM support plan. Thanks.
from longhorn.
@ypyly Yeah, armhf is a stretch goal at the moment, but we will try to support both in the next release.
from longhorn.
Pre-merged Checklist
- Does the PR include the explanation for the fix or the feature?
- Is the backend code merged (Manager, Engine, Instance Manager, BackupStore etc)?
  The PR is at
- Are the reproduce steps/test steps documented?
- Which areas/issues might this PR have potential impacts on?
  Area
  Issues
- If the fix introduces code for backward compatibility: has a separate issue been filed with the label release/obsolete-compatibility?
  The compatibility issue is filed at
- If labeled area/ui: has the UI issue been filed or is it ready to be merged?
  The UI issue/PR is at
- If labeled require/doc: has the necessary document PR been submitted or merged?
  The Doc issue/PR is at
- If labeled require/automation-e2e: has the end-to-end test plan been merged? Have QAs agreed on the automation test case?
  The automation skeleton PR is at
  The automation test case PR is at
- If labeled require/automation-engine: has the engine integration test been merged?
  The engine automation PR is at
- If labeled require/manual-test-plan: has the manual test plan been documented?
  The updated manual test plan is at
from longhorn.
First of all, thank you very much for contributing the Longhorn ARM support PRs. Since it's been a while and there have been lots of changes since your PRs, could you help rebase your PRs and submit them again? Or, if you want, we can do the rebase for you with your permission. Thanks.
A couple of notes:
- We have improved the engine integration tests; in this case, you shouldn't hit any test failure in the engine repo. You may remove that part from your PR, and please let us know if you hit any test failure.
- For the engine repo, I have added a change request. Please take a look.
- It looks like the UI repo already has the changes, so you can skip that one.
Thanks,
Bo
from longhorn.
@boknowswiki, my ARM set-up is disassembled at the moment and I plan to put it back together over the weekend. After that point I would be happy to rebase and resubmit my PRs but if you need the changes quicker than that, feel free to do it without me.
from longhorn.
@boknowswiki, my ARM set-up is disassembled at the moment and I plan to put it back together over the weekend. After that point I would be happy to rebase and resubmit my PRs but if you need the changes quicker than that, feel free to do it without me.
@ivang, thank you for your response. We would like to get this in ASAP. I will submit PRs which cherry-pick yours to keep your credit.
Thanks,
Bo
from longhorn.
Hi
The Installation Requirements should be updated with the supported architectures. I was looking forward to testing Longhorn on our RPi4 Kubernetes (k8s) cluster (on Ubuntu 20.04 64-bit)... and all I got was a failure with:
standard_init_linux.go:211: exec user process caused "exec format error"
from longhorn.
@VladoPortos currently in the v1.1.0 milestone. Were you testing master?
from longhorn.
@flaccid I did not get past this from the longhorn.io docs:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
From the file it seems like:
longhorn-manager:v1.0.2
longhorn-engine:v1.0.2
longhorn-ui:v1.0.2
from longhorn.
@VladoPortos please wait for this issue to close with v1.1.0 :)
from longhorn.
@flaccid any idea when that will be ? I'm not in hurry at the moment just curious :)
from longhorn.
The milestone date is set to RC1 at the moment. You can also check the longhorn-manager repo master branch for the latest development code/deployment, which contains ARM64 support. Or you can wait for the RC1 release later this week.
from longhorn.
Thanks @flaccid, I'll wait for 1.1.0
from longhorn.
@VladoPortos same here, I've spent a lot of time searching. Since I have 4 HC-2s for storage, I actually found an alternative (which is not so pretty, but it works): Kadalu + external GlusterFS config.
from longhorn.
@ypyly Yeah, I was running GlusterFS as well (and NFS before that :D): 6 worker nodes, each with its own external storage (USB3-attached), and each was also a GlusterFS server. Gluster was exporting one volume with 6x replicas (so each node had the files locally), and at the same time each node was also a client mounting the GlusterFS using localhost as the Gluster server... Then I could use that mounted folder as local storage and it was distributed to each node. This works, but GlusterFS is not recommended for things like databases and "live data", more for static files...
from longhorn.
@VladoPortos I tried to avoid using NFS at all costs. I have an 8-worker-node cluster with a mix of x86 (master and workers) and armhf (dedicated to storage). I've been able to configure nodes with Kadalu to provision persistent volumes onto GlusterFS; however, I haven't tested the performance yet.
P.S. This discussion is probably out of scope for this FR.
from longhorn.
@yasker I imagine several of us here would be happy to help test this, once it's ready for that. Please let us know.
Indeed, I have just rebuild my cluster, waiting for this one :D
from longhorn.
@cjyar I'm testing it with this helmfile, where ../charts/longhorn is the path to a local copy of https://github.com/longhorn/longhorn/tree/v1.1.0/chart :
releases:
- name: longhorn
  namespace: longhorn-system
  installed: true
  chart: ../charts/longhorn
  values:
  - image:
      longhorn:
        engine:
          tag: master-arm64
        manager:
          tag: master-arm64
        ui:
          tag: master-arm64
        instanceManager:
          tag: master-arm64
      csi:
        attacher:
          tag: v2.2.1-lh1
        provisioner:
          tag: v1.6.0-lh1
        nodeDriverRegistrar:
          tag: v1.2.0-lh1
        resizer:
          tag: v0.5.1-lh1
        snapshotter:
          tag: v2.1.1-lh1
      pullPolicy: "Always"
    defaultSettings:
      defaultDataPath: "/mnt/storage"
    ingress:
      enabled: true
      host: longhorn.k8s.home.lex.la
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
Btw, it doesn't look that useful to me yet. I'm using it with a Kingston DataTraveler G4 32GB as storage, not an enterprise SSD, and the 16GB PVC for Prometheus has been creating for more than 20 hours, still pending.
@yasker can you give me a clue on how to find the bottleneck?
UPD: 3x RPi 4 with 8GB RAM, a Kingston DataTraveler G4 32GB in each, 1Gb Ethernet link
from longhorn.
@fuhrmannb The problem with ARMv7 (32 bits) is we don't have the dev/test environment in our infrastructure. We cannot find reputable cloud providers with ARMv7 machines. And nightly testing is a requirement for us to claim it's officially supported. So ARMv7 is not supported in the v1.1.0 release.
Also, it's hard for us to even accept PRs for ARMv7 since we cannot validate it. Regarding ARM, ARM64 is our focus right now.
You should be able to deploy an armhf/armv7 pod on aarch64 infrastructure; they are usually configured backward-compatible, similar to running an x86 OS on x86_64. I do it with gitlab-runner running 32-bit on my Rock Pi 4 cluster.
I have only skimmed the issue, as I am interested in 32-bit ARM support for my Olimex Lime2 SBCs, but I will take a look at the testing when I have time to see if we can get something to test arm32.
from longhorn.
I learned the hard way that the correct way to install Longhorn is using longhorn-manager. The docs on longhorn.io are wrong. When I installed on my arm64 k3s cluster using longhorn-manager, it was flawless.
from longhorn.
@yasker thanks so much, I'm going to give it a try. I'm so frustrated rebuilding Rook+Ceph (it destroys itself after two reboots and never comes back).
from longhorn.
So I did:
git clone -b v1.1.0-rc1 https://github.com/longhorn/longhorn.git
helm install longhorn ./longhorn/chart/ --namespace longhorn-system
But the pods are failing with:
time="2020-12-13T14:27:21Z" level=fatal msg="Error starting manager: require share-manager-image"
Then I tried the same with:
helm upgrade -i longhorn . -n longhorn-system \
--set image.longhorn.engine.tag=master-arm64 \
--set image.longhorn.manager.tag=master-arm64 \
--set image.longhorn.ui.tag=master-arm64 \
--set image.longhorn.instanceManager.tag=master-arm64 \
--set image.longhorn.shareManager.tag=master-arm64 \
--set csi.attacher.tag=v2.2.1-lh1-arm64 \
--set csi.provisioner.tag=v1.6.0-lh1-arm64 \
--set csi.nodeDriverRegistrar.tag=v1.2.0-lh1-arm64 \
--set csi.resizer.tag=v0.5.1-lh1-arm64 \
--set csi.snapshotter.tag=v2.1.1-lh1-arm64
With the same result. Why is it missing share-manager-image, and how can I provide it?
from longhorn.
@VladoPortos It seems like you're still on the master branch.
A quick check over at https://github.com/longhorn/longhorn/blob/v1.1.0/chart/values.yaml will reveal that you're not pulling the right values.yaml.
It also seems to stem from the fact that you're specifying the wrong branch name. Use v1.1.0 as @yasker mentioned here: #6 (comment)
from longhorn.
Also, re-pull the images on the node if you're using master. The default image pull policy is IfNotPresent, which will not update the master image.
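Instead of deleting cached images on each node by hand, the pull policy can be overridden at install time. A dry-run sketch, assuming the chart's image.pullPolicy key (the same key used in the helmfile example in this thread):

```shell
# Build the helm command that forces every Longhorn image to be re-pulled.
cmd='helm upgrade -i longhorn . -n longhorn-system --set image.pullPolicy=Always'
# Dry run: print the command; run it from the chart directory to apply.
echo "$cmd"
```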
from longhorn.
Guys, I need a nudge. I can create PVCs and PVs super fine, the GUI is brilliant... a test pod attached the PVC just fine. But when I try to create a pod in a namespace other than default, it gets stuck with:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m43s default-scheduler persistentvolumeclaim "longhorn-docker-registry-pvc" not found
Warning FailedScheduling 5m43s default-scheduler persistentvolumeclaim "longhorn-docker-registry-pvc" not found
I know this is probably not a Longhorn problem but some kind of permission problem. Any ideas what to do?
Longhorn is in its own namespace "longhorn".
The default namespace works fine.
I created a new namespace "docker-registry" and it can't find the PVC (the same PVC mounted fine in default).
from longhorn.
@joshimoo you are right :D Fixed now... I love Longhorn so far.
from longhorn.
Hi 👍
I installed v1.1.0 using the steps here.
Everything deployed nicely (although I haven't played with it yet), except for the UI, which gives the following error:
2020/12/18 16:20:51 [warn] 1#1: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:7
nginx:[warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:7
2020/12/18 16:20:51 [emerg] 1#1: socket() [::]:8000 failed (97: Address family not supported by protocol)
nginx: [emerg] socket() [::]:8000 failed (97: Address family not supported by protocol)
Thanks
from longhorn.
Hi @opticlear
Can you give us more details about your environment setup? I couldn't reproduce the error.
from longhorn.
Hi, thanks for the quick response.
Sorry, yes of course
It's a Pi4 cluster running k3s version v1.18.12+k3s1, on Ubuntu 20.10 server 64-bit.
I am currently running Rancher’s local path provisioner with a remote NFS volume. I have some pods using the NFS volume and some others using local storage. I installed the longhorn chart alongside this with the intent of migrating the pods over to the longhorn volume to test.
If that worked I was going to recreate the cluster using Longhorn only, which I will probably be doing anyway.
Thanks
from longhorn.
@opticlear From the error message it makes me think that you have ipv6 disabled on your cluster.
You can check if ipv6 is enabled by following this guide:
https://www.golinuxcloud.com/linux-check-ipv6-enabled/
You can use sysctl as here to enable ipv6:
https://linuxconfig.org/how-to-disable-ipv6-address-on-ubuntu-20-04-lts-focal-fossa
Even if your network is not ipv6, by disabling ipv6 the nginx cannot bind to the local ipv6 address.
By default we try to bind to ip4 and ip6, see:
https://github.com/longhorn/longhorn-ui/blob/master/nginx.conf.template#L13
@meldafrawi you should be able to replicate this, by disabling ipv6 on the test nodes.
Strange, on my cluster I disabled IPv6 on purpose, also on Ubuntu 20.10, and this issue did not occur. (Maybe it depends on how it is disabled; it was not straightforward to get rid of it :D)
from longhorn.
@opticlear From the error message it makes me think that you have ipv6 disabled on your cluster.
You can check if ipv6 is enabled by following this guide:
https://www.golinuxcloud.com/linux-check-ipv6-enabled/
You can use sysctl as here to enable ipv6:
https://linuxconfig.org/how-to-disable-ipv6-address-on-ubuntu-20-04-lts-focal-fossa
Even if your network is not ipv6, by disabling ipv6 the nginx cannot bind to the local ipv6 address.
By default we try to bind to ip4 and ip6, see:
https://github.com/longhorn/longhorn-ui/blob/master/nginx.conf.template#L13
@meldafrawi you should be able to replicate this, by disabling ipv6 on the test nodes.
Thanks @joshimoo, yes that will be it! I'll change my provisioning script.
from longhorn.
Added longhorn to my home kubernetes cluster setup guide :) https://rpi4cluster.com/k3s/k3s-storage-setting/ keep it up guys, love it !
from longhorn.
@joshimoo thanks for the notes, much appreciated. I will add them. I still need to finish and re-read the whole thing, since some parts were written while I was trying to get k8s working :D but now my wife is not giving me much time for it :D with all the holiday stuff.
from longhorn.
@yasker What is the current maturity level of Longhorn on arm64? In the latest docs at https://longhorn.io/docs/1.2.0/what-is-longhorn/ it is still marked experimental. Is that still the case, or have the docs just not been updated yet? Thanks
from longhorn.
Can anyone provide step-by-step instructions on how to install Longhorn on an ARM-based system?
I have tried doing what @luckyluc74 suggested, but I am getting ImagePullBackOff on all pods in the longhorn-system namespace.
When I run kubectl get pods -n longhorn-system, this is what I see:
NAME READY STATUS RESTARTS AGE
longhorn-post-upgrade-hv8g8 0/1 ImagePullBackOff 0 2m42s
longhorn-manager-sqq9s 0/1 ImagePullBackOff 0 11m
longhorn-driver-deployer-6db849975f-f8hgj 0/1 Init:ImagePullBackOff 0 11m
longhorn-manager-mjwkl 0/1 ImagePullBackOff 0 11m
longhorn-ui-6f547c964-7g57d 0/1 ImagePullBackOff 0 11m
longhorn-manager-q8qxs 0/1 ImagePullBackOff 0 11m
When I run kubectl describe pod longhorn-post-upgrade-hv8g8 -n longhorn-system, I see:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned longhorn-system/longhorn-post-upgrade-hv8g8 to raspberrypi-004dae06
Normal Pulling 3m21s (x4 over 4m47s) kubelet, raspberrypi-004dae06 Pulling image "longhornio/longhorn-manager:v1.2.3"
Warning Failed 3m20s (x4 over 4m45s) kubelet, raspberrypi-004dae06 Failed to pull image "longhornio/longhorn-manager:v1.2.3": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/longhornio/longhorn-manager:v1.2.3": no match for platform in manifest: not found
Warning Failed 3m20s (x4 over 4m45s) kubelet, raspberrypi-004dae06 Error: ErrImagePull
Warning Failed 2m54s (x6 over 4m45s) kubelet, raspberrypi-004dae06 Error: ImagePullBackOff
Normal BackOff 2m41s (x7 over 4m45s) kubelet, raspberrypi-004dae06 Back-off pulling image "longhornio/longhorn-manager:v1.2.3"
from longhorn.
@anthonybudd Sounds like you are running a 32-bit ARM OS, which is not supported; ensure you are on an ARMv8 (arm64/aarch64) platform.
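A quick way to confirm whether a node is actually 64-bit is to check the architecture the kernel reports — a minimal check, not specific to Longhorn:

```shell
# Print the machine architecture reported by the kernel.
# "aarch64" means a 64-bit ARM kernel, which the arm64 images require;
# "armv7l" or "armv6l" means 32-bit ARM, which those images do not provide.
uname -m
```

Note that some Raspberry Pi OS builds ship a 32-bit kernel even on Pi 4 hardware, so ARMv8 silicon alone is not enough. You can also list which platforms a tag actually provides with `docker manifest inspect longhornio/longhorn-manager:v1.2.3` (this may require enabling experimental CLI features on older Docker versions).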
from longhorn.