
ceph-cinder-demo's Introduction

OpenShift and KubeVirt Multinode Demo

Deploying OpenShift with Vagrant

This will create an OpenShift cluster with one master and one node via Vagrant:

OpenShift

make cluster-up
make cluster-openshift

The deployment can take quite a while, and you will not see much output from the OpenShift installer while it is running. To see what is going on inside the installer, run:

vagrant ssh master -c "tailf /ansible.log"

Once the deployment is done, you can talk directly to the master via ./oc.sh:

$ ./oc.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3h        v1.9.1+a0ce1bc657
node      Ready     <none>    3h        v1.9.1+a0ce1bc657

Note: Right now we only support having a single master.

Storage Provisioner

To deploy the storage, run:

make cluster-storage

Once it is done, you can see the storage deployed:

$ ./oc.sh get pods -n kube-system
NAME          READY     STATUS    RESTARTS   AGE
ceph-demo-0   7/7       Running   2          52m

To test the installation, start a pod which requests storage from that provisioner:

$ ./oc.sh create -f examples/storage-pod.yaml
$ ./oc.sh get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
demo-pvc   Bound     pvc-2d1a98e1-12ef-11e8-a1c4-525400cc240d   1Gi        RWO            standalone-cinder   24m
$ ./oc.sh get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM              STORAGECLASS        REASON    AGE
pvc-2d1a98e1-12ef-11e8-a1c4-525400cc240d   1Gi        RWO            Delete           Bound     default/demo-pvc   standalone-cinder             24m
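
For reference, a minimal sketch of what a manifest like examples/storage-pod.yaml might contain (the file in the repo is authoritative; the busybox image and sleep command are illustrative): a PVC against the standalone-cinder class, plus a pod that mounts it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: standalone-cinder
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: busybox              # illustrative test image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data          # the provisioned volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc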

KubeVirt

To deploy KubeVirt, run:

make cluster-kubevirt

Once it is done, you can see all KubeVirt pods deployed:

$ ./oc.sh get pods -n kube-system
NAME                              READY     STATUS    RESTARTS   AGE
virt-controller-66d948c84-kmfxs   0/1       Running   0          17m
virt-controller-66d948c84-mmmnx   1/1       Running   0          17m
virt-handler-64sjz                1/1       Running   0          17m
virt-handler-ps8ds                1/1       Running   0          17m

To test the installation, start a VM with Alpine:

$ ./oc.sh create -f examples/vm.yaml
$ sleep 300 # A lot of images need to be pulled when the first VM starts on a node
$ ./oc.sh get vms -o yaml | grep phase
    phase: Running
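
A minimal sketch of what examples/vm.yaml might look like, assuming the KubeVirt v1alpha1 API of that era and its Alpine registry-disk demo image (both assumptions; the file in the repo is authoritative):

apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  domain:
    resources:
      requests:
        memory: 64M             # small guest; enough for Alpine
    devices:
      disks:
      - name: registrydisk
        volumeName: registryvolume
        disk:
          bus: virtio
  volumes:
  - name: registryvolume
    registryDisk:               # ephemeral disk shipped inside a container image
      image: kubevirt/alpine-registry-disk-demo   # assumed demo image name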

To connect to a VM:

./oc.sh console demo-vm  # Serial console
./oc.sh vnc demo-vm      # VNC

To import and run a CirrOS VM:

$ ./oc.sh create -f examples/import-cirros.yaml
$ sleep 120  # The cirros disk image downloads into a PVC
$ ./oc.sh get pod disk-importer  # Repeat until pod is Completed
$ ./oc.sh create -f examples/vm-cirros-clone.yaml
$ ./oc.sh get vms -o yaml | grep phase
    phase: Running
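
A sketch of the shape of examples/vm-cirros-clone.yaml, under the same v1alpha1 assumption: a VM whose disk is backed by the PVC that the disk-importer pod filled. The VM and PVC names here are hypothetical; the repo's files are authoritative.

apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: cirros-vm               # hypothetical name
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: pvcdisk
        volumeName: pvcvolume
        disk:
          bus: virtio
  volumes:
  - name: pvcvolume
    persistentVolumeClaim:
      claimName: cirros-pvc     # hypothetical; whichever PVC import-cirros.yaml created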

ManageIQ - kubevirt provider configuration

To configure the KubeVirt provider, run:

make cluster-miq

Once it is done, you can see that the service account was created:

./oc.sh get sa
NAME       SECRETS   AGE
builder    2         20m
default    3         19m
deployer   2         20m
miq        2         19s
registry   3         14m
router     2         14m

and that the offlinevms custom resource was created:

./oc.sh get offlinevms
No resources found.
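
As a rough illustration, an offline VM object of that era might look like the sketch below. This is heavily hedged: the exact apiVersion and spec fields depend on the KubeVirt version deployed here, and all names are hypothetical.

apiVersion: kubevirt.io/v1alpha1
kind: OfflineVirtualMachine
metadata:
  name: demo-ovm                # hypothetical name
spec:
  running: false                # the definition exists but the VM is not started
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 64M
        devices:
          disks:
          - name: registrydisk
            volumeName: registryvolume
            disk:
              bus: virtio
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/alpine-registry-disk-demo   # assumed demo image name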

To configure the provider in ManageIQ, two tokens are needed. Retrieve them by running:

./oc.sh sa get-token -n management-infra management-admin

./oc.sh sa get-token -n default miq

Deploying OpenShift on arbitrary nodes

First create an inventory:

[nodes]
node1 ansible_host=192.168.200.4 ansible_user=root
node2 ansible_host=192.168.200.3 ansible_user=root

[master]
master ansible_host=192.168.200.2 ansible_user=root

Save it as myinventory, then run:

ansible-playbook -i myinventory openshift.yaml
ansible-playbook -i myinventory storage.yaml
ansible-playbook -i myinventory kubevirt.yaml

ceph-cinder-demo's People

Contributors

aglitke, karmab, pkliczewski, rmohr


ceph-cinder-demo's Issues

Can't mount PVC in storage-pod sample

When running the examples/storage-pod.yaml sample, I get this error:

2018-02-21 19:31:02 +0000 UTC   2018-02-21 19:31:02 +0000 UTC   1         demo-pod.15156e0fa678839d   Pod                 Warning   FailedMount   kubelet, 192.168.201.2   MountVolume.WaitForAttach failed for volume "pvc-9b939c0e-173a-11e8-8e2a-525400f9a98e" : error: exit status 110, rbd output: 2018-02-21 19:26:02.752493 7fbc3a35ed40 -1 did not load config file, using default settings.
2018-02-21 19:26:02.756933 7fbc3a35ed40 -1 Errors while parsing config file!
2018-02-21 19:26:02.756939 7fbc3a35ed40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.756940 7fbc3a35ed40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.756940 7fbc3a35ed40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.758582 7fbc3a35ed40 -1 Errors while parsing config file!
2018-02-21 19:26:02.758589 7fbc3a35ed40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.758589 7fbc3a35ed40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.758590 7fbc3a35ed40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2018-02-21 19:26:02.797061 7fbc3a35ed40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2018-02-21 19:31:02.798167 7fbc3a35ed40  0 monclient(hunting): authenticate timed out after 300
2018-02-21 19:31:02.798850 7fbc3a35ed40  0 librados: client.cinder authentication error (110) Connection timed out
rbd: couldn't connect to the cluster!

Note that I had to create the secret standalone-cinder-cephx-secret in order to launch the pod at all. I used the existing definition from templates/provisioner.yml (which for some reason did not get created, though other resources such as the ceph-demo StatefulSet and the storage class did).

Starting the Docker daemon failed on the Vagrant node

When I execute:

➜  ceph-cinder-demo git:(master) ✗ make cluster-openshift                                                         

..
..
TASK [common : Start Docker] ***************************************************
fatal: [node]: FAILED! => {"changed": false, "msg": "Unable to start service docker: Job for docker.service failed because the control process exited with error code. See \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"}
	to retry, use: --limit @/home/shiywang/ceph-cinder-demo/openshift.retry

PLAY RECAP *********************************************************************
node                       : ok=7    changed=0    unreachable=0    failed=1   

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
make: *** [cluster-openshift] Error 1

then, after vagrant ssh node:

[vagrant@node ~]$ sudo journalctl -xe
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit user-1000.slice has finished starting up.
-- 
-- The start-up result is done.
Mar 12 07:54:20 node systemd[1]: Starting User Slice of vagrant.
-- Subject: Unit user-1000.slice has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit user-1000.slice has begun starting up.
Mar 12 07:54:20 node systemd[1]: Started Session 1 of user vagrant.
-- Subject: Unit session-1.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit session-1.scope has finished starting up.
-- 
-- The start-up result is done.
Mar 12 07:54:20 node systemd-logind[653]: New session 1 of user vagrant.
-- Subject: A new session 1 has been created for user vagrant
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
-- 
-- A new session with the ID 1 has been created for the user vagrant.
-- 
-- The leading process of the session is 1263.
Mar 12 07:54:20 node systemd[1]: Starting Session 1 of user vagrant.
-- Subject: Unit session-1.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit session-1.scope has begun starting up.
Mar 12 07:54:20 node sshd[1263]: pam_unix(sshd:session): session opened for user vagrant by (uid=0)
Mar 12 07:54:59 node sudo[1297]:  vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/bin/journalctl -xe
[vagrant@node ~]$ 
[vagrant@node ~]$ ls
[vagrant@node ~]$ systemctl restart docker
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password: 
[vagrant@node ~]$ sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[vagrant@node ~]$ journalctl -xe
Hint: You are currently not seeing messages from other users and the system.
      Users in the 'systemd-journal' group can see all messages. Pass -q to
      turn off this notice.
No journal files were opened due to insufficient permissions.
[vagrant@node ~]$ sudo journalctl -xe
Mar 12 07:55:11 node systemd[1]: Starting Docker Storage Setup...
-- Subject: Unit docker-storage-setup.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit docker-storage-setup.service has begun starting up.
Mar 12 07:55:12 node systemd[1]: Started Docker Storage Setup.
-- Subject: Unit docker-storage-setup.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit docker-storage-setup.service has finished starting up.
-- 
-- The start-up result is done.
Mar 12 07:55:12 node systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit docker.service has begun starting up.
Mar 12 07:55:12 node dockerd-current[1358]: time="2018-03-12T07:55:12.054005862Z" level=warning msg="could not change group /var/run/docker.sock to docker: gr
Mar 12 07:55:12 node dockerd-current[1358]: time="2018-03-12T07:55:12.056469594Z" level=info msg="libcontainerd: new containerd process, pid: 1364"
Mar 12 07:55:13 node dockerd-current[1358]: Error starting daemon: SELinux is not supported with the overlay2 graph driver on this kernel. Either boot into a 
Mar 12 07:55:13 node systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 12 07:55:13 node systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit docker.service has failed.
-- 
-- The result is failed.
Mar 12 07:55:13 node systemd[1]: Unit docker.service entered failed state.
Mar 12 07:55:13 node systemd[1]: docker.service failed.
Mar 12 07:55:13 node polkitd[685]: Unregistered Authentication Agent for unix-process:1313:6991 (system bus name :1.18, object path /org/freedesktop/PolicyKit
Mar 12 07:55:25 node sudo[1376]:  vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/bin/journalctl -xe
[vagrant@node ~]$ 

ceph-cinder-demo master branch HEAD

Make the storageClass the default

It would be very useful to have the standalone-cinder class marked as the default. The workaround is to run:

$ oc patch storageclass standalone-cinder -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
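
The declarative equivalent, i.e. the annotation the patch above sets on the StorageClass (the provisioner name is an assumption; the actual value comes from templates/provisioner.yml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standalone-cinder
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the default
provisioner: openstack.org/standalone-cinder              # assumed; check templates/provisioner.yml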

The PVC is bound and ready quickly, but it takes between 1 and 4 minutes to mount it

VM-Pod event:

  Warning  FailedMount            1m    kubelet, node      Unable to mount volumes for pod "virt-launcher-demo-vm-8t6h9_default(04d9fb01-16e5-11e8-b9f8-525400b8e418)": timeout expired waiting for volumes to attach/mount for pod "default"/"virt-launcher-demo-vm-8t6h9". list of unattached/unmounted volumes=[pvcvolume]

After a few retries from the kubelet it is finally mounted and the VM starts.

Slow storage

When using the storage, writing the same amount of data takes twice as long as writing to the local disk on my laptop.
