
Comments (16)

Dr4ik avatar Dr4ik commented on July 20, 2024

Try to do:

/etc/init.d/tgt stop

There is some conflict between tgt and iscsi-target, I think.

from openstack-grizzly-install-guide.

bilelmsekni avatar bilelmsekni commented on July 20, 2024

Please be more specific and try to determine the problem (or the log errors) before pasting all of this as-is.


maller avatar maller commented on July 20, 2024

I can't attach a volume to my instances. I can start instances without a volume, that works fine, and I can create volumes.
My setup: 3 nodes (controller, network, and compute); I followed the multi-node guide.

If I attach a volume to any instance, it stays in "attaching" status. I tested via both Horizon and the CLI and get the same failure.

nova-compute.log from compute node
2013-04-24 10:55:21.182 ERROR nova.openstack.common.rpc.amqp [req-058903d8-2ff9-4fd0-9251-ab09b735b584 7e8cef045e0e46749fdb8fc7a0d91317 f438353be1494506b2950a5c9d79d374] Exception during message handling
TRACE nova.openstack.common.rpc.amqp ConnectionError: [Errno 101] ENETUNREACH

@Dr4ik
i have stop the tgt service, but the problem still exists


hrushig avatar hrushig commented on July 20, 2024

I am seeing exactly this error.
2013-04-24 10:55:21.182 ERROR nova.openstack.common.rpc.amqp [req-058903d8-2ff9-4fd0-9251-ab09b735b584 7e8cef045e0e46749fdb8fc7a0d91317 f438353be1494506b2950a5c9d79d374] Exception during message handling
TRACE nova.openstack.common.rpc.amqp ConnectionError: [Errno 101] ENETUNREACH

It seems to me that we need to specify somewhere the Cinder details in nova-compute.conf.


bilelmsekni avatar bilelmsekni commented on July 20, 2024

You need to mount the cinder-volumes group inside each compute node!
If you run vgdisplay on a compute node and you don't see the
cinder-volumes volume group, that means you can't connect your VMs to your
volumes!
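A minimal sketch of that check, with `check_vg` as a hypothetical helper that inspects captured `vgdisplay` output (on a real compute node you would feed it "$(vgdisplay)" instead of the sample string):

```shell
# Decide whether the cinder-volumes volume group is visible,
# given the text output of `vgdisplay`.
check_vg() {
  if printf '%s\n' "$1" | grep -q 'VG Name.*cinder-volumes'; then
    echo "cinder-volumes present"
  else
    echo "cinder-volumes missing"
  fi
}

# Sample vgdisplay output for illustration only.
sample='  --- Volume group ---
  VG Name               cinder-volumes'
check_vg "$sample"
```

If this prints "cinder-volumes missing" on your compute node, that matches the failure mode described above.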


hrushig avatar hrushig commented on July 20, 2024

Adding this line to the compute node's nova.conf resolved the issue:
cinder_catalog_info=volume:cinder:internalURL

I understand it adds overhead on the management network (10.10.10.x), but I can't create and dedicate a volume on each compute node. I guess your solution would be to add multiple storage backends so that each compute node has its local Cinder volume. Can you please suggest that in your documentation?
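For reference, the value of this option is a colon-separated triple: service type, service name, and endpoint type (this reading of the Grizzly-era Nova option format is my interpretation, not quoted from the guide). A quick sketch of how it splits:

```shell
# cinder_catalog_info is a service_type:service_name:endpoint_type triple
# (interpretation of Nova's Grizzly-era option format; hedged, not quoted
# from the guide).
value="volume:cinder:internalURL"
IFS=: read -r svc_type svc_name endpoint_type <<EOF
$value
EOF
echo "service_type=$svc_type service_name=$svc_name endpoint_type=$endpoint_type"
```

Note that the third field is an endpoint type (internalURL/publicURL), not an IP address, which is why a value ending in an IP would not work.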


maller avatar maller commented on July 20, 2024

Thanks, it works now with this line.


hrushig avatar hrushig commented on July 20, 2024

Don't we need to document this?


bilelmsekni avatar bilelmsekni commented on July 20, 2024

Indeed, Hrushig! We need to!
But can you check this for me, please: if I want to add this to nova.conf on the compute node, it should become
cinder_catalog_info=volume:cinder-volumes:10.10.10.51
right?


hrushig avatar hrushig commented on July 20, 2024

I had to make two changes:

Control node (cinder.conf): don't replace internalURL with an IP address; just keep it as is.

Cinder

volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL

Compute node (nova.conf):
iscsi_ip_address=10.10.10.51


bilelmsekni avatar bilelmsekni commented on July 20, 2024

By control node, I think you mean a change in nova.conf and not cinder.conf,
am I correct?

Best regards,
Bilel


hrushig avatar hrushig commented on July 20, 2024

Sorry; to be clear on the instructions, here is what we need to get a Cinder volume attaching to an active VM instance:

  1. On the controller node, edit /etc/cinder/cinder.conf and add the line below.
    This makes a newly created volume's IQN/portal record the management IP of the controller node. By default, it uses the external IP of the controller node, which is not reachable from the compute node.

iscsi_ip_address=10.10.10.51

  2. On the compute node(s), edit /etc/nova/nova.conf and add the line below.
    This tells Nova to look at the internal URL endpoint of the Cinder service. By default, it uses the external IP of the Cinder node.

cinder_catalog_info=volume:cinder:internalURL
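Taken together, the two edits above amount to the following fragments (file paths and the management IP are the ones quoted in this thread; adjust the IP to your own controller's management address):

```ini
# /etc/cinder/cinder.conf  (controller node)
iscsi_ip_address = 10.10.10.51

# /etc/nova/nova.conf  (each compute node)
cinder_catalog_info = volume:cinder:internalURL
```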

Additionally (and optionally; I'm not sure everyone will hit this issue), I had to change my Cinder configuration to use tgt. Here are the instructions and the explanation:

If you are unable to attach an existing volume to an instance, you must switch the iscsi_helper mode in cinder.conf from iet to tgt. The reason is described in this Launchpad bug: https://bugs.launchpad.net/cinder/+bug/1096009

Error seen in nova-compute.log:

iscsiadm: No portal found

Explanation

I noticed that some Cinder-related install guides instruct you to install the "iscsitarget" package (that's the IET iSCSI target, not TGT) and enable its user-space components in "/etc/default/iscsitarget". Combined with the "cinder-volume" package, which (in Ubuntu 12.10) already depends on tgt by default, this gives you two active iSCSI target daemons (yes, that's one too many).
In this case it's possible (and most likely) that the IET iSCSI target starts earlier than the TGT one and blocks the default iSCSI port (3260), which prevents TGT from starting successfully. When this happens, Cinder's tgt-admin commands are sent to the active IET iSCSI target, which doesn't work (IET and TGT are two completely different iSCSI frameworks). This doesn't affect LVM volume creation, because those are separate calls from the tgt-admin ones, so volumes are created successfully, as you stated.

Check if, and more importantly which, iSCSI target daemon is active on your system:

ss -tuplen | grep 3260

It should show an active socket on TCP port 3260 owned by a tgtd process.

If you're using Ubuntu 12.10, check whether you have the package "iscsitarget" installed. Remove it, along with the kernel-module package "iscsitarget-dkms" if installed; after that, restart all related OpenStack services, or just reboot to be absolutely sure...

Solution
The instructions are as follows:
0. Terminate all instances and volumes.

  1. Uninstall the iscsitarget components:
    apt-get purge iscsitarget
    apt-get purge iscsitarget-dkms
  2. Remove the contents of:
    /etc/iscsi/nodes
    /etc/send_targets/
  3. Modify cinder.conf to change the iscsi_helper mode:
    iscsi_helper = tgtadm
  4. Restart the nova, cinder, and open-iscsi services:
    service open-iscsi restart
    cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
    cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done


drolfe avatar drolfe commented on July 20, 2024

Hrushig is likely correct. I'm using Ubuntu deployed with a MAAS server, and tgt is either installed by default or comes with Cinder; I had the problem of tgt starting first and already holding the port. I haven't tested my volumes yet, but I've always used tgt in past installs. In fact, if you run apt-get remove -s tgt, you will see it also wants to remove cinder-volume, so by the look of the dependencies they are packaged together in Ubuntu.


bilelmsekni avatar bilelmsekni commented on July 20, 2024

I have verified this, and it's working now thanks to hrushig's instructions! Good job, man! I have added you to the official contributors' list 👍


 avatar commented on July 20, 2024

Hi all.

I had the same problem. I could create volumes, but they didn't attach to an existing instance.
The workaround for that issue was:

1 - In Horizon, delete the volume that was created, because this one was created using tgt
2 - Stop the tgt service --> tgt stop
3 - Restart iSCSI --> service iscsitarget restart ; service open-iscsi restart
4 - In Horizon, create a new volume
5 - Then attach it to an instance
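The service restarts in steps 2-3 can be sketched as a dry run (service names are the ones quoted in this thread; swap the `echo` for direct execution on a real node):

```shell
# Dry-run sketch of steps 2-3 above; prints each command instead of
# running it, so it is safe to execute anywhere.
for cmd in "service tgt stop" \
           "service iscsitarget restart" \
           "service open-iscsi restart"; do
  echo "would run: $cmd"
done
```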

I hope that this helps

Cheers


rubber-ant avatar rubber-ant commented on July 20, 2024

@mseknibilel could you fix that in the main guide, please!

