
openstack-grizzly-install-guide's Introduction

OpenStack-Grizzly-Install-Guide

You want to install OpenStack Grizzly and you don't know how? This is what you are looking for!

It's easy, simple and tested. Can't wait? Go check it out by yourself:


Guide Branch | Single/Multi Node | Quantum plugin | Direct Guide Link
master | Single | Linux Bridge | https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst
OVS_SingleNode | Single | OpenVSwitch | https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_SingleNode/OpenStack_Grizzly_Install_Guide.rst
OVS_MultiNode | Multi | OpenVSwitch | https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
Nicira_SingleNode | Single | Nicira NVP | https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/Nicira_SingleNode/OpenStack_Grizzly_Install_Guide.rst
SandBox on Virtual Machine | Single/Multi | N/A | https://github.com/dguitarbite/OpenStack-Grizzly-VM-SandBox-Guide/blob/master/SandBox-Single-Node.rst

openstack-grizzly-install-guide's People

Contributors

alfreb, bilelmsekni, cflmarques, fmanco, nilesh56


openstack-grizzly-install-guide's Issues

Quantum server scheduler error

Every time I launch an instance, I see this warning once in the quantum server.log on the controller node:
WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'247039e6-5f7e-4d4a-a211-00c34f757c02'] ...........

Everything runs normally, but the warning is still logged.

Unable to see the attached volume in instance - sysfsutils not installed

When you attach a device to an instance as block storage, the attach itself works fine, but if you log in to the launched instance you don't see the new storage. Two factors influence this behavior:

  1. Image - a few images do not support hot-adding a block device; the guest has to load the 'acpiphp' module to support this. However, most of the available images have it loaded.
  2. Missing package - the wiki is missing instructions for one of the packages that does this job. Every compute node requires this package:
    apt-get install sysfsutils

Can we get the wiki updated with the package name from point 2?
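A hedged sketch of the two-part fix described above; the package name is the one reported in this issue, and the acpiphp step happens inside the guest, so adapt both to your image:

```shell
# On every compute node: install the sysfs tools needed for the attach path.
apt-get install -y sysfsutils

# Inside a guest that does not see hot-added disks: load the ACPI hotplug
# module now and make it persistent across reboots.
modprobe acpiphp
echo acpiphp >> /etc/modules
```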

Grizzly, MultiNode - no smiley faces!

nova-manage service list shows smiley faces for the services on the controller node.

However, it shows XXX for the compute node. There is nothing in the logs, and I tried the install verbatim.

Any pointers?

How to check username/password inconsistency?

Hi,

In the glance-api-paste.ini and glance-registry-paste.ini files, under [filter:authtoken], I set "admin_user = glance" and "admin_password = glance".
In the glance-registry.conf and glance-api.conf files, under [keystone_authtoken], I set "admin_user = admin" and "admin_password = secrete".
In novarc I set "export OS_PASSWORD=admin_pass".
So how can I check the username/password inconsistency, and find the actual usernames and passwords keystone holds for each service (glance, nova, quantum, ...)?

Kindly help me understand this better.

Thanks,
Karthi
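One hedged way to chase this down is to test each service account against keystone directly; the username/password below are the ones quoted above, while the tenant name and auth URL are assumptions you must adapt to your setup:

```shell
# If the pair is valid, token-get prints a token; a 401 pinpoints the
# file whose credentials disagree with what keystone actually holds.
keystone --os-username glance --os-password glance \
         --os-tenant-name service \
         --os-auth-url http://127.0.0.1:5000/v2.0/ token-get
```

Repeating this for each service user (nova, quantum, ...) shows which configured password is wrong.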

Swift

Can you add Swift / Object storage to your guide?

possible glance issue in multinode docs

In the doc, the tweak for /etc/glance/glance-registry-paste.ini shows the line:

paste.filter_factory = keystone.middleware.auth_token:filter_factory

but the default value in the file as installed is

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

(an additional 'client' in the module name), which also matches the tweak made to the /etc/glance/glance-api-paste.ini file in the previous step.

thanks

steve
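As a quick, self-contained demonstration of the suggested fix (run against a throwaway sample stanza, not the real files in /etc/glance), the rewrite is a one-line sed:

```shell
# Sample stanza with the old module path, as shown in the doc.
cat > sample-paste.ini <<'EOF'
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
EOF

# Rewrite it to the keystoneclient path that the installed defaults use.
sed -i 's/keystone\.middleware\.auth_token/keystoneclient.middleware.auth_token/' sample-paste.ini
grep 'filter_factory' sample-paste.ini
# -> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
```

Pointing the same sed at both real paste files would make them consistent.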

Wrong wget URL.

bridge

Using a single node with a single NIC, do I have to add a bridge with brctl to get an external connection?
Thanks

Unable to run Instance, Ends with status Error in OpenStack

Hi,

I followed your OVS_MultiNode installation with the same node setup, except that my compute node has only one interface, so I manually created eth1:0 and eth0:1.

When I try to start an instance, its status ends up as Error.
The Log tab in Horizon says: Unable to get log for instance "c307ae2b-c09d-4c05-9525-906ed35d81c5".

This is my /var/log/nova/nova-compute.log:

2013-05-13 15:59:38.717 3014 TRACE nova.compute.manager [instance: c307ae2b-c09d-4c05-9525-906ed35d81c5] QuantumClientException: 404 Not Found
2013-05-13 15:59:38.717 3014 TRACE nova.compute.manager [instance: c307ae2b-c09d-4c05-9525-906ed35d81c5]
2013-05-13 15:59:38.717 3014 TRACE nova.compute.manager [instance: c307ae2b-c09d-4c05-9525-906ed35d81c5] The resource could not be found.

Full stack trace:
http://paste.ubuntu.com/5662352/

Please guide me on how to resolve this.

-Dhanasekaran.

GRE tunnel without VLANs

I followed the multi-node guide, but the packets from VMs, encapsulated in GRE, still contain VLAN tags.

How can I use GRE only, without VLANs?

In ovs-vsctl show I see that the port on br-int (compute) used by the VM has a tag. I removed the tag manually and connected eth1 to br-int to get an external connection, but then I lose GRE: the packets are sent directly to the VM management network, and the network node forwards them to the internet.

Any help will be appreciated.

Michael

Horizon "OfflineGenerationError"

I've followed the Multinode Guide. Really good job - thanks.

The Horizon interface (using the Ubuntu theme) fails with

OfflineGenerationError: You have offline compression enabled but key "3ddd89d27fa2e162d4efd30c103a072b" is missing from offline manifest. You may need to run "python manage.py compress".

Googling suggests setting

    COMPRESS_OFFLINE = False

in /etc/openstack-dashboard/local_settings.py

This changes the error to

UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via COMPRESS_URL ('/static/') and can't be compressed, referer: http://10.0.0.11/horizon

So now I'm a bit stuck. :(

Otherwise, so far as I can tell, it's working fine.
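The first error message itself suggests regenerating the offline-compression manifest rather than disabling compression; a hedged sketch, assuming the Ubuntu openstack-dashboard package layout:

```shell
cd /usr/share/openstack-dashboard
# Collect static assets, rebuild the manifest the error complains about,
# then restart Apache so Horizon picks it up.
python manage.py collectstatic --noinput
python manage.py compress --force
service apache2 restart
```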

Missing Quantum configuration in Compute node

I believe a section for the Quantum agent configuration is missing for the compute node. In particular, we should read something like:

  • Edit /etc/quantum/api-paste.ini::

    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass

Without this configuration the agent won't appear to be alive, and we get an xxx entry in the agent list:

root@control:~# quantum agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 2c5c994c-a367-4de0-871b-f1e18cb348a0 | Open vSwitch agent | network | :-) | True |
| 7e63cbed-b1a5-4bbf-bf8e-80d16a83c567 | DHCP agent | network | :-) | True |
| b4add1f9-c278-42cf-92a6-694aab078fcb | Open vSwitch agent | compute | xxx | True |
| d6260558-4042-41c5-b926-d1e527b7bec8 | L3 agent | network | :-) | True |
+--------------------------------------+--------------------+---------+-------+----------------+
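After adding the stanza above, the compute-side agent has to be restarted before its agent-list entry flips to alive; a minimal sketch (service name as used elsewhere in these issues):

```shell
# On the compute node, after editing /etc/quantum/api-paste.ini:
service quantum-plugin-openvswitch-agent restart

# Back on the controller: the compute row should now show :-) instead of xxx.
quantum agent-list
```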

VM and External network can't reach internet

Hi,
everything works OK except that when I attach the external network to the router, the gateway interface on the external network has status DOWN. I can ping the router's local interface via namespaces, and VMs get IPs, but I can't ping the router's external gateway interface from outside (from the internet).

I have exactly the same issue. With 3 machines - controller, network and compute node - the local network is OK: VMs get IPs and I can ping them, but the external gateway interface 95.214.x.x is listed with status DOWN.

quantum net-list
+--------------------------------------+---------------+--------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------------+--------------------------------------------------------+
| 8c2d3c27-6f5c-4b40-9db7-5aa8bcc9335e | StudioWeb_INT | 364e67ef-ce7c-4649-aeeb-c15f4483c3f8 10.1.1.0/24 |
| d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff | StudioWeb_EXT | a025b78b-264c-4961-9b6d-79782364c105 95.169.x.x/26 |
+--------------------------------------+---------------+--------------------------------------------------------+

quantum net-show d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff |
| name | StudioWeb_EXT |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 2 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | a025b78b-264c-4961-9b6d-79782364c105 |
| tenant_id | f73d5c7c26cd4f24aef990928fbb68b3 |
+---------------------------+--------------------------------------+

quantum router-list
+--------------------------------------+------------------+--------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+------------------+--------------------------------------------------------+
| e090b0c8-ab49-4ae7-a7f8-91c8eb0b7789 | StudioWeb_Router | {"network_id": "d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff"} |
+--------------------------------------+------------------+--------------------------------------------------------+

quantum router-show StudioWeb_Router
+-----------------------+--------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------+
| admin_state_up | True |
| external_gateway_info | {"network_id": "d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff"} |
| id | e090b0c8-ab49-4ae7-a7f8-91c8eb0b7789 |
| name | StudioWeb_Router |
| routes | |
| status | ACTIVE |
| tenant_id | aecab2512ceb4083a591bea7a7f2c89f |
+-----------------------+--------------------------------------------------------+

quantum port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| 51db3736-e34c-4283-8075-947e439fe144 | | fa:16:3e:62:64:3d | {"subnet_id": "a025b78b-264c-4961-9b6d-79782364c105", "ip_address": "95.169.x.x"} |
| 58d87d7a-57a7-4993-a429-b827e9b82cbc | | fa:16:3e:17:56:6e | {"subnet_id": "364e67ef-ce7c-4649-aeeb-c15f4483c3f8", "ip_address": "10.1.1.1"} |
| c1845d95-f71d-492a-9edc-02f24e962535 | | fa:16:3e:0d:dd:c0 | {"subnet_id": "364e67ef-ce7c-4649-aeeb-c15f4483c3f8", "ip_address": "10.1.1.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+

quantum port-show 51db3736-e34c-4283-8075-947e439fe144
+----------------------+---------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+---------------------------------------------------------------------------------------+
[...]
| admin_state_up | True |
| binding:capabilities | {"port_filter": false} |
| binding:vif_type | ovs |
| device_id | e090b0c8-ab49-4ae7-a7f8-91c8eb0b7789 |
| device_owner | network:router_gateway |
| fixed_ips | {"subnet_id": "a025b78b-264c-4961-9b6d-79782364c105", "ip_address": "95.169.214.118"} |
| id | 51db3736-e34c-4283-8075-947e439fe144 |
| mac_address | fa:16:3e:62:64:3d |
| name | |
| network_id | d1eccd72-cd5a-4532-8cc8-c6c4ffdd8bff |
| status | DOWN |
| tenant_id | |
+----------------------+---------------------------------------------------------------------------------------+

keystone user-role-add: error: argument --user is required

Hi All,

I am following the multi-node installation guide and its scripts, and am having a problem: some argument is required. Please check:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst

root@controllernode:~# service keystone start
keystone start/running, process 1837
root@controllernode:~# keystone-manage db_sync
root@controllernode:~# ls -ltr
total 12
-rwxr-xr-x 1 root root 2463 Apr 25 04:03 keystone_basic.sh
-rwxr-xr-x 1 root root 4489 Apr 25 04:05 keystone_endpoints_basic.sh
root@controllernode:~# vim keystone_basic.sh
root@controllernode:~# ifconfig
eth0 Link encap:Ethernet HWaddr 34:40:b5:89:50:b6
inet addr:192.168.70.180 Bcast:192.168.70.255 Mask:255.255.255.0
inet6 addr: fe80::3640:b5ff:fe89:50b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:941 errors:0 dropped:8 overruns:0 frame:0
TX packets:532 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:89648 (89.6 KB) TX bytes:78219 (78.2 KB)
Interrupt:17 Memory:c1a80000-c1aa0000

eth1 Link encap:Ethernet HWaddr 34:40:b5:89:50:b7
inet addr:10.10.10.51 Bcast:10.10.10.255 Mask:255.255.255.0
inet6 addr: fe80::3640:b5ff:fe89:50b7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:26 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3840 (3.8 KB) TX bytes:492 (492.0 B)
Interrupt:18 Memory:c1980000-c19a0000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:948 errors:0 dropped:0 overruns:0 frame:0
TX packets:948 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:120723 (120.7 KB) TX bytes:120723 (120.7 KB)

root@controllernode:~# ./keystone_basic.sh
usage: keystone user-role-add --user --role
[--tenant_id ]
keystone user-role-add: error: argument --user is required
usage: keystone [--os_username ]
[--os_password ]
[--os_tenant_name ]
[--os_tenant_id ] [--os_auth_url ]
[--os_region_name ]
[--os_identity_api_version ]
[--token ] [--endpoint ]
[--username ] [--password ]
[--tenant_name ] [--auth_url ]
[--region_name ]
...
keystone: error: unrecognized arguments: --tenant-id b981baad4e1841fe9b866ccc840c28a8
(the same two errors repeat verbatim for each remaining user-role-add call in the script)
root@controllernode:~#
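The usage output above shows that this keystone client build expects --user, --role and --tenant_id, while the script is evidently passing at least --tenant-id (which the client rejects). One hedged fix is to rewrite the script's flag spellings to match the client; the demonstration below runs against a one-line stand-in, and the hyphenated spellings it assumes are inferred from the error output, not taken from keystone_basic.sh itself:

```shell
# Stand-in for one line of the script, using the (assumed) hyphenated flags.
printf 'keystone user-role-add --user-id %s --role-id %s --tenant-id %s\n' \
       ADMIN_ID ROLE_ID TENANT_ID > demo.sh

# Rewrite the flags to the spellings this client's usage output shows.
sed -i -e 's/--user-id/--user/g' \
       -e 's/--role-id/--role/g' \
       -e 's/--tenant-id/--tenant_id/g' demo.sh
cat demo.sh
# -> keystone user-role-add --user ADMIN_ID --role ROLE_ID --tenant_id TENANT_ID
```

The same sed, pointed at keystone_basic.sh, is the actual fix candidate; alternatively, upgrade python-keystoneclient so the script and client agree.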

Keystone + mySQL connection problems

I was following the OpenStack-Grizzly-Install-Guide and ran into some problems making the connection to MySQL:

How I got here:
#3. Keystone

Start by the keystone packages:

apt-get install -y keystone

Verify your keystone is running:

service keystone status

Create a new MySQL database for keystone:

mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
quit;

Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:

connection = mysql://keystoneUser:keystonePass@10.10.10.51/keystone

Restart the identity service then synchronize the database:

service keystone restart
keystone-manage db_sync

After using the sync command I got a traceback from Python telling me it can't make a connection as 'root'@'localhost'. This is the terminal log:

root@cloud:~# keystone-manage db_sync
Traceback (most recent call last):
File "/usr/bin/keystone-manage", line 28, in
cli.main(argv=sys.argv, config_files=config_files)
File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 175, in main
CONF.command.cmd_class.main()
File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 54, in main
driver.db_sync()
File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", line 156, in db_sync
migration.db_sync()
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 49, in db_sync
current_version = db_version()
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 63, in db_version
return db_version_control(0)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 68, in db_version_control
versioning_api.version_control(CONF.sql.connection, repo_path, version)
File "", line 2, in version_control
File "/usr/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 159, in with_engine
return f(*a, **kw)
File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 250, in version_control
ControlledSchema.create(engine, repository, version)
File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 139, in create
table = cls._create_table_version(engine, repository, version)
File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 180, in _create_table_version
if not table.exists():
File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 578, in exists
self.name, schema=self.schema)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2423, in run_callable
conn = self.contextual_connect()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2489, in contextual_connect
self.pool.connect(),
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 236, in connect
return _ConnectionFairy(self).checkout()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 401, in __init__
rec = self._connection_record = pool._do_get()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 746, in _do_get
con = self._create_connection()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 189, in _create_connection
return _ConnectionRecord(self)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 282, in __init__
self.connection = self.__connect()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 344, in __connect
connection = self.__pool._creator()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 281, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/lib/python2.7/dist-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 187, in __init__
super(Connection, self).__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for user 'root'@'localhost' (using password: NO)") None None

And this is my user table in MySQL

mysql> SELECT user, host, password FROM mysql.user
-> ;
+------------------+--------------+-------------------------------------------+
| user | host | password |
+------------------+--------------+-------------------------------------------+
| root | localhost | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | fmat-cloud | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | 127.0.0.1 | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | ::1 | *E3667A513AA157512F78FC507DB717552F6F795D |
| | localhost | |
| | fmat-cloud | |
| debian-sys-maint | localhost | *FEE4A81C373BC4141F7F24A26315FEB4F71531AE |
| keystoneUSER | 10.10.100.51 | *0099864D549EE2F75140CF9D4E855E69526E6134 |
| keystoneUSER | % | *0099864D549EE2F75140CF9D4E855E69526E6134 |
| keystoneUSER | localhost | *0099864D549EE2F75140CF9D4E855E69526E6134 |
| keystoneUSER | 127.0.0.1 | *0099864D549EE2F75140CF9D4E855E69526E6134 |
| keystoneUSER | fmat-cloud | *0099864D549EE2F75140CF9D4E855E69526E6134 |
+------------------+--------------+-------------------------------------------+
12 rows in set (0.00 sec)

As you can see, I already tried all kinds of connection entries, one by one. Because of the way MySQL matches users, '%' should cover all the other host options, but the Python scripts still try to use root@localhost.
So I was wondering whether making sure the proper service token and connection string are used in the keystone.conf file would be one way to solve this.
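Two hedged checks before re-running db_sync (credentials taken from the GRANT quoted above; note that the traceback's root@localhost suggests keystone may never have read the edited connection line at all):

```shell
# 1. Confirm the keystone DB user can actually log in with those credentials.
mysql -u keystoneUser -pkeystonePass -h 127.0.0.1 -e 'SHOW DATABASES LIKE "keystone";'

# 2. Confirm the connection line keystone-manage will read is the one you edited.
grep -n '^connection' /etc/keystone/keystone.conf
```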

The version programs:

root@fmat-cloud:~# mysql --version
mysql Ver 14.14 Distrib 5.5.31, for debian-linux-gnu (x86_64) using readline 6.2
root@fmat-cloud:~# keystone --version
Unknown, couldn't find versioninfo file at /usr/lib/python2.7/dist-packages/keystoneclient/versioninfo
root@fmat-cloud:~# keystone-manage --version
2013.1.1
root@fmat-cloud:~# uname -a
Linux fmat-cloud 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

So if anyone can help me here, I would be thankful.

UPDATE #1

One way to solve this is to add a new user:

mysql> GRANT ALL ON keystone.* TO 'root'@'%' IDENTIFIED BY 'toor';
mysql> SELECT user, host, password FROM mysql.user
-> ;
+------------------+--------------+-------------------------------------------+
| user | host | password |
+------------------+--------------+-------------------------------------------+
| root | localhost | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | fmat-cloud | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | 127.0.0.1 | *E3667A513AA157512F78FC507DB717552F6F795D |
| root | ::1 | *E3667A513AA157512F78FC507DB717552F6F795D |
| | localhost | |
| | fmat-cloud | |
| debian-sys-maint | localhost | *FEE4A81C373BC4141F7F24A26315FEB4F71531AE |
| root | % | *E3667A513AA157512F78FC507DB717552F6F795D |
+------------------+--------------+-------------------------------------------+
8 rows in set (0.00 sec)

now in /etc/keystone/keystone.conf add the connection:

[sql]
connection = mysql://root:toor@localhost/keystone?charset=utf8

and this time I used "admin_token = toor" (this option was commented out before). Everything works now, so I don't know whether it's the way the Python scripts use SQL; I guess they use the local user (root) and localhost, and don't take the keystone.conf options into account when creating the socket.

Cheers.

VM can't get IP on Grizzly Multinode

Hi, I am new to this.

I have set up multinode, and VMs can be built from the dashboard, but DHCP from the network node is not working. I checked with

ps faux | grep dnsmasq

and it shows that dnsmasq is already serving the VM's network (am I right?), like this:
dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap1a8f1ea3-54 --except-interface=lo --pid-file=/var/lib/quantum/dhcp/5b4619d2-198b-453a-8ab6-b1c5545edb1f/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/5b4619d2-198b-453a-8ab6-b1c5545edb1f/host --dhcp-optsfile=/var/lib/quantum/dhcp/5b4619d2-198b-453a-8ab6-b1c5545edb1f/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,10.10.30.0,static,120s --conf-file= --domain=openstacklocal

I am stuck here. What else can I check? Please give me some clue.
Thanks
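A hedged debugging sketch: with the Grizzly DHCP agent, dnsmasq runs inside a network namespace named qdhcp-<network-id>, and both the namespace ID and the tap interface can be read off the dnsmasq command line above. Watching for the VM's DHCP requests there tells you whether they ever reach the server:

```shell
# On the network node (IDs taken from the ps output above):
ip netns list
ip netns exec qdhcp-5b4619d2-198b-453a-8ab6-b1c5545edb1f \
   tcpdump -ni tap1a8f1ea3-54 port 67 or port 68
```

If no requests show up, the problem is on the GRE/OVS path between the compute and network nodes rather than in dnsmasq itself.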

quantum configuration in multi node guide

I'm stuck at the controller node:

I followed the multinode guide on Ubuntu 12.04, but I got an internal server error right after logging in to the Horizon dashboard. Stepping back, I found the following error in the nova-api log:

CRITICAL nova No module named quantum2.api (nova-api starts and crashes immediately)

So I looked at the quantum log and I got this:

WARNING: extension routed-service not supported by any of loaded plugins

  • same for port-security; flavor; lbaas; service-type

Also, the command "quantum agent-list" returns nothing but a blank line.

Any suggestion will be really appreciated.

Openvswitch agent is not able to work

Hi, I met a quantum issue:
After installing each node, the quantum openvswitch plugin agent works (both "service quantum-plugin-openvswitch-agent status" and "quantum agent-list" look fine). However, if I reboot Ubuntu, the service stops and cannot be enabled any more. Each time I issue "service quantum-plugin-openvswitch-agent start", /var/log/quantum/openvswitch-agent.log shows:
2013-05-22 22:03:13 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed to create OVS patch port. Cannot have tunneling enabled on this agent, since this version of OVS does not support tunnels or patch ports. Agent terminated!

Environment:
Server: 3 UCS blade servers, B-series blade server.
OS: Ubuntu precise (12.04.2 LTS)
OpenStack version: Grizzly, quantum.
Quantum plugin: openvswitch, with GRE
I follow this document to install: https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
Network environment:
Control Node: eth0 (API network) (10.246.60.55), eth1(mgmt) (192.168.0.1), eth2(VM network) (192.168.100.1)
Network Node: eth0 (External network) (10.246.60.56), eth1 (192.168.0.2), eth2 (192.168.100.2)
Compute Node: eth0 (just for my login)(10.246.60.57), eth1 (192.168.0.3), eth2 (192.168.100.3)

Has anyone encountered the same issue?
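The error message points at the OVS build rather than the quantum configuration; a hedged sketch (package and service names are the Ubuntu 12.04 ones): check the OVS version, then reinstall the DKMS datapath so the kernel module is rebuilt against the currently running kernel, since the reboot may have switched you to a kernel whose stock openvswitch module lacks tunnel/patch-port support:

```shell
ovs-vsctl --version
apt-get install -y --reinstall openvswitch-datapath-dkms
service openvswitch-switch restart
service quantum-plugin-openvswitch-agent restart
```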

Volume stays in "attaching" status

Hello,

I can create a volume, but I can't attach it to an instance. It stays in "attaching" status.

Here are my log files. Can anyone help me find my mistake?

cinder-api.log
2013-04-24 10:54:51 WARNING [cinder.openstack.common.policy] Inheritance-based rules are deprecated; use the default brain instead of HttpBrain.
2013-04-24 10:54:51 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:54:52 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/types
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/types returned with HTTP 200
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/snapshots/detail
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/snapshots/detail returned with HTTP 200
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/os-quota-sets/f438353be1494506b2950a5c9d79d374
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/os-quota-sets/f438353be1494506b2950a5c9d79d374 returned with HTTP 200
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:54:53 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/types
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/types returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/snapshots/detail
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/snapshots/detail returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/os-quota-sets/f438353be1494506b2950a5c9d79d374
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/os-quota-sets/f438353be1494506b2950a5c9d79d374 returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] POST http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes
2013-04-24 10:54:59 AUDIT [cinder.api.v1.volumes] Create volume of 10 GB
2013-04-24 10:54:59 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672
2013-04-24 10:54:59 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': 'nova', 'terminated_at': None, 'updated_at': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': '9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': None, 'scheduled_at': None, 'status': 'creating', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': None, 'host': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59, 629295), 'attach_status': 'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x46015d0>, 'metadata': {}}
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes returned with HTTP 200
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:54:59 AUDIT [cinder.api.v1.volumes] vol=<cinder.db.sqlalchemy.models.Volume object at 0x4905c90>
2013-04-24 10:54:59 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:55:00 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:00 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': None, 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'creating', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': None, 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': None, 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4a9da50>}
2013-04-24 10:55:00 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:02 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:02 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'available', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4befd50>}
2013-04-24 10:55:02 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:07 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:55:07 AUDIT [cinder.api.v1.volumes] vol=<cinder.db.sqlalchemy.models.Volume object at 0x4a9df90>
2013-04-24 10:55:07 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:55:08 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:08 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'available', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4cde350>}
2013-04-24 10:55:08 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:11 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:11 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'available', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4ada510>}
2013-04-24 10:55:11 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:20 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'available', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4ab3090>}
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:20 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'available', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4911850>}
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] POST http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83/action
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83/action returned with HTTP 202
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:20 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4abb9d0>}
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:55:20 AUDIT [cinder.api.v1.volumes] vol=<cinder.db.sqlalchemy.models.Volume object at 0x4bbe150>
2013-04-24 10:55:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:55:21 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:21 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4911990>}
2013-04-24 10:55:21 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:23 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:23 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4dddc50>}
2013-04-24 10:55:23 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:26 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:26 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4ce9e10>}
2013-04-24 10:55:26 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:39 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail
2013-04-24 10:55:39 AUDIT [cinder.api.v1.volumes] vol=<cinder.db.sqlalchemy.models.Volume object at 0x4ac1e90>
2013-04-24 10:55:39 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/detail returned with HTTP 200
2013-04-24 10:55:40 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:40 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4a9dad0>}
2013-04-24 10:55:40 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:42 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:42 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4acde10>}
2013-04-24 10:55:42 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:45 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:45 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x49b8610>}
2013-04-24 10:55:45 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:50 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:50 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4911bd0>}
2013-04-24 10:55:50 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:55:57 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:55:57 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4cc0450>}
2013-04-24 10:55:57 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:56:07 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:56:07 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4df0550>}
2013-04-24 10:56:07 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:56:20 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:56:20 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x49c3b90>}
2013-04-24 10:56:20 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:56:35 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:56:35 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x49d3910>}
2013-04-24 10:56:35 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:56:52 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:56:52 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4918ad0>}
2013-04-24 10:56:52 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:57:12 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:57:12 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4cba6d0>}
2013-04-24 10:57:12 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:57:35 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:57:35 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x499dd50>}
2013-04-24 10:57:35 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:58:00 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:58:00 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4aa4510>}
2013-04-24 10:58:00 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200
2013-04-24 10:58:27 INFO [cinder.api.openstack.wsgi] GET http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83
2013-04-24 10:58:27 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 4, 24, 8, 55, 20), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'9cff9422-de37-40e9-adb5-a28655871e83', 'size': 10L, 'user_id': u'7e8cef045e0e46749fdb8fc7a0d91317', 'attach_time': None, 'display_description': u'', 'project_id': u'f438353be1494506b2950a5c9d79d374', 'launched_at': datetime.datetime(2013, 4, 24, 8, 55), 'scheduled_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'status': u'attaching', 'volume_type_id': u'3aa61424-fb15-49b3-8955-9c2ea7772d10', 'deleted': False, 'provider_location': u'10.123.4.101:3260,1 iqn.2010-10.org.openstack:volume-9cff9422-de37-40e9-adb5-a28655871e83 0', 'volume_glance_metadata': [], 'host': u'controlnode', 'source_volid': None, 'provider_auth': u'CHAP Vv9G3g4km22Ed2cu4mD8 9cvgs23m8yGypfkctykU', 'display_name': u'admin', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 4, 24, 8, 54, 59), 'attach_status': u'detached', 'volume_type': <cinder.db.sqlalchemy.models.VolumeTypes object at 0x4394090>}
2013-04-24 10:58:27 INFO [cinder.api.openstack.wsgi] http://10.123.4.101:8776/v1/f438353be1494506b2950a5c9d79d374/volumes/9cff9422-de37-40e9-adb5-a28655871e83 returned with HTTP 200

cinder-scheduler.log
2013-04-24 10:54:40 AUDIT [cinder.service] SIGTERM received
2013-04-24 10:54:41 AUDIT [cinder.service] Starting cinder-scheduler node (version 2013.1)
2013-04-24 10:54:41 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672
2013-04-24 10:54:41 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672

cinder-volume.log
2013-04-24 10:54:41 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672
2013-04-24 10:54:57 INFO [cinder.volume.manager] Updating volume status
2013-04-24 10:54:59 INFO [cinder.volume.manager] volume volume-9cff9422-de37-40e9-adb5-a28655871e83: creating
2013-04-24 10:55:00 INFO [cinder.volume.manager] volume volume-9cff9422-de37-40e9-adb5-a28655871e83: created successfully
2013-04-24 10:55:00 INFO [cinder.volume.manager] Clear capabilities
2013-04-24 10:55:57 INFO [cinder.volume.manager] Updating volume status
2013-04-24 10:56:57 INFO [cinder.volume.manager] Updating volume status
2013-04-24 10:57:57 INFO [cinder.volume.manager] Updating volume status

nova-compute.log
2013-04-24 10:55:21.182 ERROR nova.openstack.common.rpc.amqp [req-058903d8-2ff9-4fd0-9251-ab09b735b584 7e8cef045e0e46749fdb8fc7a0d91317 f438353be1494506b2950a5c9d79d374] Exception during message handling
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp rval = self.proxy.dispatch(ctxt, version, method, *_args)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp temp_level, payload)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 94, in wrapped
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return f(self, context, _args, *_kw)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 209, in decorated_function
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp pass
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 195, in decorated_function
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return function(self, context, _args, *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 237, in decorated_function
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info())
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 224, in decorated_function
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return function(self, context, _args, *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2833, in attach_volume
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp context, instance, mountpoint)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2828, in attach_volume
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp mountpoint, instance)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2836, in _attach_volume
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp volume = self.volume_api.get(context, volume_id)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp self._reraise_translated_volume_exception(volume_id)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp item = cinderclient(context).volumes.get(volume_id)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180, in get
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return self._get("/volumes/%s" % volume_id, "volume")
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp resp, body = self.api.client.get(url)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185, in get
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return self._cs_request(url, 'GET', *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in _cs_request
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, in request
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp return session.request(method=method, url=url, *_kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp r = adapter.send(request, **kwargs)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 206, in send
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp raise ConnectionError(sockerr)
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp ConnectionError: [Errno 101] ENETUNREACH
2013-04-24 10:55:21.182 1277 TRACE nova.openstack.common.rpc.amqp

GRE + VLAN tag, best of both worlds

Sorry, I don't really know how to use GitHub yet, but I've spent a lot of time getting both GRE and VLAN tag support working at the same time. I think it would be worth adding to your Grizzly how-tos, as I've done all my testing based off your install how-tos. Anyway, see below:

Best of both worlds, GRE network with option for vlan tag

root@supermicro:~# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini | grep -v #
[DATABASE]

sql_connection = mysql://quantumUser:[email protected]/quantum
reconnect_interval = 2

[OVS]

tenant_network_type = gre
network_vlan_ranges = physnet1:2:4094
bridge_mappings = physnet1:br-eth3
enable_tunneling = True
tunnel_id_ranges = 1:1000
tunnel_bridge = br-tun
local_ip = 192.168.10.118

[AGENT]
polling_interval = 2

[SECURITYGROUP]

root@supermicro:~# quantum net-create --tenant-id 19f73304c78e4fdd8787dd6b4b4fe263 blade-net-vlan30 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 30 --shared
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | d78187b9-a38e-42a9-86c7-5975119d3332 |
| name | blade-net-vlan30 |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 30 |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 19f73304c78e4fdd8787dd6b4b4fe263 |
+---------------------------+--------------------------------------+

root@supermicro:~# quantum
(quantum)
(quantum)
(quantum) net-list
+--------------------------------------+-----------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-----------------+-------------------------------------------------------+
| 5821fd84-e631-4d93-b17f-067588013592 | test | |
| 7329681a-32e4-4676-8ced-a63a910aa81b | btfg-net-vlan30 | c1331e79-8ece-4d93-bf1a-e632991f1edb 192.168.10.0/24 |
| de7c6e50-d9ec-483a-8d0b-a0a39527f37b | ext_net | cbb60e9b-b27e-40e3-9cda-a80971c2dfa0 192.168.2.229/24 |
| ff65edfa-c5a5-4673-af69-32357e554887 | core | 88b629ed-b476-4f24-bb79-48f43059dddc 10.0.0.0/8 |
+--------------------------------------+-----------------+-------------------------------------------------------+
(quantum) net-show btfg-net-vlan30
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 7329681a-32e4-4676-8ced-a63a910aa81b |
| name | btfg-net-vlan30 |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 30 |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | c1331e79-8ece-4d93-bf1a-e632991f1edb |
| tenant_id | c4832efd900b47e78bdca6cc5b56a47f |
+---------------------------+--------------------------------------+
(quantum) net-show core
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | ff65edfa-c5a5-4673-af69-32357e554887 |
| name | core |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | 88b629ed-b476-4f24-bb79-48f43059dddc |
| tenant_id | c4832efd900b47e78bdca6cc5b56a47f |
+---------------------------+--------------------------------------+
(quantum) exit
root@supermicro:~# nova list

root@supermicro:~# source creds-bt
root@supermicro:~# nova list
+--------------------------------------+---------+--------+-----------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+---------+--------+-----------------------------------------------+
| 32054608-8567-4d70-89e1-dca48b2804c7 | windows | ACTIVE | core=10.0.0.2; btfg-net-vlan30=192.168.10.230 |
+--------------------------------------+---------+--------+-----------------------------------------------+
root@supermicro:~#

With this setup I needed to create all the bridges myself, with the exception of br-tun, which is created automatically:

bridge name     bridge id               STP enabled     interfaces
br-eth3         0000.0017087d6cdc       no              eth3
                                                        phy-br-eth3
br-ex           0000.0017087d6d00       no              eth5
                                                        qg-3e5754f8-c2
                                                        qg-698e5206-5f
                                                        qg-ed97a64b-25
br-int          0000.ae369601f84a       no              int-br-eth3
                                                        qr-f865fb40-37
                                                        qvo38d3ddd8-0a
                                                        qvod503c601-ad
                                                        tapa70c6cf2-2f
                                                        tapbfc8031b-03
                                                        tapc5baefb5-a8
br-tun          0000.a21e620abd4b       no
qbr38d3ddd8-0a  8000.7a63be45e888       no              qvb38d3ddd8-0a
                                                        tap38d3ddd8-0a
qbrd503c601-ad  8000.fe163ee5e520       no              qvbd503c601-ad
                                                        tapd503c601-ad
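The manual bridge setup described above can be sketched as follows. This is a hedged sketch only: the names (br-eth3/eth3 for the VLAN physnet, br-ex/eth5 for the external network) come from this particular host and will differ elsewhere.

```shell
# Hedged sketch of the manual OVS bridge creation described above.
# br-tun is intentionally omitted: the OVS agent creates it itself.
setup_provider_bridges() {
    ovs-vsctl --may-exist add-br br-eth3    # bridge for physnet1 (VLANs)
    ovs-vsctl --may-exist add-port br-eth3 eth3
    ovs-vsctl --may-exist add-br br-ex      # external bridge
    ovs-vsctl --may-exist add-port br-ex eth5
}
# Run on the network node: setup_provider_bridges
```

`--may-exist` makes the commands safe to re-run if a bridge or port is already present.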

Port 7 below is the VM's bridge mapped to VLAN 30.

VLANs 20 and 30 are trunked from my Cisco switch.

root@supermicro:~# ovs-appctl fdb/show br-eth3
port VLAN MAC Age
1 30 00:25:90:21:cf:32 56
7 30 fa:16:3e:61:ed:ee 21
1 20 d4:9a:20:54:6c:f2 4
1 20 60:67:20:81:a5:38 3
1 30 00:14:1c:73:20:91 2
1 20 00:14:1c:73:20:91 2
1 30 00:16:c7:9f:7f:27 1
1 0 00:14:1c:73:20:91 1
root@supermicro:~#

See the PNG image below as proof that it works.

image

VM cannot access the metadata server

Hi, Thanks for your information.

I read this url. (multi node installation.)

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst

I noticed that the VM cannot access the metadata server via Quantum's metadata proxy.

So I changed this parameter on the controller node, and now it works. 💃

< metadata_listen = 127.0.0.1
> metadata_listen = 10.10.10.51

We do not need to change the compute node's setting.
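For reference, the change described above amounts to the following fragment. This is a hedged sketch: it assumes the option lives in nova.conf on the controller, and 10.10.10.51 is this deployment's management IP.

```ini
# /etc/nova/nova.conf on the controller (assumed location of the option;
# 10.10.10.51 is the reporter's management IP, not a general default)
metadata_listen = 10.10.10.51
```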

I also released an installation bash script, written with reference to your guide. Thanks!

https://github.com/jedipunkz/openstack_grizzly_install

Best Regards from Tokyo. :))))

network node networking step #2 comment

In step #2 of the networking part you mention adding the IP to br-ex
to get the network node back on the outside internet.
But at this point, due to the /etc/network/interfaces file you have set up, the actual
eth2 device is still down; you will need to bring it up via whatever method
you prefer.
great guide.

steve

Is keystone_basic.sh the right one?

I'm getting the following errors. I tried dropping and recreating the database, and the script seems to match the documented syntax, but I'm not sure what is going on:

Env: Ubuntu 12.04.2 64-bit running on top of VMware Workstation with 2 NIC interfaces.

sh -x ./keystone_basic.sh

+ HOST_IP=10.10.100.51
+ ADMIN_PASSWORD=admin_pass
+ SERVICE_PASSWORD=service_pass
+ export SERVICE_TOKEN=ADMIN
+ export SERVICE_ENDPOINT=http://10.10.100.51:35357/v2.0
+ SERVICE_TENANT_NAME=service
+ get_id keystone tenant-create --name=admin
+ awk / id / { print $4 }
+ keystone tenant-create --name=admin
+ echo 9bbda53d26db4cbb8a52552bed7d54dd
+ ADMIN_TENANT=9bbda53d26db4cbb8a52552bed7d54dd
+ get_id keystone tenant-create --name=service
+ awk / id / { print $4 }
+ keystone tenant-create --name=service
+ echo aa91f29e54d94286977ac65dedc98473
+ SERVICE_TENANT=aa91f29e54d94286977ac65dedc98473
+ get_id keystone user-create --name=admin --pass=admin_pass --email=[email protected]
+ awk / id / { print $4 }
+ keystone user-create --name=admin --pass=admin_pass --email=[email protected]
+ echo 9d0c1c1fdf644edf97c580d4c4c18551
+ ADMIN_USER=9d0c1c1fdf644edf97c580d4c4c18551
+ get_id keystone role-create --name=admin
+ awk / id / { print $4 }
+ keystone role-create --name=admin
+ echo cab5564d2c2f4b85a7bd7e8aad17c0dd
+ ADMIN_ROLE=cab5564d2c2f4b85a7bd7e8aad17c0dd
+ get_id keystone role-create --name=KeystoneAdmin
+ awk / id / { print $4 }
+ keystone role-create --name=KeystoneAdmin
+ echo 9edcf7dc6b4448219be17a36ae8195a5
+ KEYSTONEADMIN_ROLE=9edcf7dc6b4448219be17a36ae8195a5
+ get_id keystone role-create --name=KeystoneServiceAdmin
+ awk / id / { print $4 }
+ keystone role-create --name=KeystoneServiceAdmin
+ echo ca43f5025e094c42a6bd526524a59c67
+ KEYSTONESERVICE_ROLE=ca43f5025e094c42a6bd526524a59c67
+ keystone user-role-add --user 9d0c1c1fdf644edf97c580d4c4c18551 --role-id cab5564d2c2f4b85a7bd7e8aad17c0dd --tenant-id 9bbda53d26db4cbb8a52552bed7d54dd
usage: keystone user-role-add --user <user> --role <role>
                              [--tenant_id <tenant_id>]
keystone user-role-add: error: argument --role is required
+ keystone user-role-add --user 9d0c1c1fdf644edf97c580d4c4c18551 --role-id 9edcf7dc6b4448219be17a36ae8195a5 --tenant-id 9bbda53d26db4cbb8a52552bed7d54dd
usage: keystone user-role-add --user <user> --role <role>
                              [--tenant_id <tenant_id>]
keystone user-role-add: error: argument --role is required
+ keystone user-role-add --user 9d0c1c1fdf644edf97c580d4c4c18551 --role-id ca43f5025e094c42a6bd526524a59c67 --tenant-id 9bbda53d26db4cbb8a52552bed7d54dd
usage: keystone user-role-add --user <user> --role <role>
                              [--tenant_id <tenant_id>]
keystone user-role-add: error: argument --role is required
+ get_id keystone role-create --name=Member
+ awk / id / { print $4 }
+ keystone role-create --name=Member
+ echo 4902905ee6ea47ad8f1e9294f7f555c1
+ MEMBER_ROLE=4902905ee6ea47ad8f1e9294f7f555c1
+ get_id keystone user-create --name=nova --pass=service_pass --tenant-id aa91f29e54d94286977ac65dedc98473 --email=[email protected]
+ awk / id / { print $4 }
+ keystone user-create --name=nova --pass=service_pass --tenant-id aa91f29e54d94286977ac65dedc98473 --email=[email protected]
usage: keystone [--os_username <auth-user-name>]
                [--os_password <auth-password>]
                [--os_tenant_name <auth-tenant-name>]
                [--os_tenant_id <tenant-id>] [--os_auth_url <auth-url>]
                [--os_region_name <region-name>]
                [--os_identity_api_version <identity-api-version>]
                [--token <service-token>] [--endpoint <service-endpoint>]
                [--username <auth-user-name>] [--password <auth-password>]
                [--tenant_name <tenant-name>] [--auth_url <auth-url>]
                [--region_name <region-name>]
                <subcommand> ...
keystone: error: unrecognized arguments: --tenant-id aa91f29e54d94286977ac65dedc98473
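The failures above suggest a flag-spelling mismatch between the script and the installed keystoneclient: this client expects `--role` and `--tenant_id`, while keystone_basic.sh passes `--role-id` and `--tenant-id`. A hedged sketch of a workaround, demonstrated on one sample line rather than the real script (the same `sed` could be run over keystone_basic.sh itself):

```shell
# Hypothetical workaround, assuming the mismatch diagnosed from the
# usage errors above: rewrite the script's flag spellings to the ones
# this keystoneclient accepts. Shown on a sample line for illustration.
line='keystone user-role-add --user $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT'
fixed=$(printf '%s\n' "$line" | sed -e 's/--role-id/--role/g' -e 's/--tenant-id/--tenant_id/g')
printf '%s\n' "$fixed"
```

Different keystoneclient builds from that era spelled these options differently, so check `keystone user-role-add --help` on your host before editing the script.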

Failed to connect socket to '/var/run/libvirt/libvirt-sock'

I'm trying the Grizzly version on Ubuntu 13.04. It says: Failed to connect
socket to '/var/run/libvirt/libvirt-sock'

Please check my console log:
root@dahlia:~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
root@dahlia:~# virsh net-destroy default
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such
file or directory
root@dahlia:~# virsh net-undefine default
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such
file or directory
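A missing libvirt-sock normally means libvirtd itself is not running: virsh talks to the daemon through that socket. A minimal check, assuming Ubuntu's libvirt-bin packaging (the restart command is a suggestion, not something this sketch runs):

```shell
# Minimal diagnostic for the error above: virsh needs a running libvirtd,
# whose UNIX socket lives at /var/run/libvirt/libvirt-sock.
check_libvirt_sock() {
    if [ -S /var/run/libvirt/libvirt-sock ]; then
        echo "libvirtd socket present"
    else
        echo "libvirtd not running; try: service libvirt-bin restart"
    fi
}
check_libvirt_sock
```

If the socket exists but virsh still fails, also check that your user is in the libvirtd group.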

Do you have any suggestion how to resolve this issue?

Can't reach floating IP & VM cannot access internet

Hi,
First of all, thank you for all the hard work put together. I followed your recent documentation and set up a 3-node OpenStack Grizzly cluster. I'm having the following issues.

  1. I can assign a floating IP to a VM, but cannot ping it from any of the 3 nodes or from outside the cluster.

  2. I created a couple of VMs. They can communicate with each other, but cannot talk to the outside world or even to either of the cluster nodes.

Examining your docs, I don't see any iptables rules that would forward the traffic. Can you help me resolve this issue?

Thank you
Chakri

Router's gateway to the external network / the status of the external port is DOWN

Ext_net :
http://img827.imageshack.us/img827/6711/screenshotfrom201305141.png

quantum net-show ext_net : http://paste.ubuntu.com/5663984/

quantum subnet-show id-sub-ext : http://paste.ubuntu.com/5663991/

quantum port-show id-port : http://paste.ubuntu.com/5664006/
after this command: "quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here"
Notice that the tenant_id is EMPTY! This seems like a bug?

The internal net is working fine; all the ports are ACTIVE and can ping each other.

Everything in /var/log/quantum/ looks fine, except
/var/log/quantum/openvswitch-agent.log on the network node:
ERROR [quantum.agent.linux.ovs_lib] Unable to execute ['ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-2']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-2']
Exit code: 1
Stdout: ''
Stderr: 'ovs-vsctl: cannot create a port named gre-2 because a port named gre-2 already exists on bridge br-tun\n'
2013-05-14 10:34:37 ERROR [quantum.agent.linux.ovs_lib] Unable to execute ['ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-2']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-2']
Exit code: 1
Stdout: ''
Stderr: 'ovs-vsctl: cannot create a port named gre-2 because a port named gre-2 already exists on bridge br-tun\n'

and on the compute node:
ERROR [quantum.agent.linux.ovs_lib] Unable to execute ['ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-1']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'add-port', 'br-tun', 'gre-1']
Exit code: 1
Stdout: ''
Stderr: 'ovs-vsctl: cannot create a port named gre-1 because a port named gre-1 already exists on bridge br-tun\n'
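These "already exists" failures usually mean a stale GRE port left over from an earlier agent run. A hedged sketch of a cleanup, assuming the Ubuntu service name quantum-plugin-openvswitch-agent; it only defines a helper, to be invoked manually on the node that logs the error:

```shell
# Hypothetical cleanup for the "port named gre-N already exists" errors:
# delete the stale port so the agent can recreate it on restart.
cleanup_stale_gre() {
    # usage: cleanup_stale_gre gre-2  (port name from the error message)
    ovs-vsctl --if-exists del-port br-tun "$1"   # --if-exists: safe to re-run
    service quantum-plugin-openvswitch-agent restart
}
```

Whether this is the root cause of the DOWN external port is unconfirmed; it only clears the logged agent error.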

and the controller node seems fine:
/var/log/quantum/server.log
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension port-security not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension service-type not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension lbaas not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension routed-service-insertion not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension flavor not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension router-service-type not supported by any of loaded plugins
2013-05-14 10:33:54 WARNING [quantum.api.extensions] Extension security-group not supported by any of loaded plugins

Does anyone have an idea why the ports in ext_net are not ACTIVE, or how to bring them up?

thanks a lot

Quantum Sudoer

Hi,

Could you please explain exactly what we are supposed to do in this part? Do I need to edit /etc/sudoers? Actually, I don't have "/etc/sudoers/sudoers.d/".

nano /etc/sudoers/sudoers.d/quantum_sudoers

Modify the quantum user

quantum ALL=NOPASSWD: ALL
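For anyone hitting the same question: /etc/sudoers.d/ is the standard sudoers drop-in directory, so the guide's path looks like a typo. A hedged sketch of the intended file, best edited with `visudo -f` so a syntax error cannot lock out sudo:

```
# /etc/sudoers.d/quantum_sudoers  (assumed intended path; the guide's
# /etc/sudoers/sudoers.d/ does not exist on stock Ubuntu)
quantum ALL=NOPASSWD: ALL
```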

Multinode Grizzly setup -> No IP getting assigned for the VM

Hi,

I have setup a multinode grizzly setup as per https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst.

Controller [2 NICs], Network [3 NICs] and Compute [3 NICs]

I did not face any issues till the last step of creating a net/subnet, router etc. On creating my first VM, it's not getting an IP from the pool.
I am using the CLI command:

nova boot --image Ubuntu --flavor m1.small test

nova list

+--------------------------------------+------+--------+----------+
| ID | Name | Status | Networks |
+--------------------------------------+------+--------+----------+
| a4513691-b3df-42b5-94cc-36ff3f00f615 | test | ACTIVE | |
+--------------------------------------+------+--------+----------+

The console log ends with the following:

cloud-init start-local running: Mon, 22 Apr 2013 12:47:38 +0000. up 2.95 seconds
no instance data found in start-local
cloud-init-nonet waiting 120 seconds for a network device.
cloud-init-nonet gave up waiting for a network device.
ci-info: lo : 1 127.0.0.1 255.0.0.0 .
route_info failed

 * Stopping Handle applying cloud-config                        [ OK ]
Waiting for network configuration...
Waiting up to 60 more seconds for network configuration...
Booting system without full network configuration...
 * Stopping Failsafe Boot Delay                                 [ OK ]
 * Starting System V initialisation compatibility               [ OK ]
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
 * Starting AppArmor profiles                                   [ OK ]
landscape-client is not configured, please run landscape-config.
 * Stopping System V initialisation compatibility               [ OK ]
 * Starting System V runlevel compatibility                     [ OK ]
 * Starting ACPI daemon                                         [ OK ]
 * Starting save kernel messages                                [ OK ]
 * Starting regular background program processing daemon        [ OK ]
 * Starting deferred execution scheduler                        [ OK ]
 * Starting CPU interrupts balancing daemon                     [ OK ]
 * Starting automatic crash report generation                   [ OK ]
 * Stopping save kernel messages                                [ OK ]
 * Stopping System V runlevel compatibility                     [ OK ]
 * Starting execute cloud user/final scripts                    [ OK ]

I think it's not even getting the ethX device. How should I troubleshoot the issue?

In Network Node:

ovs-vsctl show

e67977d5-e1d3-41c5-b887-0d96fc3b6a2a
    Bridge br-int
        Port "qr-362ffad5-1e"
            tag: 1
            Interface "qr-362ffad5-1e"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap8742e677-f3"
            tag: 1
            Interface "tap8742e677-f3"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.20.20.53"}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.124.82"}
    ovs_version: "1.4.0+build0"

In Compute Node:

# ovs-vsctl show
88e9142a-41e6-40df-b22d-57c8c92ea761
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.124.82"}
        Port "gre-3"
            Interface "gre-3"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.20.20.52"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.4.0+build0"

Please let me know how to troubleshoot.

quantum agent-list returns empty

Please help us debug this issue.

Logs indicate:

2013-06-07 15:20:59 ERROR [quantum.agent.l3_agent] Failed synchronizing routers
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task
context, router_id)
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 77, in get_routers
topic=self.topic)
File "/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call
return rpc.call(context, self._get_topic(topic), msg, timeout)
File "/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/init.py", line 140, in call
return _get_impl().call(CONF, context, topic, msg, timeout)
File "/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/impl_kombu.py", line 798, in call
rpc_amqp.get_connection_pool(conf, Connection))
File "/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py", line 613, in call
rv = list(rv)
File "/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py", line 562, in iter
raise result
RemoteError: Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and host=network.xxx.com could not be found

VM Not accessing Internet

Hello, I followed this guide and created a Windows VM. Everything is working fine except that my VM cannot access the Internet. I can ping the DHCP server IP, but that's it. My network:router_interface port is showing DOWN. Here is the port-show output:
| admin_state_up | True |
| binding:capabilities | {"port_filter": false} |
| binding:vif_type | bridge |
| device_id | 0d2ce3b8-4221-4dea-b020-e9669b1f018f |
| device_owner | network:router_interface |
| fixed_ips | {"subnet_id": "cef746ce-3b39-4bb2-9c66-7dfdaf18ccc6", "ip_address": "50.50.1.1"} |
| id | f8fcb631-f8b4-4af7-9017-bc8c8a1ec06d |
| mac_address | fa:16:3e:50:f8:c3 |
| name | |
| network_id | 59628211-4497-4824-b7e1-91641c03e1f6 |
| status | DOWN |
| tenant_id | abba6b1274ec446b90dc7beb100d6928 |

I'm not sure what I need to do to get this VM on the Internet. I know that it is receiving its IP address through DHCP. If I run agent-list in quantum, all agents show :-) True.
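A router_interface port that stays DOWN while DHCP works often means the router never got an external gateway, or the L3 agent never picked the router up. A diagnostic sketch (IDs in angle brackets are placeholders):

```shell
# Check whether the router has an external gateway set
quantum router-list
quantum router-show <router-id>

# If external_gateway_info is empty, attach the router to the external network
quantum router-gateway-set <router-id> <ext-net-id>

# On the network node, verify the L3 agent created the router namespace and
# SNAT rules for the tenant subnet
ip netns list
ip netns exec qrouter-<router-id> iptables -t nat -L -n
```

If the qrouter namespace is missing entirely, restart quantum-l3-agent and re-check the agent logs.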

python-quantumclient

When running step 5 I ran into:

Unpacking python-quantumclient (from .../python-quantumclient_2%3a2.2.1.7.g22fd452+git201304040531~precise-0ubuntu1_all.deb) ...
dpkg: error processing /var/cache/apt/archives/python-quantumclient_2%3a2.2.1.7.g22fd452+git201304040531~precise-0ubuntu1_all.deb (--unpack):
 trying to overwrite '/usr/share/pyshared/tests/__init__.py', which is also in package python-boto 2.2.2-0ubuntu2
Errors were encountered while processing:
 /var/cache/apt/archives/python-quantumclient_2%3a2.2.1.7.g22fd452+git201304040531~precise-0ubuntu1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
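This is a known packaging conflict between python-quantumclient and python-boto on Precise: both ship /usr/share/pyshared/tests/__init__.py. One workaround (use with care) is to let dpkg overwrite the conflicting file and then let apt finish the configuration:

```shell
# Force dpkg to overwrite the shared test stub owned by python-boto
sudo dpkg -i --force-overwrite \
  /var/cache/apt/archives/python-quantumclient_2%3a2.2.1.7.g22fd452+git201304040531~precise-0ubuntu1_all.deb

# Let apt repair any remaining dependency state
sudo apt-get -f install
```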

quantum agent-list and quantum router-create error

Hi,

I followed this guide. Everything seems OK except that I cannot use "quantum agent-list" and "quantum router-create". The output of both commands is:

404 Not Found

The resource could not be found.

And the log says: DEBUG [routes.middleware] No route matched for GET /agents.json

Do you have any clue on how to make it work?

Thanks
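A 404 with "No route matched for GET /agents.json" usually means the server never loaded the agent and l3 extensions, so the first thing to check is what the core plugin actually exposes. A quick check, assuming the standard quantum CLI:

```shell
# List the extensions the server's core plugin loaded; agent-list needs the
# "agent" extension and router-create needs the "router" (l3) extension
quantum ext-list

# If they are missing, check which core plugin is enabled on the server
grep -i 'core_plugin' /etc/quantum/quantum.conf
```

A mismatch between an old Folsom-era plugin setting and the Grizzly packages is a common cause here.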

UncompressableFileError

Hi Team ,

I am getting an error when entering credentials in the dashboard. I am using this guide: https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst

I tried three times to recheck, but I keep getting the same error.

Here is the log file:
[Wed Apr 24 05:08:24 2013] [error] [client 192.168.1.121] UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via COMPRESS_URL ('/static/') and can't be compressed
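One common fix for UncompressableFileError is to regenerate the dashboard's static files so django-compressor can actually find horizon.js under COMPRESS_URL. A sketch, assuming the Ubuntu openstack-dashboard package layout:

```shell
# Rebuild Horizon's static assets, then recompress and restart Apache
cd /usr/share/openstack-dashboard
sudo python manage.py collectstatic --noinput
sudo python manage.py compress --force
sudo service apache2 restart
```

If the error persists, check that the Apache user can read /usr/share/openstack-dashboard/static.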

Internal router interface is always DOWN

I followed this guide to install single-node Grizzly on Ubuntu 13.04. Everything went fine until the networking step. The VMs all come up fine and I can ping between VMs (using the console from the Dashboard). I can't SSH into a VM from the host node yet, as I have not set up the external gateway for the router. The internal interface of the router is connected to the subnet to which my other VMs are attached. The status of the internal interface always shows DOWN, and I believe that is the reason I am not able to ping the router interface from any of the VMs. All three agents show alive (dhcp, l3 and OpenVSwitch). I can see that the router is correctly shown in the tenant, and the network topology shows my VM and the router on the subnet 20.20.1.1/24.
Any suggestions?
thanks,
nkmittal

Below is the output of command "port-show":

root@osk-team:~# quantum port-show 8e18e352-2fa9-450e-8112-dd31c7e494a8
+----------------------+----------------------------------------------------------------------------------+
| Field | Value |
+----------------------+----------------------------------------------------------------------------------+
| admin_state_up | True |
| binding:capabilities | {"port_filter": false} |
| binding:vif_type | ovs |
| device_id | 2e977c4d-f65e-4f4c-93b2-1abb50309960 |
| device_owner | network:router_interface |
| fixed_ips | {"subnet_id": "714e085c-dbcb-460a-88ed-a5c6c43712af", "ip_address": "20.20.1.1"} |
| id | 8e18e352-2fa9-450e-8112-dd31c7e494a8 |
| mac_address | fa:16:3e:a0:a4:78 |
| name | |
| network_id | 7fae3018-f2b2-4e78-bd0a-ed3763cd626c |
| status | DOWN |
| tenant_id | f4d4c792225345a4baa6224465706377 |
+----------------------+----------------------------------------------------------------------------------+

Horizon> Error: Unable to retrieve images.

Great job! I am trying to get it to work on a one-node / one-NIC setup (see my fork).
Everything seems to work more or less fine, but under Horizon I cannot see any images. 'glance image-list' lists the image correctly.
Where to look for hints?
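When the glance CLI works but Horizon cannot list images, the usual suspects are the image endpoint registered in Keystone and the dashboard's own error log. A diagnostic sketch, assuming the usual Ubuntu paths:

```shell
# Verify the image endpoint in the service catalog resolves from the
# dashboard host (Glance's API listens on 9292 by default)
keystone endpoint-list | grep 9292

# Show the URL the glance client actually hits, for comparison
glance --debug image-list

# The dashboard logs the underlying exception to the Apache error log
tail -n 50 /var/log/apache2/error.log
```

On a single-NIC setup, an endpoint registered against an address that the dashboard host cannot reach is a frequent cause.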

How to add one more compute node in an OVS_MultiNode environment

Hi Guys,

I followed your OVS_MultiNode OpenStack setup and it is working well for me. I needed to add one more compute node for extra capacity.

I added the compute node and it joined the cluster successfully. I can start instances on the new compute node; I verified this manually by logging into that compute node, where ps shows the kvm process running.

The problem is that Quantum assigns the fixed IP address and the floating IP, but I can't reach those IPs.

For example, I assigned floating IP 192.168.70.157 and it pings.

ping output

64 bytes from 192.168.70.157: icmp_req=165 ttl=64 time=0.597 ms
64 bytes from 192.168.70.157: icmp_req=166 ttl=64 time=0.516 ms
64 bytes from 192.168.70.157: icmp_req=167 ttl=64 time=0.621 ms
64 bytes from 192.168.70.157: icmp_req=168 ttl=64 time=0.468 ms
64 bytes from 192.168.70.157: icmp_req=169 ttl=64 time=0.633 ms

dhanasekaran ~ $ ssh -i test.pem -l ubuntu 192.168.70.157
ssh: connect to host 192.168.70.157 port 22: No route to host

root@controllernode:~# nova list
+--------------------------------------+--------+--------+----------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------+--------+----------------------------------------+
| 3c645492-3ebe-49d0-95c5-0931cb43be3a | ubu | ACTIVE | net_proj_one=50.50.1.2, 192.168.70.157 |
| a25edf39-0408-41b7-acd4-a89239f687b3 | ubuntu | ACTIVE | net_proj_one=50.50.1.4, 192.168.70.161 |
+--------------------------------------+--------+--------+----------------------------------------+

root@controllernode:~# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert controllernode internal enabled :-) 2013-05-29 14:14:39
nova-conductor controllernode internal enabled :-) 2013-05-29 14:14:32
nova-scheduler controllernode internal enabled :-) 2013-05-29 14:14:32
nova-consoleauth controllernode internal enabled :-) 2013-05-29 14:14:39
nova-compute computenode nova enabled :-) 2013-05-29 14:14:40
nova-compute computnodetwo nova enabled :-) 2013-05-29 14:14:37

Please guide me on how to fix this.

-Dhanasekaran.
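Two things are worth checking for a new compute node whose instances ping but refuse SSH: that its OVS agent registered and built the GRE mesh, and that the tenant's security group allows TCP/22. A sketch (the security group name "default" is an assumption):

```shell
# On the controller: the new compute node's Open vSwitch agent must appear alive
quantum agent-list

# On the new compute node: br-tun should show gre ports to the network node
# and the other compute node
ovs-vsctl show

# Ping working but SSH failing can also simply be the security group only
# allowing ICMP; add an ingress TCP/22 rule
quantum security-group-rule-create --protocol tcp --port-range-min 22 \
  --port-range-max 22 --direction ingress default
```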

Not able to connect to internet from the compute node

I am following the multinode setup with three separate networks (isolated by port-based VLAN support on my switch). But when I set up the compute node, I'm not able to connect to the public Internet and therefore cannot use apt-get. Do I need a third interface on the compute node?
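One common workaround during installation, assuming a gateway on the management network can route out, is to give the compute node a temporary default route and DNS over the management NIC; a sketch (all addresses and the interface name are examples):

```shell
# Temporary default route via the management network, just for apt-get
route add default gw 10.10.10.1 eth0
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
```

The route can be removed once the packages are installed; a permanent third NIC is only needed if the compute node must reach the Internet in normal operation.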

Should also include "OpenvSwitch with Vlan" mode

The guide is based on OpenvSwitch in GRE-tunneling mode; however, VLAN mode can be used just as easily and efficiently, with this configuration on the network node:

==/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini==
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
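VLAN mode additionally needs the physnet1 label mapped to a real OVS bridge on every node that carries tenant traffic; a sketch (the bridge name br-eth1 and NIC eth1 are assumptions):

```shell
# In the same ovs_quantum_plugin.ini, tie the physnet label to a bridge:
#   bridge_mappings = physnet1:br-eth1

# Create that bridge and plug the physical NIC into it
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
```

After changing the plugin config, restart quantum-plugin-openvswitch-agent on each node.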

Cinder can't create a volume or access old volumes

Hi Guys,

I need your help! Last week I upgraded Folsom to Grizzly. Everything was perfect except for cinder.

It can't create new volumes and can't access the old ones either.

Here's my cinder.conf

rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:[email protected]/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=iscsiadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_target_prefix=iqn.2010-10.org.openstack:
iscsi_ip_address=192.168.4.76


Here's my cinder-volume log when i try to create a volume

2013-06-25 09:41:55 ERROR [cinder.volume.manager] volume volume-b679ad36-9e22-4d01-87f2-84f70d6ef4a4: create failed
2013-06-25 09:41:55 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)
File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
return getattr(proxyobj, method)(ctxt, **kwargs)
File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 283, in create_volume
LOG.error(_("volume %s: create failed"), volume_ref['name'])
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 275, in create_volume
model_update = self.driver.create_export(context, volume_ref)
File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 489, in create_export
chap_auth)
File "/usr/lib/python2.7/dist-packages/cinder/volume/iscsi.py", line 225, in create_iscsi_target
self._new_target(name, tid, **kwargs)
File "/usr/lib/python2.7/dist-packages/cinder/volume/iscsi.py", line 284, in _new_target
**kwargs)
File "/usr/lib/python2.7/dist-packages/cinder/volume/iscsi.py", line 73, in _run
self._execute(self._cmd, *args, run_as_root=True, **kwargs)
File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 190, in execute
cmd=' '.join(cmd))
ProcessExecutionError: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf ietadm --op new --tid=3 --params Name=iqn.2010-10.org.openstack:volume-b679ad36-9e22-4d01-87f2-84f70d6ef4a4
Exit code: 239
Stdout: ''
Stderr: 'File exists.\n'


Your help will be appreciated! Thanks.
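"File exists" from ietadm usually means a target with that tid or name is still registered from the Folsom install. Note also that iscsi_helper=iscsiadm in the pasted cinder.conf looks like a typo: iscsiadm is the initiator-side tool, and the valid cinder helpers are tgtadm or ietadm. A cleanup sketch, assuming the iscsitarget package:

```shell
# List the targets iet currently knows about
cat /proc/net/iet/volume

# Remove the stale target that collides (tid=3 matches the error above)
ietadm --op delete --tid=3

# Then restart the volume service so the create is retried cleanly
service cinder-volume restart
```

Alternatively, setting iscsi_helper = tgtadm in cinder.conf (with the tgt service installed) sidesteps the stale iet state entirely.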

network node is failing

Hello

I'm pretty new to quantum, but I have deployed several versions of OpenStack.

I ran into a problem when trying to ping the floating IP of my instance; it didn't work.

The VNC console shows host (169.254.169.254): Network is unreachable.

Anyway, I dug deeper and found that the network node is failing; most of the services don't work: the dhcp_agent, the l3_agent...

here are the outputs:

the dhcp agent: http://pastebin.com/Jsk25j0q
l3_agent: http://pastebin.com/wEKKvCFT
openvswitch: http://pastebin.com/pmMD6u5x
