
puppetlabs-openstack's Introduction

#puppetlabs-openstack

Puppet Labs Reference and Testing Deployment Module for OpenStack.

Version 5.0 / 2014.2 / Juno

####Table of Contents

  1. Overview - What is the puppetlabs-openstack module?
  2. A Note on Versioning
  3. Module Description - What does the module do?
  4. Setup - The basics of getting started with OpenStack
  5. Usage - Configuration and customization options
  6. Reference - An under-the-hood peek at what the module is doing
  7. Limitations - OS compatibility, etc.
  8. License

##Overview

The puppetlabs-openstack module is used to deploy a multi-node, all-in-one, or swift-only installation of OpenStack Juno.

###Getting Help

If you need help configuring your puppet master or your puppet agents, or any help understanding puppet or hiera, please review puppet's guide to getting help.

If your puppet ran successfully but OpenStack is not working as expected, please visit OpenStack's getting started guide to find help operating OpenStack.

If puppet did not run successfully, examine what resources failed the puppet run. Please keep in mind that this module is a thin wrapper around the OpenStack Puppet Modules, so problems that arise are often due to bugs in the main modules rather than this one. If you are having trouble with one of the OpenStack Puppet Modules, first check these resources for getting help, and if necessary file a bug for the particular module. If you believe the problem is in the puppetlabs-openstack module, file an issue with the Puppet Labs ticket tracker.

Similarly, if after seeking help from OpenStack's getting started guide you believe puppet has caused a misconfiguration, file a launchpad bug for the particular module causing the issue or for the puppetlabs-openstack module.

##Versioning

This module has been given version 5 to track the puppet-openstack modules. The versioning for the puppet-openstack modules is as follows:

Puppet Module :: OpenStack Version :: OpenStack Codename
2.0.0         -> 2013.1.0          -> Grizzly
3.0.0         -> 2013.2.0          -> Havana
4.0.0         -> 2014.1.0          -> Icehouse
5.0.0         -> 2014.2.0          -> Juno

##Module Description

Using the stable/juno branch of the puppet-openstack modules, puppetlabs-openstack allows for the rapid deployment of an OpenStack Juno installation. For a multi-node deployment, up to six types of nodes are used:

  • A controller node that hosts databases, message queues and caches, and most API services.
  • A storage node that hosts volumes, image storage, and the image storage API.
  • A network node that provides L2 and L3 network services as well as DHCP.
  • A compute node to run guest operating systems.
  • Optional object storage nodes to host an object/blob store.
  • An optional Tempest node to test your deployment.

The all-in-one deployment sets up all of the services except Swift on a single node, including Tempest for testing.

The Swift deployment sets up:

  • A controller node that hosts databases, message queues and caches, and the Swift API.
  • Three storage nodes in different Swift Zones.

##Setup

###Setup Requirements

This module assumes nodes running a Red Hat 7 variant (RHEL, CentOS, or Scientific Linux) or Ubuntu 14.04 (Trusty) with either Puppet Enterprise or Puppet.

Each node needs a minimum of two and a maximum of four network interfaces. The network interfaces are divided into two groups.

  • Public interfaces:
    • API network.
    • External network.
  • Internal interfaces:
    • Management network.
    • Data network.

This module has been tested with Puppet 3.5 and Puppet Enterprise. It depends upon Hiera. Object store support (Swift) depends upon exported resources and PuppetDB.

###Beginning with OpenStack

To begin, you will need to do some basic setup on the compute nodes. SELinux needs to be disabled on the compute nodes to give OpenStack full control over the KVM hypervisor and other necessary services. This is the only node type on which SELinux needs to be disabled.
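
For reference, here is a minimal sketch of how SELinux could be disabled on a compute node with Puppet itself. This is not something puppetlabs-openstack manages for you; the resource titles are illustrative, and file_line comes from puppetlabs-stdlib, which this module already depends on.

# Illustrative only: persistently disable SELinux on a compute node.
# The change takes full effect after the reboot recommended later in this document.
file_line { 'selinux-config-disabled':
  path  => '/etc/selinux/config',
  line  => 'SELINUX=disabled',
  match => '^SELINUX=',
}

# Optionally drop enforcement immediately, without waiting for the reboot.
# Note: this exec re-applies on every run while SELinux remains enabled.
exec { 'setenforce-permissive':
  command => '/usr/sbin/setenforce 0',
  onlyif  => '/usr/sbin/selinuxenabled',
}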

Additionally, you need to know the network address ranges for all four of the public and internal networks, and the specific IP addresses of the controller node and the storage node. Keep in mind that your public networks can overlap with one another, as can the internal networks.

The examples directory contains Vagrantfiles with CentOS 7 boxes to test out all-in-one, multi-node, or swift-only deployments.

##Usage

###Hiera Configuration

The first step to using the puppetlabs-openstack module is to configure Hiera with settings specific to your installation. The examples directory of this module contains sample common.yaml (for multi-node) and allinone.yaml (for all-in-one) files with all of the settings required by this module, as well as an example user and networks to test your deployment with. These configuration options include network settings, the locations of specific nodes, and passwords for Keystone and the databases. If any of these settings are undefined or improperly set, your deployment may fail.
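
For orientation only, here is a trimmed-down sketch of what such a Hiera file can look like. The keys shown are ones that appear elsewhere in this document, the values are placeholders, and the authoritative, complete set of keys lives in the shipped common.yaml and allinone.yaml examples.

# Fragment only -- placeholder values; consult examples/common.yaml for the full key list.
openstack::network::external::ippool::start: '192.168.1.100'
openstack::network::external::ippool::end: '192.168.1.200'

openstack::storage::address::api: '192.168.1.154'
openstack::storage::address::management: '200.200.200.154'

openstack::neutron::password: 'an-example-secret'
openstack::neutron::shared_secret: 'another-example-secret'
openstack::neutron::core_plugin: 'ml2'
openstack::neutron::service_plugins: ['router','firewall','lbaas','vpnaas','metering']

openstack::mysql::service_password: 'an-example-database-password'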

###Controller Node

For your controller node, you need to assign your node the controller role. For example:

node 'control.localdomain' {
  include ::openstack::role::controller
}

It's important to apply this configuration to the controller node before any of the other nodes. The other nodes depend upon the services and databases set up on the controller node.
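
For example, with agents pointed at a puppet master and the hostnames used in this document, the ordering looks like this (an illustrative sketch only):

# 1. On control.localdomain, apply the controller role first:
puppet agent --test

# 2. Only after that run completes, run the agent on the storage,
#    network, and compute nodes to apply their roles:
puppet agent --test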

###Other Nodes

For the remaining nodes, there are roles to assign to each. For example:

node 'storage.localdomain' {
  include ::openstack::role::storage
}

node 'network.localdomain' {
  include ::openstack::role::network
}

node /compute[0-9]+.localdomain/ {
  include ::openstack::role::compute
}

For this deployment, it's assumed that there is only one storage node and one network node. There may be multiple compute nodes.

After applying the configuration to the controller node, apply the remaining configurations to the worker nodes.

You will need to reboot all of the nodes after installation to ensure that the kernel module that provides network namespaces, required by Open vSwitch, is loaded.
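
After the reboot, a quick way to confirm that network namespace support is present (an illustrative check, not part of the module) is to create and remove a throwaway namespace:

# All three commands should succeed without errors if namespace support is available.
ip netns add ns-test
ip netns list
ip netns delete ns-test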

###Object Store Nodes

Begin by setting up PuppetDB. The easiest way to do this is to use the module provided by Puppet Labs. The module only needs to be installed on the master. See the puppet node configuration in the multinode or swift site.pp.
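
As a minimal sketch, assuming the puppetlabs-puppetdb module, the master's node definition can look like the following; the hostname is a placeholder, and the module's own documentation covers the supported parameters.

node 'puppet.localdomain' {
  # Installs and configures the PuppetDB service and its database backend.
  class { '::puppetdb': }

  # Points this master at PuppetDB for storeconfigs/exported resources.
  class { '::puppetdb::master::config': }
}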

You will need to create three nodes as object stores for Swift, assigning three zones:

node /swift[0-9]+zone1.localdomain/ {
  class { '::openstack::role::swiftstorage':
    zone => '1',
  }
}

node /swift[0-9]+zone2.localdomain/ {
  class { '::openstack::role::swiftstorage':
    zone => '2',
  }
}

node /swift[0-9]+zone3.localdomain/ {
  class { '::openstack::role::swiftstorage':
    zone => '3',
  }
}

Because of the use of exported resources, puppet will need multiple runs to converge. First run the Puppet Agent on all of the Swift nodes, which will build out the basic storage and store the exported resource information in PuppetDB. Then run the agent on the control node, which will build out the ring files required by Swift. Finally, run Puppet against the Swift storage nodes again to copy the ring files over and successfully start the Swift services.
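
In practical terms, and with illustrative hostnames, the convergence sequence looks roughly like this:

# 1. First pass on every Swift storage node: builds out the basic storage and
#    exports the ring device information to PuppetDB.
puppet agent --test   # on swift1zone1, swift1zone2, swift1zone3

# 2. On the controller: collects the exported resources and builds the ring files.
puppet agent --test   # on control

# 3. Second pass on the storage nodes: copies the ring files over and starts the Swift services.
puppet agent --test   # on swift1zone1, swift1zone2, swift1zone3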

##Reference

The puppetlabs-openstack module is built on the 'Roles and Profiles' pattern. Every node in a deployment is assigned a single role. Every role is composed of some number of profiles, which ideally should be independent of one another, allowing for composition of new roles. The puppetlabs-openstack module does not strictly adhere to this pattern, but should serve as a useful example of how to build profiles from modules for customized and maintainable OpenStack deployments.
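
To illustrate the pattern (this is a hypothetical role, not a verbatim copy of the module's code), a role is simply a class that includes a set of profile classes; the profile names below are ones referenced elsewhere in this document.

# A node receives exactly one role; each profile wraps one piece of the stack.
class openstack::role::custom_controller {
  include ::openstack::profile::rabbitmq
  include ::openstack::profile::glance::api
  include ::openstack::profile::neutron::server
}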

##Limitations

  • High availability and SSL-enabled endpoints are not provided by this module.

##License

Puppet Labs OpenStack - A Puppet Module for a Multi-Node OpenStack Juno Installation.

Copyright (C) 2013, 2014 Puppet Labs, Inc. and Authors

Original Author - Christian Hoge

Puppet Labs can be contacted at: [email protected]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

puppetlabs-openstack's People

Contributors

alokjani, arnoudj, beddari, bunchc, cmurphy, croomes, emilienm, enelen, guessi, guimaluf, hogepodge, hunner, jcfischer, juniorsysadmin, ladynamedlaura, mosibi, oscerd, sandlbn, taiojia, tfhartmann, thierrymarianne


puppetlabs-openstack's Issues

openstack::profile::nova::compute installs all the controller services too

Hi,

I'm trying to deploy a compute node with puppetlabs-openstack.
I just instantiate the class openstack::profile::nova::compute, but the agent installs all of the nova services.
Is there a reason for this behaviour?

NOTE: is_controller = false

thanks

Ale
root@openstack-compute-01:~# puppet agent --server puppet --onetime --no-daemonize --verbose --environment cloud
info: Caching catalog for openstack-compute-01.ba.infn.it
info: /Service[libvirt]: Provider upstart does not support features enableable; not managing attribute enable
info: Applying configuration version '1409919489'
notice: /Stage[main]/Nova::Compute::Libvirt/Package[nova-compute-kvm]/ensure: created
notice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Package[nova-consoleauth]/ensure: created
info: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Package[nova-consoleauth]: Scheduling refresh of Service[nova-consoleauth]
notice: /Stage[main]/Nova::Compute::Libvirt/Package[libvirt]/ensure: created
notice: /Stage[main]/Nova/Package[nova-common]/ensure: created
notice: /Stage[main]/Nova/File[/var/log/nova]/group: group changed 'adm' to 'nova'
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Package[nova-compute]/ensure: created
info: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Package[nova-compute]: Scheduling refresh of Service[nova-compute]
notice: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Package[nova-objectstore]/ensure: created
info: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Package[nova-objectstore]: Scheduling refresh of Service[nova-objectstore]
notice: /Stage[main]/Openstack::Profile::Nova::Compute/File[/etc/libvirt/qemu.conf]/mode: mode changed '0600' to '0644'
info: /Stage[main]/Openstack::Profile::Nova::Compute/File[/etc/libvirt/qemu.conf]: Scheduling refresh of Service[libvirt]
notice: /Service[libvirt]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Package[nova-conductor]/ensure: created
info: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Package[nova-conductor]: Scheduling refresh of Service[nova-conductor]
notice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Package[nova-vncproxy]/ensure: created
info: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Package[nova-vncproxy]: Scheduling refresh of Service[nova-vncproxy]
notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: created
info: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]: Scheduling refresh of Service[nova-scheduler]
notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Package[nova-api]/ensure: created
info: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Package[nova-api]: Scheduling refresh of Service[nova-api]
notice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/enable: enable changed 'false' to 'true'
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Package[nova-cert]/ensure: created
info: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Package[nova-cert]: Scheduling refresh of Service[nova-cert]
notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]/ensure: ensure changed 'running' to 'stopped'
notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]: Triggered 'refresh' from 1 events
notice: Finished catalog run in 127.56 seconds

openstack-dashboard: 404

Hi,
Deployed puppetlabs-openstack Icehouse using Puppet. I am having issues getting the OpenStack dashboard services up and running.

== Horizon service ==
openstack-dashboard: 404

a. After the full installation, I notice

> rpm -V openstack-dashboard

missing c /etc/httpd/conf.d/openstack-dashboard.conf
SM5....T. c /etc/openstack-dashboard/local_settings
.M....... /var/log/horizon

Then I copied
cp /etc/httpd/conf.d/15-horizon_vhost.conf openstack-dashboard.conf

Now I was able to restart the httpd service without any issues, but the dashboard is still stuck in "404" error mode.

I thank you in advance for all the help. Can someone help me with some ideas?

Thank you
Chakri

Errors when installing neutron

Hi,

I've attempted both allinone and multinode but keep getting Errors relating to Neutron when puppet attempts to apply the catalog, resulting in a broken install. A couple of quick points to give more context and then I'll share the errors.

*environment is Openstack Icehouse: attempting to deploy onto instances within a test tenant.
*using ubuntu 14.04 uec images
*2 network setup: API and External networks are network 'A', and management and data are combined on network 'B'.

Can also say that I expected the network node install to fail since I saw a Neutron related error during the control install, but still thought I should see it through in case of false alarm.

Since I haven't successfully deployed OpenStack with this module yet, I can't discount the possibility that I'm simply making a configuration mistake, though I have double- and triple-checked my Hiera YAML file and couldn't find any problems.

Whether this is a bug or simply a configuration mistake, any help is appreciated!

_Errors during control node install_
Error: /Stage[main]/Neutron::Plugins::Ml2/File_line[/etc/default/neutron-server:NEUTRON_PLUGIN_CONFIG]: Could not evaluate: No such file or directory - /etc/default/neutron-server

Info: Concat[/etc/swift/proxy-server.conf]: Scheduling refresh of Service[swift-proxy]
Error: Could not start Service[swift-proxy]: Execution of '/sbin/start swift-proxy' returned 1: start: Job failed to start
Wrapped exception:
Execution of '/sbin/start swift-proxy' returned 1: start: Job failed to start
Error: /Stage[main]/Swift::Proxy/Service[swift-proxy]/ensure: change from stopped to running failed: Could not start Service[swift-proxy]: Execution of '/sbin/start swift-proxy' returned 1: start: Job failed to start

Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Dependency File_line[/etc/default/neutron-server:NEUTRON_PLUGIN_CONFIG] has failures: true
Warning: /Stage[main]/Neutron::Server/Service[neutron-server]: Skipping because of failed dependencies

_Errors during network node install_

Error: /Stage[main]/Neutron::Plugins::Ml2/File_line[/etc/default/neutron-server:NEUTRON_PLUGIN_CONFIG]: Could not evaluate: No such file or directory - /etc/default/neutron-server

...and then I completely lose network after it attempts this:

Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Vs_bridge[br-int]/ensure: created

Looking at the system console, I can see that after this step it attempts to install various packages and utterly fails, since there is no network connectivity...

Vagrant with Libvirt issue

Hi,

I am currently trying to use Vagrant (1.6.3) with libvirt (vagrant-libvirt 0.0.8) using a CentOS box.

Frequently, as I have been working, an error has arisen from my .vagrant.d directory:

_$HOME/.vagrant.d/gems/gems/vagrant-libvirt-0.0.12/lib/vagrant-libvirt/action.rb:28:in `block (2 levels) in action_up’: uninitialized constant VagrantPlugins::ProviderLibvirt::Action::NFS (NameError)_

This issue is documented as a problem for Vagrant 1.4.0, with a suggested solution of rolling back to 1.3.5 (https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/).

I have started from scratch with a new user, and this same problem has arisen. In addition to the vagrant-libvirt (0.0.8) plugin, I am using vagrant-hostmanager.

Missing file - No such file or directory - /etc/puppet/modules/openstack/examples/swift/Puppetfile

When I try to install this module, it reports a missing file, i.e. -

puppet module install puppetlabs-openstack --debug
...
Notice: Installing -- do not interrupt ...
Error: No such file or directory - /etc/puppet/modules/openstack/examples/swift/Puppetfile
Error: Try 'puppet help module install' for usage

Same with trying to do a local install -

puppet module install ./puppetlabs-openstack-4.1.0.tar.gz --debug --force
Notice: Installing -- do not interrupt ...
Error: No such file or directory - /etc/puppet/modules/openstack/examples/swift/Puppetfile
Error: Try 'puppet help module install' for usage

The file is a link -
lrwxrwxrwx 1 501 games 16 Aug 4 10:07 examples/swift/Puppetfile -> ../../Puppetfile

The file exists -
-rw-r--r-- 1 501 games 3105 Jul 16 18:05 Puppetfile

head -5 Puppetfile
forge "http://forge.puppetlabs.com"

# The core OpenStack modules

mod "keystone",

Tried reverting back to the 4.0.0 release, almost the same error but different directory -

puppet module install puppetlabs-openstack --version 4.0.0 --debug
Notice: Installing -- do not interrupt ...
Error: No such file or directory - /etc/puppet/modules/openstack/examples/allinone/Puppetfile
Error: Try 'puppet help module install' for usage

Not sure where to go from here.
Thanks for the help in advance.

Where to configure OpenStack version?

I see that there are 3.x and 4.x versions, corresponding to Havana and Icehouse, respectively. But I don't find a place to configure which version to install. Should I configure the system installation repo myself? Say, if I'd like to install Havana, should I first point the repo to an openstack-havana repo and then run puppet?

About deployment

I have a question about how to configure the module.
Step 1: I install the module using the following command:
puppet module install puppetlabs-openstack
Step 2: I edit the configuration file to deploy multi-node:
vi examples/common.yaml
Step 3: Deploy a controller node just like this:

node 'control.localdomain' {
  include ::openstack::role::controller
} 

Did I do it right? Do I also need other settings?

openstack VM not getting IP

I am pretty confident in OpenStack cluster technology, as I configure and maintain a multi-node cluster built manually and using Packstack. Works like a champ. Now I am working on creating a full multinode cluster using the puppetlabs/openstack modules. Deployed the manifest to our controller + 2 compute nodes. Verified there were no errors.

Was able to log into the dashboard and create the external network, project-specific network, and router, edit security rules, import keys, etc. Verified the network topology map.

Created a VM using the default CirrOS image, which landed on compute1. For some reason the VM is not getting any IP. Verified the dhcp-agent service is running and all 4 neutron services are running on the controller. Need some help in resolving this road block. Thank you in advance for any help.

Using :
puppetlabs/openstack version 3.0.0 with all recommended dependency modules satisfied.

Few network level checks:

  • on the compute node "tcpdump -i tap-interface"

[root@compute1 autostart]# tcpdump -i tap085b3af5-84
tcpdump: WARNING: tap085b3af5-84: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap085b3af5-84, link-type EN10MB (Ethernet), capture size 65535 bytes
15:24:33.984933 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:c1:32:38 (oui Unknown), length 280

About HA

Can I deploy a highly available environment when using this module?

Juno - Glance is not correctly configured

Hi,
I have seen that Glance is not properly configured with version 5.1 of this module.
Indeed, I have to change the default identity_uri = http://127.0.0.1:35357 to identity_uri = http://my.keystone.server:35357 in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf on Ubuntu 14.04.
If I rerun puppet, no change occurs, so I guess it's a bug or an oversight.

Regards.

Duplicate resources definition

Hi,
I'm trying to use this puppet module to install Openstack trying to run a very minimal configuration following the examples.

I've setup a puppet master and installed the agent on a node that will be the controller (Ubuntu 14.04.1 Trusty). When I run puppet on the controller node I get:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Keystone_user[admin] is already declared in file /etc/puppet/modules/keystone/manifests/roles/admin.pp:57; cannot redeclare at /etc/puppet/modules/openstack/manifests/resources/user.pp:14 on node controller-dev.bf.vertulabs.co.uk

The puppet definition just contains:

node 'controller-dev.example.com' {
  include ::ntp
  include ::openstack::role::controller
}

Am I doing something wrong?

Thanks

Duplicate declaration: Glance_api_config[DEFAULT/image_cache_dir] under Ubuntu 14.04

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty
$ sudo puppet apply -v /etc/puppet/manifests/site.pp
[...]
Error: Duplicate declaration: Glance_api_config[DEFAULT/image_cache_dir] is already declared in file /etc/puppet/modules/glance/manifests/api.pp:270; cannot redeclare at /etc/puppet/modules/openstack/manifests/profile/glance/api.pp:57 on node cloud-script-disposable-vm.openstacklocal
Error: Duplicate declaration: Glance_api_config[DEFAULT/image_cache_dir] is already declared in file /etc/puppet/modules/glance/manifests/api.pp:270; cannot redeclare at /etc/puppet/modules/openstack/manifests/profile/glance/api.pp:57 on node cloud-script-disposable-vm.openstacklocal

package ensure => does not work

Hi,

dealing with openstack/manifests/profile/cinder/volume.pp, we found that
"package_ensure => true" does not install the package. Of course we tried to set it this way, but it didn't take effect.
"""
cinder::volume::package_ensure: latest

root@puppet:~# hiera -c /etc/puppet/hiera.yaml cinder::volume::package_ensure ::fqdn=openstack-controller.ba.infn.it ::environment=cloud
latest
"""

We suppose it should be set at least to "package_ensure => present" in order to let puppet install it, or a parameterised version could be better.

thanks in advance
Alessandro

need help

Hi,
I request your help. I need the Puppet files which would install OpenStack Juno on Ubuntu 14.04.
I am new to everything, please guide me.

Can not find openstack::mysql::service_password in hiera

This is doing a hiera lookup from openstack/manifests/resources/connectors.pp:3

I configured hiera with a foo password and of course this just led down a rabbit hole to another missing hiera data key.

This could be a configuration issue on my end as I ran this code earlier today on another VM and I didn't encounter this issue. However, I'm not sure where the hiera data is supposed to come from since there isn't documentation anywhere in the module about configuring hiera data sources for it.

exited with 128: fatal: No such remote 'cache'

Hi

I got this error when I tried the multinode example with VMware Fusion + Vagrant.
Can we ignore this one?

[R10K::TaskRunner - ERROR] Task #R10K::Task::Module::Sync:0x007ff7d2150960 failed while running: Command git --git-dir /Users/nueno/work/puppetlabs-openstack/examples/multinode/modules/apt/.git --work-tree /Users/nueno/work/puppetlabs-openstack/examples/multinode/modules/apt remote set-url cache /Users/nueno/.r10k/git/git---github.com-puppetlabs-puppetlabs-apt exited with 128: fatal: No such remote 'cache'

Can't install 4.0.0-devel

Hi,
I am trying to install the latest development version:
puppet module install puppetlabs-openstack --version=4.0.0-devel
I get this error:
Error: Could not install module 'puppetlabs-openstack' (v4.0.0-devel)
No version of 'puppetlabs-openstack' will satisfy dependencies
You specified 'puppetlabs-openstack' (v4.0.0-devel)
Use puppet module install --force to install this module anyway

Cloning into /etc/puppet/modules/openstack and then running "librarian-puppet install --path ../"
doesn't work either (Could not resolve the dependencies.).

How to install the devel version?

Linux Bridge Plugin?

I know the puppet openstack module uses OVS as its default mechanism driver/plugin. Is there a way to use a Linux bridge instead of the default OVS mechanism driver/plugin with the puppet openstack module?

Thanks for any input,

Error starting neutron service, using all-in-one, CentOS 7

After fixing the dependency on MariaDB in place of MySQL on CentOS 7, I successfully installed puppetlabs-openstack. However, the neutron service does not start (message in neutron/server.log: "Table 'neutron.ml2_gre_allocations' doesn't exist").

I'm a puppet virgin, but I know openstack. It looks as if puppet either didn't do the dbsync, or maybe did it before the neutron config file(s) was complete.

My install config is very simple - I just changed the IP interface networks and addresses in allinone.yaml and reference allinone.yaml via hiera.yaml:

[root@puppet ~]# cat /etc/puppet/hiera.yaml

---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - allinone

Any advice gratefully received

Info: Resolving dependencies ...

Hi,
I want to install your module on my puppet server, but I encounter an error that I don't know how to resolve.
When I run puppet module install --verbose puppetlabs-openstack I get:

Notice: Preparing to install into /etc/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Info: Resolving dependencies ...

I have 2 CPU cores and one of them is at 100% usage.
I tried installing some other random modules and they work fine.
Do you have a solution, please?

EDIT:
With debug On :

root@puppetServer:~# puppet module install --verbose --debug puppetlabs-openstack
Debug: Runtime environment: puppet_version=3.7.5, ruby_version=1.9.3, run_mode=user, default_encoding=UTF-8
Notice: Preparing to install into /etc/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-openstack
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-glance
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-inifile
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-keystone
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-keystone&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-apache
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-apache&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-stdlib
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-stdlib&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-concat
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-firewall
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-firewall&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=ripienaar-concat
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-mysql
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-mysql&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=nanliu-staging
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=bodepd-create_resources
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=cprice404-inifile
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-apt
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-horizon
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=saz-memcached
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-nova
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=dprince-qpid
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=duritong-sysctl
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-cinder
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-rabbitmq
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=garethr-erlang
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stahnma-epel
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-prosvc_repo
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-swift
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-rsync
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-xinetd
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=saz-ssh
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-neutron
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-vswitch
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-ceilometer
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-heat
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-quantum
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-ceilometer
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-keystone
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-openstacklib
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=aimonb-aviator
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-cinder
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-glance
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-heat
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-horizon
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-neutron
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-nova
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-vswitch
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-swift
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=stackforge-tempest
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-vcsrepo
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-mongodb
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-ntp
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-ntp&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-tempest
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=ekarlso-quantum
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=ekarlso-puppet_quantum
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Info: Resolving dependencies ...

EDIT:
I think it's puppetlabs-mysql that is causing all this trouble.
I have version 3.3.0 and your module is only compatible with puppetlabs/mysql (>=2.2.0 <3.0.0).
Do you plan to update this requirement soon?

Regards.

multiple floating ip pools

Is there a way to create multiple floating IP pools using Hiera?

openstack::network::external::ippool::start: 192.168.1.100
openstack::network::external::ippool::end: 192.168.1.200

Failed dependencies when installing controller mode

I have encountered the following error when using a controller node (openstack::role::controller) on Debian Wheezy 7.5.0:

nova-common (= 2012.1.1-18) but 2014.1-8~bpo70+1 is to be installed

INFO interface: info: Warning: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: 
Skipping because of failed dependencies

Error: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: 
Failed to call refresh: Could not find init script for 'nova-objectstore'
Error: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: 
Could not find init script for 'nova-objectstore'

Any thoughts about how to select the valid package (the most recent one, maybe)?

Firewalling data network

In a multinode environment, the default setting also blocks the data network, so the compute node and the network node cannot talk on the data network and VMs were not able to get an IP. I added the following to openstack/manifests/profile/firewall/post.pp:
->
firewall { '9100 - Accept all vm network traffic':
  proto  => 'all',
  state  => ['NEW'],
  action => 'accept',
  source => $::openstack::config::network_data,
}

VMs started getting IP addresses after that.
Cheers
Kashif

IPAddr.new blows up

Trying to rerun an allinone manifest on a host with currently provisioned VMs results in a lot of the following error:

Error: /Firewall[8004 - Heat API]: Could not evaluate: Invalid address from IPAddr.new: FA:16:3E:FC:F9:CB

Looking around, this appears to be the MAC address of a VM on the host:

$ iptables --list -n  | grep 'FA:16:3E:FC:F9:CB'
RETURN     all  --  172.16.0.3           0.0.0.0/0           MAC FA:16:3E:FC:F9:CB 

Is there a simple parsing error happening here?

Cheers

How to add multiple storage servers

I'm trying to add multiple storage servers and I'm getting the error below.

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: invalid address at /etc/puppet/modules/openstack/manifests/profile/cinder/volume.pp:4 on node cinder2.themicro.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Line number 4.
4 $management_address = ip_for_network($management_network)

Here are my fixed IPs for the storage server. Is it possible to add multiple IPs in the common.yaml file?
openstack::storage::address::api: '192.168.1.154'
openstack::storage::address::management: '200.200.200.154'

Juno Installation

I tried to configure/install Juno on CentOS 6.6 and 7 and I'm getting a lot of errors. I'm not sure if these errors are expected. Errors are below:

  1. When I tried to configure the controller server with CentOS 7, I got the following errors.

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Error from DataBinding 'hiera' while looking up 'openstack::tempest_username_admin': syntax error on line 108, col 85: `openstack::neutron::service_plugins: ['router','firewall','lbaas','vpnaas','metering']' on node controller.example.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Here is my common.yaml config for neutron.
openstack::neutron::password: 'whi-rtuz'
openstack::neutron::shared_secret: 'by-sa-bo'
openstack::neutron::core_plugin: 'ml2'
openstack::neutron::service_plugins: ['router','firewall','lbaas','vpnaas','metering']

I've downloaded the most up-to-date modules, and as per the puppet commits this bug has been fixed:
https://github.com/puppetlabs/puppetlabs-openstack/commits/stable/juno

Local ip for OVS must be set

Hi,
I am installing a multinode setup on RHEL 6.5 with the neutron server running on the controller node and a separate machine for the other network services. It failed with this error:
Local ip for ovs agent must be set when tunneling is enabled at <MY_MODULE_PATH>/modules/neutron/manifests/agents/ml2/ovs.pp:107

I can see that the neutron server in profile/neutron/server.pp includes openstack::common::ovs, which in turn creates neutron::agents::ml2::ovs.
I don't understand why a controller server needs the OVS agent, or maybe I am getting it wrong.
Thanks

Many errors when deploying on controller node

Hi,
I tried to deploy OpenStack on a node with your puppet module, but I got a lot of errors.
I installed your module with puppet module install puppetlabs-openstack, but I also get another error with git clone.
Here are the error messages:

root@server2:~# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts


Info: Caching catalog for server2
Warning: /Apt_key[Add key: 056E8E56 from Apt::Source rabbitmq]: The id should be a full fingerprint (40 characters), see README.
Warning: /Apt_key[Add key: 9ECBEC467F0CEB10 from Apt::Source downloads-distro.mongodb.org]: The id should be a full fingerprint (40 characters), see README.
Info: Applying configuration version '1432799811'
Notice: /Stage[main]/Cinder::Scheduler/Service[cinder-scheduler]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Cinder::Scheduler/Service[cinder-scheduler]: Unscheduling refresh on Service[cinder-scheduler]
Error: Execution of '/usr/sbin/rabbitmqctl add_user openstack pose-vix' returned 2: Creating user "openstack" ...
Error: user_already_exists: openstack
Error: /Stage[main]/Openstack::Profile::Rabbitmq/Rabbitmq_user[openstack]/ensure: change from absent to present failed: Execution of '/usr/sbin/rabbitmqctl add_user openstack pose-vix' returned 2: Creating user "openstack" ...
Error: user_already_exists: openstack
Error: Could not start Service[swift-proxy]: Execution of '/sbin/start swift-proxy' returned 1: start: Job failed to start
Error: /Stage[main]/Swift::Proxy/Service[swift-proxy]/ensure: change from stopped to running failed: Could not start Service[swift-proxy]: Execution of '/sbin/start swift-proxy' returned 1: start: Job failed to start
Notice: /Stage[main]/Openstack::Profile::Rabbitmq/Rabbitmq_user_permissions[openstack@/]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Rabbitmq/Rabbitmq_user_permissions[openstack@/]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Anchor[nova-start]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Anchor[nova-start]: Skipping because of failed dependencies
Notice: /Package[nova-common]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Package[nova-common]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_auth_url]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_auth_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/network_api_class]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/network_api_class]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/debug]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/debug]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/log_dir]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/log_dir]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/amqp_durable_queues]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/amqp_durable_queues]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vncserver_proxyclient_address]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vncserver_proxyclient_address]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_hosts]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_hosts]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/notify_api_faults]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/notify_api_faults]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_username]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_username]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_certfile]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_certfile]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/memcached_servers]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/memcached_servers]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_use_ssl]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_use_ssl]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_tenant_name]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_tenant_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/notification_driver]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/notification_driver]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Objectstore/Nova_config[DEFAULT/s3_listen]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Objectstore/Nova_config[DEFAULT/s3_listen]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_ca_file]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_ca_file]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_host]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_host]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_userid]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_userid]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_ha_queues]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_ha_queues]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/network_device_mtu]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/network_device_mtu]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_password]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_password]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/verbose]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/verbose]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/File[/var/log/nova]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/File[/var/log/nova]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute::Neutron/Nova_config[libvirt/vif_driver]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute::Neutron/Nova_config[libvirt/vif_driver]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/security_group_api]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/security_group_api]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[database/idle_timeout]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[database/idle_timeout]: Skipping because of failed dependencies
Notice: /Package[nova-consoleauth]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Package[nova-consoleauth]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_ca_certs]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_ca_certs]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vnc_keymap]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vnc_keymap]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url_timeout]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url_timeout]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ca_certificates_file]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ca_certificates_file]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vnc_enabled]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/vnc_enabled]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_virtual_host]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rabbit_virtual_host]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[glance/api_servers]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_version]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_version]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute::Neutron/Nova_config[DEFAULT/force_snat_range]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute::Neutron/Nova_config[DEFAULT/force_snat_range]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_keyfile]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/kombu_ssl_keyfile]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/notify_on_state_change]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/notify_on_state_change]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_port]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_port]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_password]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_password]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[database/connection]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[database/connection]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Common::Nova/Nova_config[DEFAULT/default_floating_pool]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Common::Nova/Nova_config[DEFAULT/default_floating_pool]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/os_region_name]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/os_region_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/novncproxy_base_url]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/novncproxy_base_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_strategy]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_strategy]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_tenant_id]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_tenant_id]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/lock_path]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/lock_path]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_cert_file]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_cert_file]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/notification_topics]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/notification_topics]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_key_file]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_key_file]: Skipping because of failed dependencies
Notice: /Package[nova-objectstore]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Package[nova-objectstore]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_port]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_port]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[osapi_v3/enabled]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[osapi_v3/enabled]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_listen]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_listen]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_password]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_password]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_user]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_user]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_protocol]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_protocol]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/volume_api_class]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/volume_api_class]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_host]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_host]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_admin_prefix]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_admin_prefix]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_version]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_version]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_workers]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_workers]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_tenant_name]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_tenant_name]: Skipping because of failed dependencies
Notice: /Package[nova-api]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Package[nova-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_tenant_name]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_tenant_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_uri]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_uri]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_user]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_user]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_port]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_port]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_password]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/admin_password]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_admin_prefix]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_admin_prefix]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/use_forwarded_for]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/use_forwarded_for]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/auth_strategy]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/auth_strategy]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/keystone_ec2_url]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/keystone_ec2_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_protocol]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_protocol]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rpc_backend]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/rpc_backend]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]: Skipping because of failed dependencies
Notice: /Firewall[5000 - Keystone API]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Firewall[5000 - Keystone API]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/use_syslog]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/use_syslog]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_config_drive]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_config_drive]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_uri]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_uri]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_host]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova_paste_api_ini[filter:authtoken/auth_host]: Skipping because of failed dependencies
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone_service[neutron]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_service[ceilometer]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_endpoint[openstack/neutron]: Dependency Keystone_service[neutron] has failures: true
Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_endpoint[openstack/neutron]: Skipping because of failed dependencies
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]: Could not evaluate: Expected 3 columns for tenant row, found 0. Line +----------------------------------+----------+---------+
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+------------------+
Error: /Stage[main]/Nova::Keystone::Auth/Keystone_service[nova_ec2]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/nova_ec2]: Dependency Keystone_service[nova_ec2] has failures: true
Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/nova_ec2]: Skipping because of failed dependencies
Error: /Stage[main]/Cinder::Keystone::Auth/Keystone_service[cinderv2]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[openstack/cinderv2]: Dependency Keystone_service[cinderv2] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[openstack/cinderv2]: Skipping because of failed dependencies
Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+------------------+
Error: /Stage[main]/Nova::Keystone::Auth/Keystone_service[novav3]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::Tenant[test]/Keystone_tenant[test]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::Tenant[test]/Keystone_tenant[test]: Skipping because of failed dependencies
Error: /Stage[main]/Heat::Keystone::Auth/Keystone_service[heat]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/novav3]: Dependency Keystone_service[novav3] has failures: true
Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/novav3]: Skipping because of failed dependencies
Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_endpoint[openstack/ceilometer]: Dependency Keystone_service[ceilometer] has failures: true
Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_endpoint[openstack/ceilometer]: Skipping because of failed dependencies
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[services]: Could not evaluate: Expected 3 columns for tenant row, found 0. Line +----------------------------------+----------+---------+
Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Keystone::Auth/Keystone_user[heat]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_user[heat]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Skipping because of failed dependencies
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_user[glance]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone_user[glance]: Skipping because of failed dependencies
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+------------------+
Notice: /Stage[main]/Heat::Keystone::Auth/Keystone_user_role[heat@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Heat::Keystone::Auth/Keystone_user_role[heat@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_user_role[heat@services]: Skipping because of failed dependencies
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance@services]: Skipping because of failed dependencies
Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Heat::Keystone::Auth/Keystone_endpoint[openstack/heat]: Dependency Keystone_service[heat] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_endpoint[openstack/heat]: Skipping because of failed dependencies
Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+------------------+
Error: /Stage[main]/Heat::Engine/Keystone_role[heat_stack_owner]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+------------------+
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::Tenant[test2]/Keystone_tenant[test2]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::Tenant[test2]/Keystone_tenant[test2]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo2]/Keystone_user[demo2]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo2]/Keystone_user[demo2]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo2]/Keystone_user_role[demo2@test2]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo2]/Keystone_user_role[demo2@test2]: Skipping because of failed dependencies
Error: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_service[heat-cfn]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user[ceilometer]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user[ceilometer]: Skipping because of failed dependencies
Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer@services]: Dependency Keystone_role[ResellerAdmin] has failures: true
Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer@services]: Skipping because of failed dependencies
Error: /Stage[main]/Cinder::Keystone::Auth/Keystone_service[cinder]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Error: /Stage[main]/Glance::Keystone::Auth/Keystone_service[glance]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_endpoint[openstack/glance]: Dependency Keystone_service[glance] has failures: true
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone_endpoint[openstack/glance]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova@services]: Skipping because of failed dependencies
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Dependency Keystone_tenant[admin] has failures: true
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Skipping because of failed dependencies
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Dependency Keystone_tenant[admin] has failures: true
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Skipping because of failed dependencies
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user[cinder]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user[cinder]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_auth_url]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_auth_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_password]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_password]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_region_name]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_region_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_username]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_admin_username]: Skipping because of failed dependencies
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[openstack/cinder]: Dependency Keystone_service[cinder] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[openstack/cinder]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo]/Keystone_user[demo]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo]/Keystone_user[demo]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo]/Keystone_user_role[demo@test]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[demo]/Keystone_user_role[demo@test]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_user[heat-cfn]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_user[heat-cfn]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_user_role[heat-cfn@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_user_role[heat-cfn@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_user_role[heat-cfn@services]: Skipping because of failed dependencies
Error: /Stage[main]/Swift::Keystone::Auth/Keystone_service[swift_s3]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[openstack/swift_s3]: Dependency Keystone_service[swift_s3] has failures: true
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[openstack/swift_s3]: Skipping because of failed dependencies
Notice: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_endpoint[openstack/keystone]: Dependency Keystone_service[keystone] has failures: true
Warning: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_endpoint[openstack/keystone]: Skipping because of failed dependencies
Error: /Stage[main]/Nova::Keystone::Auth/Keystone_service[nova]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/nova]: Dependency Keystone_service[nova] has failures: true
Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[openstack/nova]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_endpoint[openstack/heat-cfn]: Dependency Keystone_service[heat-cfn] has failures: true
Warning: /Stage[main]/Heat::Keystone::Auth_cfn/Keystone_endpoint[openstack/heat-cfn]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron@services]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_url]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Nova_admin_tenant_id_setter[nova_admin_tenant_id]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Nova_admin_tenant_id_setter[nova_admin_tenant_id]: Skipping because of failed dependencies
Error: /Stage[main]/Swift::Keystone::Auth/Keystone_service[swift]: Could not evaluate: Expected 4 columns for service row, found 0. Line +----------------------------------+------------+----------------+----------------------------------+
Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[openstack/swift]: Dependency Keystone_service[swift] has failures: true
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[openstack/swift]: Skipping because of failed dependencies
Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user[swift]: Dependency Keystone_tenant[services] has failures: true
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_user[swift]: Skipping because of failed dependencies
Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift@services]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[test]/Keystone_user[test]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[test]/Keystone_user[test]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[test]/Keystone_user_role[test@test]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[test]/Keystone_user_role[test@test]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Openstack::Profile::Keystone/Openstack::Resources::User[test]/Keystone_user_role[test@test]: Skipping because of failed dependencies
Notice: /Firewall[8080 - Swift Proxy]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8080 - Swift Proxy]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8080 - Swift Proxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_database[ceilometer]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_database[ceilometer]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_database[ceilometer]: Skipping because of failed dependencies
Notice: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_user[mongo]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_user[mongo]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Openstack::Profile::Ceilometer::Api/Mongodb_user[mongo]: Skipping because of failed dependencies
Notice: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Skipping because of failed dependencies
Notice: /Stage[main]/Ceilometer::Collector/Service[ceilometer-collector]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Stage[main]/Ceilometer::Collector/Service[ceilometer-collector]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Ceilometer::Collector/Service[ceilometer-collector]: Skipping because of failed dependencies
Notice: /Stage[main]/Ceilometer::Api/Service[ceilometer-api]: Dependency Keystone_role[ResellerAdmin] has failures: true
Notice: /Stage[main]/Ceilometer::Api/Service[ceilometer-api]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Ceilometer::Api/Service[ceilometer-api]: Dependency Keystone_role[admin] has failures: true
Notice: /Stage[main]/Ceilometer::Api/Service[ceilometer-api]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Ceilometer::Api/Service[ceilometer-api]: Skipping because of failed dependencies
Notice: /Firewall[8777 - Ceilometer API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8777 - Ceilometer API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8777 - Ceilometer API]: Skipping because of failed dependencies
Notice: /Firewall[8776 - Cinder API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8776 - Cinder API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8776 - Cinder API]: Skipping because of failed dependencies
Notice: /Firewall[3333 - Nova S3]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[3333 - Nova S3]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[3333 - Nova S3]: Skipping because of failed dependencies
Notice: /Firewall[8773 - Nova EC2]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8773 - Nova EC2]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8773 - Nova EC2]: Skipping because of failed dependencies
Notice: /Firewall[8774 - Nova API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8774 - Nova API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8774 - Nova API]: Skipping because of failed dependencies
Notice: /Firewall[8775 - Nova Metadata]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8775 - Nova Metadata]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8775 - Nova Metadata]: Skipping because of failed dependencies
Notice: /Firewall[6080 - Nova novnc]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[6080 - Nova novnc]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[6080 - Nova novnc]: Skipping because of failed dependencies
Notice: /Firewall[89696 - Neutron API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[89696 - Neutron API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[89696 - Neutron API]: Skipping because of failed dependencies
Notice: /Firewall[8000 - Heat CFN API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8000 - Heat CFN API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8000 - Heat CFN API]: Skipping because of failed dependencies
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]: Skipping because of failed dependencies
Notice: /Stage[main]/Cinder::Api/Service[cinder-api]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Cinder::Api/Service[cinder-api]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Cinder::Api/Service[cinder-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Glance::Api/Service[glance-api]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Glance::Api/Service[glance-api]: Dependency Keystone_role[admin] has failures: true
Notice: /Stage[main]/Glance::Api/Service[glance-api]: Dependency Keystone_service[glance] has failures: true
Warning: /Stage[main]/Glance::Api/Service[glance-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Nova_config[DEFAULT/enabled_ssl_apis]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Nova_config[DEFAULT/enabled_ssl_apis]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/File[/etc/nova/nova.conf]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/File[/etc/nova/nova.conf]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Exec[post-nova_config]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Exec[post-nova_config]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Exec[nova-db-sync]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Exec[nova-db-sync]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Objectstore/Nova::Generic_service[objectstore]/Service[nova-objectstore]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Dependency Keystone_service[nova] has failures: true
Notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Skipping because of failed dependencies
Notice: /Stage[main]/Nova/Exec[networking-refresh]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova/Exec[networking-refresh]: Skipping because of failed dependencies
Notice: /Firewall[8004 - Heat API]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8004 - Heat API]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8004 - Heat API]: Skipping because of failed dependencies
Notice: /Firewall[80 - Apache (Horizon)]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[80 - Apache (Horizon)]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[80 - Apache (Horizon)]: Skipping because of failed dependencies
Notice: /Firewall[443 - Apache SSL (Horizon)]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[443 - Apache SSL (Horizon)]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[443 - Apache SSL (Horizon)]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_role[admin] has failures: true
Warning: /Stage[main]/Heat::Api/Service[heat-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Engine/Service[heat-engine]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Heat::Engine/Service[heat-engine]: Unscheduling refresh on Service[heat-engine]
Notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]: Dependency Rabbitmq_user[openstack] has failures: true
Warning: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Dependency Keystone_tenant[services] has failures: true
Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Dependency Keystone_role[admin] has failures: true
Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Dependency Keystone_service[neutron] has failures: true
Warning: /Stage[main]/Neutron::Server/Service[neutron-server]: Skipping because of failed dependencies
Notice: /Firewall[8999 - Accept all management network traffic]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[8999 - Accept all management network traffic]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[8999 - Accept all management network traffic]: Skipping because of failed dependencies
Notice: /Firewall[9100 - Accept all vm network traffic]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[9100 - Accept all vm network traffic]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[9100 - Accept all vm network traffic]: Skipping because of failed dependencies
Notice: /Firewall[9999 - Reject remaining traffic]: Dependency Rabbitmq_user[openstack] has failures: true
Notice: /Firewall[9999 - Reject remaining traffic]: Dependency Keystone_role[admin] has failures: true
Warning: /Firewall[9999 - Reject remaining traffic]: Skipping because of failed dependencies
Notice: Finished catalog run in 35.81 seconds

Here is my server2.yaml:


---
classes:
 - openstack::role::controller
# - openstack::role::storage
# - openstack::role::network
# - openstack::role::compute
 - openstack
# - openstack_extras

openstack::region: 'openstack'

######## Networks
openstack::network::api: '10.0.249.0/24'
openstack::network::external: '192.168.22.0/24'
openstack::network::external::ippool::start: '192.168.22.0'
openstack::network::external::ippool::end: '192.168.22.100'
openstack::network::external::gateway: '10.0.249.254'
openstack::network::external::dns: '10.0.249.254'
openstack::networks:
  public:
    tenant_name: 'services'
    provider_network_type: 'gre'
    router_external: true
    provider_segmentation_id: 3604
    shared: true
  private:
    tenant_name: 'services'
    provider_network_type: 'gre'
    router_external: false
    provider_segmentation_id: 4063
    shared: true
openstack::subnets:
  '192.168.22.0/24':
    cidr: '192.168.22.0/24'
    ip_version: '4'
    gateway_ip: 192.168.22.2
    enable_dhcp: false
    network_name: 'public'
    tenant_name: 'services'
    allocation_pools: ['start=192.168.22.100,end=192.168.22.200']
    dns_nameservers: [192.168.22.2]
  '10.0.0.0/24':
    cidr: '10.0.0.0/24'
    ip_version: '4'
    enable_dhcp: true
    network_name: 'private'
    tenant_name: 'services'
    dns_nameservers: [192.168.22.2]
openstack::routers:
  test:
    tenant_name: 'test'
    gateway_network_name: 'public'
openstack::router_interfaces:
  'test:10.0.0.0/24': {}

openstack::network::management: '10.0.249.0/24'
openstack::network::data: '10.0.249.0/24'

######## Fixed IPs (controllers)

openstack::controller::address::api: '10.0.249.148'
openstack::controller::address::management: '10.0.249.148'
openstack::storage::address::api: '10.0.249.148'
openstack::storage::address::management: '10.0.249.148'

######## Database

openstack::mysql::root_password: 'spam-gak'
openstack::mysql::service_password: 'fuva-wax'
openstack::mysql::allowed_hosts: ['localhost', '127.0.0.1', '10.0.249.148']

openstack::mysql::keystone::user: 'keystone'
openstack::mysql::keystone::pass: 'fuva-wax'

openstack::mysql::cinder::user: 'cinder'
openstack::mysql::cinder::pass: 'fuva-wax'

openstack::mysql::glance::user: 'glance'
openstack::mysql::glance::pass: 'fuva-wax'
openstack::glance::api_servers: ['10.0.249.148:9292']

openstack::mysql::nova::user: 'nova'
openstack::mysql::nova::pass: 'fuva-wax'

openstack::mysql::neutron::user: 'neutron'
openstack::mysql::neutron::pass: 'fuva-wax'

openstack::mysql::heat::user: 'heat'
openstack::mysql::heat::pass: 'fuva-wax'

######## RabbitMQ

openstack::rabbitmq::user: 'openstack'
openstack::rabbitmq::password: 'pose-vix'
openstack::rabbitmq::hosts: ['10.0.249.148:5672']

######## Keystone

openstack::keystone::admin_token: 'sosp-kyl'
openstack::keystone::admin_email: '[email protected]'
openstack::keystone::admin_password: 'fyby-tet'

openstack::keystone::tenants:
    "test":
        description: "Test tenant"
    "test2":
        description: "Test tenant"

openstack::keystone::users:
    "test":
        password: "abc123"
        tenant: "test"
        email: "[email protected]"
        admin: true
    "demo":
        password: "abc123"
        tenant: "test"
        email: "[email protected]"
        admin: false
    "demo2":
        password: "abc123"
        tenant: "test2"
        email: "[email protected]"
        admin: false

######## Glance

openstack::images:
  Cirros:
    container_format: 'bare'
    disk_format: 'qcow2'
    source: 'http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img'

openstack::glance::password: 'na-mu-va'

######## Cinder

openstack::cinder::password: 'zi-co-se'
openstack::cinder::volume_size: '8G'

######## Swift

openstack::swift::password: 'dexc-flo'
openstack::swift::hash_suffix: 'pop-bang'

######## Nova

openstack::nova::libvirt_type: 'kvm'
openstack::nova::password: 'quuk-paj'

######## Neutron

openstack::neutron::password: 'whi-rtuz'
openstack::neutron::shared_secret: 'by-sa-bo'
openstack::neutron::core_plugin: 'ml2'
openstack::neutron::service_plugins: ['router', 'firewall', 'lbaas', 'vpnaas', 'metering']
openstack::network::neutron::private: '192.168.22.0'


######## Ceilometer
openstack::ceilometer::address::management: '10.0.249.148'
openstack::ceilometer::mongo::username: 'mongo'
openstack::ceilometer::mongo::password: 'mongosecretkey123'
openstack::ceilometer::password: 'whi-truz'
openstack::ceilometer::meteringsecret: 'ceilometersecretkey'

######## Heat
openstack::heat::password: 'zap-bang'
openstack::heat::encryption_key: 'heatsecretkey123'


######## Horizon

openstack::horizon::secret_key: 'whu-ghuk'

######## Tempest

openstack::tempest::configure_images    : true
openstack::tempest::image_name          : 'Cirros'
openstack::tempest::image_name_alt      : 'Cirros'
openstack::tempest::username            : 'demo'
openstack::tempest::username_alt        : 'demo2'
openstack::tempest::username_admin      : 'test'
openstack::tempest::configure_network   : true
openstack::tempest::public_network_name : 'public'
openstack::tempest::cinder_available    : true
openstack::tempest::glance_available    : true
openstack::tempest::horizon_available   : true
openstack::tempest::nova_available      : true
openstack::tempest::neutron_available   : true
openstack::tempest::heat_available      : false
openstack::tempest::swift_available     : false

######## Log levels
openstack::verbose: 'True'
openstack::debug: 'True'

Do you have any idea what the problem is?
Secondly, do you have a roadmap or a date for the upgrade to the Kilo release?

Best regards.

Neutron Server Migration

I'm running into an issue where I need to replace my existing neutron server with a new server. I'm not sure how to do that, and I couldn't find any solution by googling. I would appreciate it if you could share your thoughts.

I'm using Icehouse and CentOS 6.5.

My thought is to run "::openstack::role::network" on the new server with different IPs [eth0, eth1], then swap the IPs with the existing server and rerun the puppet agent, but I'm not sure how to back up the OVS configuration and restart the OVS components.

[root@abc123 network-scripts]# ovs-vsctl show
a23e5f6d-fd2a-4458-a642-be415c421354
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a500981"
            Interface "gre-0a500981"
                type: gre
                options: {in_key=flow, local_ip="192.169.1.25", out_key=flow, remote_ip="172.16.5.10"}

Wrong inference of database location for controller node

When executing puppet agent -t on my controller node, I encountered the following error:

Error: Could not retrieve catalog from remote server: 
Error 400 on SERVER: MySQL setup failed. 
The inferred location of the database based on the
openstack::network::management hiera value is 10.0.2.15. 
The explicit address from openstack::controller::address::management
is 10.0.0.11. Please correct this difference. at /etc/puppet/environments/puppetmaster/modules/openstack/manifests/profile/mysql.pp:9 on node controller.local

I set the value of openstack::controller::address::management to 10.0.0.11. There are two network interfaces configured on this node:

eth0: 10.0.2.15
eth1: 10.0.0.11 

My issue most likely comes from the ip_for_network function:

https://github.com/puppetlabs/puppetlabs-openstack/blob/master/lib/puppet/parser/functions/ip_for_network.rb#L28

Any idea how I could fix this or do things a little differently? Unfortunately, I cannot get rid of eth0.
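
One possible workaround, sketched below as a minimal hiera snippet (an assumption based on the error text above, not a tested fix): since profile/mysql.pp infers the database address from the openstack::network::management value, point that key at the subnet eth1 actually sits on, so that ip_for_network and the explicit controller address resolve to the same IP.

openstack::network::management: '10.0.0.0/24'
openstack::controller::address::management: '10.0.0.11'

With both values resolving to 10.0.0.11, the consistency check in profile/mysql.pp should pass even though eth0 keeps its 10.0.2.15 address.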

Existing RabbitMQ vhost and user cause errors when running 'puppet apply' multiple times

Info: Class[Rabbitmq::Config]: Scheduling refresh of Class[Rabbitmq::Service]
Info: Class[Rabbitmq::Service]: Scheduling refresh of Service[rabbitmq-server]
Notice: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]: Triggered 'refresh' from 1 events
Error: Execution of '/usr/sbin/rabbitmqctl add_vhost /' returned 2: Creating vhost "/" ...
Error: vhost_already_exists: /

Error: /Stage[main]/Nova::Rabbitmq/Rabbitmq_vhost[/]/ensure: change from absent to present failed: Execution of '/usr/sbin/rabbitmqctl add_vhost /' returned 2: Creating vhost "/" ...
Error: vhost_already_exists: /

Error: Execution of '/usr/sbin/rabbitmqctl add_user openstack testing' returned 2: Creating user "openstack" ...
Error: user_already_exists: openstack

Error: /Stage[main]/Nova::Rabbitmq/Rabbitmq_user[openstack]/ensure: change from absent to present failed: Execution of '/usr/sbin/rabbitmqctl add_user openstack testing' returned 2: Creating user "openstack" ...
Error: user_already_exists: openstack

When I want to run 'puppet apply' again, I have to delete the vhost and user:

sudo rabbitmqctl delete_vhost /
sudo rabbitmqctl delete_user openstack

Can't install openstack-nova-conductor

Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-conductor' returned 1: Error: Nothing to do
Error: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Package[nova-conductor]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-conductor' returned 1: Error: Nothing to do

I tried to find a clue about this issue but couldn't find anything.
Can anyone tell me which repo I can install it from, or what I am doing wrong?
Thanks

ssh module

I've modified nova's init.pp to set "$nova_shell = '/bin/bash'," and generated an RSA key, but I'm not sure where to add the RSA public and private keys for the nova user.

I'm using puppet modules v4.1.0

[root@controller ~]# su - nova
-bash-4.1$ pwd
/var/lib/nova

MySQL installation failure when deploying OpenStack

I am deploying an OpenStack controller node with the puppetlabs-openstack module and hit the error below:

puppet agent --test --debug

Debug: Executing '/usr/bin/yum -d 0 -e 0 -y list mysql'
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list mysql' returned 1: Error: No matching Packages to list
Error: /Stage[main]/Mysql::Client::Install/Package[mysql_client]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list mysql' returned 1: Error: No matching Packages to list
Debug: /Stage[main]/Glance::Notify::Rabbitmq/Glance_api_config[DEFAULT/kombu_ssl_keyfile]: Nothing to manage: no ensure and the resource doesn't exist
Debug: Executing '/usr/bin/systemctl is-active memcached'
Debug: Executing '/usr/bin/systemctl is-enabled memcached'

My environment is:
openstack: RDO Juno
os: CentOS 7
puppetlabs-openstack: 5.0.1
[root@puppet ~]# puppet module list --tree
/etc/puppet/modules
└─┬ puppetlabs-openstack (v5.0.1)
  ├─┬ stackforge-ceilometer (v5.0.0)
  │ ├── puppetlabs-inifile (v1.2.0)
  │ └─┬ stackforge-keystone (v5.0.0)
  │   ├─┬ puppetlabs-apache (v1.2.0)
  │   │ ├── puppetlabs-stdlib (v4.5.1)
  │   │ └── puppetlabs-concat (v1.1.2)
  │   └─┬ stackforge-openstacklib (v5.0.0)
  │     ├── aimonb-aviator (v0.5.1)
  │     ├── puppetlabs-mysql (v2.3.1)
  │     └─┬ puppetlabs-rabbitmq (v3.1.0)
  │       ├── puppetlabs-apt (v1.7.0)
  │       └─┬ garethr-erlang (v0.3.0)
  │         └── stahnma-epel (v1.0.2)
  ├─┬ stackforge-cinder (v5.0.0)
  │ └── dprince-qpid (v1.0.2)
  ├── stackforge-glance (v5.0.0)
  ├── stackforge-heat (v5.0.0)
  ├─┬ stackforge-horizon (v5.0.0)
  │ └─┬ saz-memcached (v2.6.0)
  │   └── puppetlabs-firewall (v1.3.0)
  ├─┬ stackforge-neutron (v5.0.0)
  │ ├─┬ stackforge-nova (v5.0.0)
  │ │ └── duritong-sysctl (v0.0.1)
  │ └── stackforge-vswitch (v1.0.0)
  ├─┬ stackforge-swift (v5.0.0)
  │ ├─┬ puppetlabs-rsync (v0.4.0)
  │ │ └── puppetlabs-xinetd (v1.4.0)
  │ └── saz-ssh (v1.4.0)
  ├─┬ stackforge-tempest (v5.0.0)
  │ └── puppetlabs-vcsrepo (v1.2.0)
  ├── puppetlabs-mongodb (v0.10.0)
  └── puppetlabs-ntp (v3.3.0)
I have tried it this way:

Installing MySQL with yum manually succeeds, but the puppet run still fails:

yum install mariadb mariadb-server MySQL-python

Is there anything else I missed?

For detailed logs, please see the attached file. Thanks a lot!
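
A possible workaround, assuming the pinned puppetlabs-mysql (v2.3.1) still defaults to the old 'mysql'/'mysql-server' package names on RedHat-family systems and that nothing declares these parameters explicitly: override the package names through hiera (relying on Puppet's automatic data binding) so the module installs the MariaDB packages that CentOS 7 / RDO Juno actually ship. This is a sketch, not a tested fix:

mysql::client::package_name: 'mariadb'
mysql::server::package_name: 'mariadb-server'

With these in the node's hiera data, Package[mysql_client] should resolve to the mariadb package instead of the missing mysql one on the next puppet run.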

Neutron exec resource error

When using the multinode example, I'm getting the following error:

Error 400 on SERVER: Could not find resource 'Exec[neutron-db-sync]' for relationship from 'Class[Neutron::Db::Mysql]' on node ....

I've updated the info from common.yaml to be environment specific and am trying to initialize the controller node before I move on to the other nodes.

Unable to configure cinder with glusterfs

I am unable to get Cinder to work with GlusterFS, and I can't tell if it's a configuration issue or a bug. Cinder refuses to use Gluster at all and continues to use LVMISCSI, and I'm not sure what to do about it. The relevant part of my configuration for the storage node is:

class { 'cinder::backends':
  enabled_backends => ['glusterfs']
}
class { 'cinder::volume::glusterfs':
  glusterfs_shares => ['192.168.2.5:/cinder-volumes'],
  glusterfs_mount_point_base => '/var/lib/cinder/mnt'
}

which, along with everything else, results in a cinder.conf file that looks like this on the storage node:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = no
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rabbit_host=192.168.2.1
use_syslog=False
api_paste_config=/etc/cinder/api-paste.ini
glance_num_retries=0
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
enabled_backends=glusterfs
debug=no
glance_api_ssl_compression=False
glance_api_insecure=False
rabbit_userid=openstack
rabbit_use_ssl=False
log_dir=/var/log/cinder
glance_api_servers=192.168.1.5:9292
volume_backend_name=DEFAULT
rabbit_virtual_host=/
rabbit_hosts=192.168.2.1:5672
glusterfs_shares_config=/etc/cinder/shares.conf
control_exchange=openstack
rabbit_ha_queues=False
glance_api_version=2
amqp_durable_queues=False
rabbit_password=**redacted**
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu

No matter how much I try to get rid of the iscsi references, they keep coming back. Thoughts?
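
One thing worth checking: with enabled_backends set, Cinder reads each backend's options from a section named after the backend (here [glusterfs]), not from [DEFAULT], and the rendered cinder.conf above has everything under [DEFAULT], so the volume service falls back to its LVM/iSCSI defaults. A hedged sketch for confirming and manually working around it (openstack-config comes from openstack-utils; puppet will revert manual edits on its next run, so the durable fix is to configure the backend through the cinder module's per-backend interface rather than the global class, depending on the cinder module version in use):

# Confirm there is no [glusterfs] section in the rendered config
grep -n '^\[' /etc/cinder/cinder.conf

# Manual workaround: give the backend its own section
openstack-config --set /etc/cinder/cinder.conf glusterfs volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
openstack-config --set /etc/cinder/cinder.conf glusterfs glusterfs_shares_config /etc/cinder/shares.conf
openstack-config --set /etc/cinder/cinder.conf glusterfs volume_backend_name glusterfs
service openstack-cinder-volume restart   # cinder-volume on Ubuntu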

Could not find data item openstack::keystone::tenants in any Hiera data file and no default supplied

I got this error when I tried the multi-node example with Vagrant + VMware Fusion.

Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item openstack::keystone::tenants in any Hiera data file and no default supplied at /etc/puppet/modules/openstack/manifests/init.pp:330 on node control.localdomain
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Connection to 172.16.25.143 closed.
Info: Caching certificate for network.localdomain
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for network.localdomain
Info: Retrieving pluginfacts
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://puppet/pluginfacts
Info: Retrieving plugin
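
A quick way to confirm what the master actually resolves for that key is the hiera command-line tool, run on the puppet master with the same facts the failing node reports (the config path below assumes the stock /etc/puppet/hiera.yaml; adjust it and the facts to your hierarchy):

# Run on the puppet master; a nil result means none of the hierarchy's data
# files define openstack::keystone::tenants for this node.
hiera -c /etc/puppet/hiera.yaml openstack::keystone::tenants \
      clientcert=control.localdomain environment=production

If it comes back nil, the key needs to be added to whichever data file your hierarchy points at (common.yaml in the shipped examples).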

error on glance setup

Info: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/auth_url]: Scheduling refresh of Service[glance-api]
Notice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Triggered 'refresh' from 53 events
Info: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Scheduling refresh of Service[glance-api]
Info: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Scheduling refresh of Service[glance-registry]
Notice: /Stage[main]/Glance::Registry/Service[glance-registry]: Triggered 'refresh' from 18 events
Notice: /Stage[main]/Glance::Api/Service[glance-api]: Triggered 'refresh' from 47 events
Error: Could not prefetch glance_image provider 'glance': Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-list' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Error: Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-create --name=Cirros --is-public=Yes --container-format=bare --disk-format=qcow2 --copy-from=http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Error: /Stage[main]/Openstack::Setup::Cirros/Glance_image[Cirros]/ensure: change from absent to present failed: Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-create --name=Cirros --is-public=Yes --container-format=bare --disk-format=qcow2 --copy-from=http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Notice: /Stage[main]/Openstack::Profile::Firewall::Pre/Firewall[0001 - related established]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Pre/Firewall[0002 - localhost]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Pre/Firewall[0003 - localhost]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Pre/Firewall[0022 - ssh]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Post/Firewall[8999 - Accept all management network traffic]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Post/Firewall[9100 - Accept all vm network traffic]/ensure: created
Notice: /Stage[main]/Openstack::Profile::Firewall::Post/Firewall[9999 - Reject remaining traffic]/ensure: created
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 397.60 seconds

The storage node setup ends with this error:

root@storage:~# puppet agent -t --server=pupmaster
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/ip6tables_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/concat_basedir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/apt_update_last_success.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/apt_updates.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/rabbitmq_erlang_cookie.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables_persistent_version.rb
Info: Caching catalog for storage
Info: Applying configuration version '1432484231'
Error: Could not prefetch glance_image provider 'glance': Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-list' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Error: Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-create --name=Cirros --is-public=Yes --container-format=bare --disk-format=qcow2 --copy-from=http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Error: /Stage[main]/Openstack::Setup::Cirros/Glance_image[Cirros]/ensure: change from absent to present failed: Execution of '/usr/bin/glance --os-tenant-name services --os-username glance --os-password na-mu-va --os-region-name openstack --os-auth-url http://172.16.33.4:35357/v2.0/ image-create --name=Cirros --is-public=Yes --container-format=bare --disk-format=qcow2 --copy-from=http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img' returned 1: Request returned failure status 401.
Invalid OpenStack Identity credentials.

Notice: Finished catalog run in 10.35 seconds
root@storage:~#

root@controller:~# source openrc
root@controller:~# glance image-list
Request returned failure status 401.
Invalid OpenStack Identity credentials.
root@controller:~#
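
The 401 suggests that the glance service user's password stored in Keystone no longer matches what the glance provider is using (often the result of changing hiera passwords after the first run). A hedged sanity check with the Juno-era CLI, reusing the values from the failing command above:

# Ask keystone directly for a token with the same credentials glance is using;
# a 401 here confirms the stored password and the hiera value have diverged.
keystone --os-tenant-name services \
         --os-username glance \
         --os-password na-mu-va \
         --os-auth-url http://172.16.33.4:35357/v2.0/ \
         token-get

If that fails, resetting the glance user's password (keystone user-password-update glance) or re-running puppet on the controller so the keystone user resource re-applies the hiera value usually clears it.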

Can't install openstack-cinder... and others

Hi all,
I am trying to deploy OpenStack on 3 nodes.
After assigning roles to the different nodes and pulling the changes onto each host with puppet agent -t, I got errors saying that some packages can't be installed:
Controller:

Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-console' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-glance' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-cinder' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install rabbitmq-server' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install python-keystone' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-objectstore' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install python-greenlet' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-conductor' returned 1: Error: Nothing to do
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-novncproxy' returned 1: Error: Nothing to do

etc...

How can I get these packages installed?
I installed the openstack module and it pulled in all its dependencies, but maybe I missed something?

I ran into a similar situation after deploying the compute node.
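
The "Nothing to do" errors again point at yum not seeing the OpenStack packages at all, which usually means the RDO Juno repository is missing or disabled on those nodes. A diagnostic sketch before re-running the agent:

# Is an OpenStack/RDO repo enabled, and are the packages visible?
yum repolist enabled | grep -i -E 'openstack|rdo'
yum info openstack-nova-conductor openstack-glance openstack-cinder rabbitmq-server

If no repo shows up, add the RDO release package for your target release (see the sketch under the nova-conductor issue above), then run puppet agent -t again.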

resize instance

Need help: I'm trying to resize an instance on the same compute node and I'm getting the following error.

So far I have tried the following changes in the nova.conf file on the compute node, still with no luck:

allow_resize_to_same_host=True
scheduler_default_filters=AllHostsFilter
allow_migrate_to_same_host=True

I'm using puppet modules v4.2.0

" conductor.log "
2015-03-13 23:48:02.583 3487 WARNING nova.scheduler.utils [req-6ccd63e7-58f1-47dc-9c45-623c095faa49 cc73301018d44104a0a76c4a94512565 780221eb5f9f44adbdd7f510306afb77] Failed to compute_task_migrate_server: No valid host was found.
Traceback (most recent call last):

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/server.py", line 139, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/scheduler/manager.py", line 298, in select_destinations
    filter_properties)

  File "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 140, in select_destinations
    raise exception.NoValidHost(reason='')

NoValidHost: No valid host was found.

2015-03-13 23:48:02.584 3487 WARNING nova.scheduler.utils [req-6ccd63e7-58f1-47dc-9c45-623c095faa49 cc73301018d44104a0a76c4a94512565 780221eb5f9f44adbdd7f510306afb77] [instance: 2a877215-9a8c-47b5-8cae-9962c897990b] Setting instance to ACTIVE state.
2015-03-13 23:48:02.679 3487 WARNING nova.conductor.manager [req-6ccd63e7-58f1-47dc-9c45-623c095faa49 cc73301018d44104a0a76c4a94512565 780221eb5f9f44adbdd7f510306afb77] [instance: 2a877215-9a8c-47b5-8cae-9962c897990b] No valid host found for cold migrate
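
Note that allow_resize_to_same_host and scheduler_default_filters are read by nova-scheduler (and the conductor) on the controller, not by nova-compute, so setting them only in the compute node's nova.conf has no effect; NoValidHost is the scheduler refusing the target host. A hedged sketch for testing on the controller (openstack-config comes from openstack-utils, the service names are the RDO ones, and puppet will revert the edit on its next run, so the durable change belongs in your nova configuration data):

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

Also note that scheduler_default_filters=AllHostsFilter replaces the entire default filter chain; once allow_resize_to_same_host is set on the scheduler it is usually better to leave the default filters in place.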

Icehouse to Juno upgrade

We are using Icehouse in our staging environment on CentOS 6.5 (one controller, one network, one storage, and a couple of compute nodes).

Now we are planning to upgrade our staging environment to Juno.

Is it possible to do the upgrade without upgrading the operating system (CentOS 6 to 7) using the puppetlabs-openstack modules?

I found the link below, but I'm not sure about the right procedure for the upgrade since we are using the puppet modules.

http://docs.openstack.org/openstack-ops/content/upgrade-icehouse-juno.html

image_cache_dir is defined twice in modules/glance/manifests/api.pp & openstack/manifests/profile/glance/api.pp

Module Version:
puppetlabs-glance (v4.1.0)
puppetlabs-openstack (v4.0.0)

Files:

/etc/puppet/modules/glance/manifests/api.pp
  $image_cache_dir       = '/var/lib/glance/image-cache',

/etc/puppet/modules/openstack/manifests/profile/glance/api.pp
  if $::osfamily == 'Debian' {
    glance_api_config { 'DEFAULT/image_cache_dir': value => '/var/lib/glance/image-cache/'}
  }

And the resulting error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Glance_api_config[DEFAULT/image_cache_dir] is already declared in file /etc/puppet/modules/glance/manifests/api.pp:271; cannot redeclare at /etc/puppet/modules/openstack/manifests/profile/glance/api.pp:57 on node controller-1.intcloud.kaseya.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Loss of connectivity after neutron deployment

Hi,
I'm trying to set up an OpenStack environment using puppet. I tried both the all-in-one and multi-node deployments. The debug logs point to a loss of connectivity during the neutron deployment, specifically upon creation of the br-ex bridge. If I delete this bridge, connectivity is restored. I only lose connectivity on the external network (which has internet access, so the install of horizon and heat does not go through).
This is my allinone.yaml:
http://paste.openstack.org/show/204899/

This is the route -n output of a failed deployment machine (I manually altered the default gateway prior to deployment):
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 p3p1
10.155.216.0 0.0.0.0 255.255.252.0 U 0 0 0 em1
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 p3p1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

This is from the other machine (unaltered):
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.155.216.1 0.0.0.0 UG 0 0 0 eth0
10.155.216.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

EDIT: I forgot to mention, my puppet master communicates on the 10.155.216.0 network.
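
This is the expected side effect of attaching the external NIC to br-ex: once the interface is enslaved to the OVS bridge, its IP address and default route have to move to br-ex, otherwise the host loses external connectivity exactly as described. A hedged recovery sketch (the interface name p3p1 and the 192.168.1.1 gateway are taken from the route output above; the address placeholder must be replaced with the host's real external IP):

# Move the external address and default route from the NIC to the bridge
ip addr del 192.168.1.X/24 dev p3p1        # X = the host's actual address
ip addr add 192.168.1.X/24 dev br-ex
ip link set br-ex up
ip route replace default via 192.168.1.1 dev br-ex
# Make this persistent in the distribution's network scripts, or it is lost
# on reboot.

It is also worth double-checking that the external interface configured in the yaml is not the one the puppet agent uses to reach the master (10.155.216.0/22 via em1), or every run will cut the node off from the master as well.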

Missing packages

Hi,
There are some packages missing with the latest version of your module installed with r10k (network node):
CentOS Linux release 7.1.1503 (Core):

Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-neutron-lbaas' returned 1: Error: No matching Packages to list
Error: /Stage[main]/Neutron::Agents::Lbaas/Package[neutron-lbaas-agent]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-neutron-lbaas' returned 1: Error: No matching Packages to list
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_ssl]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_userid]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_syslog]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/ssl_ca_file]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/kombu_ssl_ca_certs]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_virtual_host]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/kombu_reconnect_delay]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/log_dir]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Neutron_lbaas_agent_config[DEFAULT/use_namespaces]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/log_file]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[agent/root_helper]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rpc_backend]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_use_ssl]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Neutron_lbaas_agent_config[haproxy/user_group]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/state_path]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/ssl_cert_file]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_ha_queues]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/control_exchange]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/memcached_servers]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Neutron_lbaas_agent_config[DEFAULT/debug]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[agent/report_interval]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/kombu_ssl_certfile]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/debug]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/verbose]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/ssl_key_file]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/kombu_ssl_keyfile]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_sorting]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/mac_generation_retries]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/lock_path]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_bulk]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Neutron_lbaas_agent_config[DEFAULT/interface_driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_port]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/kombu_ssl_version]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Neutron_lbaas_agent_config[DEFAULT/device_driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/network_device_mtu]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_hosts]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/api_extensions_path]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_pagination]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_lease_duration]: Skipping because of failed dependencies
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-neutron-fwaas' returned 1: Error: No matching Packages to list
Error: /Stage[main]/Neutron::Services::Fwaas/Package[openstack-neutron-fwaas]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-neutron-fwaas' returned 1: Error: No matching Packages to list
Warning: /Stage[main]/Neutron::Services::Fwaas/Neutron_fwaas_service_config[fwaas/driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Services::Fwaas/Neutron_fwaas_service_config[fwaas/enabled]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/rabbit_password]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron/Neutron_config[DEFAULT/base_mac]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/admin_user]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/auth_host]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/auth_port]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/admin_tenant_name]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/min_pool_size]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/auth_protocol]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/identity_uri]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/admin_password]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/connection]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/max_pool_size]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/auth_uri]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/agent_down_time]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/auth_admin_prefix]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/max_overflow]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/max_retries]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/retry_interval]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[database/idle_timeout]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Metering/Service[neutron-metering-service]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Lbaas/Service[neutron-lbaas-service]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Vpnaas/Service[neutron-vpnaas-service]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Service[neutron-server]: Skipping because of failed dependencies
  • Missing packages:
    • openstack-neutron-lbaas
    • openstack-neutron-fwaas

Ubuntu 14.04:

Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install neutron-fwaas' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package neutron-fwaas
Error: /Stage[main]/Neutron::Services::Fwaas/Package[neutron-fwaas]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install neutron-fwaas' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package neutron-fwaas
Warning: /Stage[main]/Neutron::Services::Fwaas/Neutron_fwaas_service_config[fwaas/driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Services::Fwaas/Neutron_fwaas_service_config[fwaas/enabled]: Skipping because of failed dependencies
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install neutron-plugin-openvswitch-agent' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  neutron-plugin-openvswitch-agent
0 upgraded, 1 newly installed, 0 to remove and 84 not upgraded.
Need to get 3872 B of archives.
After this operation, 76.8 kB of additional disk space will be used.
Err http://ubuntu-cloud.archive.canonical.com/ubuntu/ trusty-updates/juno/main neutron-plugin-openvswitch-agent all 1:2014.2.3-0ubuntu1~cloud0
  Could not resolve 'ubuntu-cloud.archive.canonical.com'
E: Failed to fetch http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/n/neutron/neutron-plugin-openvswitch-agent_2014.2.3-0ubuntu1~cloud0_all.deb  Could not resolve 'ubuntu-cloud.archive.canonical.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Error: /Stage[main]/Neutron::Agents::Ml2::Ovs/Package[neutron-ovs-agent]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install neutron-plugin-openvswitch-agent' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  neutron-plugin-openvswitch-agent
0 upgraded, 1 newly installed, 0 to remove and 84 not upgraded.
Need to get 3872 B of archives.
After this operation, 76.8 kB of additional disk space will be used.
Err http://ubuntu-cloud.archive.canonical.com/ubuntu/ trusty-updates/juno/main neutron-plugin-openvswitch-agent all 1:2014.2.3-0ubuntu1~cloud0
  Could not resolve 'ubuntu-cloud.archive.canonical.com'
E: Failed to fetch http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/n/neutron/neutron-plugin-openvswitch-agent_2014.2.3-0ubuntu1~cloud0_all.deb  Could not resolve 'ubuntu-cloud.archive.canonical.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/enable_tunneling]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/polling_interval]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Skipping because of failed dependencies
Error: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Failed to call refresh: keystone-manage db_sync returned 1 instead of one of [0]
Error: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: keystone-manage db_sync returned 1 instead of one of [0]
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron]: Could not evaluate: undefined method `collect' for nil:NilClass
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: undefined method `collect' for nil:NilClass
Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_endpoint[openstack/neutron]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user_role[neutron@services]: Skipping because of failed dependencies
Warning: /Stage[main]/Neutron::Server/Service[neutron-server]: Skipping because of failed dependencies
Error: Could not send report: No route to host - connect(2)
  • Missing packages:
    • neutron-fwaas
    • neutron-plugin-openvswitch-agent
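
The Ubuntu failure above is not actually a missing package: apt cannot resolve ubuntu-cloud.archive.canonical.com, so the Cloud Archive packages never download. A quick diagnostic sketch:

# If name resolution fails here, fix DNS/proxy settings on the node first
getent hosts ubuntu-cloud.archive.canonical.com
apt-get update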

Debian 8:

Error: Could not apply complete catalog: Found 1 dependency cycle:
(Anchor[apt::source::debian_wheezy] => Apt::Source[debian_wheezy] => Apt::Source[debian_wheezy_backports] => File[debian_wheezy_backports.list] => Exec[apt_update] => Class[Apt::Update] => Anchor[apt::source::debian_wheezy])
Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz

I haven't changed anything in my sources.list.
You can take a look at my neutron config:

######## Neutron

openstack::neutron::password: 'whi-rtuz'
openstack::neutron::shared_secret: 'by-sa-bo'
openstack::neutron::core_plugin: 'ml2'
openstack::neutron::service_plugins: ['router', 'firewall', 'lbaas', 'vpnaas', 'metering']

My package versions (Puppetfile):

forge "http://forge.puppetlabs.com"

## The core OpenStack modules

mod "openstack",
  :git => "git://github.com/puppetlabs/puppetlabs-openstack",
  :ref => "master"

mod "keystone",
  :git => "git://github.com/stackforge/puppet-keystone",
  :ref => "master"

mod "swift",
  :git => "git://github.com/stackforge/puppet-swift",
  :ref => "master"

mod "glance",
  :git => "git://github.com/stackforge/puppet-glance",
  :ref => "master"

mod "cinder",
  :git => "git://github.com/stackforge/puppet-cinder",
  :ref => "master"

mod "neutron",
  :git => "git://github.com/stackforge/puppet-neutron",
  :ref => "master"

mod "nova",
  :git => "git://github.com/stackforge/puppet-nova",
  :ref => "master"

mod "heat",
  :git => "git://github.com/stackforge/puppet-heat",
  :ref => "master"

mod "ceilometer",
  :git => "git://github.com/stackforge/puppet-ceilometer",
  :ref => "master"

mod "horizon",
  :git => "git://github.com/stackforge/puppet-horizon",
  :ref => "master"

mod "openstacklib",
  :git => "git://github.com/stackforge/puppet-openstacklib",
  :ref => "master"

mod "openstack_extras",
  :git => "git://github.com/stackforge/puppet-openstack_extras",
  :ref => "master"

mod "tempest",
  :git => "git://github.com/stackforge/puppet-tempest",
  :ref => "master"

mod "vswitch",
  :git => "git://github.com/stackforge/puppet-vswitch",
  :ref => "master"

## R10K doesn't handle dependencies, so let's handle them here
# pointing to as many stable projects as possible
# TODO automate this dependency list

mod "apache",
  :git => "git://github.com/puppetlabs/puppetlabs-apache",
  :ref => "1.2.x"

mod "epel",
  :git => "git://github.com/stahnma/puppet-module-epel",
  :ref => "master"

mod "erlang",
  :git => "git://github.com/garethr/garethr-erlang",
  :ref => "master"

mod "inifile",
  :git => "git://github.com/puppetlabs/puppetlabs-inifile",
  :ref => "1.1.x"

mod "mysql",
  :git => "git://github.com/puppetlabs/puppetlabs-mysql",
  :ref => "3.3.x"

mod "stdlib",
  :git => "git://github.com/puppetlabs/puppetlabs-stdlib",
  :ref => "4.3.x"

mod "rsync",
  :git => "git://github.com/puppetlabs/puppetlabs-rsync",
  :ref => "0.2.0"

mod "xinetd",
  :git => "git://github.com/puppetlabs/puppetlabs-xinetd",
  :ref => "1.2.0"

mod "concat",
  :git => "git://github.com/puppetlabs/puppetlabs-concat",
  :ref => "1.1.x"

mod "memcached",
  :git => "git://github.com/saz/puppet-memcached",
  :ref => "658374848a6d2cf07f0bf714bc34709e9d0ee109"

mod "ssh",
  :git => "git://github.com/saz/puppet-ssh",
  :ref => "a0f5d5da20c91775c76c77d3b57b41f4245a260a"

mod "qpid",
  :git => "git://github.com/dprince/puppet-qpid",
  :ref => "1.0.2"

mod "sysctl",
  :git => "git://github.com/duritong/puppet-sysctl"

mod "rabbitmq",
  :git => "git://github.com/puppetlabs/puppetlabs-rabbitmq",
  :ref => "master" # 3.0.0

mod "staging",
  :git => "git://github.com/nanliu/puppet-staging",
  :ref => "1.0.2"

mod "vcsrepo",
  :git => "git://github.com/puppetlabs/puppetlabs-vcsrepo",
  :ref => "0.2.0"

# indirect dependencies

mod "firewall",
  :git => "git://github.com/puppetlabs/puppetlabs-firewall",
  :ref => "master"

mod "apt",
  :git => "git://github.com/puppetlabs/puppetlabs-apt",
  :ref => "1.4.x"

mod "mongodb",
  :git => "git://github.com/puppetlabs/puppetlabs-mongodb",
  :ref => "0.10.0"

mod "ntp",
  :git => "git://github.com/puppetlabs/puppetlabs-ntp",
  :ref => "3.0.x"

mod "postgresql",
  :git => "git://github.com/puppetlabs/puppetlabs-postgresql",
  :ref => "4.0.x"

mod "puppetdb",
  :git => "git://github.com/puppetlabs/puppetlabs-puppetdb",
  :ref => "4.0.0"

Thank you.

Version dependency pinning needs to be explicit!

For the past couple of weeks I've been trying to produce a 5 node environment pretty much according to your video here:
https://www.youtube.com/watch?v=HRzlmt56gCk

While most of the time has basically been spent learning the idiosyncrasies of OpenStack and getting the environment to run on VirtualBox and Vagrant, I have hit a multitude of issues that I can only attribute to the changing dependencies this module pulls in, because they are not explicitly declared.

I just built a new environment today, which pulled in the dependency versions below, and I am seeing a variety of issues, such as 500 internal server errors when the Cirros image upload is attempted, and the RabbitMQ server not starting correctly, so Horizon throws a 500 error too.
/etc/puppetlabs/puppet/modules
├── dprince-qpid (v1.0.2)
├── duritong-sysctl (v0.0.1)
├── garethr-erlang (v0.3.0)
├── puppetlabs-apache (v1.1.1)
├── puppetlabs-ceilometer (v4.1.0)
├── puppetlabs-cinder (v4.1.0)
├── puppetlabs-glance (v4.1.0)
├── puppetlabs-heat (v4.1.0)
├── puppetlabs-horizon (v4.1.0)
├── puppetlabs-keystone (v4.1.0)
├── puppetlabs-mongodb (v0.8.0)
├── puppetlabs-mysql (v2.3.1)
├── puppetlabs-neutron (v4.2.0)
├── puppetlabs-nova (v4.1.0)
├── puppetlabs-ntp (v3.0.4)
├── puppetlabs-openstack (v4.1.0)
├── puppetlabs-rabbitmq (v3.1.0)
├── puppetlabs-rsync (v0.3.1)
├── puppetlabs-swift (v4.1.0)
├── puppetlabs-tempest (v3.0.0)
├── puppetlabs-vcsrepo (v0.2.0)
├── puppetlabs-vswitch (v0.3.0)
├── puppetlabs-xinetd (v1.3.1)
├── saz-memcached (v2.5.0)
├── saz-ssh (v1.4.0)
└── stahnma-epel (v0.1.0)

About a week ago I built an environment, which pulled in these dependencies, and that environment was much more stable.
/etc/puppetlabs/puppet/modules
├── dprince-qpid (v1.0.2)
├── duritong-sysctl (v0.0.4)
├── garethr-erlang (v0.3.0)
├── puppetlabs-apache (v1.1.0)
├── puppetlabs-ceilometer (v4.1.0)
├── puppetlabs-cinder (v4.1.0)
├── puppetlabs-glance (v4.1.0)
├── puppetlabs-heat (v4.1.0)
├── puppetlabs-horizon (v4.1.0)
├── puppetlabs-keystone (v4.1.0)
├── puppetlabs-mongodb (v0.8.0)
├── puppetlabs-mysql (v2.3.0)
├── puppetlabs-neutron (v4.2.0)
├── puppetlabs-nova (v4.1.0)
├── puppetlabs-ntp (v3.0.3)
├── puppetlabs-openstack (v4.1.0) --(was actually cloned from source as 4.1.0 was not released on stackforge yet)
├── puppetlabs-rabbitmq (v3.1.0)
├── puppetlabs-rsync (v0.3.0)
├── puppetlabs-swift (v4.1.0)
├── puppetlabs-tempest (v3.0.0)
├── puppetlabs-vcsrepo (v0.2.0)
├── puppetlabs-vswitch (v0.3.0)
├── puppetlabs-xinetd (v1.3.0)
├── saz-memcached (v2.5.0)
├── saz-ssh (v1.4.0)
└── stahnma-epel (v0.1.0)

Can you please explicitly pin all these dependency versions in future releases and then leave it up to the users to decide when and how to update them?
