
openstack-automation's Introduction

About the project

The project gives you a working OpenStack cluster in a matter of minutes. We do this using SaltStack. There is almost no coding involved and the result is easy to maintain. Above all, it is as easy as talking to your servers and asking them to configure themselves.

SaltStack provides an infrastructure management framework that makes our job easier. It supports most of the tasks you would want to perform while installing OpenStack, and more.

A few reasons why this is cool

  1. Don't just install OpenStack; keep your installation formulae versioned, so you can always go back to the last working set.
  2. Salt formulae are not meant just to install. They also serve to document the steps and the settings involved.
  3. OpenStack has a release cycle of six months, and every six months you only have to tinker with a few text files to stay in the game.
  4. OpenStack has an ever-increasing list of sub-projects, and we have an ever-increasing number of formulae.

And we are open source.

What is New

  1. Support for creating glance images using the glance state and module.
  2. Support for cinder added. Check the 'Cinder Volume' section of the README.
  3. Support for all Linux distros added. Check the 'Packages, Services, Config files and Repositories' section of the README.
  4. Partial support for 'single server, single nic' installations. Check the 'Single Interface Scenario' section for details.
  5. Pillar data has been segregated into multiple files according to its purpose. Check the 'Customizations' section for details.
  6. Only the icehouse and juno branches will exist and continue forward.
  7. The repo has been modified for use as a git fileserver backend. You may point 'git_pillar' at the 'pillar_root' sub-directory, or download the files and use them with the 'roots' backend.
  8. Pull requests to the project have some regulations, documented towards the end of the README.
  9. 'yaml' is now the default format for sls files. Maintaining sls files across formats was causing mismatches and errors; further, 'json' does not go well with 'jinja' templating (formulas end up less readable).
  10. The 'cluster_ops' salt module has been removed. Its functionality is now achieved using 'jinja macros', in an attempt to remove any dependencies that are not available in saltstack's stock modules.

Yet to Arrive

  1. Neutron state and execution module, pillar and formulas for creation of initial networks.
  2. Pillar and formulas for creation of instances.

Getting started

When you are ready to install OpenStack, modify the salt-master configuration file at "/etc/salt/master" to hold the contents below.

fileserver_backend:
  - roots
  - git
gitfs_remotes:
  - https://github.com/Akilesh1597/salt-openstack.git:
    - root: file_root
ext_pillar:
  - git: icehouse https://github.com/Akilesh1597/salt-openstack.git root=pillar_root
jinja_trim_blocks: True
jinja_lstrip_blocks: True

This will create a new environment called "icehouse" in your state tree. The 'file_root' directory of the github repo holds the state definitions in a number of '.sls' files and a few special directories, while the 'pillar_root' directory holds your cluster definition files.
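If you prefer not to use the git backend, the same layout can be served from a local clone through the 'roots' backend instead. A minimal sketch, assuming the repo has been cloned to '/srv/salt-openstack' (the clone path is an assumption, not part of the project):

fileserver_backend:
  - roots

file_roots:
  icehouse:
    - /srv/salt-openstack/file_root    # state tree from the repo clone (assumed path)

pillar_roots:
  icehouse:
    - /srv/salt-openstack/pillar_root  # cluster definition files (assumed path)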

At this stage, I assume you have a machine with minion id 'openstack.icehouse' and IP address '192.168.1.1' already added to the salt master.
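If that minion does not exist yet, the usual salt bootstrap looks roughly like this (a sketch; the master address is an assumption):

# /etc/salt/minion on the target machine
id: openstack.icehouse
master: 192.168.1.100    # address of your salt master (assumed)

Then accept the minion's key on the master with 'salt-key -a openstack.icehouse'.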

Let's begin...

salt-run fileserver.update
salt '*' saltutil.sync_all
salt -C 'I@cluster_type:icehouse' state.highstate

This instructs the minions having 'cluster_type=icehouse' defined in their pillar data to download all the formulae defined for them and execute them. If all goes well, you can log in to your newly installed OpenStack setup at 'http://192.168.1.1/horizon'.

But that is not all. We can have a fully customized install, with multiple hosts performing different roles, customized accounts and databases. All of this can be done simply by manipulating the pillar data.

Check the salt walkthrough to understand how salt works.

Multi node setup

Edit the 'cluster_resources.sls' file inside the 'pillar_root' sub-directory. It looks like the following.

roles: 
  - "compute"
  - "controller"
  - "network"
  - "storage"
compute: 
  - "openstack.icehouse"
controller: 
  - "openstack.icehouse"
network: 
  - "openstack.icehouse"
storage:
  - "openstack.icehouse"

It just means that in our cluster all roles are played by a single host, 'openstack.icehouse'. Now let's distribute the responsibilities.

roles: 
  - "compute"
  - "controller"
  - "network"
  - "storage"
compute: 
  - "compute1.icehouse"
  - "compute2.icehouse"
controller: 
  - "controller.icehouse"
network: 
  - "network.icehouse"
storage:
  - "storage.icehouse"

We just added five hosts to perform different roles; "compute1.icehouse" and "compute2.icehouse" perform the role "compute", for example. Make sure to list their IP addresses in the file 'openstack_cluster.sls'.

hosts: 
  controller.icehouse: 192.168.1.1
  network.icehouse: 192.168.1.2
  storage.icehouse: 192.168.1.3
  compute1.icehouse: 192.168.1.4
  compute2.icehouse: 192.168.1.5

Let's sync up.

salt-run fileserver.update
salt '*' saltutil.sync_all
salt -C 'I@cluster_type:icehouse' state.highstate
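If you want to confirm that the compound target matches all five minions before (or after) the highstate runs, a plain test.ping against the same target works:

salt -C 'I@cluster_type:icehouse' test.ping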

And if you use git as your backend, you will have to fork the repo before making any changes.

Add new roles

Let's say we want the 'queue server' as a separate role. This is how we do it.

  1. Add a new role "queue_server" under "roles".
  2. Define which minions will perform the role of "queue_server".
  3. Finally, define which formula deploys a "queue_server", under the "queue_server" key of the "sls" section, as shown below.

roles: 
  - "compute"
  - "controller"
  - "network"
  - "storage"
  - "queue_server"
compute: 
  - "compute1.icehouse"
  - "compute2.icehouse"
controller: 
  - "controller.icehouse"
network: 
  - "network.icehouse"
storage:
  - "storage.icehouse"
queue_server:
  - "queueserver.icehouse"
sls: 
  queue_server:
    - "queue.rabbit"
  controller: 
    - "generics.host"
    - "mysql"
    - "mysql.client"
    - "mysql.OpenStack_dbschema"
    - "keystone"
    - "keystone.OpenStack_tenants"
    - "keystone.OpenStack_users"
    - "keystone.OpenStack_services"
    - "nova"
    - "horizon"
    - "glance"
    - "cinder"
  network: 
    - "generics.host"
    - "mysql.client"
    - "neutron"
    - "neutron.service"
    - "neutron.openvswitch"
    - "neutron.ml2"
  compute: 
    - "generics.host"
    - "mysql.client"
    - "nova.compute_kvm"
    - "neutron.openvswitch"
    - "neutron.ml2"
  storage:
    - "cinder.volume"

You may want the same machine to perform many roles, or you may add a new machine. Make sure to update the machine's IP address as mentioned earlier.
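To deploy only the new role without re-running everything, you can also apply the formula directly to the new minion (a sketch; depending on your salt version you may need to pass the environment explicitly via 'saltenv'):

salt 'queueserver.icehouse' state.sls queue.rabbit saltenv=icehouse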

Customizations

The pillar data has been structured as below, in order to have a single sls file to modify for each kind of customization.

| Pillar File             | Purpose                                                 |
|:------------------------|:--------------------------------------------------------|
| OpenStack_cluster       | Generic cluster data                                     |
| access_resources        | Keystone tenants, users, roles, services and endpoints   |
| cluster_resources       | Hosts, roles and their corresponding formulas            |
| network_resources       | OpenStack Neutron data, explained below                  |
| db_resources            | Databases, users, passwords and grants                   |
| deploy_files            | Arbitrary files to be deployed on all minions            |
| misc_OpenStack_options  | Arbitrary OpenStack options and affected services        |
| [DISTRO].sls            | Distro-specific package data                             |
| [DISTRO]_repo.sls       | Distro-specific repository data                          |

Should you need more tenants, or to change the credentials of an OpenStack user, look into 'access_resources.sls' under 'pillar_root'. You may tweak your OpenStack in any way you want.
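As a purely hypothetical illustration of the kind of data that lives in such a file (the names and layout below are invented for this example; the authoritative schema is whatever 'access_resources.sls' in the repo actually contains):

tenants:
  - "admin"
  - "demo"
users:
  admin:
    password: "CHANGE_ME"    # invented placeholder
    tenant: "admin"
    roles:
      - "admin"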

Neutron Networking

'network_resources' under 'pillar_root' defines the OpenStack "Data" and "External" networks. The default configuration installs a 'gre' mode "Data" network and a 'flat' mode "External" network, and looks as below.

neutron: 
  intergration_bridge: br-int
  metadata_secret: "414c66b22b1e7a20cc35"
  type_drivers: 
    flat: 
      physnets: 
        External: 
          bridge: "br-ex"
          hosts:
            network.icehouse: "eth3"
    gre:
      tunnel_start: "1"
      tunnel_end: "1000"

Choosing the vxlan mode is not difficult either.

neutron: 
  intergration_bridge: br-int
  metadata_secret: "414c66b22b1e7a20cc35"
  type_drivers: 
    flat: 
      physnets: 
        External: 
          bridge: "br-ex"
          hosts:
            network.icehouse: "eth3"
    vxlan:
      tunnel_start: "1"
      tunnel_end: "1000"

For vlan mode tenant networks, you need to add a 'vlan' section under 'type_drivers' and, for each compute host and network host, specify the list of physical networks with their corresponding bridge, interface and allowed vlan range.

neutron: 
  intergration_bridge: br-int
  metadata_secret: "414c66b22b1e7a20cc35"
  type_drivers: 
    flat: 
      physnets: 
        External: 
          bridge: "br-ex"
          hosts:
            network.icehouse: "eth3"
    vlan: 
      physnets: 
        Internal1: 
          bridge: "br-eth1"
          vlan_range: "100:200"
          hosts:
            network.icehouse: "eth2"
            compute1.icehouse: "eth2"
            compute2.icehouse: "eth2"
        Internal2:
          bridge: "br-eth2"
          vlan_range: "200:300"
          hosts:
            network.icehouse: "eth4"
            compute3.icehouse: "eth2"

Single Interface Scenario

Most users trying OpenStack for the first time will want it up and running on a machine with a single interface. To do this, set the "single_nic" pillar in 'network_resources' to the primary interface id of your machine.

This connects all bridges to a bridge named 'br-proxy'. Afterwards you have to manually add your primary nic to this bridge and configure the bridge with the IP address of your primary nic.

We have not automated this last part because you may lose connectivity to your minion at this point, so it is best done manually. Furthermore, setting up the bridges in your 'interfaces configuration' file varies per distro.
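For reference, the manual step usually amounts to something like the following, run from the console rather than over ssh (a sketch only; 'eth0' and '192.168.1.1/24' are assumptions standing in for your primary nic and its address):

ovs-vsctl add-port br-proxy eth0      # attach the primary nic to the proxy bridge
ip addr flush dev eth0                # drop the address from the nic itself
ip addr add 192.168.1.1/24 dev br-proxy   # move the address onto the bridge
ip link set br-proxy up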

Users will have to bear with us until we find a formula for this.

Cinder Volume

The 'cinder.volume' formula finds any free disk space available on the minion and creates an lvm partition and a volume group named 'cinder-volume' on it. This is used by OpenStack's volume service. It is therefore advised to deploy this particular formula on machines with free space available. We use a custom state module for this purpose, and we intend to push this additional state to the official salt state modules.
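Once the formula has run, stock LVM tooling can verify the result (these are standard LVM commands, not part of the formula):

pvs                       # lists the physical volume(s) that were claimed
vgdisplay cinder-volume   # shows the volume group used by the volume service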

Packages, Services, Config files and Repositories

The 'pillar_root' sub-directory contains a [DISTRO].sls file with the package names, service names and config file paths for each OpenStack component. This file should be replicated for every distro you plan to use on your minions.

The [DISTRO]_repo.sls file has the repository details, specific to each distro, for the repositories that house the OpenStack packages. The parameters defined in this file should be ones accepted by saltstack's pkgrepo module.
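As a hypothetical sketch of what one repository entry might look like on Ubuntu (the keys mirror arguments accepted by saltstack's pkgrepo.managed; the entry name, repo line and key id are invented, and the real layout is whatever Ubuntu_repo.sls defines):

# invented example -- consult the repo's actual [DISTRO]_repo.sls for the real layout
cloud_archive:
  humanname: "Ubuntu Cloud Archive"
  name: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main"
  keyid: "EC4926EA"                    # hypothetical key id
  keyserver: "keyserver.ubuntu.com"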

Contributing

As with all open source projects, we need support too. However, since this repo is used for deploying OpenStack clusters, forks may end up carrying lots of changes. These changes may include sensitive information in the pillar, or trivial changes that do not need to be merged back here. So please follow the steps below.

After forking, please create a new branch, which you will use to deploy OpenStack and make changes specific to yourself. If you feel any changes are good to be maintained and carried forward, make them in the 'original' branches and merge them into your other branches. Always send pull requests from the 'original' branches.

As you may see, the pillar data as of now only has 'Ubuntu.sls' and 'Debian.sls'. We need to update this repo with all the other distros on which OpenStack is available.

openstack-automation's People

Contributors

akilesh1597, arthurzenika, fritzstauff, naviens


openstack-automation's Issues

glance_sync step fails on debian deployment.

After some tweaking for debian (will do a pull request shortly) I get a working highstate. The only exception is the following:

          ID: glance_sync
    Function: cmd.run
        Name: glance-manage db_sync
      Result: False
     Comment: Command "glance-manage db_sync" run
     Changes:   
              ----------
              pid:
                  2958
              retcode:
                  1
              stderr:
                  2014-06-19 14:47:12.420 2959 INFO glance.db.sqlalchemy.migration [-] Upgrading database to version latest
                  2014-06-19 14:47:12.434 2959 INFO migrate.versioning.api [-] 13 -> 14... 
                  2014-06-19 14:47:12.437 2959 CRITICAL glance [-] `images`
                  2014-06-19 14:47:12.437 2959 TRACE glance Traceback (most recent call last):
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/bin/glance-manage", line 10, in <module>
                  2014-06-19 14:47:12.437 2959 TRACE glance     sys.exit(main())
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 127, in main
                  2014-06-19 14:47:12.437 2959 TRACE glance     CONF.command.func()
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 77, in do_db_sync
                  2014-06-19 14:47:12.437 2959 TRACE glance     CONF.command.current_version)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/migration.py", line 122, in db_sync
                  2014-06-19 14:47:12.437 2959 TRACE glance     upgrade(version=version)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/migration.py", line 61, in upgrade
                  2014-06-19 14:47:12.437 2959 TRACE glance     return versioning_api.upgrade(sql_connection, repo_path, version)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade
                  2014-06-19 14:47:12.437 2959 TRACE glance     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "<string>", line 2, in _migrate
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 159, in with_engine
                  2014-06-19 14:47:12.437 2959 TRACE glance     return f(*a, **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 366, in _migrate
                  2014-06-19 14:47:12.437 2959 TRACE glance     schema.runchange(ver, change, changeset.step)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 91, in runchange
                  2014-06-19 14:47:12.437 2959 TRACE glance     change.run(self.engine, step)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py", line 145, in run
                  2014-06-19 14:47:12.437 2959 TRACE glance     script_func(engine)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py", line 64, in upgrade
                  2014-06-19 14:47:12.437 2959 TRACE glance     tables = [define_image_tags_table(meta)]
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py", line 23, in define_image_tags_table
                  2014-06-19 14:47:12.437 2959 TRACE glance     schema.Table('images', meta, autoload=True)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 318, in __new__
                  2014-06-19 14:47:12.437 2959 TRACE glance     table._init(name, metadata, *args, **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 381, in _init
                  2014-06-19 14:47:12.437 2959 TRACE glance     self._autoload(metadata, autoload_with, include_columns)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 408, in _autoload
                  2014-06-19 14:47:12.437 2959 TRACE glance     self, include_columns, exclude_columns
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2426, in run_callable
                  2014-06-19 14:47:12.437 2959 TRACE glance     return conn.run_callable(callable_, *args, **kwargs)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1969, in run_callable
                  2014-06-19 14:47:12.437 2959 TRACE glance     return callable_(self, *args, **kwargs)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 260, in reflecttable
                  2014-06-19 14:47:12.437 2959 TRACE glance     return insp.reflecttable(table, include_columns, exclude_columns)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py", line 350, in reflecttable
                  2014-06-19 14:47:12.437 2959 TRACE glance     tbl_opts = self.get_table_options(table_name, schema, **table.kwargs)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py", line 178, in get_table_options
                  2014-06-19 14:47:12.437 2959 TRACE glance     **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "<string>", line 1, in <lambda>
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py", line 47, in cache
                  2014-06-19 14:47:12.437 2959 TRACE glance     ret = fn(self, con, *args, **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/dialects/mysql/base.py", line 2057, in get_table_options
                  2014-06-19 14:47:12.437 2959 TRACE glance     parsed_state = self._parsed_state_or_create(connection, table_name, schema, **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/dialects/mysql/base.py", line 2156, in _parsed_state_or_create
                  2014-06-19 14:47:12.437 2959 TRACE glance     info_cache=kw.get('info_cache', None)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "<string>", line 1, in <lambda>
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py", line 47, in cache
                  2014-06-19 14:47:12.437 2959 TRACE glance     ret = fn(self, con, *args, **kw)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/dialects/mysql/base.py", line 2181, in _setup_parser
                  2014-06-19 14:47:12.437 2959 TRACE glance     full_name=full_name)
                  2014-06-19 14:47:12.437 2959 TRACE glance   File "/usr/lib/python2.7/dist-packages/sqlalchemy/dialects/mysql/base.py", line 2268, in _show_create_table
                  2014-06-19 14:47:12.437 2959 TRACE glance     raise exc.NoSuchTableError(full_name)
                  2014-06-19 14:47:12.437 2959 TRACE glance NoSuchTableError: `images`
                  2014-06-19 14:47:12.437 2959 TRACE glance
              stdout:

I went back to the install manual at http://docs.openstack.org/havana/install-guide/install/apt/content/glance-install.html, dropped the database, re-created it following the instructions there, and the db_sync worked.

Could it be that the mysql handling is different between ubuntu and debian? Any ideas why this would fail?

Jinja variable 'dict object' has no attribute 'install'; line 2

Hello,

Really nice work. Thank you.
I am trying to test the automatic deployment of OpenStack using salt, but I receive an error.
On one of the minions:
root@hawk:~# salt-call state.highstate -l debug
the message is:
local:
    Data failed to compile:

    Traceback (most recent call last):
      File "/usr/lib/pymodules/python2.7/salt/state.py", line 2349, in call_highstate
        top = self.get_top()
      File "/usr/lib/pymodules/python2.7/salt/state.py", line 1926, in get_top
        tops = self.get_tops()
      File "/usr/lib/pymodules/python2.7/salt/state.py", line 1809, in get_tops
        env=env
      File "/usr/lib/pymodules/python2.7/salt/template.py", line 69, in compile_template
        ret = render(input_data, env, sls, **render_kwargs)
      File "/usr/lib/pymodules/python2.7/salt/renderers/jinja.py", line 42, in render
        tmp_data.get('data', 'Unknown render error in jinja renderer')
    SaltRenderError: Jinja variable 'dict object' has no attribute 'install'; line 2


havana:
{% for cluster_component in pillar['install'] %} <======================
{% for server in pillar[cluster_component] %}
{{ server }}:
{% for sls in pillar['install'][cluster_component] %}
- {{ sls }}
{% endfor %}
[...]

On the salt-master I configured:
file_roots:
  base:
    - /srv/salt/
  havana:
    - /srv/havana/file
pillar_roots:
  base:
    - /srv/pillar
  havana:
    - /srv/havana/pillar

I put the project in /srv/havana.

Thanks a lot,
Gabriel

neutron/ml2.sls fails when run twice

The following step in neutron/ml2.sls fails when run twice:

----------
          ID: intergrationg_bridge
    Function: cmd.run
        Name: ovs-vsctl add-br br-int
      Result: False
     Comment: Command "ovs-vsctl add-br br-int" run
     Changes:   
              ----------
              pid:
                  16413
              retcode:
                  1
              stderr:
                  ovs-vsctl: cannot create a bridge named br-int because a bridge named br-int already exists
              stdout:

It should check whether the bridge is already configured (and configured according to the spec), and do nothing if it is. One possible fix is sketched below.
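A minimal way to get that idempotence with the existing cmd.run state is salt's standard 'unless' argument; 'ovs-vsctl br-exists' exits 0 when the bridge is already present (a sketch, with the state ID copied from the report above):

intergrationg_bridge:
  cmd.run:
    - name: ovs-vsctl add-br br-int
    - unless: ovs-vsctl br-exists br-int   # skip if the bridge already exists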

Changes to be amended to this repo

Hi All,
The repo has been lacking activity from my side for quite some time now. The following changes will be made to this project.

  1. All branches except the master shall be removed.
  2. The changes and commits to all other branches will be manually updated to the master.
  3. Pull requests to all branches will be consolidated and merged manually to master.
  4. 'yaml' will be the default format for sls files. This is done because maintaining sls files across formats causes mismatches and errors.
  5. Support for cinder sls will be added.
  6. The repo will be modified to enable users to use it directly as a salt gitfs backend.
  7. Support will be added to the pillar files to make the project distro agnostic.

The changes mentioned above shall first be made in https://github.com/Akilesh1597/openstack-automation , tested, and then merged here.

Thank you, and please do leave a comment with your suggestions.

Use mariadb instead of mysql

featurerequest

(if we start using these states in production I might contribute this, if no one has since I posted this ticket)

Release versions of this project

First off, I am glad there is some activity on the project; looking forward to re-using this for our next OpenStack deployment.

It would be nice to have a release cycle on openstack-automation; some versioning would enable users to see some sort of changelog and keep track of what is planned and what works in which version.

Add a single node nova.conf state file.

Something like nova.controller_compute_combined, so that nova can be removed from the controller pillar and nova.compute_kvm can be replaced by nova.controller_compute_combined in the compute pillar. A sketch of the resulting pillar is below.
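In terms of the 'sls' pillar shown in the README, the request would amount to something like this (a sketch of the proposal; the combined formula does not exist yet):

sls:
  compute:
    - "generics.host"
    - "mysql.client"
    - "nova.controller_compute_combined"   # proposed; replaces "nova.compute_kvm"
    - "neutron.openvswitch"
    - "neutron.ml2"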

openvswitch gre tunnel does not work with hostname

The file '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' has a key named 'local_ip'; giving the host name of the machine here renders the setup useless.

The workaround is to manually find the host name of the machine where neutron-ovs-plugin is installed, edit the corresponding config file cached on the master at
'file_root/config/<cluster_name>/<machine_name>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini', and set the local_ip key to the machine's IP address.

No License on this Project

Hi there,

You're stating nowhere what the license on this project is. Not even "this is licensed under the same terms as (OpenStack||SaltStack)" (which would be the Apache Public License in both cases).

As far as I know, this means it's "All Rights Reserved", as this is the default under most jurisdictions, and that is probably not what's intended here.

Regards, Florian

PS: I took a quick look at your neutron module, but I guess I can't use it until the license is clear.

neutron module passes profile setting onto command so fails on many

salt-call --local neutron.delete_subnet profile=virl flat1
[INFO ] Executing command '/sbin/zfs help' in directory '/home/virl'
[INFO ] Executing command '/sbin/zfs help' in directory '/home/virl'
[INFO ] Starting new HTTP connection (1): 127.0.0.1
[ERROR ] calling with args ('flat1',)
[ERROR ] calling with kwargs {'profile': 'virl'}
Passed invalid arguments: delete_subnet() got an unexpected keyword argument 'profile'

Running everything on one physical server

I would like to run everything on one server with just an eth0 interface.

I run into three issues:

  1. nova.conf in nova.compute_kvm conflicts with the one in nova.init
  2. which pillar should I use for the network when I only have eth0?
  3. the way I tried it, the network goes down and I have to log in without network access and remove the bridges before I can reach the physical server again.

Any tips? :)

Why JSON?

Why did you decide to write salt states in JSON / Jinja format?

JSON doesn't allow for comments and such. It's really not a great choice for configuration files that a human has to edit.

have example pillars in the repo so pillars aren't tracked by git

Openstack_Icehouse/pillar/openstack_cluster.sls and Openstack_Icehouse/pillar/openstack_cluster_inverse.sls could be moved to .example files, and .gitignore could then include the real files. In the README you would instruct users to copy both files and then edit them. A sketch of the corresponding .gitignore entries follows.
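A sketch of the suggested .gitignore entries, using the paths from this report:

# .gitignore sketch for the suggestion above
Openstack_Icehouse/pillar/openstack_cluster.sls
Openstack_Icehouse/pillar/openstack_cluster_inverse.sls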
