
Grifter

Python library to build large scale Vagrant topologies for the networking space. It can also be used to build small scale labs for networking/compute devices.


NOTE: Python 3.6+ is required to make use of this library.

NOTE: This project is currently in beta and stability is not guaranteed. Breaking API changes can be expected.

Vagrant

What is Vagrant? From the Vagrant website

A command line utility for managing the lifecycle of virtual machines

Vagrant Libvirt

What is Vagrant Libvirt? From the vagrant-libvirt GitHub page

A Vagrant plugin that adds a Libvirt provider to Vagrant, allowing Vagrant to control and provision machines via Libvirt toolkit.

Why

When simulating large topologies, Vagrantfiles can become thousands of lines long. Getting all the configuration correct is often a frustrating, error-riddled process, especially for those not familiar with Vagrant. Grifter aims to simplify that process.

Additional project goals
  • Generate topology.dot files for use with PTM ✔️
  • Generate Inventory files for tools such as Ansible, Nornir

NOTE: Only a vagrant-libvirt compatible Vagrantfile for Vagrant version >= 2.1.0 will be generated.

VirtualBox and all other provider types are unsupported and not on the road map.

Dependencies

Grifter relies on the help of several awesome projects from the Python community, including Jinja2 (templating), Cerberus (data validation) and Click (CLI).

Installation

There is currently no PyPI release for this project. Grifter can be installed directly from source using pip.

Create and activate a virtualenv.

mkdir ~/test && cd ~/test
python3 -m venv .venv
source .venv/bin/activate

Install grifter with pip

# Install the master branch.
pip install https://github.com/bobthebutcher/grifter/archive/master.zip

Releases are distributed via GitHub Releases.

# Install the latest release.
pip install https://github.com/bobthebutcher/grifter/archive/v0.2.12.zip

Quick Start

Create a guests.yml file.

tee guests.yml > /dev/null << "EOF"
srv01:
  vagrant_box: 
    name: "centos/7"
EOF

Generate a Vagrantfile

grifter create guests.yml

Let Vagrant do its magic

vagrant up

Config File

A file named config.yml is required to define the base settings of each box managed within the grifter environment. The default config.yml file ships with the grifter package.

Box Naming

Grifter expects Vagrant boxes to be named according to the following list.

Custom Boxes
  • arista/veos
  • cisco/csr1000v
  • cisco/iosv
  • cisco/xrv
  • juniper/vmx-vcp
  • juniper/vmx-vfp
  • juniper/vqfx-pfe
  • juniper/vqfx-re
  • juniper/vsrx
  • juniper/vsrx-packetmode

Vagrant Cloud Boxes
  • CumulusCommunity/cumulus-vx
  • centos/7
  • generic/ubuntu1804
  • opensuse/openSUSE-15.0-x86_64

guest_config

The guest_config section defines characteristics about the Vagrant boxes used with grifter.

Required Parameters.

  • data_interface_base
  • data_interface_offset
  • max_data_interfaces
  • management_interface

Note: data_interface_base cannot be an empty string. If the box does not have any data interfaces, the suggested value is "NA". The field is ignored in that case, so it can be anything as long as it is not empty.

guest_config:
  example/box:
    data_interface_base: "eth" # String pattern for data interfaces.
    data_interface_offset: 0 # Number of the first data interface, e.g. 0, 1, 2.
    internal_interfaces: 0 # Used for inter-box connections for multi-vm boxes.
    max_data_interfaces: 8
    management_interface: "ma1"
    reserved_interfaces: 0 # Interfaces that are required but cannot be used.

  arista/veos:
    data_interface_base: "eth"
    data_interface_offset: 1
    internal_interfaces: 0
    max_data_interfaces: 24
    management_interface: "ma1"
    reserved_interfaces: 0

  juniper/vsrx-packetmode:
    data_interface_base: "ge-0/0/"
    data_interface_offset: 0
    internal_interfaces: 0
    max_data_interfaces: 16
    management_interface: "fxp0.0"
    reserved_interfaces: 0

guest_pairs

The guest_pairs section is used to define boxes that need two VMs to be fully functional. Some examples are the Juniper vMX and vQFX, where one VM is used for the control plane and another for the forwarding plane.

NOTE: This functionality will be added in a future release.

Custom config files

A default config file ships with the grifter python package. This file can be customized with your required parameters by creating a config.yml file in one of the following locations.

  • /opt/grifter/
  • ~/.grifter/
  • ./

Parameters in a user's config.yml file will be merged with the default config.yml file, with the user-defined parameters taking precedence.
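
The merge behaviour can be pictured with a short sketch. This is an illustration of a recursive merge where user values win, not grifter's actual code:

# Illustrative only: recursive merge with user-defined values taking precedence.
def deep_merge(defaults, overrides):
    """Return defaults updated with overrides, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

default_config = {'guest_config': {'arista/veos': {'max_data_interfaces': 24}}}
user_config = {'guest_config': {'arista/veos': {'max_data_interfaces': 8}}}
print(deep_merge(default_config, user_config))
# {'guest_config': {'arista/veos': {'max_data_interfaces': 8}}}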

Usage

CLI Utility

Grifter ships with a CLI utility. Execute grifter -h to discover all the CLI options.

grifter -h
Usage: grifter [OPTIONS] COMMAND [ARGS]...

  Create a Vagrantfile from a YAML data input file.

Options:
  --version   Show the version and exit.
  -h, --help  Show this message and exit.

Commands:
  create   Create a Vagrantfile.
  example  Print example file declaration.

Create Vagrantfile

grifter create guests.yml

Guests Datafile

Guest VM characteristics and interface connections are defined in a YAML file. This file can be named anything, but the recommended naming convention is guests.yml.

Guest Schema

Jinja2 is used as the templating engine to generate the Vagrantfiles. Guest definitions within a guests file must use the following schema to ensure templates render correctly and without errors. The guest data is validated against the schema using the Cerberus project; a validation sketch follows the schema below.

some-guest: # guest name
  vagrant_box: # vagrant_box parameters
    name: # string - required
    version: # string - optional | default: ""
    url: # string - optional | default: ""
    provider: # string - optional | default: "libvirt"
    guest_type: # string - optional | default: ""
    boot_timeout: # integer - optional | default: 0
    throttle_cpu: # integer - optional | default: 0

  ssh: # dict - optional
    username: # string - optional | default: ""
    password: # string - optional | default: ""
    insert_key: # boolean - optional | default: False

  synced_folder: # dict - optional
    enabled: # boolean - default: False
    id: # string - default: "vagrant-root"
    src: # string - default: "."
    dst: # string - default: "/vagrant"

  provider_config: # dict - optional
    random_hostname: # boolean - optional | default: False
    nic_adapter_count: # integer - optional | default: 0
    disk_bus: # string - optional | default: ""
    cpus: # integer - optional | default: 1
    memory: # integer - optional | default: 512
    huge_pages: # boolean - optional | default: False
    storage_pool: # string - optional | default: ""
    additional_storage_volumes: # list - optional
      # For each list element the following is required.
      - location: # string
        type: # string
        bus: # string
        device: # string
    nic_model_type: # string - optional | default: ""
    management_network_mac: # string - optional | default: ""

  internal_interfaces: # list - optional
    # For each list element the following is required.
    - local_port: # integer
      remote_guest: # string
      remote_port: # integer

  data_interfaces: # list - optional
    # For each list element the following is required.
    - local_port: # integer
      remote_guest: # string
      remote_port: # integer
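
As an illustration of that validation step, here is a minimal sketch using Cerberus against a hand-rolled subset of the schema. The authoritative schema ships inside the grifter package; the one below exists only for this example.

from cerberus import Validator

# Illustrative subset of the guest schema above (not the packaged schema).
guest_schema = {
    'vagrant_box': {
        'type': 'dict',
        'required': True,
        'schema': {
            'name': {'type': 'string', 'required': True},
            'version': {'type': 'string', 'default': ''},
            'provider': {'type': 'string', 'default': 'libvirt'},
        },
    },
    'provider_config': {
        'type': 'dict',
        'schema': {
            'cpus': {'type': 'integer', 'default': 1},
            'memory': {'type': 'integer', 'default': 512},
        },
    },
}

guest = {'vagrant_box': {'name': 'centos/7'}, 'provider_config': {'cpus': 2}}

v = Validator(guest_schema)
if not v.validate(guest):
    # v.errors maps each failing field to its validation messages.
    raise SystemExit(f"Invalid guest definition: {v.errors}")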

Example Datafile

The following example datafile defines two arista/veos switches connected together on ports 1 and 2.

sw01:
  vagrant_box:
    name: "arista/veos"
    version: "4.20.1F"
    guest_type: "tinycore"
    provider: "libvirt"
  ssh:
    insert_key: False
  synced_folder:
    enabled: False
  provider_config:
    nic_adapter_count: 2
    disk_bus: "ide"
    cpus: 2
    memory: 2048
  data_interfaces:
    - local_port: 1
      remote_guest: "sw02"
      remote_port: 1
    - local_port: 2
      remote_guest: "sw02"
      remote_port: 2

sw02:
  vagrant_box:
    name: "arista/veos"
    version: "4.20.1F"
    guest_type: "tinycore"
    provider: "libvirt"
  ssh:
    insert_key: False
  synced_folder:
    enabled: False
  provider_config:
    nic_adapter_count: 2
    disk_bus: "ide"
    cpus: 2
    memory: 2048
  data_interfaces:
    - local_port: 1
      remote_guest: "sw01"
      remote_port: 1
    - local_port: 2
      remote_guest: "sw01"
      remote_port: 2

Generated Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

def get_mac(oui="28:b7:ad")
  "Generate a MAC address"
  nic = (1..3).map{"%0.2x"%rand(256)}.join(":")
  return "#{oui}:#{nic}"
end

cwd = Dir.pwd.split("/").last
username = ENV['USER']
domain_prefix = "#{username}_#{cwd}"
domain_uuid = "1f22b55d-2d7e-5a24-b4fa-3a8878df5cc5"

Vagrant.require_version ">= 2.1.0"
Vagrant.configure("2") do |config|

  config.vm.define "sw01" do |node|
    guest_name = "sw01"
    node.vm.box = "arista/veos"
    node.vm.box_version = "4.20.1F"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 2
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 2
    end

    node.vm.network :private_network,
      # sw01-eth1 <--> sw02-eth1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.146.53.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.146.53.2",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw01-eth1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw01-eth2 <--> sw02-eth2
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.146.53.1",
      :libvirt__tunnel_local_port => 10002,
      :libvirt__tunnel_ip => "127.146.53.2",
      :libvirt__tunnel_port => 10002,
      :libvirt__iface_name => "sw01-eth2-#{domain_uuid}",
      auto_config: false

  end
  config.vm.define "sw02" do |node|
    guest_name = "sw02"
    node.vm.box = "arista/veos"
    node.vm.box_version = "4.20.1F"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 2
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 2
    end

    node.vm.network :private_network,
      # sw02-eth1 <--> sw01-eth1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.146.53.2",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.146.53.1",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw02-eth1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw02-eth2 <--> sw01-eth2
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.146.53.2",
      :libvirt__tunnel_local_port => 10002,
      :libvirt__tunnel_ip => "127.146.53.1",
      :libvirt__tunnel_port => 10002,
      :libvirt__iface_name => "sw02-eth2-#{domain_uuid}",
      auto_config: false

  end

end

Defaults Per-Guest Type

It is possible to define default values per guest group type. Grifter will look for a file named guest-defaults.yml in the following locations from the least to most preferred:

  • /opt/grifter/
  • ~/.grifter/
  • ./

arista/veos:
  vagrant_box:
    version: "4.20.1F"
    guest_type: "tinycore"
  ssh:
    insert_key: False
  synced_folder:
    enabled: False
  provider_config:
    nic_adapter_count: 24
    cpus: 2
    memory: 2048
    disk_bus: "ide"

juniper/vsrx-packetmode:
  vagrant_box:
    version: "18.3R1-S1.4"
    provider: "libvirt"
    guest_type: "tinycore"
  ssh:
    insert_key: False
  synced_folder:
    enabled: False
  provider_config:
    nic_adapter_count: 2
    disk_bus: "ide"
    cpus: 2
    memory: 4096

Group variables can be overridden by variables at the guest level. The group and guest variables are merged prior to building a Vagrantfile, with the guest variables taking precedence over the group variables.
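
For example, taking the arista/veos provider_config defaults above and a guest that only overrides nic_adapter_count, the effective values work out like this (an illustrative snippet, not grifter's code):

group_defaults = {'nic_adapter_count': 24, 'cpus': 2, 'memory': 2048, 'disk_bus': 'ide'}
guest_overrides = {'nic_adapter_count': 2}

# Guest values win; everything else falls through from the group defaults.
effective = {**group_defaults, **guest_overrides}
assert effective == {'nic_adapter_count': 2, 'cpus': 2, 'memory': 2048, 'disk_bus': 'ide'}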

This means you can have a much more succinct guests file by reducing a lot of duplication. Below is an example of a simplified guests file. The values from the arista/veos guest type in the guest-defaults.yml file will be used to fill in the remaining parameters for the guests.

sw01:
  vagrant_box:
    name: "arista/veos"
  provider_config:
    nic_adapter_count: 2
  data_interfaces:
    - local_port: 1
      remote_guest: "sw02"
      remote_port: 1
    - local_port: 2
      remote_guest: "sw02"
      remote_port: 2

sw02:
  vagrant_box:
    name: "arista/veos"
  provider_config:
    nic_adapter_count: 2
  data_interfaces:
    - local_port: 1
      remote_guest: "sw01"
      remote_port: 1
    - local_port: 2
      remote_guest: "sw01"
      remote_port: 2

The generated Vagrantfile below is the same as the one above, but with a much cleaner guest definition file.

# -*- mode: ruby -*-
# vi: set ft=ruby :

def get_mac(oui="28:b7:ad")
  "Generate a MAC address"
  nic = (1..3).map{"%0.2x"%rand(256)}.join(":")
  return "#{oui}:#{nic}"
end

cwd = Dir.pwd.split("/").last
username = ENV['USER']
domain_prefix = "#{username}_#{cwd}"
domain_uuid = "d35fb1b6-ecdc-5412-be22-185446af92d6"

Vagrant.require_version ">= 2.1.0"
Vagrant.configure("2") do |config|

  config.vm.define "sw01" do |node|
    guest_name = "sw01"
    node.vm.box = "arista/veos"
    node.vm.box_version = "4.20.1F"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 2
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 2
    end

    node.vm.network :private_network,
      # sw01-eth1 <--> sw02-eth1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.127.145.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.127.145.2",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw01-eth1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw01-eth2 <--> sw02-eth2
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.127.145.1",
      :libvirt__tunnel_local_port => 10002,
      :libvirt__tunnel_ip => "127.127.145.2",
      :libvirt__tunnel_port => 10002,
      :libvirt__iface_name => "sw01-eth2-#{domain_uuid}",
      auto_config: false

  end
  config.vm.define "sw02" do |node|
    guest_name = "sw02"
    node.vm.box = "arista/veos"
    node.vm.box_version = "4.20.1F"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 2
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 2
    end

    node.vm.network :private_network,
      # sw02-eth1 <--> sw01-eth1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.127.145.2",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.127.145.1",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw02-eth1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw02-eth2 <--> sw01-eth2
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.127.145.2",
      :libvirt__tunnel_local_port => 10002,
      :libvirt__tunnel_ip => "127.127.145.1",
      :libvirt__tunnel_port => 10002,
      :libvirt__iface_name => "sw02-eth2-#{domain_uuid}",
      auto_config: false

  end

end

Example Files

Examples of the config.yml, guest-defaults.yml and guests.yml files can be found in the grifter repository.

Interfaces

There are 3 types of interfaces that can be defined.

  • internal_interfaces
  • data_interfaces
  • reserved_interfaces

internal_interfaces

Config location: guests.yml
Used as an inter-VM communication channel for multi-VM boxes.
Known examples are the vMX and vQFX.

data_interfaces

Config location: guests.yml
Revenue ports that are used to pass data traffic.

reserved_interfaces

Config location: config.yml
Interfaces that need to be defined because 'reasons' but cannot be used. The only known example is the juniper/vqfx-re. The number of reserved_interfaces is defined per box type in the config.yml file. Grifter builds out the interface definitions automatically as blackhole interfaces.

Blackhole Interfaces

Interfaces defined in the Vagrantfile map to interfaces on the guest VM on a first-to-last basis. This can be undesirable when trying to accurately simulate a production environment where devices can have 48+ ports.

Grifter will automatically create blackhole interfaces to fill out undefined data_interfaces ports up to the box type's max_data_interfaces parameter in the config.yml file.
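
A minimal sketch of that padding logic, using hypothetical names rather than grifter's internals:

# Hypothetical helper: which data ports still need blackhole interfaces.
def fill_blackhole_ports(defined_ports, max_data_interfaces, offset=1):
    """Return the data port numbers that need blackhole interfaces."""
    all_ports = range(offset, offset + max_data_interfaces)
    return [port for port in all_ports if port not in set(defined_ports)]

# An arista/veos guest (offset 1, 24 data ports) with only ports 1 and 2 cabled:
print(fill_blackhole_ports([1, 2], 24))  # [3, 4, ..., 24]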

Vagrantfile Interface Order

Interfaces are added to the Vagrantfile in the following order.

  • internal_interfaces
  • reserved_interfaces
  • data_interfaces

Interfaces are configured using the udp tunneling type. This will create a 'pseudo' layer 1 connection between VM ports.

Example interface definition
  data_interfaces:
    - local_port: 1
      remote_guest: "sw02"
      remote_port: 1
Rendered Vagrantfile interface
    node.vm.network :private_network,
      # sw01-eth1 <--> sw02-eth1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.255.255.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.255.255.2",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw01-eth1-#{domain_uuid}",
      auto_config: false

NIC Adapter Count

Config location: guests.yml
Defines the total number of data_interfaces to create on the VM. Any undefined data_interfaces will be added as blackhole interfaces.

The total interface count is the sum of the internal_interfaces, reserved_interfaces and data_interfaces parameters after blackhole interfaces have been added automatically by the template system.
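
A worked example of that arithmetic, assuming a hypothetical box with one internal and one reserved interface:

internal_interfaces = 1
reserved_interfaces = 1
nic_adapter_count = 24                 # total data interfaces to create
defined_data_interfaces = 2            # cabled in guests.yml
blackhole = nic_adapter_count - defined_data_interfaces  # 22 filler interfaces

total_vm_nics = internal_interfaces + reserved_interfaces + nic_adapter_count
print(blackhole, total_vm_nics)  # 22 26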


grifter's Issues

Add guest defaults file lookup

Add a directory lookup list for guest-defaults.yml

From least to most preferred.

/opt/vagrantfile/guest-defaults.yml
~/.vagrantfile/guest-defaults.yml
./guest-defaults.yml

cisco csr1000v crashing

Cisco csr1kv crashes on boot. Need to stop guest detection by setting the guest type.

config.vm.guest = :freebsd

and potentially also add boot timeout

config.vm.boot_timeout = 180

randomize loopback range and interface port numbers

At the moment each guest takes its loopback IP and port range from the same blocks (127.255.1.0/24 and 10000). This can result in conflicts between multiple labs for the same user, or between labs of different users. The selection of the blocks should be randomized somehow.
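
One speculative approach, not anything grifter currently implements, is to derive a stable per-project block from the user name and working directory so separate labs rarely collide:

import hashlib
import os
from pathlib import Path

def loopback_block():
    """Derive a 127.x.y prefix from the user name and current directory."""
    seed = f"{os.environ.get('USER', '')}:{Path.cwd()}".encode()
    digest = hashlib.sha256(seed).digest()
    # Map two digest bytes into the middle octets of the 127/8 range.
    return f"127.{digest[0]}.{digest[1]}"

print(loopback_block())  # e.g. "127.146.53", as seen in the Vagrantfiles above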

Add pre-checks for file based triggers

In Vagrant >= 2.2.2, if the HDD file does not exist the program exits with an error on destroy. Prior to this version it was just ignored.

Add a File.exists? check prior to attempting deletion of an image.

Validators for variables

Add validators as a pre-check for two things:

  1. All required variables are present
  2. All variables provided are of the correct type

Guests with no interfaces defined crash

  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/bin/vagrantfile", line 11, in <module>
    sys.exit(cli())
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/vagrantfile_builder/cli.py", line 26, in create
    update_hosts(data['hosts'])
  File "/home/bradmin/.local/share/virtualenvs/cumulus-demo-gpYtoeRI/lib/python3.6/site-packages/vagrantfile_builder/api.py", line 96, in update_hosts
    host['provider_config']['nic_adapter_count'], host['interfaces']
KeyError: 'interfaces'

Guest type option

Investigate setting the guest type to other for network appliances.

Currently for some boxes I set the guest type to something like this: node.vm.guest = :freebsd.
It may be possible to totally disable guest detection using the node.vm.guest = :other option.
Not setting the guest type can cause some boxes to crash when Vagrant tries to identify the guest type.

CPU throttle trigger

Some instances run with 100% CPU, the vSRX v2/3 for example. Add an option to throttle the CPU to 34% after the guest boots.

Map vagrant interface number to box interface name

Depending on the box type, interfaces don't always start at 1; sometimes 0, sometimes 2. It would be handy to have an auto-mapping function to line up the interface mappings.

interface_maps = {
    'csr': {
        1: 'GigabitEthernet2',
        2: 'GigabitEthernet3',
        3: 'GigabitEthernet4',
        4: 'GigabitEthernet5',
        5: 'GigabitEthernet6',
        6: 'GigabitEthernet7',
        7: 'GigabitEthernet8',
    },
    'iosv': {
        1: 'GigabitEthernet0/1',
        2: 'GigabitEthernet0/2',
        3: 'GigabitEthernet0/3',
        4: 'GigabitEthernet0/4',
        5: 'GigabitEthernet0/5',
        6: 'GigabitEthernet0/6',
        7: 'GigabitEthernet0/7',
    },
    'iosv-l2': {
        1: 'GigabitEthernet0/1',
        2: 'GigabitEthernet0/2',
        3: 'GigabitEthernet0/3',
        4: 'GigabitEthernet1/0',
        5: 'GigabitEthernet1/1',
        6: 'GigabitEthernet1/2',
        7: 'GigabitEthernet1/3',
        8: 'GigabitEthernet2/0',
        9: 'GigabitEthernet2/1',
        10: 'GigabitEthernet2/2',
        11: 'GigabitEthernet2/3',
    },
    'iosxrv': {
        1: 'GigabitEthernet0/0/0/0',
        2: 'GigabitEthernet0/0/0/1',
        3: 'GigabitEthernet0/0/0/2',
        4: 'GigabitEthernet0/0/0/3',
        5: 'GigabitEthernet0/0/0/4',
        6: 'GigabitEthernet0/0/0/5',
        7: 'GigabitEthernet0/0/0/6',
    }
}
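
Given a mapping like the one above, resolving a Vagrant data port to a device interface name is a plain dict lookup, for example:

# Assumes the interface_maps dict defined above.
print(interface_maps['csr'][3])  # GigabitEthernet4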

Set box libvirt hostname

Need to set the hostname that is passed to libvirt. This will also feed into #1 as it will be possible to match on the hostname.

Add the ability to virtually 'shutdown' an interface

Currently when a box boots up, all the interfaces will be virtually 'UP', which does not really make sense for blackholed interfaces.

Use virsh -c qemu:///system domif-setlink <box-name> <interface-name> down to shut down blackhole interfaces, potentially via a provisioner or by executing Ruby from the Vagrantfile.

User defined config not loading

For some reason the user defined config is not being loaded and results in errors.

grifter create guests.yml 
Traceback (most recent call last):
  File "/usr/local/bin/grifter", line 11, in <module>
    load_entry_point('grifter==0.2.11', 'console_scripts', 'grifter')()
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/grifter/cli.py", line 139, in create
    validated_guest_data = validate_guest_data(guest_data, guest_config)
  File "/usr/local/lib/python3.6/site-packages/grifter/cli.py", line 68, in validate_guest_data
    validate_guest_interfaces(merged_data, config, interface_mappings)
  File "/usr/local/lib/python3.6/site-packages/grifter/validators.py", line 121, in validate_guest_interfaces
    if not int_map[remote_box]['data_interfaces'].get(remote_port):
KeyError: 'cisco/vmanage'

Add trigger to manage additional storage volumes

Example:

def add_volumes(hostname, box_version, location="default")
  # The 'sleep' elements prevent vagrant from crashing libvirt by trying
  # to upload volumes too quickly and killing the libvirtd service.
  [
    "virsh vol-create-as #{location} #{hostname}-vmx-vcp-hdb-#{box_version}.img 1G",
    "sleep 1",
    "virsh vol-upload --pool #{location} #{hostname}-vmx-vcp-hdb-#{box_version}.img /opt/vagrant/storage/vmx-vcp-hdb-#{box_version}-base.img",
    "sleep 1",
    "virsh vol-create-as #{location} #{hostname}-vmx-vcp-hdc-#{box_version}.img 1G",
    "sleep 1",
    "virsh vol-upload --pool #{location} #{hostname}-vmx-vcp-hdc-#{box_version}.img /opt/vagrant/storage/vmx-vcp-hdc-#{box_version}-base.img"
  ]
end

def delete_volumes(hostname, box_version, location="default")
  [
    "virsh vol-delete #{hostname}-vmx-vcp-hdb-#{box_version}.img #{location}",
    "virsh vol-delete #{hostname}-vmx-vcp-hdc-#{box_version}.img #{location}"
  ]
end

    # This trigger runs prior to box bring up and copies the
    # additional storage volumes to the libvirt storage pool.
    add_volumes(hostname, box_version, "disk1").each do |i|
      node.trigger.before :up do |trigger|
        trigger.name = "add-volumes"
        trigger.info = "Adding Volumes"
        trigger.run = {inline: i}
      end
    end

    # This trigger removes the storage volumes from the libvirt
    # storage pool on box destruction.
    delete_volumes(hostname, box_version, "disk1").each do |i|
      node.trigger.after :destroy do |trigger|
        trigger.name = "remove-volumes"
        trigger.info = "Removing Volumes"
        trigger.run = {inline: i}
      end
    end

Backup vagrantfile if it exists

  • Create a Vagrantfile.bak directory.
  • Move the existing Vagrantfile to the backup directory, appending a timestamp to the name (sketched below).
  • Adding a timestamp to the Vagrantfile text could also be useful.
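
A minimal sketch of the proposed backup step, assuming hypothetical helper names:

import shutil
import time
from pathlib import Path

def backup_vagrantfile(directory='.'):
    """Move an existing Vagrantfile into Vagrantfile.bak/ with a timestamp."""
    vagrantfile = Path(directory) / 'Vagrantfile'
    if not vagrantfile.exists():
        return None
    backup_dir = Path(directory) / 'Vagrantfile.bak'
    backup_dir.mkdir(exist_ok=True)
    stamp = time.strftime('%Y%m%d-%H%M%S')
    target = backup_dir / f'Vagrantfile-{stamp}'
    shutil.move(str(vagrantfile), str(target))
    return target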

Dict merge broken after guests format change

Since the change of format for guests.yml there is more than one level of nesting in the data. This means that update_context does not work as it used to. The function needs to recurse into the nested dicts to achieve the required outcome.

port forwarding

Request to add port forwarding
config.vm.network "forwarded_port", guest: 80, host: 8080

I am not sure if I will add this, but am recording it for now.
When using the libvirt provider, the management interfaces are in a subnet shared with the host, so port forwarding is not required when logged onto the host.

But if Vagrant is hosted on a remote server and you are not logged onto that machine, port forwarding could be useful.

Single YAML guest declaration for 'Dual VM' instances

The Juniper vQFX and vMX require two VMs (control plane and forwarding plane) for full functionality. It would be nice to have a single YAML declaration for a guest of this type and have the necessary Vagrant config built under the hood.
