
satellite-demo's Introduction

Satellite 6 Demo using Ansible

A group of Ansible roles that installs Satellite 6 and multiple supporting systems in order to perform a demo.

INFORMATION

This whole set of playbooks will take a while to run (between 2.5 and 3 hours with the current inventory, costing about $1.59 in AWS charges).

This was based on two projects. One is by Julio Villarreal Pelegrino, and another is from me (Billy Holmes).

Overview

This collection of playbooks and roles creates a set of systems for a Satellite 6 demo. It uses an example domain (example-dot-com) served by a customized DNS server, plus an haproxy node to load balance the capsules. It performs the following actions:

  1. Creates a bunch of VMs on AWS
    1. a dns/haproxy VM so our fake domain and the LB/HA setup work
    2. a Satellite server VM based on the install recommendations
    3. two Satellite capsule VMs in HA/LB mode, as detailed in the upstream Foreman documentation
    4. two client VMs
  2. Configures the local install environment
    1. Creates a new inventory file that is AWS/GCE agnostic
    2. updates the local ssh config for easy ssh access
  3. Configures the dns node
    1. haproxy, dnsmasq
    2. Adds all the hosts in the demo for dns to work
  4. Configures the satellite server
    1. installs all the packages and configures the firewall
    2. runs the satellite installer and starts satellite
    3. copies or downloads the manifest from the Red Hat Portal
    4. enables all the demo repos and sets their download policy to on-demand (deferred downloads)
    5. synchronizes the repos
    6. defines the demo content-views, adds all their repos, and publishes 1st version
    7. defines the demo lifecycle environments
    8. optionally defines filters for one demo view, publishes and promotes it to the environments, with one-month increment filters
    9. sets up the system with sane provisioning values:
      1. activation keys
      2. hostgroups, subnets
      3. attaching subscriptions
      4. remastering the PXE-less discovery ISO for automatic discovery
  5. Cleans up the templates by removing the ones we don't use
  6. Configures the Capsules
    1. registers them to the satellite
    2. generates the certificates
    3. installs the capsule packages and configures the firewall
    4. runs the capsule installer and starts the capsule process
    5. assigns the lifecycle environments to the capsules
    6. assigns capsules to the default org and loc, sets download policy to inherit from repo
    7. forces content synchronization on capsules that haven't synchronized yet
  7. Post-configuration: configures the demo settings
    1. turns extras repo to immediate download
    2. installs rpm-devel on the Satellite to create the demoapp
    3. creates the Demo product:
      1. Create Demo product and repo
      2. Create Demo App Content View
      3. Create Demo Comp Composite View
      4. Add Demo App and RHEL-7Server Content Views to the Demo Comp Composite View
      5. Publish Demo CVs if needed
      6. Create Activation Key for Demo Product
      7. Assign Demo product to AK
      8. Create the Demo Host Group
      9. Wait for capsules to finish syncing the changes if needed
  8. Configures the clients
    1. registers them to the Satellite or the capsule LB/HA endpoint (see the example after this list)
    2. assigns a host group and gives them subscriptions via the activation key
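
For reference, the client registration performed in the last step follows the standard Satellite 6 flow. A minimal by-hand sketch (the hostname, org, and activation key names here are illustrative, not the exact values the roles use):

$ sudo rpm -Uvh http://capsule.example.com/pub/katello-ca-consumer-latest.noarch.rpm
$ sudo subscription-manager register --org="Example_Org" --activationkey="demo-ak"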

Requirements

  1. Assign subscriptions for RHEL:
    1. Create an Activation Key in the Portal
    2. Assign your subscriptions to Cloud Access
    • 6x Red Hat Enterprise Linux Server (the VMs will be running RHEL 7.5)
    • You will effectively "double register" the Satellite server.
  2. A Satellite manifest.zip
    • Create a manifest obtained from the Portal
    • Put in the following subscriptions:
      • at least 2x Subscriptions for RHEL (ex: RH00008)
      • at least 2x Subscriptions for the capsules (ex: SVC3124)
      • for a complete list, look in the defaults file
    • Download the manifest and place it in the same directory as this README.md.
  3. Your AWS and Satellite credentials in an environment file:
    • AWS_ACCESS_KEY - your AWS access key
    • AWS_SECRET_KEY - your AWS secret key
    • SUBSCRIPTION_ORG - your Satellite Org, so the activation key works
  4. Source the environment file before running the playbook (or use the bootstrap container detailed below).
$ source ./env_file
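
For reference, a minimal env_file could look like this (placeholder values; whether the variables must be exported depends on how the playbooks read them, but exporting is the safe choice since make and ansible run as child processes):

# env_file - placeholder values, do not commit real credentials
export AWS_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export SUBSCRIPTION_ORG=Example_Org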

Additionally, you will need Ansible installed locally. There are several options.
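
For example, two common ways to install it (generic suggestions, not necessarily the method this repo assumes):

$ sudo yum install ansible      # RHEL/CentOS, if the package is available in your enabled repos
$ pip install --user ansible    # or via pip

Either way, make sure the ansible and ansible-playbook commands are on your PATH before running make.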

Layout

Playbooks and tasks

I'm using a Makefile to break up the playbooks; this saves time when some steps are already complete and you need to go back and re-run individual tasks.

Type make to get the help output:

Usage: make (help | all | clean | create | config | install | capsules | post | clients | step[123]| bulk-[action])
   help:       this help
   all:        run everything in the proper order
   clean:      clean up environment - delete VM instances
   create:     create the VM instances
   config:     configure the VM instances
   install:    configure the satellite server
   capsules:   configure satellite capsules
   post:       post configuration steps for demo
   clients:    configure clients
   myip:       helper target to add your current IP to the AWS VPC
   info:       helper target to get URL info
   step:       Demo help
   step0:      Demo step0 - build rpm for performance check
   step1:      Demo step1 - publish performance
   step2:      Demo step2 - HA/LB Capsule
   step3:      Demo step3 - content availability without Satellite master
   bulk-*:     Perform a bulk action:
         demoapp:    install or update demoapp on all the clients
         unregister: unregister all the systems
         poweroff:   stop services and poweroff all the systems
          poweron:    start all the ec2 instances
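
For example, a typical first run (target names taken from the help output above):

$ source ./env_file   # credentials from the Requirements section
$ make myip           # allow your current IP through the AWS VPC
$ make all            # create, config, install, capsules, post, clients, in order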

Inventory File

Variable Locations

There are two inventory files:

  1. a static inventory file that you can edit.
  2. a dynamic inventory that is generated: ./playbooks/inventory/demo.satellite

The purpose is that after the create task, the playbooks operate on the dynamic inventory file and are thus unaware of which cloud provider (if any) the VMs are located in.
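
Assuming the generated file is a standard Ansible inventory, you can also point ad-hoc commands at it to check connectivity, e.g.:

$ ansible -i ./playbooks/inventory/demo.satellite all -m ping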

Misc

Bulk Actions

You can run certain bulk actions using the bulk make task or the bulk action playbook:

  1. demoapp - cycles through the clients and updates the demoapp using yum
  2. unregister - cycles through all the VMs in the demo and unregisters them
  3. poweroff - cycles through all VMs, shuts down satellite/capsules and docker, then powers them off
  4. poweron - cycles through the static inventory and powers on all the VMs

Once you power the VMs back on, you should run make all to have them reset their cloud provider DNS and hostname settings.
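
For example, to pause the demo environment and bring it back later:

$ make bulk-poweroff   # stop services and power off all the systems
$ make bulk-poweron    # start the ec2 instances again
$ make all             # re-run so the cloud provider DNS/hostname settings are reset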

Default Inventory

Host         AWS type    vCPU   Memory   Purpose
dns1         t2.small    1      2GiB     dns, haproxy
clients1,2   t2.small    1      2GiB     demoapp, clients
capsule1,2   m5.large    2      8GiB     capsule
satellite    m5.xlarge   4      16GiB    satellite

Inventory Variables you'll probably need to change

Variable            Why change?
ec2_ami_image       You need Red Hat Cloud Access in order to see the supplied image (see the sanity check after this table).
ansible_user        If you change the image above, you'll need to know its login user (ex: ec2-user, cloud-user).
ec2_demo_tag        If you change this, you can run multiple of these clusters in AWS at the same time. You'll have to add it to the ec2 group in the inventory file.
private_ip_start    If you change the above, then you'll probably need to change this variable too.
ec2_instance_type   Change this if you want beefier VMs. With the current inventory in AWS, it costs me $10.90 per day as of Jul 20th, 2018.
ec2_volume_sizes    The array of volume sizes; the current inventory configures a total of 562 GiB at a cost of about $1.77 a day. More storage allocated to the Satellite and Capsules means more throughput and IOPS; you can also change the storage type for more performance at a higher price.
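
If you swap in your own ec2_ami_image, a quick sanity check with the AWS CLI (the AMI ID and region below are hypothetical placeholders) confirms that your account can actually see the image:

$ aws ec2 describe-images --image-ids ami-0123456789abcdef0 --region us-east-1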

License

MIT

Author Information

Billy Holmes [email protected]

Based on work by:

Julio Villarreal Pelegrino [email protected] more at: http://wwww.juliovillarreal.com

Disclaimer

DISCLAIMER: I'm a Red Hat Solutions Architect. It's my job to introduce Red Hat customers to Red Hat products and help them gain the most value from these products. I am not support, nor am I releasing this as a representative of Red Hat. Thus, I cannot help you use this playbook in a production, enterprise, PoC, or bake-off situation. I will gladly help you get in contact with someone at Red Hat who CAN help you do these things.

The purpose of this playbook is to build a demo and an experimental environment for Satellite, some capsules, and some clients. If you have any questions or run into issues running these playbooks to achieve that goal, then please create a GitHub issue so I can address it!

If you have other questions or issues with Satellite in general, I'll gladly help you reach the correct resource at Red Hat!

satellite-demo's People

Contributors

gonoph


satellite-demo's Issues

README: Describe which subscriptions should be in the manifest

Need to describe which subscriptions need to be in the manifest.

The Activation Key creation uses an array of subscription SKUs to determine the best subscription to attach to the Activation Key.

Should document as a best practice:

  1. which subscriptions should be placed in the manifest.
  2. or, how to override the subscription selection array with a custom value.
  3. and, a short walkthrough or link to the kbase article that explains how to create a manifest and attach subscriptions to it.

satellite_ak_subscription module should have less user dependencies

Currently, satellite_ak_subscription depends on the caller/user/author to make the appropriate API calls, then pass the resulting JSON payloads to the module. This is error-prone and potentially hard to maintain.

Proposal: using the get_url python library that is included with Ansible, modify the module to orchestrate the needed API calls, thus reducing a user dependency.

Various typos and TODO in documentation

Various typos and TODOs that should probably be bulk updated.

  • In the list of instance types, should read m4.large instead of m5.large.
  • In the list of variables to override, should probably list the SKU override variables (referenced in the satellite-server defaults variables)
  • Need a link to Red Hat Ansible Tower
  • Last run took about 3.5 hrs - should probably update that.

Convert the check_capsules module into a real python module

Currently, check_capsules is a shell script that makes several assumptions (which is the bug):

  • it assumes the user/pass of the satellite are the defaults
  • it assumes the API endpoint hostname is the default of satellite.example.com

It's also lacking some functionality that is currently replaced with an Ansible do-until loop and jq (which is the enhancement):

  • It should take a timeout value, and should automatically sleep/wait for the capsules synchronization to be complete.
  • it uses jq to expose the variable data - thus it cannot be delegated to a remote host.

Satellite facts role/task should be a module

The Satellite facts role/task is bulky and takes a lot of time to run. It's a helper task that looks up the primary IDs of the Organization, Location, and some other data into variables that can be easily referenced by the API calls.

The reason for the API calls instead of hammer in these instances is that it's much quicker to call the API than to call hammer.

Proposal: Use the get_url helper python library included with Ansible to craft a module that orchestrates the API calls and combines the results into a single data structure.
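
As a rough illustration of the kind of lookups involved (the endpoints are standard Foreman API paths, but the hostname, credentials, and jq filters here are placeholders, not what the role actually runs):

$ curl -sk -u "$SAT_USER:$SAT_PASS" https://satellite.example.com/api/organizations | jq '.results[] | {id, name}'
$ curl -sk -u "$SAT_USER:$SAT_PASS" https://satellite.example.com/api/locations | jq '.results[] | {id, name}'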
