
Plumbing

This module contains utility scripts for voxpupuli admins to do various management tasks.

Setting up a gem for rubygems deploys

Add the voxpupuli rubygems account to the owners list of the gem.

gem owner <gemname> -a [email protected]
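
To verify the result afterwards, running the same subcommand without -a lists the current owners (this assumes you are already authenticated against rubygems.org):

gem owner <gemname>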

Github Actions secrets

The secrets for the Puppet Forge and Rubygems are set via organization secrets. This means they only need to be set once.

gpg

to decrypt:

./bin/decrypt_credentials.sh

to encrypt:

./bin/encrypt_credentials.sh
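
The scripts are wrappers around gpg; a rough sketch of the underlying workflow (file names, key IDs and options here are placeholders, not necessarily what the scripts actually use):

gpg --decrypt credentials.gpg > credentials       # decrypt for local use/editing
gpg --encrypt --recipient <keyid> credentials     # re-encrypt before committing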

Repository topics

Hacktoberfest

Hacktoberfest requires the hacktoberfest topic to be applied to a repository before contributions to it count. To see what's missing:

./bin/hacktoberfest

Then to actually run:

./bin/hacktoberfest --set-topics

Metadata-based topics

There are topics such as linux-puppet-module, bsd-puppet-module and windows-puppet-module that can be set based on the module metadata. To see what would be done:

./bin/github_topics

Then to actually run:

./bin/github_topics --set-topics
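
For reference, setting topics boils down to a single call against the GitHub REST API. A rough sketch of the underlying request (the repository name and topic list are placeholders, the mercy-preview media type was required while the topics API was still a preview, and the actual script may use a client library instead of curl):

curl -X PUT \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.mercy-preview+json" \
  -d '{"names": ["puppet", "linux-puppet-module"]}' \
  https://api.github.com/repos/voxpupuli/<module>/topics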

syncing labels

In the past we used a Node.js script to sync labels. This feature has moved to vox-pupuli-tasks.

The Rails app has a list of labels and ensures that they are configured on our repositories. Just update the YAML file to update all labels, their colours, or their descriptions.

The feature is documented in the application.

Contribution

We currently require all commits in this repo to be signed with gpg, so please configure your git client properly. Let us know if you need some help. We're also reachable via our IRC channel #voxpupuli on freenode.


plumbing's Issues

use vault for storing credentials

We should use Vault for storing our Forge credentials.

@daenney offered to write a small service for retrieving a Travis secret from the Vault, so anyone could use it to update their secret.

Undefined role 'github Administrator'

Hi,
right now every PMC member has the Administrator status. Besides that there are 4 other people with that permission set:

It isn't clearly defined in our governance.md who is allowed to have these permissions. Possible solutions:

  • Only PMC role is allowed to have admin perms
  • We create another role in our governance document
  • something else

@voxpupuli/project-maintainers what do you think about that?


I'm fine with the first solution. Admin perms are rarely needed, and this also seems to be the simplest solution.

voxpupuli CHANGELOG.md

Organisations go through changes

most changes have reasons

We have this repository (and its history) and mailing lists and so on, but it would be nice to have an easily accessible summary of what happened, when, and why.

The background to this idea can be read in this blog post

puppet-gitlab migration

  • @tobru added to the modules/admin team
  • Repository transferred
  • Github issues enabled
  • Verify that all webhooks except travis are disabled
  • Update the README.md with a description of the deprecation and a link to the new module location - to be done by @tobru
  • Release a copy of your module to the ‘puppet’ forge account - to be done by @tobru
  • Add the module to our modulesync setup
  • Add the module to our plumbing repository (handles Travis secrets)
  • Ask one of the admins to add the module to the collaborators Team on Github.
  • Execute modulesync for this module
  • Create a Jira issue at https://tickets.puppetlabs.com and ask to deprecate the old module (and approve the new one if the old one was approved as well)

puppet-python migration

  • @stankevich added to the modules/admin team
  • Repository transferred
  • Github issues enabled
  • Verify that all webhooks except travis are disabled
  • Update the README.md with a description of the deprecation and a link to the new module location - to be done by @stankevich
  • Release a copy of your module to the ‘puppet’ forge account - to be done by @stankevich
  • Add the module to our modulesync setup
  • Add the module to our plumbing repository (handles Travis secrets)
  • Ask an admin to add the collaborators team to the module’s Collaborators & Teams ‘Teams’ list with Write permissions
  • Execute modulesync for this module
  • Create a Jira issue at https://tickets.puppetlabs.com and ask to deprecate the old module (and approve the new one if the old one was approved as well)
  • Sync labels

Elevate me to 'administrator'

Giving me admin rights to the VP organisation will enable my work as Security Officer.

Here is a small list of tasks that require those rights:

  • Revoking compromised user accounts
  • Accessing the security logs
  • Pushing to all repositories, reverting malicious commits
  • ...

Backfill old releases

When migrating a module into puppet community, upload old releases to the puppet-community namespace.

Define a process to handle money

We are now part of Open Collective. We already have several sponsors, which happened more quickly than I expected. Now we need to decide what we do with the money and how we handle it in the future. I would like to use it to cover recurring costs like our domains, but we receive way more money than that. We had some ideas during Configuration Management Camp:

  • get swag for new joiners
  • get swag once for existing members
  • reimburse travel costs for conferences
  • buy stickers/our foxes for conferences
  • work together with nitrokey (open source yubikey alternative) to get one hardware dongle per contributor, then enforce 2FA at github (arch linux recently did that)

We need to figure out how all the opencollective stuff works, how we handle requests for expenses in the community and how we decide what we pay for.

Also we maybe want to introduce an M & M officer role - Money and/or Marketing. There is a bunch of text that should/could be added to our GitHub sponsors page, our organisation README and the Open Collective page, and someone should keep an overview of incoming/outgoing money.

Fix broken puppet-lint-releases (missing rubygems.org key)

We have a few gems that got migrated into our namespace. We currently can't deploy them because we don't have a rubygems key in their Travis config (or they don't even have a Travis config).


https://github.com/voxpupuli/puppet-lint-reference_on_declaration_outside_of_class-check comes from @tuxmea, it currently has no travis config


https://github.com/voxpupuli/puppet-lint-resource_reference_syntax comes from @tuxmea too, also has no travis file.


https://github.com/voxpupuli/puppet-lint-appends-check comes from @glarizza, has a travis file but no rubygems key

Add support for Ubuntu 16.04

Most of the modules do not have support for Ubuntu 16.04. We should check if everything works as expected.

Do not forget to modify the metadata.json.
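
For a module that already lists other Ubuntu releases, this usually means extending the operatingsystem_support entry in metadata.json along these lines (the releases shown here are just an example):

"operatingsystem_support": [
  {
    "operatingsystem": "Ubuntu",
    "operatingsystemrelease": ["14.04", "16.04"]
  }
]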

Enhance RSpec tests

We should use the results of Coveralls to get the RSpec tests fully covered.

Maybe we should check if the overrides at spec_helper.rb are still necessary.

RFC: Support Puppet 4 only

RFC: Support Puppet 4 only

This is going to be a bit controversial because I'm raising it this early. This is not a proposal to move to Puppet 4 only now now now. Instead it is meant for us to find a good way on how to move to Puppet 4 and define some metrics at which point we deem it acceptable to move to a Puppet 4 only version of our modules.

Reasons for moving to Puppet 4

Puppet 4 brings a vast number of improvements, but besides the runtime speedup it's mostly language features and sanitisation that we, the developers, can benefit from, allowing us to write more concise, bug-free, testable and compact manifests. This in turn raises the quality of our modules, which is good for everyone who uses them. Some of the features deemed most important:

  • the type system (and thus being able to avoid all the validate_* and is_* functions), which gets us decent error reporting for free
  • native iteration, avoiding the need for all kinds of "creative" iteration hacks (and therefore no more create_resources either)
  • the protected, read-only $facts hash (instead of top-scope $:: variable lookups), which will facilitate migrating to a strict_variables-enabled future

Additionally a new way to deal with data in modules is starting to emerge in Puppet 4 that in due time we'll be able to take advantage of too (though perhaps not before Puppet 5 depending on when the feature is deemed finalised).

Collecting data

The most important thing to do in order to decide when the time is right is to collect as much data as we can on Puppet 4 deployments and their ratio to Puppet 3.

Data that could help us make this decision:

  • Internal survey with Vox Pupuli members to figure out how far along our "avant-garde" is
  • Public survey (join forces with Puppet Labs) to get some deployment figures on Puppet 4 (perhaps every half year?)
  • Download statistics from the official tarballs and repositories, both FOSS and PE
  • Download statistics from the Forge (if they can tell us anything wrt Puppet 4 deployment)

When to move

The problem with all of this is that once we move to Puppet 4 only, the code will simply not work anymore on older versions of Puppet, most notably Puppet 3, which is still widely deployed. Though Puppet 4 has been out for a year now and multiple community members are holding talks at PuppetConf, Puppet Camps and Configuration Management Camp on how to migrate to Puppet 4 and what new features it brings, adoption is still slow. Puppet Labs has successfully been pushing Puppet 4 through PE upgrades.

As such we need to define a threshold at which point, based on the data that we decide to collect, we believe the time is right to move to Puppet 4. However, when we do decide to do so we should still provide critical bug fix releases for Puppet 3 compatible versions of our modules for a number of months to not unnecessarily complicate other people's lives.

Therefore I would propose a transition somewhere along these lines:

  • An announcement goes out through all our channels a month before this change goes into effect to give everyone some time to prepare (pin version requirements etc.)
  • A Puppet 3 release of the module must be available which is "perfect", i.e. it doesn't generate deprecation warnings or other problems when run on the latest version of Puppet 3 with and without the future parser enabled
  • A major version release for every module needs to happen (preferably paired with some validation cleanup by leveraging the type system instead) and accordingly reflected in metadata.json
    • _suggestion_: We should try and keep other module changes to a minimum during that upgrade. Though a major version upgrade would traditionally be where we change the API, it's going to be hard enough with the language transition already that we shouldn't add too much baggage.
  • For a period of 3-6 months, at the discretion of the module maintainers, bug fixes and releases for Puppet 3 compatible versions of the module will still be made but no new features will be accepted

The one question that remains: how do we decide it is time?

List of tasks / bugs that we need to do/fix prior to being able to move

  • Puppet 4 (future parser) style guide
  • puppet module tool
  • rspec-puppet
  • puppet-lint

Update metadata.json

We need to update the metadata.json to get a higher score on the Puppet Forge.

Maybe we should also check the dependencies and .fixtures.yml.

RFC: new release process

RFC: New release process

While discussing what the default permissions people should get for repositories in our organisation the problem of release integrity showed up.

Current situation

Most members of our organisation are members of the collaborators team. This grants them write access to well over 30 repositories and is one of the main reasons we can move so fast. Many eyes on the same code, with many people being able to push/merge code. However, having write permissions on Github equates to being able to create and push tags. These tags are then picked up by Travis, which rolls them into a Forge release. Herein lies a problem.

Any collaborator can now, for whatever reason, push and release a version of a module. Since we hand out these permissions freely and not based on a "you have to prove your worth and gain our trust first" principle we risk that someone with malicious intent, or majorly inebriated after a tech conference, pushes a release of a module that does severe damage to the end user's system.

When people instruct their environments to fetch modules directly from Github this is not something we can prevent. Simply put, to prevent people from being able to push tags we would need to revoke 'write' permission for most people and only hand it out to a select few we trust. This has the undesirable side-effect that it severely slows down progress on any module. However, when people fetch modules from the Forge a certain amount of trust is placed on the fact that if it is fetched from the Forge it should not harm the system. A degree of curation and quality control is expected there which we currently cannot guarantee.

Future situation

I would like to get to a place where people still have write access to our repositories and can even push tags, but where this does not automatically result in a release published on the Forge. This has the following benefits:

  • People can continue working at their own speed on modules
  • We no longer need to encrypt and sync the Forge credentials into every .travis.yml
  • We can, with a high degree of confidence, know that releases on the Forge are "good"

I would also like us to gain the ability to:

  • Have a full audit trail between who committed, tagged and released to Github and Forge

Proposed solution

As such we need a solution that allows us to "gate" releases to the Forge. I propose a workflow where:

  • People work and commit to their heart's content on a module
  • People tag releases as they see fit
  • Travis uploads the release tarball to an upload queue of our "release service"
  • Someone from the releases team then approves the upload for delivery to the Forge

This is very much like how larger FOSS projects such as Debian handle the release process of packages. Though it does make the team of people with release power a potential bottleneck, as long as this team is big enough and geographically diverse it should not cause any issues. It also automatically designates a number of people that can be contacted should issues with a release be uncovered once it has made it out to the Forge.

This is not the place to go into implementation details of such release service. It is merely a place to discuss possible workflows.

Vox Pupuli Heroku Account

A Heroku account could prove useful; it's a nice, easy way of deploying apps quickly to the ☁️

The plumbing that comes with that: Login, auto-deploy from Travis/Github etc.

Currently used Heroku apps:

https://github.com/voxpupuli/open-prs

It is currently hosted on my account, but that means it's a SPOF 😢

«About VoxPupuli»

I would like puppet-corosync to have an «About Vox Pupuli» section with a short intro and some links at the end of the README.

It would make sense to have that in this repo so we can copy that to any VP repository that wants that.

Supporting Puppet 6

We want to start marking our modules as Puppet 6 compatible as soon as possible. We should probably not change all modules en masse, but as soon as someone has done at least some light testing, or if the module has acceptance tests, contributors need to be aware of what their PRs should and should not include. The Puppet 6 release notes and blog don't make this very clear; hopefully these notes will help.

metadata.json

version_requirement

Update the version_requirement. We will likely start dropping puppet 4 support in the new year, but right now, the majority of modules will have a requirements section like this.

"requirements": [
    {
      "name": "puppet",
      "version_requirement": ">= 4.10.0 < 7.0.0"
    }
  ]

dependencies

Do not add any of the new puppetlabs/X_core "Supported types in modules" to the dependencies section. These modules are already included with the Puppet 6 agent packages. Importantly, these modules "are available during compilation, but won't be pluginsync'ed". This is fortunate as they are all marked as supporting Puppet 6 only, and at least one module (yumrepo_core) definitely breaks Puppet 5, as it uses the sensitive DSL which isn't available before Puppet 6.

Testing

fixtures.yml

Core modules that are used need to be added to fixtures.yml as (unlike the agent packages) the puppet gem doesn't vendor the modules. On the assumption that puppetlabs/puppetlabs_spec_helper#258 will be merged and a new release made soon, mark any core modules as only being needed on Puppet 6 as follows.

---
fixtures:
  repositories:
    stdlib: 'https://github.com/puppetlabs/puppetlabs-stdlib.git'
    yumrepo_core:
      repo: 'https://github.com/puppetlabs/puppetlabs-yumrepo_core.git'
      puppet_version: ">= 6.0.0"

beaker

TODO

Questions we'd like to ask Puppet and/or things we're not sure about

RFC: Release schedule

Figure out which modules need to be released or have not been released yet.

Maybe the Puppet Forge badge helps. As far as I know it changes colour if the last release was a long time ago.

Execute Markdown lint

Even if Markdown lint cannot be invoked via the Rakefile, we should run it at least once to get some structure into the README.md.

Maybe this can be done when we add the badges to the README.md.
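
Assuming we pick the Ruby mdl gem (one common Markdown lint implementation), a one-off run is as simple as:

gem install mdl
mdl README.md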

Replace the password storage with gopass

Today a large encrypted file is used. However, a tool like gopass can do the same thing. The only consideration is that the filenames do show up in git. It is my suggestion to create a new private repository to store the passwords using gopass and remove the credentials (and related files) from this repository.
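
For illustration, day-to-day use of gopass in such a private repository would look roughly like this (the store layout and entry names here are made up):

gopass insert voxpupuli/forge-password
gopass show voxpupuli/forge-password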

Add page explaining about no 1.8.7 support

There have been a few PRs re-adding or reverting changes that removed Ruby 1.8.7 support (i.e. hashes in Puppet providers/types). These changes are normally the result of Rubocop changes.

A page on https://voxpupuli.org/ explaining the 1.8.7 situation, with something like:

  • Why support has been dropped, emphasising that 1.8.7 has long been EOL and we don't have the resources (or really, the patience) to support a very dead version of Ruby
  • How to get upgraded Ruby packages for LTS distros, such as Software Collections for proper packages or a tool like Rbenv to install particular versions.
  • Recommendation to upgrade to Puppet 4 so you get the AIO package with the puppet vendored Ruby
  • Workarounds if people still require 1.8.7 support in an emergency (fork repos etc.)

Something like this would give us an easy, copy-pasteable "Closed, won't fix" message for 1.8.7 issues in modules:

voxpupuli/puppet-archive#156
voxpupuli/puppet-iis#111
voxpupuli/puppet-mcollective#300
voxpupuli/puppet-mcollective#297
voxpupuli/puppet-nodejs#217
voxpupuli/puppet-jira#136

Governance

In an effort to make the Vox Pupuli governance structure less opaque, we should come up with a charter that outlines:

  • what the community managers do
  • how to become a community manager (election, hand raising, bribing)
  • what a community manager is empowered to do on behalf of VP
  • when the community managers need to agree
  • what agreement means in this context
  • how to resolve disagreements between community managers
  • when the community managers are required to meet
  • how to remove a bad community manager (bad as in bad acting)
  • whether community managers have terms and term limits
  • how to stop being a community manager

This is not an exhaustive list. The term 'community manager' is a new one, it loosely refers to what are now project maintainers.

Please share your thoughts/suggestions/rants/etc here.

The AC for this issue:

  • a draft of a charter that answers the questions above (and any others raised during discussion)
  • a process for adopting the charter

Only support Puppet 3.8 latest

Our current default testing matrix tests against Puppet 3 and 4 latest only.
Make this a policy on our website somewhere, and adjust the metadata.json for all modules accordingly.
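
In metadata.json terms that would translate to a requirements block roughly like the following (the exact bounds are open for discussion, this is only an illustration):

"requirements": [
  {
    "name": "puppet",
    "version_requirement": ">= 3.8.0 < 5.0.0"
  }
]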

Discussion on migrating remote_file module to voxpupuli

Hey Guys,

I ran into this amazing module and noticed @reidmv reached out to the maintainer, who is no longer actively working on this module. He suggested transferring ownership to voxpupuli so this approved module, which is cross-platform, built with types and providers, and has 1 million downloads, doesn't die. It also does some advanced things like automatically creating file resources and embedding them in the catalog during compilation.

https://forge.puppet.com/lwf/remote_file

We already have a "download_file" module (which isn't cross-platform) and the archive module (which just does a lot), so I would like to get some feedback about the fate of this module.

Thanks

Implement Beaker tests

We should implement acceptance tests based on Docker. Do we have to modify the Docker nodesets regarding the list of packages? Why does Beaker 3.1.0 cause so much trouble?
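
For context, a Docker-based Beaker nodeset is just a small YAML file, so adding extra packages is a per-nodeset change. A minimal sketch (image, platform and package list are examples, not a recommendation):

HOSTS:
  ubuntu1604:
    platform: ubuntu-16.04-amd64
    hypervisor: docker
    image: ubuntu:16.04
    docker_preserve_image: true
    docker_image_commands:
      - 'apt-get update && apt-get install -y wget locales'
CONFIG:
  type: aio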

Terraform + Github Provider?

I think it would be awesome if we utilized Terraform's GitHub provider to manage all of the Vox Pupuli GitHub repos.

This can help:
  1. Prevent drift of settings on repositories
  2. Version-control changes made to GitHub repos
  3. Replace the syncing of labels

Anyone feel this would be a useful thing to try?

@voxpupuli/collaborators

RFC: Stop accepting persistent forks as-is

From time to time, we have accepted persistent forks of modules, either as actual forks of modules or because the upstream maintainer has not responded. An example is puppet-grafana. This has caused us some problems:

  • PRs by default go to the upstream repo. Very easy to put it in and not notice for hours.
  • PR/Issues history is preserved in the commit log, but not as PRs and Issues in the GitHub API. This breaks github_changelog_generator among other things.

I am proposing that when we perform such a fork, we create a new repo rather than a persistent fork, i.e.:

git clone [email protected]:bfraser/puppet-grafana.git
cd puppet-grafana
git remote rm origin
git remote add origin [email protected]:voxpupuli/puppet-grafana.git
git push --mirror origin

This breaks the tie to the previous fork and assists with PR creation. The GitHub history is lost, but that happens anyway.

This policy would of course apply only to permanent forks. If we intend to push changes and return those changes to the original maintainer, this policy would in no way apply.

I'd like to have a discussion to weigh any other pros and cons of the current policy of accepting persistent forks vs the proposal to create all repos fresh.

UPDATE

Maintaining Forks
Pros:

  • It is obvious then where the repo comes from
  • We can check upstream regularly and contribute fixes to the original or pull updates from it
  • It is more respectful to the original authors, leaves the door open to a real transfer, and looks less aggressive

Cons:

  • Opening a PR for a branch on the fork (i.e. modulesync) by default is a PR against the upstream repo
  • Code/issue/pr searches on the fork are disabled (unless the fork has more Stars than the parent)
  • Issue/PR numbering starts over, only original IDs are tracked in history without any reference to the actual event

Starting Fresh Repos
Pros:

  • All PRs default to being against our repo
  • Search works, all the time

Cons:

  • Signals an inability to migrate module, for whatever actual reason there may be
  • Issue/PR numbering starts over, only original IDs are tracked in history without any reference to the actual event

Change from mocha to rspec-mocks

tl;dr I propose we move Vox Pupuli Puppet modules to use rspec-mocks instead of Mocha. It's not super difficult, and only really affects modules with Ruby-specific testing (custom facts, functions, or type and provider setups) rather than rspec-puppet. We can investigate if there is existing tooling to do this automatically, similar to transpec.

Context

In ye olden Ruby days, Mocha was the mocking framework of choice for Ruby testing. There were a lot of things that people liked Mocha for, one of them being stubbing of any instance of a class:

# rspec-mock
stub_car = mock(Car)
stub_car.stub!(:speed).and_return(100)
Car.stub!(:new).and_return(stub_car)

# mocha
Car.any_instance.stubs(:speed).returns(100)

Taken from http://stackoverflow.com/questions/1406692/what-are-the-advantages-of-mocha-over-rspecs-built-in-mocking-framework
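
For comparison, the modern rspec-mocks equivalents look roughly like this (an illustrative snippet, not taken from an existing Vox Pupuli module):

# verifying double with a stubbed method
stub_car = instance_double(Car, speed: 100)
allow(Car).to receive(:new).and_return(stub_car)

# rspec-mocks can also stub any instance of a class
allow_any_instance_of(Car).to receive(:speed).and_return(100)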

Since then, rspec-mocks has come on leaps and bounds; it is better documented and is maintained by the core rspec team. It also seems to have 1:1 feature parity with Mocha at this stage.

Mocha has some issues with Ruby 2.3.1 that have caused some Travis failures (although it appears to be a bug in Ruby rather than a Mocha issue causing these failures; more context: voxpupuli/puppet-nginx#917)

Puppet Engineering is aiming to move completely away from mocha to rspec-mock: https://tickets.puppetlabs.com/browse/MODULES-3958

I propose we move Vox Pupuli Puppet modules to use rspec-mocks instead of Mocha. It's not super difficult, and only really affects modules with Ruby-specific testing rather than rspec-puppet, i.e. custom facts, functions, or type and provider setups.

An example of this in action is shown in the nginx module: voxpupuli/puppet-nginx#920
