adobe / ops-cli
Ops - cli wrapper for Terraform, Ansible, Helmfile and SSH for cloud automation
License: Apache License 2.0
I have noticed quite a few implementations in Python that are not quite pythonic, or where there are other ways to optimize the behavior. I have opened this issue to track changes in the code, to enforce a uniform and readable style for the existing Python scripts.
https://www.python.org/dev/peps/pep-0008/
elif type(value) == type(None)
The above is problematic for two reasons. First, isinstance() is better than comparing types directly, and second, for singletons like None, elif value is None is sufficient and cleaner.
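As a quick illustration of the preferred style (a minimal sketch, not taken from the ops-cli code base; the value below is purely illustrative):

value = {"key": "example"}  # illustrative value

# Discouraged: direct type comparison
if type(value) == type(None):
    pass

# Preferred: identity check for the None singleton, isinstance() for real type checks
if value is None:
    pass
elif isinstance(value, dict):
    pass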
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
Dockerfile
python 3.11.4-alpine3.18
python 3.11.4-alpine3.18
.github/workflows/codeql-analysis.yml
actions/checkout v4
github/codeql-action v2
github/codeql-action v2
github/codeql-action v2
.github/workflows/release.yml
actions/checkout v4
actions/setup-python v5
pypa/gh-action-pypi-publish c12cc61414480c03e10ea76e2a0a1a17d6c764e2
actions/checkout v4
docker/login-action v3
docker/metadata-action v4
docker/build-push-action v5
requirements.txt
simpledi ==0.4.1
awscli ==1.29.12
boto3 ==1.28.12
boto ==2.49.0
ansible ==8.5.0
azure-common ==1.1.28
azure ==4.0.0
msrestazure ==0.6.4
Jinja2 ==3.1.2
hvac ==1.1.1
inflection ==0.5.1
kubernetes ==26.1.0
himl ==0.15.0
GitPython ==3.1.*
Helmfile generated config shouldn't use hardcoded filters and doesn't have an option to exclude keys.
I think we should introduce a shared resource concept between clusters. I'm thinking here of hosts that are deployed in the admin cluster and need to be used in the product cluster. For example, even though the Prometheus servers are part of the admin cluster, when deploying the product artifacts we should find a way to deploy new alerts/recording rules as well, without changing the cluster/playbook or running it twice against two different clusters.
Useful for cases where the tf states are stored remotely (i.e. in an S3 bucket) with DynamoDB locking. If something happens (e.g. credentials expire, terraform crashes), it would be useful to have a way to remove the lock that was acquired. Currently, this can only be done manually, by going to DynamoDB and removing the entry (the lock). However, terraform has a command to remove the lock. We should add it in ops as well.
ops mycluster.yaml terraform force-unlock <LOCK_ID>
should work.
https://www.terraform.io/docs/commands/force-unlock.html
Not implemented yet.
v0.23
First of all, congratulations for OpsCli!
I think it might be relevant to capture the user base that builds clusters with kops today. Even though EKS exists, there are still a lot of users that stick with kops because of more control, more customisation, fewer features or options on the EKS side, and so on and so forth...
I would love to see ops support kops clusters. I only discovered this tool today while searching for something completely different; I was actually looking for a templating solution for my Spinnaker pipelines.
Anyway, kudos for the promising tool.
Cheers
Running multiple helmfile releases in parallel can cause concurrency issues.
E.g. roboll/helmfile#690
Ops-cli should be able to add a --concurrency flag for helmfile commands. This cannot be achieved using extra_args: the extra_args option adds args only for the main command (helmfile).
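A minimal sketch of how the flag could be plumbed through (the helper and option names below are hypothetical, not the actual ops-cli implementation):

def build_helmfile_command(subcommand, global_args=None, concurrency=None):
    # Hypothetical helper: compose the helmfile command line, appending
    # --concurrency to the sub-command (e.g. sync/apply) rather than to the
    # main helmfile invocation, which is what extra_args currently targets.
    cmd = ["helmfile"] + list(global_args or [])
    cmd.append(subcommand)
    if concurrency is not None:
        cmd += ["--concurrency", str(concurrency)]
    return cmd

# build_helmfile_command("sync", ["-f", "helmfile.yaml"], concurrency=1)
# -> ['helmfile', '-f', 'helmfile.yaml', 'sync', '--concurrency', '1']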
requirements.txt: ansible==v2.7.13
requirements.txt: ansible==v2.7.12
N/A
Enable TF_PLUGIN_CACHE_DIR in docker container to speed up tf commands.
e.g.
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p $TF_PLUGIN_CACHE_DIR
Currently, when using ops-cli to drive EKS clusters, there are multiple tf commands run.
For each of these runs, tf downloads the aws tf provider, slowing down the overall command.
- Downloading plugin for provider "aws" (hashicorp/aws) 2.30.0...
@costimuraru do you see any issue in enabling this?
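For illustration, a minimal sketch of exporting the cache directory before invoking terraform (assuming the commands are run via subprocess; this is not the actual ops-cli code):

import os
import subprocess

# Create the shared plugin cache and expose it to every terraform invocation.
cache_dir = os.path.expanduser("~/.terraform.d/plugin-cache")
os.makedirs(cache_dir, exist_ok=True)  # Python 3; guard with a try/except on Python 2
env = dict(os.environ, TF_PLUGIN_CACHE_DIR=cache_dir)
subprocess.check_call(["terraform", "init"], env=env)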
The 2.3.1.0 version of ansible is quite old and is reported as having a security vulnerability by GitHub.
We should upgrade to the latest version as of today: 2.7.10.
Similar to the Vault integration: https://github.com/adobe/ops-cli#secrets-management
Make it possible to define something like this in the cluster.yaml file:
db_password: "{{ '/my/ssm/path/' | read_ssm(aws_profile='myprofile') }}"
https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html
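A minimal sketch of such a read_ssm Jinja2 filter, built on boto3's SSM client (the filter name comes from the example above; the implementation is only an assumption of how it could look):

import boto3

def read_ssm(path, aws_profile=None, region_name=None):
    # Hypothetical Jinja2 filter: fetch (and decrypt) an SSM parameter by path.
    session = boto3.Session(profile_name=aws_profile, region_name=region_name)
    ssm = session.client("ssm")
    response = ssm.get_parameter(Name=path, WithDecryption=True)
    return response["Parameter"]["Value"]

# Registered as a filter, it would back:
#   db_password: "{{ '/my/ssm/path/' | read_ssm(aws_profile='myprofile') }}"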
Could you please take some time in the next few days to make changes to some terminology in your repos and content as much as is possible:
If you cannot remove the term because the writing, for example, reflects the UI or the code, please make a note and send me an email to [email protected] so we can bring it to that team's attention. Thanks for your efforts in this matter.
ops cluster.yaml ssh 10.0.0.1
ssh'es into bastion--10.0.0.1 even when the IP is not part of the inventory.
This is needed in environments where the hosts are not part of the inventory but are reachable via the bastion.
Description:
Enhanced the Vault module: it will no longer muffle exceptions, and it exposes another function called check_vault() that returns a Jinja2-friendly boolean ("true" or "false").
Added support for enforcing a minimum ops version in Terraform clusters ("ops_min_version")
Motivation and Context
Aims to solve scenarios where reading a non-existent Vault secret always returned "None" and never threw an exception.
Aims to make it easier to enforce a minimum version of ops across larger teams.
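A minimal sketch of what a check_vault() helper could look like with the hvac client (an assumption, not a copy of the PR's code):

import hvac

def check_vault(path, url, token):
    # Hypothetical helper: return a Jinja2-friendly "true"/"false" string
    # depending on whether a secret exists at the given Vault path.
    client = hvac.Client(url=url, token=token)
    try:
        secret = client.read(path)
    except Exception:
        return "false"
    return "true" if secret else "false"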
Files get synced
Loading cached inventory info from: ~/.ops/cache/432fa75899488401313802367b97c24b
Loading cached inventory info from: ~/.ops/cache/432fa75899488401313802367b97c24b
Looking for hosts for pattern 'tabcontroller'
Traceback (most recent call last):
File "/Users/complexsplit/.pyenv/versions/cns/bin/ops", line 10, in
sys.exit(run())
File "/Users/complexsplit/.pyenv/versions/2.7.10/envs/cns/lib/python2.7/site-packages/ops/main.py", line 151, in run
output = app_container.run()
File "/Users/complexsplit/.pyenv/versions/2.7.10/envs/cns/lib/python2.7/site-packages/ops/main.py", line 144, in run
return runner_instance.run(self.console_args)
File "/Users/complexsplit/.pyenv/versions/2.7.10/envs/cns/lib/python2.7/site-packages/ops/cli/sync.py", line 84, in run
ssh_user = self.cluster_config.get('ssh_user') or self.ops_config.get('ssh.user') or getpass.getuser()
NameError: global name 'getpass' is not defined
ops cluster.yaml sync 'foomachine01:/tmp/bar.log' /Users/complexsplit/Downloads
MacOS 10.14.6. ops-cli 1.8
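The traceback points at a missing import in sync.py; a minimal sketch of the likely fix (the config lookups are simplified stand-ins for the real self.cluster_config / self.ops_config attributes):

import getpass  # the missing import that causes the NameError above

cluster_config = {}  # illustrative stand-in
ops_config = {}      # illustrative stand-in
ssh_user = cluster_config.get('ssh_user') or ops_config.get('ssh.user') or getpass.getuser()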
After the ansible upgrade, sync stopped working. Investigation showed that the ansible inventory no longer has get_vars; instead we can use the method on the Host object.
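A minimal sketch of reading host variables via the Host object with the current ansible Python API (the inventory source and host name here are illustrative):

from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager

loader = DataLoader()
inventory = InventoryManager(loader=loader, sources=["hosts.ini"])  # illustrative source
host = inventory.get_host("foomachine01")
host_vars = host.get_vars()  # replaces the get_vars lookup that used to live on the inventory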
Add the helmfile CLI in the ops docker image: https://github.com/roboll/helmfile
Right now, remove_prometheus_alert_rules.yml always reloads Prometheus after deleting the rules. There are cases though when we don't want this behaviour (e.g. we want to do some more operations on the rules before finally reloading Prometheus at the end).
Currently, the helmfile composition generates kubernetes cluster context information for EKS clusters via aws eks update-kubeconfig commands.
Previously, if the variables required to generate it were missing, the tool would run with the default kubernetes context configured.
Since this has proven to be dangerous, running against the default context has been removed.
To be able to run the helmfile composition against non-EKS clusters, support for explicit context selection needs to be added in ops-cli.
Even though terraform.remove_local_cache: True is present in .opsconfig.yaml, when running terraform apply, the .terraform folder is not being removed before doing the terraform init.
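For reference, a minimal sketch of the expected behaviour (an assumption about what the option should do, not the actual ops-cli implementation):

import os
import shutil

def remove_local_cache(terraform_dir):
    # Remove the local .terraform folder so that terraform init starts clean.
    cache_path = os.path.join(terraform_dir, ".terraform")
    if os.path.isdir(cache_path):
        shutil.rmtree(cache_path)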
Terraform v0.12 disables the module-depth option for all operations except graph (as of v0.12-alpha3):
hashicorp/terraform@178ec8f#diff-aa88524f69c6514b7d5568e0b5e03bad
pip install https://github.com/adobe/ops-cli/releases/download/0.30/ops-0.30-py2-none-any.whl
brew install terraform
# CURRENT VERSION v0.12
ops environment/path.yaml terraform plan
Terraform dumps usage.
Have Terraform v0.12 installed.
Install Python
pip install https://github.com/adobe/ops-cli/releases/download/0.30/ops-0.30-py2-none-any.whl
brew install terraform
# CURRENT VERSION v0.12
git clone https://github.com/adobe/ops-cli.git
cd ops-cli
ops environment/sample-cluster.yaml terraform plan
Generate a valid plan file.
MacOS 10.14.5
Terraform Version: v0.12.2 (or >v0.12-alpha3)
OPS-CLI Version: v0.30
Any valid operational config.
cd ${HOME}/terraform && terraform get -update && terraform init && terraform refresh -input=false -var 'cluster=aaron-kulick-dev' -state=terraform.aaron-kulick-dev.tfstate && terraform plan -out=terraform.aaron-kulick-dev.plan -refresh=false -module-depth=1 -input=false -var 'cluster=aaron-kulick-dev' -state=terraform.aaron-kulick-dev.tfstate
N/A - c.f. CLI execution as debug output.
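A minimal sketch of how ops could detect the Terraform version and drop the removed flag (the helper name and version parsing are illustrative assumptions):

import re
import subprocess

def terraform_supports_module_depth():
    # `terraform version` prints e.g. "Terraform v0.12.2" on its first line.
    output = subprocess.check_output(["terraform", "version"]).decode()
    match = re.search(r"v(\d+)\.(\d+)", output)
    major, minor = int(match.group(1)), int(match.group(2))
    return (major, minor) < (0, 12)

plan_args = ["-out=plan.out", "-input=false"]
if terraform_supports_module_depth():
    plan_args.append("-module-depth=1")  # only valid before Terraform v0.12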
When the ops path helmfile command is executed, the EKS cluster kubeconfig needs to be generated.
If the cluster.fqdn key is missing from the cluster configuration, the kubeconfig file is not generated and the helmfile command runs in the existing kubernetes context.
This can lead to helmfile execution against the wrong cluster.
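A minimal sketch of the fail-fast behaviour that would avoid this (the configuration key is taken from the description above; the check itself is an assumption):

def kubeconfig_fqdn_for(cluster_config):
    # Refuse to fall back to the currently configured kubernetes context
    # when the cluster fqdn needed to generate the kubeconfig is missing.
    fqdn = cluster_config.get("cluster", {}).get("fqdn")
    if not fqdn:
        raise ValueError("cluster.fqdn is missing; refusing to run helmfile "
                         "against the current kubernetes context")
    return fqdn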
Currently, installing all prerequisites for ops-cli can be a bit cumbersome (i.e. nailing the python2 version). We're trying to simplify this. In the meantime, it would be nice to have the possibility to try out ops-cli with one click. To that end, having ops-cli and its prerequisites in a docker image would offer a quick setup.
$ docker run -it costimuraru/ops-cli:0.25 bash
[linuxbrew@docker ~]$ ops help
# usage: ops [-h] [--root-dir ROOT_DIR] [--verbose] [-e EXTRA_VARS]
# cluster_config_path
# {inventory,terraform,packer,ssh,play,run,sync,noop} ...
Successfully install ops-cli in development mode on a fresh python virtual env. Command to run: env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" python setup.py develop
Installation is crashing because of multiple pip package version conflicts.
Step 1
run env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" python setup.py develop
Result:
error: s3transfer 0.2.0 is installed but s3transfer<0.2.0,>=0.1.12 is required by set(['awscli'])
Step 2
Add s3transfer==0.1.13 to requirements.txt and rerun the install command.
Result:
error: s3transfer 0.1.13 is installed but s3transfer<0.3.0,>=0.2.0 is required by set(['boto3'])
Step 3
Set boto3==1.9.88 in requirements.txt, which fixed the s3transfer issue but generated another one.
Results:
error: botocore 1.12.94 is installed but botocore==1.12.87 is required by set(['awscli'])
Step 4:
Add botocore==1.12.87 to requirements.txt, which led to the following error:
Result:
error: botocore 1.12.87 is installed but botocore<1.13.0,>=1.12.88 is required by set(['boto3'])
Step 5:
Set boto3==1.9.87 in requirements.txt, which led to the final error:
Result:
error: PyYAML 4.2b4 is installed but PyYAML<=3.13,>=3.10 is required by set(['awscli'])
Step 6:
Set PyYAML==3.13 in requirements.txt and the installation completed successfully.
macOS 10.13.4
[Migrated]
For use cases that have a static bastion host hosted in corp network, it would be useful to have a way to specify that in the cluster.yaml file in the inventory section.
Useful when you want to pass a JSON in a terraform file.
resource "kubernetes_secret" "{{repo.name}}-docker-repo-secret" {
metadata {
name = "{{repo.name}}-docker-repo-secret"
}
data {
".dockerconfigjson" = "{{ macros.generate_docker_json_config(repo.repository, repo.credentials) | escape_json }}"
}
type = "kubernetes.io/dockerconfigjson"
}
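A minimal sketch of what such an escape_json Jinja2 filter could look like (an assumption based on the usage above, not the actual ops-cli filter):

import json

def escape_json(value):
    # Escape quotes, backslashes and newlines so the value can be embedded
    # inside a quoted Terraform string; json.dumps adds outer quotes, which we strip.
    return json.dumps(str(value))[1:-1]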
[Migrated]
The upgrade to Boto3 is part of an ongoing initiative of having a single instance which coordinates deployments - with support for automatic deployments in pre-production environments.
Boto3 has built-in support for the default AWS Credentials Providers stack.
We need to set the cross-account trusting policies for our accounts, being then able to specify automatic role assumption in a cross-account context via ~/.aws/config.
It might not work out of the box, as it doesn't for aws-cli (see: aws/aws-cli#1604 and aws/aws-cli#1390).
In that case, we would need to do programmatic role assumption - I have not been able to fully test this yet (permissions).
Note
This change is intended to be backward-compatible with the current setup - in other words, temporary/static credentials should work as before.
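For the programmatic role assumption fallback, a minimal sketch using boto3's STS client (the role ARN and session name are placeholders):

import boto3

def assume_role_session(role_arn, session_name="ops-cli"):
    # Exchange the current credentials for temporary cross-account credentials.
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )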
[Migrated]
Python 2 is EOL in 1 yr 5 months - as such, all of my team's development is being done in Python 3. Mixing Opstool into python 3 projects is a bit cumbersome and requires virtualenvs and other nastiness - maybe it's time to port opstool to python 3? FWIW I'd be willing to do a bunch of the legwork for this, just want to gauge interest before I undertake such a task.
Current bastion tag is hard-coded to Adobe:Class.
https://github.com/adobe/ops-cli/blob/master/src/ops/inventory/plugin/azr.py#L94
We should be able to customize this tag, in the cluster configuration.
Decrease the size of the docker image by basing it on the alpine base image.
The current image is a bit too large (2GB).