getsimpl / cloudlift
Cloudlift makes it easier to launch dockerized services in AWS ECS
License: MIT License
At the top level, cloudlift has the packages cloudlift, deployment, config, version and session. When cloudlift is used as a library, it does not feel right to have imports like

from deployment.service_template_generator import ServiceTemplateGenerator

It would be nicer to namespace everything under the cloudlift package so that it can be written as

from cloudlift.deployment.service_template_generator import ServiceTemplateGenerator
Currently, when you look at cloudlift.egg-info/top_level.txt, the packages are

cloudlift
config
deployment
session
version

It should become just

cloudlift
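As a sketch of the fix: once deployment, config, version and session are moved under the cloudlift/ directory, setuptools discovers everything beneath the single top-level package, so top_level.txt would contain only "cloudlift". Demonstrated below on a throwaway layout (directory names are illustrative):

```python
# Build a temporary package layout and let setuptools discover it.
import os
import tempfile

from setuptools import find_packages

root = tempfile.mkdtemp()
for pkg in ("cloudlift", "cloudlift/deployment", "cloudlift/config"):
    os.makedirs(os.path.join(root, pkg), exist_ok=True)
    # An __init__.py marks each directory as a package.
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

packages = sorted(find_packages(where=root))
print(packages)  # ['cloudlift', 'cloudlift.config', 'cloudlift.deployment']
```

With this layout there is exactly one top-level package, which gives the desired namespaced imports.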
The ECS task definition has the option to add systemControls, which are passed as sysctl options to containers. This is especially helpful when you want to override kernel limits, for example to allow more connections for Nginx workers.
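For reference, a hedged sketch of how a systemControls entry could look inside the containerDefinitions of a task definition (the image name and sysctl value here are illustrative, not cloudlift defaults):

```python
# Illustrative container definition carrying a systemControls entry.
container_definition = {
    "name": "nginx",
    "image": "nginx:latest",
    "systemControls": [
        # Raise the listen backlog so Nginx can queue more connections.
        {"namespace": "net.core.somaxconn", "value": "65535"},
    ],
}
print(container_definition["systemControls"][0]["namespace"])
```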
The config diff table is pretty nifty, but it seems to be inconsistent in what it shows. Also, 2-space pretty-printed JSON would look better.
NOTE: will follow up with a PR in a few minutes with a proposed fix.
Current output:

Modifications to config:
┌────────┬───────────────────────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Type │ Config │ Old val │ New val │
├────────┼───────────────────────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ add │ services.Blackjack │ │ udp_interface : {'container_port': 1025, 'internal': False, 'restrict_access_to': ['0.0.0.0/0']} │
│ remove │ services.Blackjack.http_interface │ {'internal': False, 'restrict_access_to': ['0.0.0.0/0'], 'container_port': 80, 'health_check_path': '/elb-check'} │ │
└────────┴───────────────────────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘
Do you want update the config? [y/N]:
Proposed output:

Modifications to config:
┌────────┬────────────────────┬─────────────────────────────────────┬───────────────────────────┐
│ Type │ Config │ Old val │ New val │
├────────┼────────────────────┼─────────────────────────────────────┼───────────────────────────┤
│ add │ services.Blackjack │ │ udp_interface : { │
│ │ │ │ "container_port": 1025, │
│ │ │ │ "internal": false, │
│ │ │ │ "restrict_access_to": [ │
│ │ │ │ "0.0.0.0/0" │
│ │ │ │ ] │
│ │ │ │ } │
│ remove │ services.Blackjack │ http_interface : { │ │
│ │ │ "internal": false, │ │
│ │ │ "restrict_access_to": [ │ │
│ │ │ "0.0.0.0/0" │ │
│ │ │ ], │ │
│ │ │ "container_port": 80, │ │
│ │ │ "health_check_path": "/elb-check" │ │
│ │ │ } │ │
└────────┴────────────────────┴─────────────────────────────────────┴───────────────────────────┘
Do you want update the config? [y/N]:
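The proposed rendering can be produced with the standard library; a minimal sketch using one of the values from the tables above:

```python
# Render a config value as 2-space pretty-printed JSON for the diff table.
import json

value = {
    "container_port": 1025,
    "internal": False,
    "restrict_access_to": ["0.0.0.0/0"],
}
rendered = json.dumps(value, indent=2)
print(rendered)
```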
When an exception happens in many of the cloudlift operations, we exit from the program itself (refer). This makes it impossible to use cloudlift as a library. A better idea would be to raise an "UnrecoverableException" (or a better name) and handle it in the command-line wrapper. That way cloudlift can be used as a library without crashing the application that depends on it.
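A minimal sketch of the pattern (class and function names are assumptions, not cloudlift's actual API): library code raises a dedicated exception, and only the CLI wrapper converts it to a process exit.

```python
import sys


class UnrecoverableException(Exception):
    """Raised by library code instead of calling sys.exit() directly."""


def deploy(service_name):
    # Library code signals failure without killing the interpreter.
    raise UnrecoverableException(f"deployment failed for {service_name}")


def cli_main():
    # The command-line entry point is the only place that exits.
    try:
        deploy("dummy-service")
    except UnrecoverableException as err:
        print(f"ERROR: {err}", file=sys.stderr)
        sys.exit(1)
```

An application embedding cloudlift can then catch UnrecoverableException itself instead of having its process terminated.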
As of now, setup.py directly imports VERSION from cloudlift.version. As a result, cloudlift/__init__.py runs and imports its dependencies. This is a circular dependency: setup.py needs the package's dependencies installed before it can run, but it is setup.py that declares them. To resolve this, we can read the version file directly instead of importing it.
Here is the error stack trace from a dockerized application that I was trying to build.
Cloning ssh://****@github.com/Rippling/cloudlift.git (to revision master) to ./src/cloudlift
Running command git clone -q 'ssh://****@github.com/Rippling/cloudlift.git' /container-runner-api/src/cloudlift
Warning: Permanently added the RSA host key for IP address '13.234.210.38' to the list of known hosts.
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/container-runner-api/src/cloudlift/setup.py'"'"'; __file__='"'"'/container-runner-api/src/cloudlift/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info
cwd: /container-runner-api/src/cloudlift/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/container-runner-api/src/cloudlift/setup.py", line 3, in <module>
from cloudlift.version import VERSION
File "/container-runner-api/src/cloudlift/cloudlift/__init__.py", line 3, in <module>
import boto3
ModuleNotFoundError: No module named 'boto3'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
One simple solution would be to exec version/__init__.py instead of importing it.

inside setup.py:

version = {}
with open("./cloudlift/version/__init__.py") as fp:
    exec(fp.read(), version)
Another solution could be creating a separate version.py file and defining __version__ globally inside it.

inside version.py:

__version__ = "1.3.1"

inside setup.py:

version = {}
with open("./cloudlift/version.py") as fp:
    exec(fp.read(), version)
# version can be accessed using version["__version__"]

Other packages could simply import this module and access __version__.
Steps to reproduce:

_pickle.PicklingError: Can't pickle <class 'botocore.client.ECS'>: attribute lookup ECS on botocore.client failed

Steps like editing/viewing configs work, though.
Currently, deploy_service forms the docker build command without any build arguments. This makes cloudlift unusable in certain cases. One example is passing build-time credentials, such as auth keys for private repos, as a --build-arg to docker build.
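A sketch of what supporting this could look like (the function name is an assumption): compose the docker build command with optional --build-arg flags taken from configuration.

```python
def docker_build_command(image_name, build_args=None):
    # Base command; extra --build-arg flags are appended per configured key.
    command = ["docker", "build", "-t", image_name, "."]
    for key, value in (build_args or {}).items():
        command += ["--build-arg", f"{key}={value}"]
    return command


# e.g. passing a build-time token for a private repo
cmd = docker_build_command("myservice:latest", {"GITHUB_TOKEN": "****"})
print(" ".join(cmd))
```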
In cloudlift create_service we can support creating a CNAME record for the service. It can be configured in the service configuration itself. We need to ensure the certificate domain and the CNAME domain are the same.
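A hypothetical sketch of the Route 53 payload cloudlift could build for this (record name, target and TTL are illustrative):

```python
def cname_change_batch(record_name, target):
    # ChangeBatch structure accepted by Route 53's
    # change_resource_record_sets API.
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": target}],
            },
        }]
    }


batch = cname_change_batch("blackjack.example.com",
                           "my-alb-123.ap-south-1.elb.amazonaws.com")
# boto3's Route 53 client would consume this via change_resource_record_sets.
```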
boto throws a "max limit reached" error if we modify more than 10 variables; the code should internally make multiple API calls to overcome this limit.
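A minimal sketch of the batching, assuming the per-call limit of 10 stated above:

```python
def chunked(items, size=10):
    # Yield successive slices of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]


updates = [f"VAR_{n}" for n in range(23)]
batches = list(chunked(updates))
print([len(b) for b in batches])  # [10, 10, 3]
```

Each batch would then be sent as its own API call.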
pybrake requires Python 3.4+.

pip install -U pybrake

(You can find your project ID and API key in your project's settings.)

import pybrake

notifier = pybrake.Notifier(project_id=<Your project ID>,
                            project_key='<Your project API KEY>',
                            environment='production')

To test that you've installed Airbrake correctly, try triggering a test error:

try:
    raise ValueError('hello')
except Exception as err:
    notifier.notify(err)

For more information please visit our official GitHub repo.
When we use services like S3 or DynamoDB from instances in a private subnet, traffic goes out through the NAT gateway over the public internet to reach the S3 or DynamoDB endpoints. If we create a VPC endpoint, traffic uses the endpoint instead, which reduces the massive NAT bandwidth and data-transfer cost. Moreover, gateway VPC endpoints for S3 and DynamoDB are free of charge.
https://medium.com/nubego/how-to-save-money-with-aws-vpc-endpoints-9bac8ae1319c
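A sketch of the parameters cloudlift could pass to EC2's create_vpc_endpoint to add a gateway endpoint for S3 (the IDs are placeholders and the region is an example):

```python
def s3_gateway_endpoint_params(vpc_id, route_table_ids, region="ap-south-1"):
    # Parameters for ec2.create_vpc_endpoint; gateway endpoints for S3
    # incur no charge.
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "VpcEndpointType": "Gateway",
        "RouteTableIds": route_table_ids,
    }


params = s3_gateway_endpoint_params("vpc-0123456789abcdef0",
                                    ["rtb-0123456789abcdef0"])
# boto3 usage: boto3.client("ec2").create_vpc_endpoint(**params)
```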
The SSM parameter path gets mangled if the service name is hyphenated with title case.
For example: ServiceA-UITest -> service-a--u-i-test
This especially affects edit_config.
The culprit is here:
cloudlift/cloudlift/deployment/configs.py
Line 12 in 50b218f
The stringcase library doesn't handle prefixed hyphens, and it has a couple of other edge cases reported in its issues section. To stay backward compatible, we can use a regex replace and handle special cases like a hyphen followed by caps, consecutive caps, etc.
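One possible regex-based handling (a sketch, not stringcase's behavior and not necessarily the final backward-compatible rule) that avoids the double-hyphen artifact shown above:

```python
import re


def spinal_case(name):
    # Insert a hyphen before an upper-case letter that follows a lower-case
    # letter or digit, then collapse hyphen runs and lower-case everything.
    s = re.sub(r'(?<=[a-z0-9])([A-Z])', r'-\1', name)
    s = re.sub(r'-+', '-', s)
    return s.lower()


print(spinal_case("ServiceA-UITest"))  # service-a-uitest
```

Plain CamelCase names keep their existing conversion (e.g. ServiceA still becomes service-a), so existing parameter paths are unaffected.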
Right now every task creates an IAM role, but it does not have any permissions. It would be helpful if the IAM policies required could be attached through the config.
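A hypothetical sketch of what that config could look like (the key name task_role_attached_policies is an assumption, not an existing cloudlift key):

```python
# Service configuration carrying managed-policy ARNs to attach to the
# task role.
service_config = {
    "services": {
        "Blackjack": {
            "task_role_attached_policies": [
                "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
            ],
        }
    }
}
policies = service_config["services"]["Blackjack"]["task_role_attached_policies"]
print(policies)
```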
Environment creates its own missing table on the fly, but Service doesn't.
@siliconsenthil - I can submit a PR with either approach. For example:
def _ensure_table(self):
    table_names = self.dynamodb_client.list_tables()['TableNames']
    if SERVICE_CONFIGURATION_TABLE not in table_names:
        log_warning("Could not find configuration table, creating one..")
        self._create_configuration_table()

def _create_configuration_table(self):
    self.dynamodb.create_table(
        TableName=SERVICE_CONFIGURATION_TABLE,
        KeySchema=[
            {'AttributeName': 'service_name', 'KeyType': 'HASH'},
            {'AttributeName': 'environment', 'KeyType': 'RANGE'}
        ],
        AttributeDefinitions=[
            {'AttributeName': 'service_name', 'AttributeType': 'S'},
            {'AttributeName': 'environment', 'AttributeType': 'S'}
        ],
        BillingMode='PAY_PER_REQUEST'
    )
    log_bold("Configuration table created!")
The TargetGroup dimension is required for counting unhealthy hosts; it is not present, so the unhealthy-host alarm is not working.
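A sketch of what the alarm definition needs (dimension values are placeholders): the UnHealthyHostCount metric in the AWS/ApplicationELB namespace only returns data when both the LoadBalancer and TargetGroup dimensions are set.

```python
alarm = {
    "MetricName": "UnHealthyHostCount",
    "Namespace": "AWS/ApplicationELB",
    "Dimensions": [
        {"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"},
        {"Name": "TargetGroup", "Value": "targetgroup/my-tg/0123456789abcdef"},
    ],
    "Statistic": "Maximum",
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
}
dimension_names = [d["Name"] for d in alarm["Dimensions"]]
print(dimension_names)
```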
The NAT gateway is not a regional service. When we create an environment using cloudlift, it creates a single NAT gateway in one availability zone and uses the same route table for the two subnets created in different AZs. So if the AZ hosting the single NAT gateway goes down, the instances in the second AZ will not be able to access the internet, because the NAT gateway is down.
From the Amazon documentation:
If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
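The AZ-independent layout can be sketched as a mapping (IDs are placeholders): each private subnet's route table points at a NAT gateway in its own AZ.

```python
azs = ["ap-south-1a", "ap-south-1b"]
layout = {
    az: {
        "nat_gateway": f"nat-{i}",          # one NAT gateway per AZ
        "route_table": f"rtb-private-{i}",  # routes 0.0.0.0/0 to that NAT
    }
    for i, az in enumerate(azs)
}
print(layout["ap-south-1a"]["nat_gateway"], layout["ap-south-1b"]["nat_gateway"])
```

This doubles the NAT gateway cost but removes the single point of failure described above.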