My name is Max and I'm a freelance DevOps and cloud engineer from Germany.
I gain, apply and pass on knowledge about how to set up modern infrastructure on a daily basis.
Keep an eye on your AWS quotas before you hit their limits
Home Page: https://pypi.org/project/aws-quota-checker/
License: MIT License
I believe I'm correct that this isn't covered by the current checks.
Thanks
Right now, I can either run all checks or select them individually. It would be nice to be able to select them by scope, something like:
$> aws-quota-checker check all
# Run only account scope checks
$> aws-quota-checker check --scope account all
# or run everything *except* account scope checks
$> aws-quota-checker check all,!scope:account
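One possible semantics for such a selector could be sketched as follows; the `select_checks` function, the scope names, and the check registry here are hypothetical stand-ins for illustration, not the project's actual internals:

```python
def select_checks(selector, checks):
    """Resolve a selector like 'all,!scope:account' against a
    {check_name: scope} mapping (a stand-in for the real registry)."""
    selected = set()
    for part in selector.split(','):
        negate = part.startswith('!')
        name = part.lstrip('!')
        if name == 'all':
            matched = set(checks)
        elif name.startswith('scope:'):
            # match every check whose scope equals the given scope
            scope = name.split(':', 1)[1]
            matched = {n for n, s in checks.items() if s == scope}
        else:
            matched = {name} if name in checks else set()
        # '!' terms subtract from the selection, plain terms add to it
        selected = (selected - matched) if negate else (selected | matched)
    return sorted(selected)

checks = {'s3_bucket_count': 'account', 'iam_user_count': 'account',
          'vpc_count': 'region'}
print(select_checks('all,!scope:account', checks))  # ['vpc_count']
```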
This check is currently missing
Hey,
The ec2_on_demand_standard_count metric gives me back the number of instances I have, and the awsquota_ec2_on_demand_standard_count_limit metric gives me back the number of CPUs I can use. I would like to get the number of CPUs I'm using.
I'm getting the following error while running it inside a Kubernetes pod, or even as a plain container:
Traceback (most recent call last):
File "/usr/local/bin/aws-quota-checker", line 7, in <module>
from aws_quota.cli import cli
File "/usr/local/lib/python3.7/site-packages/aws_quota/cli.py", line 2, in <module>
from aws_quota.utils import get_account_id
File "/usr/local/lib/python3.7/site-packages/aws_quota/utils.py", line 6, in <module>
def get_account_id(session: boto3.Session) -> str:
File "/usr/lib64/python3.7/functools.py", line 490, in lru_cache
raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None
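The likely cause: on Python 3.7, `functools.lru_cache` cannot be used as a bare decorator, so `@lru_cache` without parentheses passes the decorated function itself as `maxsize` and raises exactly this TypeError (Python 3.8 added support for the bare form). A minimal sketch of the portable spelling, with a hypothetical stand-in for the real boto3 lookup:

```python
import functools

# On Python 3.7, @functools.lru_cache without parentheses passes the
# function itself as `maxsize`, raising:
#   TypeError: Expected maxsize to be an integer or None
# Calling the decorator factory explicitly works on 3.2+:
@functools.lru_cache(maxsize=None)
def get_account_id(profile: str) -> str:
    # hypothetical stand-in for the real boto3-based account lookup
    return f"account-for-{profile}"

print(get_account_id("default"))  # cached on repeat calls
```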
After deploying this tool, I noticed a few missing checks. This is one that would be super helpful for us.
Link to PR: #28
We're updating our alerts to make use of the metrics exposed by the prometheus-exporter feature of aws-quota-checker. We have a generic expression which aims to alert whenever we've breached 70% of any limit.
The expression is quite unwieldy:
round( 100 *
label_replace({__name__=~"^awsquota_([3a-z_]+(_count|_instances))$",account=~".+"}, "resource", "$1", "__name__", "^awsquota_([3a-z_]+(_count|_instances))$")
/ on (resource)
label_replace({__name__=~"^awsquota_([3a-z_]+(_limit))$",account=~".+"}, "resource", "$1", "__name__", "^awsquota_([3a-z_]+)(_limit)$")
) > 70
A couple of suggestions which would aid in crafting PromQL expressions: either add a shared resource label to the existing metric pairs, e.g.
awsquota_rds_instances{resource="rds_instances"}
awsquota_rds_instances_limit{resource="rds_instances"}
or expose generic usage/limit metrics keyed by that label, e.g.
awsquota_usage{resource="rds_instances"}
awsquota_limit{resource="rds_instances"}
Feel free to disregard this if it's too niche or opinionated in a direction you'd rather not take. For those facing similar grievances, recording rules could also offer a workaround.
Version: aws-quota-checker==1.12.0
When running
aws-quota-checker check cf_stack_count --profile tools --region $region
I'm seeing a maximum of 100 returned for regions that have more than 100 stacks. I suspect this is some kind of paging issue.
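If it is a paging issue, the fix would be to walk NextToken pages instead of taking a single response. The real code would presumably use boto3's `get_paginator('describe_stacks')`; this stdlib-only sketch shows the loop a paginator performs, with a fake page-serving API as a stand-in:

```python
def count_paginated(fetch_page):
    """Count items across every page of a NextToken-style API.

    `fetch_page` mimics a boto3 call: it takes the previous token and
    returns (items, next_token), with next_token=None on the last page.
    Without this loop, only the first page (100 stacks for
    CloudFormation's describe_stacks) would ever be counted.
    """
    total, token = 0, None
    while True:
        items, token = fetch_page(token)
        total += len(items)
        if token is None:
            return total

# Fake API serving 230 "stacks" in pages of 100 to show the cap is gone.
PAGES = {None: (list(range(100)), 'p2'),
         'p2': (list(range(100)), 'p3'),
         'p3': (list(range(30)), None)}
print(count_paginated(PAGES.get))  # 230
```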
Hi! I'm getting an incorrect rule count for a specific SG (according to the AWS console, this group has 230 inbound rules and 1 outbound rule).
Other SGs have (inbound+outbound)<10 rules and are displayed correctly. Maybe it's some paging issue?
I got these results on both the latest master and fix-cf-stack-counting branches.
Best regards!
When checking the rule count via the AWS CLI:
aws ec2 describe-security-group-rules --region us-east-1 --profile sso-prod --filter Name="group-id",Values="sg-xxxx" | jq -r '.SecurityGroupRules | length'
231
When checking the rule count using aws-quota-checker:
/usr/local/bin/aws-quota-checker check vpc_rules_per_sg
AWS profile: default | AWS region: us-east-1 | Active checks: vpc_rules_per_sg
Collecting checks [####################################] 100%
Rules per VPC security group [****/us-east-1/sg-xxxx]: 66/333 ✓
The AWS console is warning us that we're getting close to the limit on the number of IAM roles (1000). We would like to have a check for this limit.
I configured the quota checker with my Prometheus, but it only reports the default region from aws configure. I need it to cover sa-east-1 and us-east-1 as well; could you help me?
Please add:
Allocated storage / Total storage for all DB instances for RDS
I'm seeing
EBS Snapshots per region [default/eu-west-1]: 25780/100000 ✓
for an account with only a handful of actual snapshots.
Perhaps change
return len(self.boto_session.client('ec2').describe_snapshots()['Snapshots'])
to
return len(self.boto_session.client('ec2').describe_snapshots(OwnerIds=["self"])['Snapshots'])
Since I've been told you use an issue-based workflow, this is to start the discussion about #24!
Would love to use this locally; however, the Docker image is not built for linux/arm64/v8.
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Probably https://github.com/marketplace/actions/build-and-push-docker-images will help, as the following ran fine:
docker buildx build --platform=linux/arm64/v8 -t aws-quota-checker:macm1 .
Some AWS services have not integrated with the Service Quotas service, so the current implementation cannot get the applied limit and falls back to the default limit. For example, we have been granted an S3 bucket limit increase to 300 buckets, so this quota checker will always indicate we are at the limit for the number of S3 buckets.
Do you have any plans to implement a way to override individual checks to handle this type of scenario? Asking since we have a need for it and might tackle it in our fork (https://github.com/northwoodspd/aws-quota-checker). If you have ideas about how you'd like to see that approached, we'd have a better chance of getting such a feature accepted into your repo.
Per aws support:
------------------------------------------------------------------------
% aws service-quotas get-service-quota --service-code s3 --quota-code L-DC2B2D3D
An error occurred (NoSuchResourceException) when calling the GetServiceQuota operation:
------------------------------------------------------------------------
I then contacted an S3 SME regarding the above - please note, S3 has not yet been onboarded onto the get-service-quota API. Therefore, you will not be able to view any increased limits for S3 buckets.
[snip]
Therefore, for the time being, it will only be possible to use the get-aws-default-service-quota API until the get-service-quota API is onboarded with S3. Additionally, although this feature is not currently available, I recommend you follow our news channels to stay updated on all service releases from us (please note we cannot give timelines for new feature releases):
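One way the override idea could look, as a sketch only; the OVERRIDES map and the function name here are hypothetical, not an existing aws-quota-checker API:

```python
# Hypothetical per-check override map, e.g. loaded from user config.
# It takes precedence over anything the Service Quotas API reports.
OVERRIDES = {'s3_bucket_count': 300}

def effective_limit(check_name, applied, default):
    """Pick the limit for a check: user override first, then the
    applied quota from get-service-quota (None if the service is not
    onboarded), then the service default."""
    if check_name in OVERRIDES:
        return OVERRIDES[check_name]
    return applied if applied is not None else default

# S3 is not onboarded (applied is None), so the override wins:
print(effective_limit('s3_bucket_count', None, 100))  # 300
```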
DynamoDB shows 100 tables when in reality there are more; it looks like the count is getting capped due to pagination.
Running check with a blacklist entry in 1.3.0 returns a usage error.
(aws-quota-checker) [(HEAD detached at 1.3.0)] aws-quota-checker $ aws-quota-checker check 'all,!vpc_count'
Usage: aws-quota-checker check [OPTIONS] [all|am_mesh_count|asg_count|cf_stack
_count|cw_alarm_count|dyndb_table_count|ebs_sna
pshot_count|ec2_eip_count|ec2_on_demand_f_count
|ec2_on_demand_g_count|ec2_on_demand_inf_count|
ec2_on_demand_p_count|ec2_on_demand_standard_co
unt|ec2_on_demand_x_count|ec2_spot_f_count|ec2_
spot_g_count|ec2_spot_inf_count|ec2_spot_p_coun
t|ec2_spot_standard_count|ec2_spot_x_count|ec2_
tgw_count|ec2_vpn_connection_count|ecs_count|ek
s_count|elasticbeanstalk_application_count|elas
ticbeanstalk_environment_count|elb_alb_count|el
b_clb_count|elb_listeners_per_alb|elb_listeners
_per_clb|elb_listeners_per_nlb|elb_nlb_count|el
b_target_group_count|iam_attached_policy_per_gr
oup|iam_attached_policy_per_role|iam_attached_p
olicy_per_user|iam_group_count|iam_policy_count
|iam_policy_version_count|iam_server_certificat
e_count|iam_user_count|ig_count|lc_count|ni_cou
nt|route53_health_check_count|route53_hosted_zo
ne_count|route53_records_per_hosted_zone|route5
3_reusable_delegation_set_count|route53_traffic
_policy_count|route53_traffic_policy_instance_c
ount|route53_vpcs_per_hosted_zone|route53resolv
er_endpoint_count|route53resolver_rule_associat
ion_count|route53resolver_rule_count|s3_bucket_
count|secretsmanager_secrets_count|sg_count|sns
_pending_subscriptions_count|sns_subscriptions_
per_topic|sns_topics_count|vpc_acls_per_vpc|vpc
_count|vpc_subnets_per_vpc]
Try 'aws-quota-checker check --help' for help.
Error: Invalid value for '[all|am_mesh_count|asg_count|cf_stack_count|cw_alarm_count|dyndb_table_count|ebs_snapshot_count|ec2_eip_count|ec2_on_demand_f_count|ec2_on_demand_g_count|ec2_on_demand_inf_count|ec2_on_demand_p_count|ec2_on_demand_standard_count|ec2_on_demand_x_count|ec2_spot_f_count|ec2_spot_g_count|ec2_spot_inf_count|ec2_spot_p_count|ec2_spot_standard_count|ec2_spot_x_count|ec2_tgw_count|ec2_vpn_connection_count|ecs_count|eks_count|elasticbeanstalk_application_count|elasticbeanstalk_environment_count|elb_alb_count|elb_clb_count|elb_listeners_per_alb|elb_listeners_per_clb|elb_listeners_per_nlb|elb_nlb_count|elb_target_group_count|iam_attached_policy_per_group|iam_attached_policy_per_role|iam_attached_policy_per_user|iam_group_count|iam_policy_count|iam_policy_version_count|iam_server_certificate_count|iam_user_count|ig_count|lc_count|ni_count|route53_health_check_count|route53_hosted_zone_count|route53_records_per_hosted_zone|route53_reusable_delegation_set_count|route53_traffic_policy_count|route53_traffic_policy_instance_count|route53_vpcs_per_hosted_zone|route53resolver_endpoint_count|route53resolver_rule_association_count|route53resolver_rule_count|s3_bucket_count|secretsmanager_secrets_count|sg_count|sns_pending_subscriptions_count|sns_subscriptions_per_topic|sns_topics_count|vpc_acls_per_vpc|vpc_count|vpc_subnets_per_vpc]': invalid choice: all,!vpc_count. 
(choose from all, am_mesh_count, asg_count, cf_stack_count, cw_alarm_count, dyndb_table_count, ebs_snapshot_count, ec2_eip_count, ec2_on_demand_f_count, ec2_on_demand_g_count, ec2_on_demand_inf_count, ec2_on_demand_p_count, ec2_on_demand_standard_count, ec2_on_demand_x_count, ec2_spot_f_count, ec2_spot_g_count, ec2_spot_inf_count, ec2_spot_p_count, ec2_spot_standard_count, ec2_spot_x_count, ec2_tgw_count, ec2_vpn_connection_count, ecs_count, eks_count, elasticbeanstalk_application_count, elasticbeanstalk_environment_count, elb_alb_count, elb_clb_count, elb_listeners_per_alb, elb_listeners_per_clb, elb_listeners_per_nlb, elb_nlb_count, elb_target_group_count, iam_attached_policy_per_group, iam_attached_policy_per_role, iam_attached_policy_per_user, iam_group_count, iam_policy_count, iam_policy_version_count, iam_server_certificate_count, iam_user_count, ig_count, lc_count, ni_count, route53_health_check_count, route53_hosted_zone_count, route53_records_per_hosted_zone, route53_reusable_delegation_set_count, route53_traffic_policy_count, route53_traffic_policy_instance_count, route53_vpcs_per_hosted_zone, route53resolver_endpoint_count, route53resolver_rule_association_count, route53resolver_rule_count, s3_bucket_count, secretsmanager_secrets_count, sg_count, sns_pending_subscriptions_count, sns_subscriptions_per_topic, sns_topics_count, vpc_acls_per_vpc, vpc_count, vpc_subnets_per_vpc)
Expected output (1.2.0):
(aws-quota-checker) [(HEAD detached at 1.2.0)] aws-quota-checker $ aws-quota-checker check 'all,!vpc_count'
AWS profile: default | AWS region: None | Active checks: am_mesh_count,asg_count,cf_stack_count,cw_alarm_count,dyndb_table_count,ebs_snapshot_count,ec2_eip_count,ec2_on_demand_f_count,ec2_on_demand_g_count,ec2_on_demand_inf_count,ec2_on_demand_p_count,ec2_on_demand_standard_count,ec2_on_demand_x_count,ec2_spot_f_count,ec2_spot_g_count,ec2_spot_inf_count,ec2_spot_p_count,ec2_spot_standard_count,ec2_spot_x_count,ec2_tgw_count,ec2_vpn_connection_count,ecs_count,eks_count,elasticbeanstalk_application_count,elasticbeanstalk_environment_count,elb_alb_count,elb_clb_count,elb_listeners_per_alb,elb_listeners_per_clb,elb_listeners_per_nlb,elb_nlb_count,elb_target_group_count,iam_attached_policy_per_group,iam_attached_policy_per_role,iam_attached_policy_per_user,iam_group_count,iam_policy_count,iam_policy_version_count,iam_server_certificate_count,iam_user_count,ig_count,lc_count,ni_count,route53_health_check_count,route53_hosted_zone_count,route53_records_per_hosted_zone,route53_reusable_delegation_set_count,route53_traffic_policy_count,route53_traffic_policy_instance_count,route53_vpcs_per_hosted_zone,route53resolver_endpoint_count,route53resolver_rule_association_count,route53resolver_rule_count,s3_bucket_count,secretsmanager_secrets_count,sg_count,sns_pending_subscriptions_count,sns_subscriptions_per_topic,sns_topics_count,vpc_acls_per_vpc,vpc_subnets_per_vpc
I was thinking about exposing the results of all quota checks on a Prometheus /metrics endpoint. This would allow constantly monitoring all quotas and visualizing them in e.g. Grafana.
If you'd like to see such a feature leave a thumbs up on this issue.
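A stdlib-only sketch of what the exposition side could look like, using the awsquota_*/awsquota_*_limit naming seen elsewhere in this tracker; a real implementation would more likely use the prometheus_client library, and `render_metrics` is a hypothetical helper:

```python
def render_metrics(results):
    """Render check results as Prometheus text exposition format.

    results: {check_name: (current, limit)} — a stand-in for whatever
    the real check runner produces.
    """
    lines = []
    for name, (current, limit) in sorted(results.items()):
        lines.append(f'awsquota_{name} {current}')
        lines.append(f'awsquota_{name}_limit {limit}')
    return '\n'.join(lines) + '\n'

print(render_metrics({'vpc_count': (3, 5)}))
# awsquota_vpc_count 3
# awsquota_vpc_count_limit 5
```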
That would be nice to have.
Thanks for this awesome tool.
Hi there,
Good stuff, I've just deployed this in my environment.
Would you be able to please provide an export of your Grafana dashboard json?
Hey, I think that adding the limit and utilization for "Storage for General Purpose SSD (gp3) volumes, in TiB" and "Storage for General Purpose SSD (gp2) volumes, in TiB" would be beneficial.
Subnets that can be shared with an account: L-44499CD2
Number of resource shares: L-595828F9
Security groups per network interface: L-2AFB9258
Thanks
This seems to be checking the current number of instances, when it should be checking the number of vCPUs:
$ aws-quota-checker check 'ec2_on_demand_standard_count'
AWS profile: default | AWS region: us-east-2 | Active checks: ec2_on_demand_standard_count
Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) EC2 instances [************]: 178/980 ✓
This shows 178/980, which matches neither Trusted Advisor nor Service Quotas.
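Since that quota is measured in vCPUs, the usage side should sum vCPUs rather than count instances. A sketch with a hand-built stand-in for the CpuOptions data that ec2.describe_instances returns (the instance-type comments are illustrative):

```python
# Each entry stands in for an instance's CpuOptions from
# ec2.describe_instances; real code would read those fields from the
# API response and only consider running instances.
def running_vcpus(instances):
    # vCPUs = cores × threads per core, summed over all instances
    return sum(i['CoreCount'] * i['ThreadsPerCore'] for i in instances)

instances = [
    {'CoreCount': 2, 'ThreadsPerCore': 2},  # e.g. a 4-vCPU instance
    {'CoreCount': 1, 'ThreadsPerCore': 2},  # e.g. a 2-vCPU instance
]
print(running_vcpus(instances))  # 6
```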