cool-assessment-terraform's Introduction

cool-assessment-terraform

GitHub Build Status

This project is used to create an operational assessment environment in the COOL environment.

Pre-requisites

  • Terraform installed on your system.

  • An accessible AWS S3 bucket to store Terraform state (specified in backend.tf).

  • An accessible AWS DynamoDB table to store the Terraform state lock (specified in backend.tf); a sketch of a matching backend configuration appears at the end of this list.

  • Access to all of the Terraform remote states specified in remote_states.tf.

  • Accept the terms for any AWS Marketplace subscriptions to be used by the operations instances in your assessment environment (this must be done in the AWS account hosting the assessment environment).

  • Access to AWS AMIs for Guacamole and any other operations instance types used in your assessment.

  • OpenSSL server certificate and private key for the Guacamole instance in your assessment environment, stored in an accessible AWS S3 bucket; this can be easily created via certboto-docker or a similar tool.

  • A Terraform variables file customized for your assessment environment, for example:

    assessment_account_name = "env0"
    private_domain          = "env0"
    
    vpc_cidr_block               = "10.224.0.0/21"
    operations_subnet_cidr_block = "10.224.0.0/24"
    private_subnet_cidr_blocks   = ["10.224.1.0/24", "10.224.2.0/24"]
    
    tags = {
      Team        = "VM Fusion - Development"
      Application = "COOL - env0 Account"
      Workspace   = "env0"
    }
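
For reference, here is a minimal sketch of what the backend.tf mentioned above might contain; the bucket, key, table, and region values are placeholders rather than the project's actual configuration:

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state-bucket" # placeholder
        key            = "cool-assessment-terraform/terraform.tfstate"
        dynamodb_table = "terraform-state-lock" # placeholder
        region         = "us-east-1"
        encrypt        = true
      }
    }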

Building the Terraform-based infrastructure

  1. Create a Terraform workspace (if you haven't already done so) for your assessment by running terraform workspace new <workspace_name>.

  2. Create a <workspace_name>.tfvars file with all of the required variables (see Inputs below for details).

  3. Run the command terraform init.

  4. Add all necessary permissions by running the command:

    terraform apply -var-file=<workspace_name>.tfvars --target=aws_iam_policy.provisionassessment_policy --target=aws_iam_role_policy_attachment.provisionassessment_policy_attachment

  5. Create all remaining Terraform infrastructure by running the command:

    terraform apply -var-file=<workspace_name>.tfvars

Examples

Requirements

Name Version
terraform ~> 1.5
aws ~> 4.9
cloudinit ~> 2.0
null ~> 3.0
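
These version constraints correspond to a required_providers configuration roughly like the following sketch; the repository's own versions.tf is authoritative:

    terraform {
      required_version = "~> 1.5"

      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 4.9"
        }
        cloudinit = {
          source  = "hashicorp/cloudinit"
          version = "~> 2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0"
        }
      }
    }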

Providers

Name Version
aws ~> 4.9
aws.dns_sharedservices ~> 4.9
aws.organizationsreadonly ~> 4.9
aws.parameterstorereadonly ~> 4.9
aws.provisionassessment ~> 4.9
aws.provisionparameterstorereadrole ~> 4.9
aws.provisionsharedservices ~> 4.9
cloudinit ~> 2.0
null ~> 3.0
terraform n/a

Modules

Name Source Version
cw_alarms_assessor_workbench github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_debiandesktop github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_egressassess github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_gophish github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_guacamole github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_kali github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_nessus github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_pentestportal github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_samba github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_teamserver github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_terraformer github.com/cisagov/instance-cw-alarms-tf-module n/a
cw_alarms_windows github.com/cisagov/instance-cw-alarms-tf-module n/a
email_sending_domain_certreadrole github.com/cisagov/cert-read-role-tf-module n/a
guacamole_certreadrole github.com/cisagov/cert-read-role-tf-module n/a
read_terraform_state github.com/cisagov/terraform-state-read-role-tf-module n/a
session_manager github.com/cisagov/session-manager-tf-module n/a
vpc_flow_logs trussworks/vpc-flow-logs/aws ~>2.0

Resources

Name Type
aws_default_route_table.operations resource
aws_ebs_volume.assessorworkbench_docker resource
aws_ebs_volume.gophish_docker resource
aws_ec2_transit_gateway_route.assessment_route resource
aws_ec2_transit_gateway_route_table_association.association resource
aws_ec2_transit_gateway_vpc_attachment.assessment resource
aws_efs_access_point.access_point resource
aws_efs_file_system.persistent_storage resource
aws_efs_mount_target.target resource
aws_eip.egressassess resource
aws_eip.gophish resource
aws_eip.kali resource
aws_eip.nat_gw resource
aws_eip.nessus resource
aws_eip.pentestportal resource
aws_eip.teamserver resource
aws_eip_association.egressassess resource
aws_eip_association.gophish resource
aws_eip_association.kali resource
aws_eip_association.nessus resource
aws_eip_association.pentestportal resource
aws_eip_association.teamserver resource
aws_iam_instance_profile.assessorworkbench resource
aws_iam_instance_profile.debiandesktop resource
aws_iam_instance_profile.egressassess resource
aws_iam_instance_profile.gophish resource
aws_iam_instance_profile.guacamole resource
aws_iam_instance_profile.kali resource
aws_iam_instance_profile.nessus resource
aws_iam_instance_profile.pentestportal resource
aws_iam_instance_profile.samba resource
aws_iam_instance_profile.teamserver resource
aws_iam_instance_profile.terraformer resource
aws_iam_instance_profile.windows resource
aws_iam_policy.efs_mount_policy resource
aws_iam_policy.guacamole_parameterstorereadonly_policy resource
aws_iam_policy.nessus_parameterstorereadonly_policy resource
aws_iam_policy.provisionassessment_policy resource
aws_iam_policy.provisionssmsessionmanager_policy resource
aws_iam_policy.terraformer_permissions_boundary_policy resource
aws_iam_policy.terraformer_policy resource
aws_iam_role.assessorworkbench_instance_role resource
aws_iam_role.debiandesktop_instance_role resource
aws_iam_role.egressassess_instance_role resource
aws_iam_role.gophish_instance_role resource
aws_iam_role.guacamole_instance_role resource
aws_iam_role.guacamole_parameterstorereadonly_role resource
aws_iam_role.kali_instance_role resource
aws_iam_role.nessus_instance_role resource
aws_iam_role.nessus_parameterstorereadonly_role resource
aws_iam_role.pentestportal_instance_role resource
aws_iam_role.samba_instance_role resource
aws_iam_role.teamserver_instance_role resource
aws_iam_role.terraformer_instance_role resource
aws_iam_role.terraformer_role resource
aws_iam_role.windows_instance_role resource
aws_iam_role_policy.gophish_assume_delegated_role_policy resource
aws_iam_role_policy.guacamole_assume_delegated_role_policy resource
aws_iam_role_policy.kali_assume_delegated_role_policy resource
aws_iam_role_policy.nessus_assume_delegated_role_policy resource
aws_iam_role_policy.teamserver_assume_delegated_role_policy resource
aws_iam_role_policy.terraformer_assume_delegated_role_policy resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_assessorworkbench resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_debiandesktop resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_egressassess resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_gophish resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_guacamole resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_kali resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_nessus resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_pentestportal resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_samba resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_teamserver resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_terraformer resource
aws_iam_role_policy_attachment.cloudwatch_agent_policy_attachment_windows resource
aws_iam_role_policy_attachment.ec2_read_only_policy_attachment_guacamole resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_assessorworkbench resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_debiandesktop resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_gophish resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_kali resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_pentestportal resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_samba resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_teamserver resource
aws_iam_role_policy_attachment.efs_mount_policy_attachment_terraformer resource
aws_iam_role_policy_attachment.guacamole_parameterstorereadonly_policy_attachment resource
aws_iam_role_policy_attachment.nessus_parameterstorereadonly_policy_attachment resource
aws_iam_role_policy_attachment.provisionassessment_policy_attachment resource
aws_iam_role_policy_attachment.provisionssmsessionmanager_policy_attachment resource
aws_iam_role_policy_attachment.read_only_policy_attachment resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_assessorworkbench resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_debiandesktop resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_egressassess resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_gophish resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_guacamole resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_kali resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_nessus resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_pentestportal resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_samba resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_teamserver resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_terraformer resource
aws_iam_role_policy_attachment.ssm_agent_policy_attachment_windows resource
aws_iam_role_policy_attachment.terraformer_policy_attachment resource
aws_instance.assessorworkbench resource
aws_instance.debiandesktop resource
aws_instance.egressassess resource
aws_instance.gophish resource
aws_instance.guacamole resource
aws_instance.kali resource
aws_instance.nessus resource
aws_instance.pentestportal resource
aws_instance.samba resource
aws_instance.teamserver resource
aws_instance.terraformer resource
aws_instance.windows resource
aws_internet_gateway.assessment resource
aws_nat_gateway.nat_gw resource
aws_network_acl.operations resource
aws_network_acl.private resource
aws_network_acl_rule.operations_egress_to_anywhere_via_any_port resource
aws_network_acl_rule.operations_ingress_from_anywhere_else_vnc resource
aws_network_acl_rule.operations_ingress_from_anywhere_else_winrm resource
aws_network_acl_rule.operations_ingress_from_anywhere_via_allowed_ports resource
aws_network_acl_rule.operations_ingress_from_anywhere_via_icmp resource
aws_network_acl_rule.operations_ingress_from_anywhere_via_ports_1024_thru_3388 resource
aws_network_acl_rule.operations_ingress_from_anywhere_via_ports_3390_thru_50049 resource
aws_network_acl_rule.operations_ingress_from_anywhere_via_ports_50051_thru_65535 resource
aws_network_acl_rule.operations_ingress_from_private_via_http resource
aws_network_acl_rule.operations_ingress_from_private_via_https resource
aws_network_acl_rule.operations_ingress_from_private_via_ssh resource
aws_network_acl_rule.operations_ingress_from_private_via_vnc resource
aws_network_acl_rule.operations_ingress_from_private_via_winrm resource
aws_network_acl_rule.private_egress_to_anywhere_via_http resource
aws_network_acl_rule.private_egress_to_anywhere_via_https resource
aws_network_acl_rule.private_egress_to_anywhere_via_ssh resource
aws_network_acl_rule.private_egress_to_cool_via_ipa_ports resource
aws_network_acl_rule.private_egress_to_cool_via_tcp_ephemeral_ports resource
aws_network_acl_rule.private_egress_to_cool_via_udp_ephemeral_ports resource
aws_network_acl_rule.private_egress_to_local_vm_ips_via_all_ports resource
aws_network_acl_rule.private_egress_to_operations_via_ephemeral_ports resource
aws_network_acl_rule.private_egress_to_operations_via_ssh resource
aws_network_acl_rule.private_egress_to_operations_via_vnc resource
aws_network_acl_rule.private_egress_to_operations_via_winrm resource
aws_network_acl_rule.private_ingress_from_anywhere_else_efs resource
aws_network_acl_rule.private_ingress_from_anywhere_else_services resource
aws_network_acl_rule.private_ingress_from_anywhere_via_tcp_ephemeral_ports resource
aws_network_acl_rule.private_ingress_from_cool_vpn_services resource
aws_network_acl_rule.private_ingress_from_local_vm_ips_via_all_ports resource
aws_network_acl_rule.private_ingress_from_operations_efs resource
aws_network_acl_rule.private_ingress_from_operations_mattermost_web resource
aws_network_acl_rule.private_ingress_from_operations_smb resource
aws_network_acl_rule.private_ingress_from_operations_via_https resource
aws_network_acl_rule.private_ingress_to_tg_attachment_via_ipa_ports resource
aws_network_acl_rule.private_ingress_to_tg_attachment_via_udp_ephemeral_ports resource
aws_route.cool_operations resource
aws_route.cool_private resource
aws_route.external_operations resource
aws_route.external_private resource
aws_route53_record.assessorworkbench_A resource
aws_route53_record.debiandesktop_A resource
aws_route53_record.egressassess_A resource
aws_route53_record.gophish_A resource
aws_route53_record.guacamole_A resource
aws_route53_record.guacamole_PTR resource
aws_route53_record.kali_A resource
aws_route53_record.nessus_A resource
aws_route53_record.pentestportal_A resource
aws_route53_record.samba_A resource
aws_route53_record.teamserver_A resource
aws_route53_record.terraformer_A resource
aws_route53_record.windows_A resource
aws_route53_vpc_association_authorization.assessment_private resource
aws_route53_zone.assessment_private resource
aws_route53_zone.private_subnet_reverse resource
aws_route53_zone_association.assessment_private resource
aws_route_table.private_route_table resource
aws_route_table_association.private_route_table_associations resource
aws_security_group.assessorworkbench resource
aws_security_group.cloudwatch_agent_endpoint resource
aws_security_group.cloudwatch_agent_endpoint_client resource
aws_security_group.debiandesktop resource
aws_security_group.dynamodb_endpoint_client resource
aws_security_group.ec2_endpoint resource
aws_security_group.ec2_endpoint_client resource
aws_security_group.efs_client resource
aws_security_group.efs_mount_target resource
aws_security_group.egressassess resource
aws_security_group.gophish resource
aws_security_group.guacamole resource
aws_security_group.guacamole_accessible resource
aws_security_group.kali resource
aws_security_group.nessus resource
aws_security_group.pentestportal resource
aws_security_group.s3_endpoint_client resource
aws_security_group.scanner resource
aws_security_group.smb_client resource
aws_security_group.smb_server resource
aws_security_group.ssm_agent_endpoint resource
aws_security_group.ssm_agent_endpoint_client resource
aws_security_group.ssm_endpoint resource
aws_security_group.ssm_endpoint_client resource
aws_security_group.sts_endpoint resource
aws_security_group.sts_endpoint_client resource
aws_security_group.teamserver resource
aws_security_group.terraformer resource
aws_security_group.windows resource
aws_security_group_rule.allow_nfs_inbound resource
aws_security_group_rule.allow_nfs_outbound resource
aws_security_group_rule.assessorworkbench_egress_to_anywhere_via_http_and_https resource
aws_security_group_rule.debiandesktop_egress_to_anywhere_via_http_and_https resource
aws_security_group_rule.debiandesktop_egress_to_nessus_via_web_ui resource
aws_security_group_rule.egress_from_cloudwatch_agent_endpoint_client_to_cloudwatch_agent_endpoint_via_https resource
aws_security_group_rule.egress_from_ec2_endpoint_client_to_ec2_endpoint_via_https resource
aws_security_group_rule.egress_from_ssm_agent_endpoint_client_to_ssm_agent_endpoint_via_https resource
aws_security_group_rule.egress_from_ssm_endpoint_client_to_ssm_endpoint_via_https resource
aws_security_group_rule.egress_from_sts_endpoint_client_to_sts_endpoint_via_https resource
aws_security_group_rule.egress_to_dynamodb_endpoint_via_https resource
aws_security_group_rule.egress_to_s3_endpoint_via_https resource
aws_security_group_rule.egressassess_egress_to_anywhere_via_http_and_https resource
aws_security_group_rule.guacamole_egress_to_cool_via_ipa_ports resource
aws_security_group_rule.guacamole_egress_to_hosts_via_ssh resource
aws_security_group_rule.guacamole_egress_to_hosts_via_vnc resource
aws_security_group_rule.guacamole_ingress_from_trusted_via_https resource
aws_security_group_rule.ingress_from_anywhere_to_assessorworkbench_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_debiandesktop_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_egressassess_via_all_icmp resource
aws_security_group_rule.ingress_from_anywhere_to_egressassess_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_gophish_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_kali_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_nessus_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_pentestportal_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_teamserver_via_allowed_ports resource
aws_security_group_rule.ingress_from_anywhere_to_windows_via_allowed_ports resource
aws_security_group_rule.ingress_from_cloudwatch_agent_endpoint_client_to_cloudwatch_agent_endpoint_via_https resource
aws_security_group_rule.ingress_from_ec2_endpoint_client_to_ec2_endpoint_via_https resource
aws_security_group_rule.ingress_from_guacamole_via_ssh resource
aws_security_group_rule.ingress_from_guacamole_via_vnc resource
aws_security_group_rule.ingress_from_kali_via_ssh resource
aws_security_group_rule.ingress_from_ssm_agent_endpoint_client_to_ssm_agent_endpoint_via_https resource
aws_security_group_rule.ingress_from_ssm_endpoint_client_to_ssm_endpoint_via_https resource
aws_security_group_rule.ingress_from_sts_endpoint_client_to_sts_endpoint_via_https resource
aws_security_group_rule.ingress_from_teamserver_to_gophish_via_ssh_and_smtp resource
aws_security_group_rule.kali_egress_to_gophish_via_ssh resource
aws_security_group_rule.kali_egress_to_nessus_via_web_ui resource
aws_security_group_rule.kali_egress_to_pentestportal_via_ssh_and_web resource
aws_security_group_rule.kali_egress_to_teamserver_instances_via_5000_to_5999 resource
aws_security_group_rule.kali_egress_to_teamserver_via_ssh_imaps_and_cs resource
aws_security_group_rule.kali_egress_to_windows_instances resource
aws_security_group_rule.kali_ingress_from_teamserver_instances_via_5000_to_5999 resource
aws_security_group_rule.kali_ingress_from_windows_instances resource
aws_security_group_rule.kali_to_kali_via_ssh resource
aws_security_group_rule.nessus_ingress_from_debiandesktop_via_web_ui resource
aws_security_group_rule.nessus_ingress_from_kali_via_web_ui resource
aws_security_group_rule.nessus_ingress_from_windows_via_web_ui resource
aws_security_group_rule.pentestportal_egress_to_anywhere_via_http_and_https resource
aws_security_group_rule.pentestportal_ingress_from_kali_via_ssh_and_web resource
aws_security_group_rule.pentestportal_ingress_from_windows_via_web resource
aws_security_group_rule.scanner_egress_to_anywhere_via_any_port resource
aws_security_group_rule.scanner_ingress_from_anywhere_via_icmp resource
aws_security_group_rule.smb_client_egress_to_smb_server resource
aws_security_group_rule.smb_server_ingress_from_smb_client resource
aws_security_group_rule.teamserver_egress_to_gophish_via_ssh_and_smtp resource
aws_security_group_rule.teamserver_egress_to_kali_instances_via_5000_to_5999 resource
aws_security_group_rule.teamserver_egress_to_windows_instances_via_5000_to_5999_tcp resource
aws_security_group_rule.teamserver_ingress_from_kali_instances_via_5000_to_5999 resource
aws_security_group_rule.teamserver_ingress_from_kali_via_ssh_imaps_and_cs resource
aws_security_group_rule.teamserver_ingress_from_windows_instances_via_5000_to_5999_tcp resource
aws_security_group_rule.terraformer_egress_anywhere_via_http resource
aws_security_group_rule.terraformer_egress_anywhere_via_https resource
aws_security_group_rule.terraformer_egress_anywhere_via_ssh resource
aws_security_group_rule.terraformer_egress_to_operations_via_winrm resource
aws_security_group_rule.windows_egress_to_kali_instances resource
aws_security_group_rule.windows_egress_to_nessus_via_web_ui resource
aws_security_group_rule.windows_egress_to_pentestportal_via_web resource
aws_security_group_rule.windows_egress_to_teamserver_instances_via_5000_to_5999_tcp resource
aws_security_group_rule.windows_ingress_from_kali_instances resource
aws_security_group_rule.windows_ingress_from_teamserver_instances_via_5000_to_5999_tcp resource
aws_subnet.operations resource
aws_subnet.private resource
aws_volume_attachment.assessorworkbench_docker resource
aws_volume_attachment.gophish_docker resource
aws_vpc.assessment resource
aws_vpc_dhcp_options.assessment resource
aws_vpc_dhcp_options_association.assessment resource
aws_vpc_endpoint.dynamodb resource
aws_vpc_endpoint.ec2 resource
aws_vpc_endpoint.ec2messages resource
aws_vpc_endpoint.kms resource
aws_vpc_endpoint.logs resource
aws_vpc_endpoint.monitoring resource
aws_vpc_endpoint.s3 resource
aws_vpc_endpoint.ssm resource
aws_vpc_endpoint.ssmmessages resource
aws_vpc_endpoint.sts resource
aws_vpc_endpoint_route_table_association.s3_operations resource
aws_vpc_endpoint_route_table_association.s3_private resource
aws_vpc_endpoint_subnet_association.ec2 resource
aws_vpc_endpoint_subnet_association.ec2messages resource
aws_vpc_endpoint_subnet_association.kms resource
aws_vpc_endpoint_subnet_association.logs resource
aws_vpc_endpoint_subnet_association.monitoring resource
aws_vpc_endpoint_subnet_association.ssm resource
aws_vpc_endpoint_subnet_association.ssmmessages resource
aws_vpc_endpoint_subnet_association.sts resource
null_resource.break_association_with_default_route_table resource
null_resource.validate_assessment_account_name_matches_workspace resource
null_resource.validate_assessment_artifact_export_map resource
null_resource.validate_assessment_id resource
null_resource.validate_assessment_type resource
aws_ami.assessorworkbench data source
aws_ami.debiandesktop data source
aws_ami.docker data source
aws_ami.egressassess data source
aws_ami.gophish data source
aws_ami.guacamole data source
aws_ami.kali data source
aws_ami.nessus data source
aws_ami.samba data source
aws_ami.teamserver data source
aws_ami.terraformer data source
aws_ami.windows data source
aws_caller_identity.assessment data source
aws_caller_identity.current data source
aws_default_tags.assessment data source
aws_iam_policy_document.ec2_service_assume_role_doc data source
aws_iam_policy_document.efs_mount_policy_doc data source
aws_iam_policy_document.gophish_assume_delegated_role_policy_doc data source
aws_iam_policy_document.guacamole_assume_delegated_role_policy_doc data source
aws_iam_policy_document.guacamole_parameterstore_assume_role_doc data source
aws_iam_policy_document.guacamole_parameterstorereadonly_doc data source
aws_iam_policy_document.kali_assume_delegated_role_policy_doc data source
aws_iam_policy_document.nessus_assume_delegated_role_policy_doc data source
aws_iam_policy_document.nessus_assume_role_doc data source
aws_iam_policy_document.nessus_parameterstorereadonly_doc data source
aws_iam_policy_document.provisionassessment_policy_doc data source
aws_iam_policy_document.provisionssmsessionmanager_policy_doc data source
aws_iam_policy_document.teamserver_assume_delegated_role_policy_doc data source
aws_iam_policy_document.terraformer_assume_delegated_role_policy_doc data source
aws_iam_policy_document.terraformer_assume_role_doc data source
aws_iam_policy_document.terraformer_permissions_boundary_policy_doc data source
aws_iam_policy_document.terraformer_policy_doc data source
aws_iam_policy_document.users_account_assume_role_doc data source
aws_organizations_organization.cool data source
aws_ssm_parameter.artifact_export_access_key_id data source
aws_ssm_parameter.artifact_export_bucket_name data source
aws_ssm_parameter.artifact_export_region data source
aws_ssm_parameter.artifact_export_secret_access_key data source
aws_ssm_parameter.samba_username data source
aws_ssm_parameter.vnc_public_ssh_key data source
aws_ssm_parameter.vnc_username data source
cloudinit_config.assessorworkbench_cloud_init_tasks data source
cloudinit_config.debiandesktop_cloud_init_tasks data source
cloudinit_config.egressassess_cloud_init_tasks data source
cloudinit_config.gophish_cloud_init_tasks data source
cloudinit_config.guacamole_cloud_init_tasks data source
cloudinit_config.kali_cloud_init_tasks data source
cloudinit_config.nessus_cloud_init_tasks data source
cloudinit_config.pentestportal_cloud_init_tasks data source
cloudinit_config.samba_cloud_init_tasks data source
cloudinit_config.teamserver_cloud_init_tasks data source
cloudinit_config.terraformer_cloud_init_tasks data source
terraform_remote_state.dns_certboto data source
terraform_remote_state.dynamic_assessment data source
terraform_remote_state.images data source
terraform_remote_state.images_parameterstore data source
terraform_remote_state.master data source
terraform_remote_state.sharedservices data source
terraform_remote_state.sharedservices_networking data source
terraform_remote_state.terraform data source

Inputs

Name Description Type Default Required
assessment_account_name The name of the AWS account for this assessment (e.g. "env0"). string n/a yes
assessment_artifact_export_enabled Whether or not to enable the export of assessment artifacts to an S3 bucket. If this is set to true, then the following variables should also be configured appropriately: assessment_artifact_export_map, ssm_key_artifact_export_access_key_id, ssm_key_artifact_export_secret_access_key, ssm_key_artifact_export_bucket_name, and ssm_key_artifact_export_region. bool false no
assessment_artifact_export_map A map whose keys are assessment types and whose values are the prefixes for what an assessment artifact will be named when it is exported to the S3 bucket contained in the SSM parameter specified by the ssm_key_artifact_export_bucket_name variable (e.g. { "PenTest" : "pentest/PT", "Phishing" : "phishing/PH", "RedTeam" : "redteam/RT" }). Note that prefixes can include a path within the bucket. For example, if the prefix is "pentest/PT" and the assessment ID is "ASMT1234", then the corresponding artifact will be exported to "bucket-name/pentest/PT-ASMT1234.tgz" when the archive-artifact-data-to-bucket.sh script is run. map(string) {} no
assessment_id The identifier for this assessment (e.g. "ASMT1234"). string "" no
assessment_type The type of this assessment (e.g. "PenTest"). string "" no
assessmentfindingsbucketwrite_sharedservices_policy_description The description to associate with the IAM policy that allows assumption of the role in the Shared Services account that is allowed to write to the assessment findings bucket. string "Allows assumption of the role in the Shared Services account that is allowed to write to the assessment findings bucket." no
assessmentfindingsbucketwrite_sharedservices_policy_name The name to assign the IAM policy that allows assumption of the role in the Shared Services account that is allowed to write to the assessment findings bucket. string "SharedServices-AssumeAssessmentFindingsBucketWrite" no
assessor_account_role_arn The ARN of an IAM role that can be assumed to create, delete, and modify AWS resources in a separate assessor-owned AWS account. string "arn:aws:iam::123456789012:role/Allow_It" no
aws_availability_zone The AWS availability zone to deploy into (e.g. a, b, c, etc.). string "a" no
aws_region The AWS region where the non-global resources for this assessment are to be provisioned (e.g. "us-east-1"). string "us-east-1" no
cert_bucket_name The name of the AWS S3 bucket where certificates are stored. string "cisa-cool-certificates" no
cool_domain The domain where the COOL resources reside (e.g. "cool.cyber.dhs.gov"). string "cool.cyber.dhs.gov" no
dns_ttl The TTL value to use for Route53 DNS records (e.g. 86400). A smaller value may be useful when the DNS records are changing often, for example when testing. number 60 no
efs_access_point_gid The group ID that should be used for file-system access to the EFS share (e.g. 2048). Note that this value should match the GID of any group given ownership of the EFS share mount point. number 2048 no
efs_access_point_root_directory The non-root path to use as the root directory for the AWS EFS access point that controls EFS access for assessment data sharing. string "/assessment_share" no
efs_access_point_uid The user ID that should be used for file-system access to the EFS share (e.g. 2048). Note that this value should match the UID of any user given ownership of the EFS share mount point. number 2048 no
efs_users_group_name The name of the POSIX group that should have ownership of a mounted EFS share (e.g. "efs_users"). string "efs_users" no
email_sending_domains The list of domains to send emails from within the assessment environment (e.g. [ "example.com" ]). Teamserver and Gophish instances will be deployed with each sequential domain in the list, so teamserver0 and gophish0 will get the first domain, teamserver1 and gophish1 will get the second domain, and so on. If there are more Teamserver or Gophish instances than email-sending domains, the domains in the list will be reused in a wrap-around fashion. For example, if there are three Teamservers and only two email-sending domains, teamserver0 will get the first domain, teamserver1 will get the second domain, and teamserver2 will wrap-around back to using the first domain. Note that all letters in this variable must be lowercase or else an error will be displayed. list(string) [ "example.com" ] no
findings_data_bucket_name The name of the AWS S3 bucket where findings data is to be written. The default value is not a valid string for a bucket name, so findings data cannot be written to any bucket unless a value is specified. string "" no
guac_connection_setup_path The full path to the dbinit directory where initialization files must be stored in order to work properly (e.g. "/var/guacamole/dbinit"). string "/var/guacamole/dbinit" no
inbound_ports_allowed An object specifying the ports allowed inbound (from anywhere) to the various instance types (e.g. {"assessorworkbench" : [], "debiandesktop" : [], "egressassess" : [], "gophish" : [], "kali": [{"protocol": "tcp", "from_port": 443, "to_port": 443}, {"protocol": "tcp", "from_port": 9000, "to_port": 9009}], "nessus" : [], "pentestportal" : [], "samba" : [], "teamserver" : [], "terraformer" : [], "windows" : [], }). object({ assessorworkbench = list(object({ protocol = string, from_port = number, to_port = number })), debiandesktop = list(object({ protocol = string, from_port = number, to_port = number })), egressassess = list(object({ protocol = string, from_port = number, to_port = number })), gophish = list(object({ protocol = string, from_port = number, to_port = number })), kali = list(object({ protocol = string, from_port = number, to_port = number })), nessus = list(object({ protocol = string, from_port = number, to_port = number })), pentestportal = list(object({ protocol = string, from_port = number, to_port = number })), samba = list(object({ protocol = string, from_port = number, to_port = number })), teamserver = list(object({ protocol = string, from_port = number, to_port = number })), terraformer = list(object({ protocol = string, from_port = number, to_port = number })), windows = list(object({ protocol = string, from_port = number, to_port = number })), }) { "assessorworkbench": [], "debiandesktop": [], "egressassess": [], "gophish": [], "kali": [], "nessus": [], "pentestportal": [], "samba": [], "teamserver": [], "terraformer": [], "windows": [] } no
nessus_activation_codes The list of Nessus activation codes (e.g. ["AAAA-BBBB-CCCC-DDDD"]). The number of codes in this list should match the number of Nessus instances defined in operations_instance_counts. list(string) [] no
nessus_web_server_port The port on which the Nessus web server should listen (e.g. 8834). number 8834 no
operations_instance_counts A map specifying how many instances of each type should be created in the operations subnet (e.g. { "assessorworkbench" : 0, "debiandesktop" : 0, "egressassess" : 0,"gophish" : 0, "kali": 1, "nessus" : 0, "pentestportal" : 0, "samba" : 0, "teamserver" : 0, "terraformer" : 0, "windows" : 1, }). object({ assessorworkbench = number, debiandesktop = number, egressassess = number, gophish = number, kali = number, nessus = number, pentestportal = number, samba = number, teamserver = number, terraformer = number, windows = number }) { "assessorworkbench": 0, "debiandesktop": 0, "egressassess": 0, "gophish": 0, "kali": 1, "nessus": 0, "pentestportal": 0, "samba": 0, "teamserver": 0, "terraformer": 0, "windows": 1 } no
operations_subnet_cidr_block The operations subnet CIDR block for this assessment (e.g. "10.10.0.0/24"). string n/a yes
private_domain The local domain to use for this assessment (e.g. "env0"). If not provided, local.private_domain will be set to the base of the assessment account name. For example, if the account name is "env0 (Staging)", local.private_domain will default to "env0". Note that local.private_domain should be used in place of var.private_domain throughout this project. string "" no
private_subnet_cidr_blocks The list of private subnet CIDR blocks for this assessment (e.g. ["10.10.1.0/24", "10.10.2.0/24"]). list(string) n/a yes
provisionaccount_role_name The name of the IAM role that allows sufficient permissions to provision all AWS resources in the assessment account. string "ProvisionAccount" no
provisionassessment_policy_description The description to associate with the IAM policy that allows provisioning of the resources required in the assessment account. string "Allows provisioning of the resources required in the assessment account." no
provisionassessment_policy_name The name to assign the IAM policy that allows provisioning of the resources required in the assessment account. string "ProvisionAssessment" no
provisionssmsessionmanager_policy_description The description to associate with the IAM policy that allows sufficient permissions to provision the SSM Document resource and set up SSM session logging in this assessment account. string "Allows sufficient permissions to provision the SSM Document resource and set up SSM session logging in this assessment account." no
provisionssmsessionmanager_policy_name The name to assign the IAM policy that allows sufficient permissions to provision the SSM Document resource and set up SSM session logging in this assessment account. string "ProvisionSSMSessionManager" no
publish_egress_ip_addresses A boolean value that specifies whether EC2 instances in the operations subnet should be tagged to indicate that their public IP addresses may be published. This is useful for deconfliction purposes. Publishing these addresses can be done via the code in cisagov/publish-egress-ip-lambda and cisagov/publish-egress-ip-terraform. bool false no
read_terraform_state_role_name The name to assign the IAM role (as well as the corresponding policy) that allows read-only access to the cool-assessment-terraform state in the S3 bucket where Terraform state is stored. The %s in this name will be replaced by the value of the assessment_account_name variable. string "ReadCoolAssessmentTerraformTerraformState-%s" no
session_cloudwatch_log_group_name The name of the log group into which session logs are to be uploaded. string "/ssm/session-logs" no
ssm_key_artifact_export_access_key_id The AWS SSM Parameter Store parameter that contains the AWS access key of the IAM user that can write to the assessment artifact export bucket (e.g. "/assessment_artifact_export/access_key_id"). string "/assessment_artifact_export/access_key_id" no
ssm_key_artifact_export_bucket_name The AWS SSM Parameter Store parameter that contains the name of the assessment artifact export bucket (e.g. "/assessment_artifact_export/bucket"). string "/assessment_artifact_export/bucket" no
ssm_key_artifact_export_region The AWS SSM Parameter Store parameter that contains the region of the IAM user (specified via ssm_key_artifact_export_access_key_id) that can write to the assessment artifact export bucket (e.g. "/assessment_artifact_export/region"). string "/assessment_artifact_export/region" no
ssm_key_artifact_export_secret_access_key The AWS SSM Parameter Store parameter that contains the AWS secret access key of the IAM user that can write to the assessment artifact export bucket (e.g. "/assessment_artifact_export/secret_access_key"). string "/assessment_artifact_export/secret_access_key" no
ssm_key_nessus_admin_password The AWS SSM Parameter Store parameter that contains the password of the Nessus admin user (e.g. "/nessus/assessment/admin_password"). string "/nessus/assessment/admin_password" no
ssm_key_nessus_admin_username The AWS SSM Parameter Store parameter that contains the username of the Nessus admin user (e.g. "/nessus/assessment/admin_username"). string "/nessus/assessment/admin_username" no
ssm_key_samba_username The AWS SSM Parameter Store parameter that contains the username of the Samba user (e.g. "/samba/username"). string "/samba/username" no
ssm_key_vnc_ssh_public_key The AWS SSM Parameter Store parameter that contains the SSH public key that corresponds to the private SSH key of the VNC user (e.g. "/vnc/ssh/ed25519_public_key"). string "/vnc/ssh/ed25519_public_key" no
ssm_key_vnc_username The AWS SSM Parameter Store parameter that contains the username of the VNC user (e.g. "/vnc/username"). string "/vnc/username" no
tags Tags to apply to all AWS resources created map(string) {} no
terraformer_permissions_boundary_policy_description The description to associate with the IAM permissions boundary policy attached to the Terraformer instance role in order to protect the foundational resources deployed in this account. string "The IAM permissions boundary policy attached to the Terraformer instance role in order to protect the foundational resources deployed in this account." no
terraformer_permissions_boundary_policy_name The name to assign the IAM permissions boundary policy attached to the Terraformer instance role in order to protect the foundational resources deployed in this account. string "TerraformerPermissionsBoundary" no
terraformer_role_description The description to associate with the IAM role (and policy) that allows Terraformer instances to create appropriate AWS resources in this account. string "Allows Terraformer instances to create appropriate AWS resources in this account." no
terraformer_role_name The name to assign the IAM role (and policy) that allows Terraformer instances to create appropriate AWS resources in this account. string "Terraformer" no
valid_assessment_id_regex A regular expression that specifies valid assessment identifiers (e.g. "^ASMT[[:digit:]]{4}$"). string "" no
valid_assessment_types A list of valid assessment types (e.g. ["PenTest", "Phishing", "RedTeam"]). If this list is empty (i.e. []), then any value used for assessment_type will trigger a validation error. list(string) [ "" ] no
vpc_cidr_block The CIDR block to use for this assessment's VPC (e.g. "10.224.0.0/21"). string n/a yes
windows_with_docker A boolean to control the instance type used when creating Windows instances to allow Docker Desktop support. Windows instances require the metal instance type to run Docker Desktop because of nested virtualization, but if Docker Desktop is not needed then other instance types are fine. bool false no
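
As a further illustration of the object-typed inputs above, an excerpt from a hypothetical <workspace_name>.tfvars might look like this (values are examples only, not recommendations):

    assessment_id   = "ASMT1234"
    assessment_type = "PenTest"

    operations_instance_counts = {
      "assessorworkbench" : 0,
      "debiandesktop" : 0,
      "egressassess" : 0,
      "gophish" : 0,
      "kali" : 1,
      "nessus" : 0,
      "pentestportal" : 0,
      "samba" : 0,
      "teamserver" : 1,
      "terraformer" : 0,
      "windows" : 1,
    }

    inbound_ports_allowed = {
      "assessorworkbench" : [],
      "debiandesktop" : [],
      "egressassess" : [],
      "gophish" : [],
      "kali" : [{ "protocol" : "tcp", "from_port" : 443, "to_port" : 443 }],
      "nessus" : [],
      "pentestportal" : [],
      "samba" : [],
      "teamserver" : [],
      "terraformer" : [],
      "windows" : [],
    }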

Outputs

Name Description
assessment_private_zone The private DNS zone for this assessment.
assessor_workbench_instance_profile The instance profile for the Assessor Workbench instances.
assessor_workbench_instances The Assessor Workbench instances.
assessor_workbench_security_group The security group for the Assessor Workbench instances.
aws_region The AWS region where this assessment environment lives.
certificate_bucket_name The name of the S3 bucket where certificate information is stored for this assessment.
cloudwatch_agent_endpoint_client_security_group A security group for any instances that run the AWS CloudWatch agent. This security group allows such instances to communicate with the VPC endpoints that are required by the AWS CloudWatch agent.
debian_desktop_instance_profile The instance profile for the Debian desktop instances.
debian_desktop_instances The Debian desktop instances.
debian_desktop_security_group The security group for the Debian desktop instances.
dynamodb_endpoint_client_security_group A security group for any instances that wish to communicate with the DynamoDB VPC endpoint.
ec2_endpoint_client_security_group A security group for any instances that wish to communicate with the EC2 VPC endpoint.
efs_access_points The access points to control file-system access to the EFS file share.
efs_client_security_group A security group that should be applied to all instances that will mount the EFS file share.
efs_mount_targets The mount targets for the EFS file share.
egressassess_instance_profile The instance profile for the Egress-Assess instances.
egressassess_instances The Egress-Assess instances.
egressassess_security_group The security group for the Egress-Assess instances.
email_sending_domain_certreadroles The IAM roles that allow for reading the certificate for each email-sending domain.
gophish_instance_profiles The instance profiles for the Gophish instances.
gophish_instances The Gophish instances.
gophish_security_group The security group for the Gophish instances.
guacamole_accessible_security_group A security group that should be applied to all instances that are to be accessible from Guacamole.
guacamole_server The AWS EC2 instance hosting Guacamole.
kali_instance_profile The instance profile for the Kali instances.
kali_instances The Kali instances.
kali_security_group The security group for the Kali instances.
nessus_instance_profile The instance profile for the Nessus instances.
nessus_instances The Nessus instances.
nessus_security_group The security group for the Nessus instances.
operations_subnet The operations subnet.
operations_subnet_acl The access control list (ACL) for the operations subnet.
pentest_portal_instance_profile The instance profile for the Pentest Portal instances.
pentest_portal_instances The Pentest Portal instances.
pentest_portal_security_group The security group for the Pentest Portal instances.
private_subnet_acls The access control lists (ACLs) for the private subnets.
private_subnet_cidr_blocks The private subnet CIDR blocks. These are used to index into the private_subnets and efs_mount_targets outputs.
private_subnet_nat_gateway The NAT gateway for the private subnets.
private_subnets The private subnets.
read_terraform_state_module The IAM policies and role that allow read-only access to the cool-assessment-terraform workspace-specific state in the Terraform state bucket.
remote_desktop_url The URL of the remote desktop gateway (Guacamole) for this assessment.
s3_endpoint_client_security_group A security group for any instances that wish to communicate with the S3 VPC endpoint.
samba_client_security_group The security group that should be applied to all instance types that wish to mount the Samba file share being served by the Samba file share server instances.
samba_instance_profile The instance profile for the Samba file share server instances.
samba_instances The Samba file share server instances.
samba_server_security_group The security group for the Samba file share server instances.
scanner_security_group A security group that should be applied to all instance types that perform scanning. This security group allows egress to anywhere as well as ingress from anywhere via ICMP.
ssm_agent_endpoint_client_security_group A security group for any instances that run the AWS SSM agent. This security group allows such instances to communicate with the VPC endpoints that are required by the AWS SSM agent.
ssm_endpoint_client_security_group A security group for any instances that wish to communicate with the SSM VPC endpoint.
ssm_session_role An IAM role that allows creation of SSM SessionManager sessions to any EC2 instance in this account.
sts_endpoint_client_security_group A security group for any instances that wish to communicate with the STS VPC endpoint.
teamserver_instance_profiles The instance profiles for the Teamserver instances.
teamserver_instances The Teamserver instances.
teamserver_security_group The security group for the Teamserver instances.
terraformer_instances The Terraformer instances.
terraformer_permissions_boundary_policy The permissions boundary policy for the Terraformer instances.
terraformer_security_group The security group for the Terraformer instances.
vpc The VPC for this assessment environment.
vpn_server_cidr_block The CIDR block for the COOL VPN.
windows_instance_profile The instance profile for the Windows instances.
windows_instances The Windows instances.
windows_security_group The security group for the Windows instances.
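
Other Terraform code can consume these outputs via the read-only state role and a terraform_remote_state data source; in the sketch below, the bucket, key, and workspace names are placeholders:

    data "terraform_remote_state" "assessment" {
      backend   = "s3"
      workspace = "env0-production" # placeholder workspace name

      config = {
        bucket = "my-terraform-state-bucket" # placeholder
        key    = "cool-assessment-terraform/terraform.tfstate"
        region = "us-east-1"
      }
    }

    # Example reference to an output from the table above:
    #   data.terraform_remote_state.assessment.outputs.vpc.id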

Notes

Running pre-commit requires running terraform init in every directory that contains Terraform code. In this repository, this is only the main directory.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

License

This project is in the worldwide public domain.

This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.

All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.

cool-assessment-terraform's People

Contributors

arcsector, dav3r, dependabot[bot], felddy, fleixius, hillaryj, jasonodoom, jmorrowomni, jsf9k, m1j09830, mcdonnnj, michaelsaki

cool-assessment-terraform's Issues

Fetch postfix TLS certificate at Gophish instance startup

💡 Summary

Gophish instances (which contain a Dockerized postfix container) should fetch the TLS certificate for their specified domain from our S3 certificates bucket (if available) and place it in the appropriate location for postfix to use it.

Motivation and context

This will allow postfix to run with a valid and appropriate certificate for the instance. It is also one of the acceptance criteria specified in https://github.com/cisagov/cool-system/issues/90.

Implementation notes

This issue does NOT include figuring out how the certificate is created and put in our S3 bucket. Initially, that will be a manual process that we want to automate in the future.

Acceptance criteria

  • When a Gophish instance is deployed, the fullchain.pem and privkey.pem files for the appropriate TLS certificate (i.e. the one that corresponds to the email_sending_domain Terraform variable) are deployed to /var/pca/pca-gophish-composition/secrets/postfix/ before postfix starts up.
  • A mail client (e.g. Thunderbird) is able to securely connect to postfix without encountering any security exceptions (assuming that the TLS certificate is valid and unexpired).
  • If a certificate for the specified domain is NOT found in S3, a warning is logged and the default certificate data (from cisagov/postfix-docker) is untouched.

Support multiple email-sending domains per assessment

💡 Summary

Enable an assessment to have more than one email-sending domain.

Motivation and context

Some types of assessments require multiple instances (e.g. Teamservers), where each instance sends email from a different domain. Currently, our email_sending_domain variable only supports a single domain.

Implementation notes

Convert the email_sending_domain variable from a string to a list.
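
A minimal sketch of the converted variable, assuming the all-lowercase requirement and the wrap-around assignment later documented for email_sending_domains in the Inputs section; the exact validation wording is illustrative:

    variable "email_sending_domains" {
      type        = list(string)
      description = "The list of domains to send emails from within the assessment environment."
      default     = ["example.com"]

      validation {
        condition     = alltrue([for d in var.email_sending_domains : d == lower(d)])
        error_message = "All email-sending domains must be lowercase."
      }
    }

    # Instances that send mail can then pick a domain in wrap-around fashion,
    # e.g. element(var.email_sending_domains, count.index).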

Acceptance criteria

  • Terraform and cloud-init function correctly with no email-sending domains, a single Teamserver, and a single Gophish instance
  • Terraform and cloud-init function correctly with a single email-sending domain, a single Teamserver, and a single Gophish instance
  • Terraform and cloud-init function correctly with two email-sending domains, two Teamservers, and two Gophish instances
  • Documentation explains how the list of email-sending domains is applied to instances that send mail (teamserver, gophish).

Allow Teamserver instances to talk to port 587 on Gophish instances

💡 Summary

Allow Teamserver instances to talk to port 587 on Gophish instances.

Motivation and context

This change will enable mail to be sent from Cobalt Strike (on the Teamservers) via the SMTP mail-submission port published by the Postfix Docker container (587) on the Gophish instances.

Implementation notes

None

Acceptance criteria

  • Any Teamserver in an assessment environment can send mail via Postfix running on any Gophish instance in the same environment.
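
For illustration, a rule of this kind might look like the sketch below; the names mirror the pattern used elsewhere in this project (the resource list above includes teamserver_egress_to_gophish_via_ssh_and_smtp) but are not copied from it:

    resource "aws_security_group_rule" "teamserver_egress_to_gophish_smtp_submission" {
      security_group_id        = aws_security_group.teamserver.id
      type                     = "egress"
      protocol                 = "tcp"
      from_port                = 587 # SMTP mail-submission port
      to_port                  = 587
      source_security_group_id = aws_security_group.gophish.id
    }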

PCA Teamserver Elastic IP

@bjb28 commented on Thu May 14 2020

🚀 Feature Proposal

Request to have an Elastic IP assigned to the PCA Teamserver to prevent us from needing to change DNS records during an assessment.

Motivation

Limit the chances that an IP change happens which would require us to update DNS records. If this were to happen we would need to update the customer with the new IP and change DNS records.

Example

Same IP through entire assessment.

Pitch

An elastic IP will help to avoid additional work and possible missed clicks during an assessment.
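
A sketch of how an Elastic IP can be attached to an instance; the resource list above shows that aws_eip.teamserver and aws_eip_association.teamserver now exist in this project, but the details below are illustrative:

    resource "aws_eip" "teamserver" {
      vpc = true # allocate the address in the VPC domain
    }

    resource "aws_eip_association" "teamserver" {
      allocation_id = aws_eip.teamserver.id
      instance_id   = aws_instance.teamserver.id
    }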

Globally rename Assessor Portal to Assessor Workbench

๐Ÿ› Summary

The cisagov/assessor-portal-packer repo was renamed to cisagov/assessor-workbench-packer, and the name of the resulting AMI was changed in cisagov/assessor-workbench-packer#31 from assessor-portal-hvm-${local.timestamp}-x86_64-ebs to assessor-workbench-hvm-${local.timestamp}-x86_64-ebs. The new name more closely matches what folks are calling the tool. The AMI name was unfortunately not updated in this repository, so the refreshed AMIs with the newer name are not being used.

The AMI name should be updated, and all resource names, etc. that reference the old "Assessor Portal" language should be updated to instead reference "Assessor Workbench". Note that the name of the instance type corresponding to the Assessor Workbench would need to be updated in the inbound_ports_allowed and operations_instance_counts input variables, so this is a breaking change.
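
A sketch of the kind of AMI lookup involved; the name filter reflects the new AMI name described above, while the owners value is a placeholder:

    data "aws_ami" "assessorworkbench" {
      most_recent = true
      owners      = ["self"] # placeholder; the real lookup targets the Images account

      filter {
        name   = "name"
        values = ["assessor-workbench-hvm-*-x86_64-ebs"]
      }
    }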

Separate Guacamole and Terraformer instances into separate subnets

💡 Summary

Instead of having the Guacamole instance and the Terraformer instances in the same subnets, we should split the private subnets into separate sets of Guacamole and Terraformer subnets.

Motivation and context

The Guacamole server is touched by traffic that comes from outside the assessment environment, so the subnet in which it resides should not contain any instances that do not touch such traffic. The Guacamole and Terraformer use cases are also quite different, so it is difficult for them to share NACL rules without making them overly broad for both types of instances.

In addition, I found that I can't ssh via SSM to a Terraformer instance unless I put it in the first private subnet alongside Guacamole. I believe this has something to do with the NACLs that are in place for that subnet specifically. This renders the second private subnet useless for our purposes.

Acceptance criteria

  • The Terraformer instance(s) are placed in a subnet (or set of subnets) separate from the operations subnet or the subnet(s) where Guacamole resides.
  • Dev team members are able to ssh into the Terraformer instances via AWS SSM.
  • The Terraformer instance(s) are also reachable via Guacamole.
  • The Terraformer instance(s) can still be used to create, destroy, and modify all appropriate AWS resources.

Modify default Guacamole admin password

Currently, when the Guacamole instance is deployed, it has default credentials for the admin user. Modify this so that a new password is applied (probably should be pulled from the SSM Parameter Store).
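
One possible approach, sketched below, is to read the password from an SSM Parameter Store parameter and hand it to the Guacamole cloud-init configuration; the parameter name is hypothetical:

    data "aws_ssm_parameter" "guacamole_admin_password" {
      name            = "/guacamole/admin_password" # hypothetical parameter name
      with_decryption = true
    }

    # The decrypted value (data.aws_ssm_parameter.guacamole_admin_password.value)
    # could then be templated into the Guacamole cloud-init tasks.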

Improve incoming port indexing

๐Ÿ› Summary

The indexing created here and used here works, but it causes problems if ports are added or removed from the inbound_ports_allowed input variable. This is for the same reason that it is preferable to use a map instead of a list when creating multiple resources via a single resource block: if you add a single entry in the middle then multiple resources must be destroyed because the indexing changes.
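
As a generic illustration (not code from this repository), keying each rule by a stable string derived from the entry itself, rather than by its position in the list, means that adding or removing one port only touches the corresponding rule:

    variable "allowed_ports" {
      type = list(object({ protocol = string, from_port = number, to_port = number }))
    }

    resource "aws_security_group_rule" "inbound" {
      # Keyed by "protocol-from-to" instead of a positional index, so inserting an
      # entry in the middle of the list does not shift the addresses of the others.
      for_each = {
        for p in var.allowed_ports : "${p.protocol}-${p.from_port}-${p.to_port}" => p
      }

      security_group_id = aws_security_group.example.id # placeholder reference
      type              = "ingress"
      protocol          = each.value.protocol
      from_port         = each.value.from_port
      to_port           = each.value.to_port
      cidr_blocks       = ["0.0.0.0/0"]
    }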

Do we want to assign policies to the VPC endpoints?

💡 Summary

It is possible to assign policies to the VPC endpoints. These policies restrict what queries can be sent via the VPC endpoint.
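
For illustration only, a policy can be attached via the endpoint resource's policy argument; the statement below is a sketch, not a vetted policy for this environment:

    resource "aws_vpc_endpoint" "ssm" {
      vpc_id            = aws_vpc.assessment.id
      service_name      = "com.amazonaws.us-east-1.ssm"
      vpc_endpoint_type = "Interface"

      # Restrict use of the endpoint to principals from this account (illustrative).
      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect    = "Allow"
          Principal = "*"
          Action    = "ssm:*"
          Resource  = "*"
          Condition = {
            StringEquals = {
              "aws:PrincipalAccount" = data.aws_caller_identity.assessment.account_id
            }
          }
        }]
      })
    }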

Motivation and context

This would provide yet another layer of security, albeit at the cost of more maintenance.

Acceptance criteria

  • Policies are created and assigned to the VPC endpoints.
  • The assessment environment continues to function as it did before.

Control EFS share permissions using EFS access points

💡 Summary

We should add EFS access points to control file-system access to the EFS share.

Motivation and context

Currently there is no mechanism to enforce file-system permissions on access to the EFS share. This means that files and directories with uids/gids different from those owning the EFS share mount point can be created. This can result in difficulties interacting with EFS share contents on systems that cannot sudo (like Windows instances).

Implementation notes

There should be an access point for each EFS mount target created (there is only one currently). The efs-mount cloud-config will need to be updated to support this functionality as well.
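
A sketch of such an access point, assuming the efs_access_point_* input variables documented in the Inputs section; the actual resource in this repository may differ in detail:

    resource "aws_efs_access_point" "assessment_share" {
      file_system_id = aws_efs_file_system.persistent_storage.id

      # Force all access through a fixed POSIX identity so file ownership stays consistent.
      posix_user {
        uid = var.efs_access_point_uid
        gid = var.efs_access_point_gid
      }

      root_directory {
        path = var.efs_access_point_root_directory

        creation_info {
          owner_uid   = var.efs_access_point_uid
          owner_gid   = var.efs_access_point_gid
          permissions = "0775"
        }
      }
    }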

Acceptance criteria

How do we know when this work is done?

  • EFS share file-system permissions are enforced

May need larger Guacamole instance

It looks like the postgres service (in the guacamole Docker composition) ran out of memory today, after being up for less than 24 hours:

postgres              | 2020-04-02 22:10:20.341 UTC [1] LOG:  database system is ready to accept connections
postgres              | 2020-04-03 20:23:05.774 UTC [1] LOG:  could not fork autovacuum worker process: Cannot allocate memory
postgres              | 2020-04-03 20:23:12.364 UTC [1] LOG:  could not fork autovacuum worker process: Cannot allocate memory

I ssh'd to the instance and restarted the service with:
    systemctl restart guacamole-composition.service

However, we probably want to move our Guacamole instance to something with a bit more memory to avoid this in the future.

Increase size of root disk for Guacamole instances

๐Ÿ› Summary

Guacamole was temporarily unavailable in env22-production yesterday because the logs associated with the Docker composition filled up the root disk on the Guacamole instance. Giving the Guacamole instance a larger root disk will help alleviate this.

See also cisagov/guacamole-composition#54.
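
A sketch of the relevant change, specifying a larger root volume on the Guacamole instance; the size and instance type shown are illustrative:

    resource "aws_instance" "guacamole" {
      ami           = data.aws_ami.guacamole.id
      instance_type = "t3.medium" # placeholder; see the actual configuration

      root_block_device {
        volume_size = 16 # GiB; comfortably larger than the roughly 8 GiB disk that filled up
        volume_type = "gp3"
      }
    }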

To reproduce

Steps to reproduce the behavior:

  1. Spin up an RTA environment (many instances accessible via guacamole).
  2. Wait a few months.
  3. Observe that the root disk on the Guacamole instance fills up.

Expected behavior

There should be no interruption in availability of the Guacamole service.

Any helpful log output or screenshots

root@guac:/usr/bin# df -h
Filesystem       Size  Used Avail Use% Mounted on
udev             1.9G     0  1.9G   0% /dev
tmpfs            388M  756K  387M   1% /run
/dev/nvme0n1p1   7.7G  6.4G  936M  88% /
tmpfs            1.9G     0  1.9G   0% /dev/shm
tmpfs            5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p15  124M  5.9M  118M   5% /boot/efi
overlay          7.7G  6.4G  936M  88% /var/lib/docker/overlay2/5453e28a36f9ce890a760c86e1f2d5e94f8c6659ad3d229c7c964e370d5ea2a3/merged
shm               64M     0   64M   0% /var/lib/docker/containers/3f5bd5504fc149f9fe74c3abe28c85f3f6e8bfcd7d8bded026d3125efe005060/mounts/shm
overlay          7.7G  6.4G  936M  88% /var/lib/docker/overlay2/570012723f1dbb24eaa5504a75c3d2a346437e6d271d99dca3cfc1f7b91031a8/merged
shm               64M     0   64M   0% /var/lib/docker/containers/16cca99e59f0c08e0bdaef4d79ec6785c2322b9a916c6c79fb62a31929778775/mounts/shm
overlay          7.7G  6.4G  936M  88% /var/lib/docker/overlay2/4f4cba0ebff605b2300ad541baec5879cd00f892a08d9527299d736a31b8b3d6/merged
shm               64M     0   64M   0% /var/lib/docker/containers/9538eabf5828278a1c24c420a700a658b580b0542213a10977e1ef680221781d/mounts/shm
overlay          7.7G  6.4G  936M  88% /var/lib/docker/overlay2/f650455c4ce1035e78ad84f61c6373e68ac123b600067bbbf538788f52f425ce/merged
shm               64M   16K   64M   1% /var/lib/docker/containers/176e7686b05da94828c46c0419398dd7df2b3699bda952447cc312b52f571c1d/mounts/shm
tmpfs            388M     0  388M   0% /run/user/0

Simplify security groups

💡 Summary

Now that I know that the number of security groups that can be applied to an ENI is not a hard limit, I'd like to create a single cloudwatch_endpoint_client_sg security group, for example. That will allow me to simplify the corresponding security group rules by dropping the new CIDR-based rule added in #143 and getting rid of all the other SG-based rules. I will do something similar for the other VPC endpoint security groups. Instances spun up via the Terraformer instance can then simply add themselves to those security groups as necessary to gain access to the necessary VPC endpoints.

This change will also allow me to get rid of the existing instance type SGs, except for those that actually require an explicit mention. For example, the Debian desktop SG will probably be rendered unnecessary, since it isn't mentioned explicitly outside of these endpoint SGs, but the Teamserver SG will still be necessary because certain other instance types need to be allowed ingress to the Teamservers.
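A sketch of the consolidated approach; the resource names below are illustrative rather than the ones this repository actually uses:

# One client security group per VPC endpoint; instances join it as needed.
resource "aws_security_group" "cloudwatch_endpoint_client" {
  vpc_id = aws_vpc.assessment.id

  tags = {
    Name = "CloudWatch endpoint client"
  }
}

# Allow members of the client group to reach the endpoint security group over HTTPS.
resource "aws_security_group_rule" "cloudwatch_ingress_from_clients" {
  security_group_id        = aws_security_group.cloudwatch_endpoint.id
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 443
  to_port                  = 443
  source_security_group_id = aws_security_group.cloudwatch_endpoint_client.id
}

Instances would then simply attach themselves to the client security group to gain access to that endpoint.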

Motivation and context

"Simplify, simplify, simplify!" - Henry David Thoreau

Guacamole instance inaccessible with no Kali and no allowed inbound ports

๐Ÿ› Bug Report

When an assessment environment is created without a Kali instance and with no operations_subnet_inbound_tcp_ports_allowed and no operations_subnet_inbound_udp_ports_allowed, the resulting Guacamole instance starts up in a messed-up state and is not accessible via SSH.

To Reproduce

Steps to reproduce the behavior:

  • Create a terraform variables file that includes:
operations_instance_counts = {
  "debiandesktop" : 1,
  "kali" : 0,
}
operations_subnet_inbound_tcp_ports_allowed = []
operations_subnet_inbound_udp_ports_allowed = []
  • Run terraform apply

Expected behavior

The Guacamole instance should start up correctly and be accessible by SSH (via SSM).

Any helpful log output

Here is the section of the log that shows that the Docker bridge fails to come up, which prevents the Guacamole composition from starting successfully:

[   73.135967] cloud-init[579]: Cloud-init v. 20.2 running 'modules:config' at Tue, 03 Nov 2020 21:34:11 +0000. Up 71.06 seconds.
[   79.777026] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   79.809832] Bridge firewalling registered
[   80.781409] Initializing XFRM netlink socket
[   80.872609] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
[   83.996722] IPv6: ADDRCONF(NETDEV_UP): br-4516774645af: link is not ready
[  100.326197] IPv6: ADDRCONF(NETDEV_UP): br-f26879dd3117: link is not ready
[  116.666879] IPv6: ADDRCONF(NETDEV_UP): br-fee1ac643615: link is not ready
[  132.850902] IPv6: ADDRCONF(NETDEV_UP): br-1375b6d8322d: link is not ready
[  149.434985] IPv6: ADDRCONF(NETDEV_UP): br-c351ac01eb7e: link is not ready
[  165.578743] IPv6: ADDRCONF(NETDEV_UP): br-724ec8dcc51f: link is not ready
[  182.214864] IPv6: ADDRCONF(NETDEV_UP): br-16a24fb24d6a: link is not ready
[  198.574740] IPv6: ADDRCONF(NETDEV_UP): br-a70d667a1eb2: link is not ready
[  214.986790] IPv6: ADDRCONF(NETDEV_UP): br-e21ddb74d51b: link is not ready
[  231.342529] IPv6: ADDRCONF(NETDEV_UP): br-f1cc3b89432f: link is not ready
[  247.735274] IPv6: ADDRCONF(NETDEV_UP): br-a95cdf69980f: link is not ready

It is unclear why this would happen based on the lack of a Kali instance or allowing inbound ports. Our current suspicion is that it has something to do with the cloud-init scripts that generate the Guacamole connections:

We think that those scripts may be failing in such a way that cloud-init gets hung up and cannot continue, but more research is needed.

It's also worth noting that this message in the log (bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.) sounds bad, but we have confirmed that it shows up on instances where Guacamole starts up successfully, so we don't believe it is related to this issue.

Automate cross-account association of VPC and hosted zone

🚀 Feature Proposal

Right now there is some manual AWS CLI foo that takes place to associate the assessment VPC with the private hosted zone of the Shared Services VPC. Terraform does not yet support this kind of cross-account association. We should make use of a module, such as this one, so that the association need not take place manually.

Motivation

We should eschew manual steps, since

  • It is difficult to automate a process that has manual steps.
  • Manual steps are error-prone.

Consider updating EFS to use elastic throughput mode

💡 Summary

Consider updating the EFS to use elastic throughput mode.

Motivation and context

I heard from an RTA entity that the EFS is very slow in their assessment environment. I also recently received an email from AWS that stated that one or more EFS shares had been throttled because the current mode (bursting throughput mode) is insufficient to handle the traffic. The email suggested switching to elastic throughput mode. (The EFS can also apparently switch between the two modes on demand.)

Contents of the email from AWS:

Hello,

You are receiving this message because, at least once in the past 30 days, one or more Amazon EFS file systems in your account has been constrained by the performance limits of EFS's Bursting throughput mode. Specifically, these file systems have exhausted the Bursting mode's available throughput and/or IOPS performance, which could result in a degraded experience for your application or end users.

Your file system is currently configured to use EFS's Bursting Throughput mode. Bursting file systems' throughput capacity is determined by a credit model ("burst credits") and IOPS capacity is up to 35,000 read IOPS and up to 7,000 write IOPS [1]. When your file system exhausts its "burst credits" or reaches its IOPS limit, throttling of additional I/O can result in unexpected slowness, timeouts, or other application failures for your end users or workloads.

We have identified the following impacted Amazon EFS file system(s) that have been throttled in the past 30 days and require more throughput or IOPS than what Bursting Throughput provides.

A list of your affected resource(s) can be found in the 'Affected resources' tab in the AWS Health Dashboard.

#Action Recommended:
To help your end users and applications to achieve their required throughput and IOPS performance, we recommend that you update your file system configuration to Elastic Throughput mode [2]. When in Elastic Throughput mode, your file system can achieve up to 10 GiB/s of read throughput or 3 GiB/s of write throughput (depending on the AWS Region [3]) and up to 55,000 read IOPS and 25,000 write IOPS. Elastic Throughput charges a pay-as-you-go rate for data transfer to/from your file system, which means you only pay for what you use [4]. You can update your file system configuration to switch between Elastic and Bursting throughput modes on demand.

If you have already updated your file system configuration to one of EFS's enhanced throughput modes - Elastic or Provisioned Throughput - you can disregard this message.

If you have questions or concerns, please contact AWS Support [5].

[1] https://urldefense.us/v3/__https://docs.aws.amazon.com/efs/latest/ug/performance.html*performance-overview__;Iw!!BClRuOV5cvtbuNI!EogXGg2V2HigA96RB39oMQc433wJ7T0KegTJXaXvs7PGLdiHUkFlAVIMAxzCJ-jhRN8h35x-sfds1IBoiSEYoa6H12FwpyVgFhNv5TjW07-ahyg$
[2] https://urldefense.us/v3/__https://docs.aws.amazon.com/efs/latest/ug/performance.html*throughput-modes__;Iw!!BClRuOV5cvtbuNI!EogXGg2V2HigA96RB39oMQc433wJ7T0KegTJXaXvs7PGLdiHUkFlAVIMAxzCJ-jhRN8h35x-sfds1IBoiSEYoa6H12FwpyVgFhNv5TjWygC6JEY$
[3] https://urldefense.us/v3/__https://docs.aws.amazon.com/efs/latest/ug/limits.html__;!!BClRuOV5cvtbuNI!EogXGg2V2HigA96RB39oMQc433wJ7T0KegTJXaXvs7PGLdiHUkFlAVIMAxzCJ-jhRN8h35x-sfds1IBoiSEYoa6H12FwpyVgFhNv5TjWNjU9s3U$
[4] https://urldefense.us/v3/__https://aws.amazon.com/efs/pricing/__;!!BClRuOV5cvtbuNI!EogXGg2V2HigA96RB39oMQc433wJ7T0KegTJXaXvs7PGLdiHUkFlAVIMAxzCJ-jhRN8h35x-sfds1IBoiSEYoa6H12FwpyVgFhNv5TjWXjBwRUY$
[5] https://urldefense.us/v3/__https://aws.amazon.com/support__;!!BClRuOV5cvtbuNI!EogXGg2V2HigA96RB39oMQc433wJ7T0KegTJXaXvs7PGLdiHUkFlAVIMAxzCJ-jhRN8h35x-sfds1IBoiSEYoa6H12FwpyVgFhNv5TjW_TGYkJ0$

Sincerely,
Amazon Web Services

Amazon Web Services, Inc. is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message was produced and distributed by Amazon Web Services Inc., 410 Terry Ave. North, Seattle, WA 98109-5210

Implementation notes

Likely just a change to the throughput_mode argument on the EFS file system resource; a sketch follows.
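A minimal sketch, assuming the file system resource is named aws_efs_file_system.persistent_storage (an assumed name) and that the pinned AWS provider version accepts the elastic mode:

resource "aws_efs_file_system" "persistent_storage" {
  # ... existing configuration ...

  # Was "bursting"; "elastic" removes the burst-credit model entirely.
  throughput_mode = "elastic"
}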

Acceptance criteria

How do we know when this work is done?

  • The EFS is switched to use elastic throughput mode.
  • The assessors verify increased performance of the EFS.

CommandoVM Communication with TeamServer

💡 Summary

In order to leverage proxychains between our CommandoVM and TeamServer we need to have port range 5000-5999 TCP open between these instances.

Motivation and context

This will allow us to tunnel traffic from the CommandoVM into the target networks. Some tools leveraged on our engagements are run from Windows and in our current design we are unable to leverage those tools during the External week.

Implementation notes

  • CommandoVM would allow inbound traffic from both TeamServer instances over ports 5000-5999 TCP
  • TeamServer would allow inbound traffic from any deployed CommandoVMs over ports 5000-5999 TCP
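A sketch of the pair of security group rules this implies; the security group resource names are illustrative:

resource "aws_security_group_rule" "commandovm_ingress_from_teamserver" {
  security_group_id        = aws_security_group.commandovm.id
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 5000
  to_port                  = 5999
  source_security_group_id = aws_security_group.teamserver.id
}

resource "aws_security_group_rule" "teamserver_ingress_from_commandovm" {
  security_group_id        = aws_security_group.teamserver.id
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 5000
  to_port                  = 5999
  source_security_group_id = aws_security_group.commandovm.id
}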

Acceptance criteria

How do we know when this work is done?

  • CommandoVM would allow inbound traffic from both TeamServer instances over ports 5000-5999 TCP
  • TeamServer would allow inbound traffic from any deployed CommandoVMs over ports 5000-5999 TCP

Create a separate security group for each operations instance type

💡 Summary

We should give each operations instance type its own security group instead of lumping several instance types into the operations security group. We actually already do this with some instance types, but not others.

Motivation and context

The Nessus instances are assigned to the operations security group, but they and only they require access to the AWS STS VPC endpoint. By splitting the Nessus instances out into their own security group, we could avoid needlessly giving other operations instance types permissions they do not need.

It might also make sense to have an all_egress security group for the Kali and Nessus instance types, then also create Kali- and Nessus-specific security groups that give only the extra (VPC endpoint) access they need.

Acceptance criteria

  • Nessus instances are placed in their own security group that assigns them permission to use the STS VPC endpoint.
  • Nessus instances can still perform their scanning function as before.

Add and use an email-sending domain input variable

💡 Summary

Make an input variable for the domain where emails will be sent from and configure the Postfix container on the Gophish instance to correctly use that domain.

Motivation and context

Many of our assessments have the need to send emails from a specific domain, which currently requires several manual configuration steps. This would automate part of that process.

This is part of https://github.com/cisagov/cool-system/issues/90.

Implementation notes

Ensure that the PRIMARY_DOMAIN is set for Postfix before the container starts up. As long as this is done by cloud-init, I think we should be fine, since the pca-gophish-composition SystemD service doesn't start up until after cloud-final.service.
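A sketch of the input variable and one way it could be threaded into the instance's cloud-init; the variable name, template path, and template variable below are assumptions, not this repository's actual wiring:

variable "email_sending_domain" {
  type        = string
  description = "The domain from which phishing emails will be sent (used as the Postfix PRIMARY_DOMAIN)."
  default     = "example.com"
}

data "cloudinit_config" "gophish" {
  part {
    content_type = "text/cloud-config"
    content = templatefile("${path.module}/cloud-init/set-postfix-primary-domain.tpl.yml", {
      primary_domain = var.email_sending_domain
    })
  }
}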

Acceptance criteria

When a fresh assessment environment containing a Gophish instance is deployed:

  • The Postfix container's primary domain is correctly set based on the value of the Terraform input variable for the email-sending domain
  • The Postfix container and the pca-gophish-composition SystemD service on the Gophish instance start up correctly
  • Email can be successfully sent via Postfix from the email-sending domain

Automate disassociation of default Transit Gateway route table association

🚀 Feature Proposal

Automatically disassociate the default Transit Gateway route table association before Terraform attempts to perform the association for the assessment environment.

Motivation

We want to avoid this Terraform error (which we currently get):

Error: error associating EC2 Transit Gateway Route Table (tgw-rtb-04f86cffc92794d4b) association (tgw-attach-0e1e8577d46e6cd93): Resource.AlreadyAssociated: Transit Gateway Attachment tgw-attach-0e1e8577d46e6cd93 is already associated to a route table.
        status code: 400, request id: 35096b88-77b9-46af-8507-0254bce8a190

When we get this error now, we have to manually disassociate the default Transit Gateway route table association, then re-run the terraform apply.

Pitch

We want to remove as many manual steps as possible from our Terraforming process.

Set up Thunderbird email account autoconfiguration support

💡 Summary

Customize the Gophish instance so that Thunderbird will automatically configure email accounts for the specified assessment email-sending domain.

Motivation and context

In https://github.com/cisagov/cool-system/issues/90, the RVA team has requested an email client that requires minimal configuration to use during assessments.

Implementation notes

To meet this need, two things must be done:

  • Deploy a Thunderbird autoconfiguration file to the Gophish instance (at /usr/lib/thunderbird/isp/myemailsendingdomain.gov.xml) containing the following (customized for the appropriate domain):
<?xml version="1.0"?>
<clientConfig version="1.1">
  <emailProvider id="myemailsendingdomain.gov">
    <domain>myemailsendingdomain.gov</domain>
    <displayName>myemailsendingdomain.gov</displayName>
    <displayShortName>myemailsendingdomain.gov</displayShortName>
    <incomingServer type="imap">
      <hostname>myemailsendingdomain.gov</hostname>
      <port>993</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILLOCALPART%</username>
    </incomingServer>
    <outgoingServer type="smtp">
      <hostname>myemailsendingdomain.gov</hostname>
      <port>587</port>
      <socketType>STARTTLS</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILLOCALPART%</username>
    </outgoingServer>
  </emailProvider>
</clientConfig>
  • Add a line to /etc/hosts for the email sending domain, e.g.:
    127.0.0.1 myemailsendingdomain.gov

These two things (combined with a successfully-deployed TLS certificate) will enable a simple, seamless setup of email accounts for the specified domain in Thunderbird.

Acceptance criteria

  • /usr/lib/thunderbird/isp/myemailsendingdomain.gov.xml is deployed at instance startup
  • /etc/hosts contains an entry like 127.0.0.1 myemailsendingdomain.gov
  • When using the Thunderbird "new email account" wizard to create accounts for the assessment email-sending domain, new accounts can be added without any errors and with the correct SMTP/IMAP configuration
  • Emails can be sent using the new account in Thunderbird
  • Emails sent from the new account can be viewed in Thunderbird via IMAPS

Use narrower role(s) than Shared Services "provision account"

💡 Summary

Use a narrower role (or roles) than the Shared Services "provision account" role.

Motivation and context

We currently use a provider based on the very powerful "provision account" role in the Shared Services account to provision assessment environments. To reduce risk, a narrower, more tailored role (or roles) should be created with only the permissions necessary to accomplish the goal.

Implementation notes

Note that we currently have two providers based on the same Shared Services role; this duplication should be eliminated.

Acceptance criteria

  • A less-powerful role (or roles) is used to manipulate resources in the Shared Services account.
  • Assessment environments can still be successfully created, updated, and destroyed.

Support Multiple Availability Zones

Currently, all infrastructure in this project is deployed to a single AWS availability zone, as defined by the aws_region and aws_availability_zone variables. To improve our fault-tolerance, we should come up with a reasonable way to spread assessment resources across multiple availability zones so that assessment operations can continue if a particular availability zone becomes unavailable.

Our solution should also consider the usage of multiple AWS regions as well.

Move persistent Gophish data from EFS volume to EBS volume

💡 Summary

Move the persistent Gophish data currently stored on the EFS volume over to the new EBS volume created in #119.

Motivation and context

Since Gophish data lives within Docker, it can be automatically persisted on the EBS volume similar to how our Dockerized postfix data is stored (see ). Now that this EBS volume exists, there is no good reason to store the Gophish data on the EFS volume.

Implementation notes

I think this change will pretty much be deleting this file and the code that calls it.

Note that this issue is dependent on cisagov/pca-gophish-composition-packer#48 being completed first.

Acceptance criteria

Gophish data persists in all of the following cases:

  • After a Gophish instance is destroyed and redeployed
  • After a Gophish instance is stopped and started
  • After the PCA-Gophish composition is restarted

Make Nessus listen on port 443

Modify the Nessus installation so that it listens on port 443, rather than the default of 8834. Even better would be to make the port a Terraform variable with a default of 8834.

See this page for how to easily change the xmlrpc_listen_port via a nessuscli command.
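A sketch of the proposed Terraform variable (the name is an assumption); its value would then be fed to whatever cloud-init runs the nessuscli command:

variable "nessus_web_server_port" {
  type        = number
  description = "The port on which the Nessus web server listens (the xmlrpc_listen_port Nessus setting)."
  default     = 8834
}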

Terraformer root disk too small

๐Ÿ› Summary

A nameless assessor reports that he needs to store some Docker images on the Terraformer instance's root disk, but it is not large enough to do so. I suggested storing them on the EFS share, but he states that "EFS is very slow". He further states that a Terraformer root disk of 128GB would suffice.

Create SSH Guacamole connections for each instance

🚀 Feature Proposal

For each instance that currently gets a VNC connection, create an additional Guacamole SSH-only connection.

Motivation

If an assessment user does not require GUI access to an instance, they can use the SSH-only connection, which should (in theory) have less latency and be more responsive to keystrokes, thus improving the overall user experience.

Pitch

If you don't need all the bells and whistles of a VNC connection to an instance, this gives our users a more lightweight and potentially more responsive option.

Automate association of a VPC with a private DNS zone (cross-account)

Terraform is currently unable to automate the process of associating a VPC in one AWS account with a private DNS zone in a different AWS account. This feature request details the issue.

This process can be done via the AWS CLI, so that is our current manual workaround:

aws route53 create-vpc-association-authorization \
--hosted-zone-id COOLPRIVATEZONEID \
--vpc VPCRegion=us-east-1,VPCId=vpc-MY_ENV_VPC_ID \
--profile cool-sharedservices-provisionaccount

aws route53 associate-vpc-with-hosted-zone \
--hosted-zone-id COOLPRIVATEZONEID \
--vpc VPCRegion=us-east-1,VPCId=vpc-MY_ENV_VPC_ID \
--profile cool-MYENV-provisionaccount

It may be possible to use a Terraform module (such as this one) to accomplish this automatically until Terraform is able to natively perform this action.

1-based indexing for PentestPortal instances

🚀 Feature Proposal

A Fed lead requests that the PentestPortal instances be labeled via 1-based indexing instead of 0-based indexing.

Motivation

This will make the PentestPortal instance indexing agree with some internal document they use that lists their customers.
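A sketch of the change, assuming the instances are created with count and labeled via a Name tag (the resource name and map key are assumptions):

resource "aws_instance" "pentestportal" {
  count = lookup(var.operations_instance_counts, "pentestportal", 0)

  # ... existing configuration ...

  tags = {
    # count.index starts at 0, so add 1 to label instances PentestPortal1, PentestPortal2, ...
    Name = format("PentestPortal%d", count.index + 1)
  }
}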

Use VPC gateway and interface endpoints to remove HTTPS access and resources where possible

💡 Summary

The Guacamole server in the private subnet of an assessment environment requires permission to send outgoing HTTPS requests in order to:

  1. Obtain temporary credentials for its instance role via STS, and
  2. Download its TLS certificate from an S3 bucket

It should be possible to do away with the external internet access and instead access STS and S3 via a VPC interface endpoint and a VPC gateway endpoint, respectively.

We should really be using these sorts of endpoints everywhere we can, since it keeps some of our AWS traffic from ever leaving AWS.
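A sketch of the two endpoints; the VPC, subnet, and route table references are assumptions for illustration:

# Interface endpoint so Guacamole can reach STS without internet egress.
resource "aws_vpc_endpoint" "sts" {
  vpc_id              = aws_vpc.assessment.id
  service_name        = "com.amazonaws.${var.aws_region}.sts"
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  subnet_ids          = [aws_subnet.private.id]
}

# Gateway endpoint so the TLS certificate can be pulled from S3 privately.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.assessment.id
  service_name      = "com.amazonaws.${var.aws_region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}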

Motivation and context

This is a good idea because:

  • It will keep some of our AWS traffic from ever leaving AWS, and therefore increase security
  • Once we do that we will no longer require certain NAT gateway resources
  • By virtue of the previous bullet, it will simplify our OmniGraffle network diagrams for the COOL. (@cisagov/team-ois - does anyone have a link to these diagrams in Google Docs?)

Acceptance criteria

  • The STS interface endpoint is created
  • The S3 gateway endpoint is created
  • Outgoing access to the internet via HTTPS is removed, and the Guacamole server is still able to perform its functions.
  • All cloud-init scripts are able to complete successfully.
  • The staging environment is verified to function just as it did before.

Add https-certificate block to two Cobalt Strike profiles at Teamserver instance startup

💡 Summary

Teamserver instances should fetch the TLS certificate for their specified domain from our S3 certificates bucket (if available) and then appropriately modify the amazon.profile and ocsp.profile files to include a valid https-certificate section.

Motivation and context

In https://github.com/cisagov/cool-system/issues/90, we are working to remove the need to use ServerSetup.sh. One of the acceptance criteria for this is to replicate the functionality of the httpsc2doneright() function from ServerSetup.sh.

Implementation notes

This issue does NOT include figuring out how the certificate is created and put in our S3 bucket. Initially, that will be a manual process that we want to automate in the future.

Acceptance criteria

  • When a Teamserver instance is deployed, the amazon.profile and ocsp.profile files each contain a valid https-certificate section.
  • Cobalt Strike is able to successfully utilize the updated profiles for HTTPS (probably have to work with an assessment operator to validate this).
  • If a certificate for the specified domain is NOT found in S3, a warning is logged and the Cobalt Strike profiles are not touched.

Convert `email_sending_domains` to lowercase before use

๐Ÿ› Summary

If any items in the email_sending_domains Terraform variable contain uppercase letters, their corresponding certificates will not be found in the S3 bucket because they are always lowercase in the bucket.

To reproduce

Steps to reproduce the behavior:

  1. Create a certificate for xyz.com and store it in the certificates bucket
  2. Define email_sending_domains = ["XYZ.com"] in a .tfvars file
  3. Run a terraform apply with that .tfvars file

Note that the certificate cannot be pulled via our install-certificates.py cloud-init - it will fail with an error like this:

Error fetching 'mycertificatesbucket/live/XYZ.com/fullchain.pem' from S3: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Exiting script!

Expected behavior

The uppercase domain XYZ.com should be converted to lowercase before we attempt to fetch the certificate from the S3 bucket, so that install-certificates.py runs successfully.
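A minimal sketch of the fix on the Terraform side (the local name is an assumption); everything that currently reads var.email_sending_domains would then read the local value instead:

locals {
  # Lowercase every entry so S3 key lookups match the always-lowercase paths in the bucket.
  email_sending_domains = [for domain in var.email_sending_domains : lower(domain)]
}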

CC: @m1j09830

Syntax change in Monterey

๐Ÿ› Summary

When running the terraform_apply.sh script, errors are generated relating to the sleep command.

Steps to reproduce the behavior:

From a VM Mac already configured to run the COOL Terraform builds and updated to macOS Monterey...

  1. Run terraform_apply.sh with the given options
  • AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials AWS_DEFAULT_REGION=us-east-1 ./terraform_apply.sh -auto-approve -var-file=envX-production.tfvars
  2. The following errors are generated:
Error: local-exec provisioner error
│
│   with null_resource.break_association_with_default_route_table,
│   on transitgateway.tf line 41, in resource "null_resource" "break_association_with_default_route_table":
│   41:   provisioner "local-exec" {
│
│ Error running command 'aws --profile cool-sharedservices-provisionaccount --region us-east-1 ec2
│ disassociate-transit-gateway-route-table --transit-gateway-route-table-id tgw-rtb-05fb5dc2866fe06e5
│ --transit-gateway-attachment-id tgw-attach-0969c580aa7eee9b3 && while aws --profile cool-sharedservices-provisionaccount
│ --region us-east-1 ec2 get-transit-gateway-route-table-associations --transit-gateway-route-table-id
│ tgw-rtb-05fb5dc2866fe06e5 | grep --quiet vpc-0251fecd0fb37eb96; do sleep 5s; done': exit status 1. Output: {
│     "Association": {
│         "TransitGatewayRouteTableId": "tgw-rtb-05fb5dc2866fe06e5",
│         "TransitGatewayAttachmentId": "tgw-attach-0969c580aa7eee9b3",
│         "ResourceId": "vpc-0251fecd0fb37eb96",
│         "ResourceType": "vpc",
│         "State": "disassociating"
│     }
│ }
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds
│ usage: sleep seconds

Expected behavior

Successful completion of the terraform_apply.sh script

Any helpful log output or screenshots

After discussion with @dav3r, it was determined that this comes from a change in the sleep command that resulted from upgrading to the Monterey version of macOS: sleep Xs is no longer accepted, and sleep X must be used instead. After modifying this command, the script completed as expected.

This is another error that will appear on subsequent attempts:

│ Error: local-exec provisioner error
│
│   with null_resource.break_association_with_default_route_table,
│   on transitgateway.tf line 41, in resource "null_resource" "break_association_with_default_route_table":
│   41:   provisioner "local-exec" {
│
│ Error running command 'aws --profile cool-sharedservices-provisionaccount --region us-east-1 ec2
│ disassociate-transit-gateway-route-table --transit-gateway-route-table-id tgw-rtb-05fb5dc2866fe06e5
│ --transit-gateway-attachment-id tgw-attach-0969c580aa7eee9b3 && while aws --profile cool-sharedservices-provisionaccount
│ --region us-east-1 ec2 get-transit-gateway-route-table-associations --transit-gateway-route-table-id
│ tgw-rtb-05fb5dc2866fe06e5 | grep --quiet vpc-0251fecd0fb37eb96; do sleep 5s; done': exit status 254. Output:
│ An error occurred (InvalidAssociation.NotFound) when calling the DisassociateTransitGatewayRouteTable operation:
│ Association tgw-attach-0969c580aa7eee9b3 does not exists in Transit Gateway Route Table tgw-rtb-05fb5dc2866fe06e5.

Support more Cobalt Strike C2 profiles

💡 Summary

In #118 we add an https-certificate block to the Amazon and OCSP Cobalt Strike C2 profiles from rsmudge/Malleable-C2-Profiles. It would be a nice improvement to instead allow the user to provide a list of C2 profiles to which an https-certificate block should be added; alternatively, the user could provide as an input one or more directories containing C2 profiles and we could add an https-certificate block to each *.profile file in those directories.

It also makes sense to allow the user to specify the location of the Java keystore.

Motivation and context

@dav3r mentioned in #118 that this would be a nice improvement to this repository. It would support a more general use case beyond just the Amazon and OCSP Cobalt Strike C2 profiles.

Acceptance criteria

  • The Terraform template add-https-certificate-block-to-cs-profiles.tpl.sh takes as an input a list of Cobalt Strike C2 profile files (or, alternatively, a list of directories containing such profiles) to which an https-certificate block should be added.
  • add-https-certificate-block-to-cs-profiles.tpl.sh adds an https-certificate block to each of the Cobalt Strike C2 profiles in the previous list item.
  • The Terraform template add-https-certificate-block-to-cs-profiles.tpl.sh takes as an input the location of the Java keystore to be created.
  • add-https-certificate-block-to-cs-profiles.tpl.sh uses the Java keystore location when populating the https-certificate blocks in the Cobalt Strike C2 profiles.

Addressing issues reported by `checkov` prior to integration

💡 Summary

This is related to cisagov/skeleton-generic#172. After running checkov scans locally against cool-assessment-terraform, the scan shows: Passed checks: 873, Failed checks: 176, Skipped checks: 0.

These checks need to be addressed in a systematic way as we are planning to integrate checkov into the pre-commit linting jobs. Since this ticket is made from just cool-assessment-terraform there might be other checks that fail on other Terraform repositories in https://github.com/cisagov, but this will be a good start.

NOTE: Each failed check might not necessarily need to be fixed. There could be some false positives where we don't want to adhere to the policies that checkov enforces. In these cases we can set up configurations to bypass these checks, but the bypasses will need to be approved before being added.
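For reference, checkov supports inline suppressions; a hypothetical example (the resource and justification are illustrative only) might look like this:

resource "aws_instance" "example" {
  #checkov:skip=CKV_AWS_88:Operations instances intentionally receive public IP addresses
  ami                         = "ami-0123456789abcdef0" # placeholder
  instance_type               = "t3.small"
  associate_public_ip_address = true
}

Skips can also be collected in a checkov configuration file, but either way they should be reviewed and approved before being added.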

Here is the file from a full scan of cool-assessment-terraform: checkov_results.txt

Each check will have a guide linked to it for applying a fix. The checks that failed for cool-assessment-terraform are as follows:

  • CKV_AWS_1: Ensure IAM policies that allow full "*-*" administrative privileges are not created.
  • CKV_AWS_8: Ensure all data stored in the EBS is securely encrypted.
  • CKV_AWS_23: Ensure every security groups rule has a description.
  • CKV_AWS_49: Ensure no IAM policies documents allow "*" as a statement's actions.
  • CKV_AWS_88: EC2 instance should not have a public IP.
  • CKV_AWS_111: Ensure IAM policies do not allow write access without constraints.
  • CKV_AWS_126: Ensure that detailed monitoring is enabled for EC2 instances.
  • CKV_AWS_135: Ensure that EC2 is EBS optimized.
  • CKV_AWS_184: Ensure resource is encrypted by KMS using a customer managed Key (CMK).
  • CKV_AWS_231: Ensure no NACL allow ingress from 0.0.0.0:0 to port 3389.
  • CKV_AWS_277: Ensure no security groups allow ingress from 0.0.0.0:0 to port -1.
  • CKV_AWS_352: Ensure NACL ingress does not allow all Ports.
  • CKV_AWS_356: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions.
  • CKV2_AWS_11: Ensure VPC flow logging is enabled in all VPCs.
  • CKV2_AWS_12: Ensure the default security group of every VPC restricts all traffic.
  • CKV2_AWS_19: Ensure that all EIP addresses allocated to a VPC are attached to EC2 instances.
  • CKV2_AWS_23: Route53 A Record has Attached Resource.
  • CKV2_AWS_38: Ensure DNSSEC signing is enabled for Amazon Route 53 public hosted zones.
  • CKV2_AWS_39: Ensure DNS query logging is enabled for Amazon Route 53 hosted zones.
  • CKV_TF_1: Ensure Terraform module sources use a commit hash.
  • CKV2_GHA_1: Ensure top-level permissions are not set to write-all.

Create VPC

Write the Terraform for the basic assessment VPC.
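A minimal sketch of what that might look like, using the existing vpc_cidr_block and tags input variables (the resource name and extra arguments are assumptions):

resource "aws_vpc" "assessment" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = var.tags
}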

Make private domain default to assessment account name

Make var.private_domain optional, then set up a local variable for private_domain:

  • If var.private_domain is not empty: local.private_domain is set to the value of var.private_domain
  • If var.private_domain is empty: local.private_domain is set to the value of var.assessment_account_name
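A sketch of the local value described above:

locals {
  # Fall back to the assessment account name when no private domain is provided.
  private_domain = var.private_domain != "" ? var.private_domain : var.assessment_account_name
}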

EFS mount inconsistent with PentestPortal EC2 instances

๐Ÿ› Bug Report

When PentestPortal EC2 instances are spun up, the EFS mount sometimes fails.

To Reproduce

Steps to reproduce the behavior:

  • Spin up several PentestPortal EC2 instances
  • Check to see if any have an attempted EFS mount that failed

Expected behavior

The EFS mounts should always be available on PentestPortal instances.

Make null_resource code use aws sts to assume role

We currently use a profile name to make the AWS CLI command in the null_resource resource assume the correct role. This means that the success of this Terraform code relies on the user having a local setup with cisagov/aws-profile-sync using the expected credentials files. It would be better to pluck the desired role ARN out of remote state and simply assume the role for the duration of the command. (Best would be if we could specify a provider for a null_resource, but this is not something currently supported by Terraform and/or the AWS provider.)

As great as this sounds, it is actually a fair amount of work to make it happen. I have found a couple of Terraform modules that do almost what we want, but not quite:

I think what we actually need is two resources. The first resource would be defined from the Terraform external provider. It would take a few parameters as JSON input and output JSON containing the assumed role credentials. This could be as simple as filling in the JSON generated by aws sts assume-role --generate-cli-skeleton, passing it to the AWS CLI via aws sts assume-role --cli-input-json, and returning the output.

The second resource is also defined from the Terraform external provider. It would take the output of the resource described in the previous paragraph as input and then use those credentials to execute the desired AWS CLI commands. It would not export the relevant AWS environment variables, since that clobbers any environment variables we may have set in order to run terraform.
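A rough sketch of the first resource, assuming the role ARN is available as local.provisionaccount_role_arn (an assumed name) and that jq is available to flatten the nested credentials into the flat string map the external data source requires:

data "external" "assume_provisionaccount_role" {
  program = [
    "sh", "-c",
    "aws sts assume-role --role-arn ${local.provisionaccount_role_arn} --role-session-name terraform-null-resource | jq '.Credentials | {AccessKeyId, SecretAccessKey, SessionToken}'",
  ]
}

The second resource's AWS CLI command could then be built using values from data.external.assume_provisionaccount_role.result rather than relying on a locally-configured profile.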

The first resource could be a Terraform module, except that Terraform 0.12 does not allow a module to appear in a depends_on list. (This restriction has been lifted in Terraform 0.13.) The second resource would have to be created where needed, since it would be customized each time it is used.

I will put a comment in the code linking to the issue, and we can revisit this once we have taken the plunge into Terraform 0.13.

Originally posted by @jsf9k in #72 (comment)

Modify COOL Archival Artifact Script for FASTs, RTAs

💡 Summary

The COOL Operations Team requests modification of the COOL Archival Script (archive-artifact-data-to-bucket.sh) to enable it to run on terraformer instances.

Motivation and context

Recent discussions with the FAST Team Lead (Mike Rocha) indicate that the team will require a lone terraformer instance for future environment builds. The FAST Team Lead requested this week (8/28/23) that zero kali instances be provisioned by the Ops Team, as they have developed their own IaC from which to deploy the kalis.

To run the archival script, current processes require the Ops Team to manually deploy a kali instance from which to run it in assessment environments without existing kali instances (RTAs and now FASTs). Since FASTs comprise a significant portion of assessments in the COOL, this additional step of provisioning a kali adds time and complexity to the archiving and data transfer process and is inefficient. Modifying the archive-artifact-data-to-bucket.sh script to run on terraformer instances (in addition to kali instances) bypasses the need for provisioning a new instance simply to perform the archival and transfer of data from the COOL to the AE.

Implementation notes

Provide the ability to run the archive-artifact-data-to-bucket.sh script on terraformer instances within RTA and FAST environments using the existing runbook (https://confluence.ceil.cyber.dhs.gov/pages/viewpage.action?pageId=124977211).

Acceptance criteria

This work is done when the Ops Team can successfully run the archive-artifact-data-to-bucket.sh script on the terraformer instances within RTA and FAST environments and verifies that the data is transferred to the AE. Ideally, acceptance testing would cover at least one FAST and one RTA.
