
boundary-reference-architecture's Introduction

Boundary Reference Architectures

This repo contains community-supported examples for deploying Boundary on different platforms, including AWS, Microsoft Azure, Google Cloud Platform, Kubernetes, and Docker Compose. Most examples use Terraform for provisioning and configuring Boundary.

If you are looking for a simple example for HCP/OSS Boundary, see the quickstart folder.

Disclaimer: the examples in this repository are for demonstration purposes only to convey how to get Boundary up and running on popular cloud and container platforms. They're not officially supported modules or designed to be "production" ready. They're here as a starting point and assume end-users have experience with each example platform.

Contributing

Community contributions to this repository are encouraged and can be added to deployment/ and configuration/.

Contents

  • deployment/: Contains example configurations for deploying and configuring Boundary.
  • configuration/: Contains examples for configuring Boundary resources assuming you already have a deployed environment, such as for HCP Boundary and Dev mode.

boundary-reference-architecture's People

Contributors

adambouhmad, calebalbers, chuckyz, covetocove, dragotic, grantorchard, harleyb123, hashicorp-copywrite[bot], irenarindos, jfreeland, johanbrandhorst, keisukeyamashita, lilic, louisruch, malnick, marigbede, markchristopherwest, ned1313, omkensey, rjshrjndrn, stevenzamborsky, thierryturpin, tjarriault, xingluw

boundary-reference-architecture's Issues

Hard-coded region was incorrect and cannot destroy.

Thanks for your work on this project. I had a shot at spinning this up today from a Cloud9 instance.

When I realised that the region was hard-coded incorrectly for me, I tried to destroy the deployment, but it produced an error:

terraform destroy --auto-approve --force
module.aws.random_pet.test: Refreshing state... [id=seagull]
module.aws.tls_private_key.boundary: Refreshing state... [id=d4e86eac6ab6f8cc40a67a9dc608ff99a0888fa1]
module.aws.tls_self_signed_cert.boundary: Refreshing state... [id=139052485185789327861699823900579129878]
module.aws.data.aws_ami.ubuntu: Refreshing state... [id=ami-0885b1f6bd170450c]
module.aws.data.aws_availability_zones.available: Refreshing state... [id=us-east-1]
module.aws.aws_kms_key.recovery: Refreshing state... [id=c5fbfce0-77d6-410f-8aa2-acdb3a5d0f2e]
module.aws.aws_eip.nat[0]: Refreshing state... [id=eipalloc-088179ba390495ba3]
module.aws.aws_acm_certificate.cert: Refreshing state... [id=arn:aws:acm:us-east-1:972620357255:certificate/6ff24d3c-f1c6-4ef7-b49a-793cedb10e3a]
module.aws.aws_vpc.main: Refreshing state... [id=vpc-0d98b13027cd220f1]
module.aws.aws_iam_role.boundary: Refreshing state... [id=boundary-test-seagull]
module.aws.aws_kms_key.worker_auth: Refreshing state... [id=ee61adfa-226f-4c04-9db6-c89cb6341095]
module.aws.aws_key_pair.boundary: Refreshing state... [id=boundary-demo]
module.aws.aws_kms_key.root: Refreshing state... [id=1bb785ef-9ae0-4b35-92b3-6c3e9da0637f]
module.aws.aws_eip.nat[1]: Refreshing state... [id=eipalloc-0ff3712704713ba6c]
module.aws.aws_iam_instance_profile.boundary: Refreshing state... [id=boundary-test-seagull]
module.aws.aws_iam_role_policy.boundary: Refreshing state... [id=boundary-test-seagull:boundary-test-seagull]
module.aws.aws_security_group.db: Refreshing state... [id=sg-070a0f2f79f1c53fb]
module.aws.aws_subnet.private[0]: Refreshing state... [id=subnet-0d96352c9d6fc7ce3]
module.aws.aws_security_group.controller: Refreshing state... [id=sg-021a99d9915b6228d]
module.aws.aws_security_group.controller_lb: Refreshing state... [id=sg-0cfad2770698bf096]
module.aws.aws_lb_target_group.controller: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1:972620357255:targetgroup/boundary-test-controller-seagull/dc69529c131d9d82]
module.aws.aws_security_group.worker: Refreshing state... [id=sg-0166f8004846c9d9c]
module.aws.aws_internet_gateway.this: Refreshing state... [id=igw-07df196a77202280a]
module.aws.aws_route_table.public[0]: Refreshing state... [id=rtb-0a8139125a39cdc8c]
module.aws.aws_route_table.public[1]: Refreshing state... [id=rtb-07f160b12287b43ab]
module.aws.aws_subnet.private[1]: Refreshing state... [id=subnet-097a9ed7a14f3ff17]
module.aws.aws_route_table.private[0]: Refreshing state... [id=rtb-00ea923aee4b66b72]
module.aws.aws_route_table.private[1]: Refreshing state... [id=rtb-0618ad35843aefe36]
module.aws.aws_subnet.public[0]: Refreshing state... [id=subnet-0c7b679f78e504742]
module.aws.aws_security_group_rule.allow_egress_worker: Refreshing state... [id=sgrule-3974255132]
module.aws.aws_subnet.public[1]: Refreshing state... [id=subnet-0c0d3a846acfe6f05]
module.aws.aws_security_group_rule.allow_ssh_worker: Refreshing state... [id=sgrule-1690260771]
module.aws.aws_security_group_rule.allow_9202_worker: Refreshing state... [id=sgrule-359892429]
module.aws.aws_security_group_rule.allow_web_worker: Refreshing state... [id=sgrule-2643754924]
module.aws.aws_security_group_rule.allow_any_ingress: Refreshing state... [id=sgrule-353322513]
module.aws.aws_security_group_rule.allow_9200_controller: Refreshing state... [id=sgrule-3678610507]
module.aws.aws_security_group_rule.allow_9201_controller: Refreshing state... [id=sgrule-962697288]
module.aws.aws_security_group_rule.allow_ssh_controller: Refreshing state... [id=sgrule-3054007421]
module.aws.aws_security_group_rule.allow_egress_controller: Refreshing state... [id=sgrule-2930513684]
module.aws.aws_instance.target[0]: Refreshing state... [id=i-0b7b777989709f1ec]
module.aws.aws_nat_gateway.private[0]: Refreshing state... [id=nat-0da2cf472f772a602]
module.aws.aws_security_group_rule.allow_controller_sg: Refreshing state... [id=sgrule-81022040]
module.aws.aws_nat_gateway.private[1]: Refreshing state... [id=nat-0c6c18ea3777d0a55]
module.aws.aws_security_group_rule.allow_9200: Refreshing state... [id=sgrule-2617946843]
module.aws.aws_route.public_internet_gateway[0]: Refreshing state... [id=r-rtb-0a8139125a39cdc8c1080289494]
module.aws.aws_route.public_internet_gateway[1]: Refreshing state... [id=r-rtb-07f160b12287b43ab1080289494]
module.aws.aws_db_subnet_group.boundary: Refreshing state... [id=boundary]
module.aws.aws_route_table_association.public_subnets[1]: Refreshing state... [id=rtbassoc-05378ca20589f069f]
module.aws.aws_lb.controller: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1:972620357255:loadbalancer/net/boundary-test-controller-seagull/4f1cb71e72a38bf9]
module.aws.aws_route_table_association.public_subnets[0]: Refreshing state... [id=rtbassoc-02cd098d15c5cac41]
module.aws.aws_route_table_association.private[0]: Refreshing state... [id=rtbassoc-0a9d7bd9863aec5ca]
module.aws.aws_route_table_association.private[1]: Refreshing state... [id=rtbassoc-02cff575b2831a2af]
module.aws.aws_route.nat_gateway[0]: Refreshing state... [id=r-rtb-00ea923aee4b66b721080289494]
module.aws.aws_route.nat_gateway[1]: Refreshing state... [id=r-rtb-0618ad35843aefe361080289494]
module.aws.aws_db_instance.boundary: Refreshing state... [id=terraform-20201101074638756100000005]
module.aws.aws_lb_listener.controller: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1:972620357255:listener/net/boundary-test-controller-seagull/4f1cb71e72a38bf9/7d7590aef3ef4dfe]
module.aws.aws_instance.controller[1]: Refreshing state... [id=i-010d9cd27beeb4872]
module.aws.aws_instance.controller[0]: Refreshing state... [id=i-0e6c678ed56332c54]
module.aws.aws_lb_target_group_attachment.controller[0]: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1:972620357255:targetgroup/boundary-test-controller-seagull/dc69529c131d9d82-20201101075545095200000002]
module.aws.aws_lb_target_group_attachment.controller[1]: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1:972620357255:targetgroup/boundary-test-controller-seagull/dc69529c131d9d82-20201101075545060100000001]
module.aws.aws_instance.worker[0]: Refreshing state... [id=i-0af6f561a1d5a8cdc]

Error: error reading wrappers from "recovery_kms_hcl": Error configuring kms: error fetching AWS KMS wrapping key information: NotFoundException: Key 'arn:aws:kms:ap-southeast-2:972620357255:key/c5fbfce0-77d6-410f-8aa2-acdb3a5d0f2e' does not exist

admin:~/environment/boundary-reference-architecture/deployment/aws ((37c8703...)) $ terraform --version
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/aws v3.13.0
+ provider registry.terraform.io/hashicorp/boundary v0.1.0
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/tls v3.0.0

That was from a later attempt; an earlier attempt produced this:

module.aws.aws_db_instance.boundary: Refreshing state... [id=terraform-20201101074638756100000005]
module.aws.aws_iam_instance_profile.boundary: Refreshing state... [id=boundary-test-seagull]

Error: Error retrieving ALB: ValidationError: 'arn:aws:elasticloadbalancing:us-east-1:972620357255:loadbalancer/net/boundary-test-controller-seagull/4f1cb71e72a38bf9' is not a valid load balancer ARN
        status code: 400, request id: f655c458-1b75-4c72-8c93-068f7badeb5d

Error: NotFoundException: Key 'arn:aws:kms:ap-southeast-2:972620357255:key/ee61adfa-226f-4c04-9db6-c89cb6341095' does not exist

Error: NotFoundException: Key 'arn:aws:kms:ap-southeast-2:972620357255:key/c5fbfce0-77d6-410f-8aa2-acdb3a5d0f2e' does not exist

Error: Error retrieving Target Group: ValidationError: 'arn:aws:elasticloadbalancing:us-east-1:972620357255:targetgroup/boundary-test-controller-seagull/dc69529c131d9d82' is not a valid target group ARN
        status code: 400, request id: e3f4c305-726d-456b-8b6a-745fe4617a0b

Error: NotFoundException: Key 'arn:aws:kms:ap-southeast-2:972620357255:key/1bb785ef-9ae0-4b35-92b3-6c3e9da0637f' does not exist

Is there another way to force destruction of all the resources that were created?
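
A hedged workaround sketch, not from this repo: KMS keys are region-scoped, so once the AWS provider (or the recovery_kms_hcl block) points at a different region than the one the stack was created in, the refresh fails to find the keys. Pinning the provider region back to the original one should let the destroy proceed:

provider "aws" {
  # Must match the region the stack was originally created in (us-east-1 above)
  region = "us-east-1"
}

If some resources were already deleted out of band, terraform state rm module.aws.aws_kms_key.recovery (and the other stale addresses) drops them from state so terraform destroy can finish.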

Destroy error: var.target_ips is set of string with 1 element

When trying to destroy, I receive this error:

│ Error: Invalid for_each argument
│
│   on boundary/hosts.tf line 9, in resource "boundary_host" "backend_servers":
│    9:   for_each        = var.target_ips
│     ├────────────────
│     │ var.target_ips is set of string with 1 element
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how
│ many instances will be created. To work around this, use the -target argument to first apply only the resources that the
│ for_each depends on.

boundary/vars.tf

variable "target_ips" {
  type    = set(string)
  default = []
}

boundary/hosts.tf

resource "boundary_host" "backend_servers" {
  for_each        = var.target_ips
  type            = "static"
  name            = "backend_server_${each.value}"
  description     = "Backend server #${each.value}"
  address         = each.key
  host_catalog_id = boundary_host_catalog.backend_servers.id
}

Do I need to configure something before destroy?
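
One hedged workaround, following the hint in the error message (module names assumed from this repo's layout): destroy the Boundary configuration in a separate, targeted step first, so that var.target_ips is still known from state while the boundary_host instances are evaluated:

terraform destroy -target module.boundary
terraform destroy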

Azure - Unsupported argument

Running terraform apply -target module.azure, the deployment shows the following errors:

│ Error: Unsupported argument

│ on boundary\targets.tf line 8, in resource "boundary_target" "backend_servers_ssh":
│ 8: host_set_ids = [

│ An argument named "host_set_ids" is not expected here.


│ Error: Unsupported argument

│ on boundary\targets.tf line 20, in resource "boundary_target" "backend_servers_website":
│ 20: host_set_ids = [

│ An argument named "host_set_ids" is not expected here.
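
(This looks like the same host_set_ids to host_source_ids provider rename discussed under "Azure deployment needs updating" below, where a fix sketch is included.)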

Unable to connect to targets from other machines

It seems you are only able to connect to the target instances from the machine that spun up Boundary. I have tried to connect from a different machine and cannot reach targets; this machine has Boundary installed, can successfully authenticate via the CLI, and can read targets, just not connect.

I have only tried SSH via the CLI, but I suspect RDP is also unavailable.

terraform plan failure using Azure deployment

Getting the following error from terraform plan:

│ Error: no suitable auth method information found

│ with module.boundary.provider["registry.terraform.io/hashicorp/boundary"],
│ on boundary/main.tf line 10, in provider "boundary":
│ 10: provider "boundary" {
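
The Boundary provider needs credentials before it can plan anything. A minimal sketch of the two common options (addresses and IDs are placeholders, not values from this repo):

provider "boundary" {
  addr             = "http://<controller-address>:9200"
  recovery_kms_hcl = <<EOT
  kms "aead" {
    purpose   = "recovery"
    aead_type = "aes-gcm"
    key       = "<base64 recovery key from the controller config>"
    key_id    = "global_recovery"
  }
  EOT
}

or, with password authentication:

provider "boundary" {
  addr                            = "http://<controller-address>:9200"
  auth_method_id                  = "ampw_1234567890"
  password_auth_method_login_name = "admin"
  password_auth_method_password   = "password"
}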

Healthcheck `unhealthy` for boundary container

The healthcheck for the boundary container is returning unhealthy because curl is not installed in the container.

This affects versions 0.1.1 and 0.1.2.

It returns healthy after running: docker-compose exec -u root boundary /bin/sh -c "apk add --no-cache curl"

variable boundary_bin is not declared

While executing the command from the guide:

terraform apply -target module.aws -var boundary_bin=/usr/bin/boundary

the result is:

Error: Value for undeclared variable

A variable named "boundary_bin" was assigned on the command line, but the root
module does not declare a variable of that name. To use this value, add a
"variable" block to the configuration

I guess users are required to create a variables.tf locally and declare boundary_bin, as suggested.
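
A minimal sketch of such a declaration (the description text is an assumption about intent):

variable "boundary_bin" {
  description = "Path to the boundary binary on the machine running Terraform"
  type        = string
  default     = "/usr/bin/boundary"
}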

Azure deployment needs updating.

It looks like there have been significant changes to both the boundary provider and the azurerm provider. When running terraform validate in the deployment/azure directory, it returns the following errors and warnings:

Warning: Argument is deprecated

  with module.azure.azurerm_lb_rule.controller,
  on azure/lb.tf line 74, in resource "azurerm_lb_rule" "controller":
  74:   backend_address_pool_id        = azurerm_lb_backend_address_pool.pools["controller"].id

This property has been deprecated by `backend_address_pool_ids` and will be removed in the next major version of the provider

(and 7 more similar warnings elsewhere)

Warning: Deprecated Resource

  with module.boundary.boundary_host_catalog.backend_servers,
  on boundary/hosts.tf line 1, in resource "boundary_host_catalog" "backend_servers":
   1: resource "boundary_host_catalog" "backend_servers" {

Deprecated: use `resource_host_catalog_static` instead.

(and 2 more similar warnings elsewhere)

Error: Unsupported argument

  on boundary/targets.tf line 8, in resource "boundary_target" "backend_servers_ssh":
   8:   host_set_ids = [

An argument named "host_set_ids" is not expected here.

Error: Unsupported argument

  on boundary/targets.tf line 20, in resource "boundary_target" "backend_servers_website":
  20:   host_set_ids = [

An argument named "host_set_ids" is not expected here.

Perhaps host_set_ids is really supposed to be host_source_ids?

Thanks
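
For reference, a hedged sketch of that rename; everything except the argument name is assumed from this repo's boundary/targets.tf and may differ:

resource "boundary_target" "backend_servers_ssh" {
  type         = "tcp"
  name         = "ssh"
  scope_id     = boundary_scope.core_infra.id
  default_port = "22"
  # Newer provider versions expect host_source_ids
  # where older ones used host_set_ids:
  host_source_ids = [
    boundary_host_set.backend_servers_ssh.id
  ]
}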

Cannot log into the Web UI, but the CLI works

Ciao HashiCorp team!

I am testing Boundary. I followed the AWS architecture and executed the Boundary setup that provisioned users/roles/auth...

I was able to log in with a backend member (jim:foofoofoo) via the Boundary CLI, but I cannot log into the Web UI.

$ boundary authenticate password -addr https://boundary.example.com  -auth-method-id ampw_HhjcPlbbKu -login-name jim -password foofoofoo
Authentication information:
  Account ID:      apw_BpptvO4zWs
  Auth Method ID:  ampw_HhjcPlbbKu
  Expiration Time: Fri, 19 Mar 2021 13:35:58 CET
  Token:           thetokengoeshere
  User ID:         u_20dX40C4h1

But I cannot log into the web UI.

What role(s) are required to access the UI?

Thanks!

Questions about ssh connection example

Does the target machine need to have the same key as the controller/worker machines? How do I tell Boundary which key to use when it wants to connect to a target machine? Thanks in advance.
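
For what it's worth: in this reference architecture Boundary only proxies the TCP connection, so the SSH key never passes through Boundary. The client picks it as usual by passing flags through to ssh after --, e.g. (target ID, user, and key path are placeholders):

boundary connect ssh -target-id ttcp_1234567890 -- -l ubuntu -i ~/.ssh/target-key.pem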

Unable to use the GCP reference architecture

When I try to deploy the reference architecture on GCP, the Terraform CLI fails with the following error:
The provider hashicorp/google-beta does not support resource type "google_privateca_certificate_authority_iam_policy".

It seems that pinning the provider to version 3.70.0 fixes the issue.
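
A hedged pin matching what worked above (where exactly the required_providers block lives in the repo is an assumption):

terraform {
  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "= 3.70.0" # version reported above to work
    }
  }
}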

Include Kubernetes Config Path or ENV variable in Kubernetes Demo

Most people will be running these repos as a test in their local environment hoping that everything works in one go.

The kubernetes demo requires the KUBE_CONFIG_PATH environment variable to be set, or the kubernetes.tf file to be modified to include the location of the kubeconfig file, like so:

provider "kubernetes" {
  config_context_cluster = "minikube"
  config_path            = "~/.kube/config"
}

Otherwise the terraform apply will fail with confusing errors as the kubernetes provider tries connecting to 127.0.0.1:80 with no certificates defined.

Please at least add this to the documentation so we newbies have a chance.
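
For comparison, the environment-variable route mentioned above achieves the same thing without editing kubernetes.tf:

export KUBE_CONFIG_PATH=~/.kube/config
terraform apply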

AWS "host_set_ids" is not expected here

When using the AWS Terraform, I get this error:

│ Error: Unsupported argument
│ 
│   on boundary/targets.tf line 8, in resource "boundary_target" "backend_servers_ssh":
│    8:   host_set_ids = [
│ 
│ An argument named "host_set_ids" is not expected here.
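
(This appears to be the same provider rename covered under "Azure deployment needs updating" above; the host_source_ids sketch there should apply to the AWS boundary/targets.tf as well.)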

Readme clarification on login scope

I would suggest adding a note to make sure the selected scope is "primary" when logging in with the Terraform-generated usernames.

Also note that the scope ID is in the URL for the UI portion.

Happy to make the PR if deemed helpful.

AWS - connection refused error while running terraform apply

I have followed the steps in the README file for deploying to AWS. After performing the last step, terraform apply, I encountered the error below. I tried to debug and resolve it but was unable to find a solution. Please help.

│ Error: error calling read scope: error performing client request during Read call: Get http://DNS:9200/v1/scopes/global: dial tcp <IP_ADDRESS>:9200: connect: connection refused

│ with module.boundary.boundary_scope.global,
│ on boundary/scopes.tf line 1, in resource "boundary_scope" "global":
│ 1: resource "boundary_scope" "global" {

Fix jq string on Kubernetes Deployment instructions

The response syntax has changed, so the jq string needs to be updated in the Kubernetes deployment README.

For the string
boundary auth-methods list -scope-id $(boundary scopes list -format json | jq -c ".[] | select(.name | contains(\"primary\")) | .[\"id\"]" | tr -d '"')

Change it to
boundary auth-methods list -scope-id $(boundary scopes list -format json | jq -c ".items[] | select(.name | contains(\"primary\")) | .[\"id\"]" | tr -d '"')

Furthermore, in order to run this command, you need to make sure the Docker image is a later version (such as 0.5.0), since the password syntax is different:
boundary authenticate password -login-name=jeff -password=foofoofoo -auth-method-id=ampw_1234567890

AWS Deployment doesn't use HTTPS -- Provide warning in the documentation

There are AWS resources to create a self-signed ACM certificate, but given that there is no domain name set up and the ACM certificate isn't used, connections to the reference architecture can't use HTTPS (despite the many example links that do use https), e.g.:

The admin console will be available at https://boundary-test-controller-<random_name>-<random_sha>.elb.us-east-1.amazonaws.com:9200

Please update the documentation to warn users that they need to set up HTTPS on their own, or advise using HTTP if just testing the architecture.

Random pet can generate a name longer than 32 characters, causing an aws_lb error.

I ran terraform apply today and perhaps got unlucky with the long pet name 'hedgehog':

Error: "name" cannot be longer than 32 characters: "boundary-test-controller-hedgehog"

  on aws/lb.tf line 2, in resource "aws_lb" "controller":
   2:   name               = "${var.tag}-controller-${random_pet.test.id}"

Error: "name" cannot be longer than 32 characters

  on aws/lb.tf line 13, in resource "aws_lb_target_group" "controller":
  13:   name     = "${var.tag}-controller-${random_pet.test.id}"

Terraform also refused to destroy after this, returning:

module.aws.aws_instance.controller[0]: Refreshing state... [id=i-00cd48fdd6b683885]
module.aws.aws_instance.worker[0]: Refreshing state... [id=i-0cafe0b8f5f331d92]

Error: "name" cannot be longer than 32 characters: "boundary-test-controller-hedgehog"

  on aws/lb.tf line 2, in resource "aws_lb" "controller":
   2:   name               = "${var.tag}-controller-${random_pet.test.id}"



Error: "name" cannot be longer than 32 characters

  on aws/lb.tf line 13, in resource "aws_lb_target_group" "controller":
  13:   name     = "${var.tag}-controller-${random_pet.test.id}"
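
A hedged fix sketch: truncate the computed names to AWS's 32-character limit with substr, leaving the rest of aws/lb.tf unchanged:

  name = substr("${var.tag}-controller-${random_pet.test.id}", 0, 32)

Alternatively, random_pet accepts a length argument (the default is 2 words), which keeps names shorter but doesn't guarantee the limit.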

error reading wrappers from "recovery_kms_hcl": Error configuring kms: error fetching AWS KMS wrapping key information: NotFoundException

While following this step from the deployment README.md:

Configure boundary using terraform apply (without the target flag); this will configure boundary per boundary/main.tf

Getting
Error: error reading wrappers from "recovery_kms_hcl": Error configuring kms: error fetching AWS KMS wrapping key information: NotFoundException: Key 'arn:aws:kms:eu-west-1:*****************:key/0b2ad402-*****-46ac-****-**************' does not exist

However, from the previous terraform apply of module.aws I could see the key was created with the given ID, but I can't find the same key in the portal:
module.aws.aws_kms_key.recovery: Creation complete after 5s [id=*****-46ac-****-**************]
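
(KMS keys are region-scoped, so this may be the same mismatch as in the hard-coded region issue above: the key exists, but the provider, the recovery_kms_hcl block, or the console view is pointed at a different region.)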

Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol status code: 400

The load balancer module's lb.tf defines the LB target group with stickiness of type lb_cookie at line 20:

  stickiness {
    enabled = false
    type    = "lb_cookie"
  }

With
terraform apply -target module.aws -var boundary_bin=/usr/bin/boundary

it results in an error:

Error: Error modifying Target Group Attributes: InvalidConfigurationRequest: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol
        status code: 400, request id: 182d1591-765b-********************************

Similar issues have been reported elsewhere.

As mentioned in those issues, the workaround is to declare stickiness of type source_ip.
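
The patched block would then look like this (a sketch of the workaround described above, not the repo's current code):

  stickiness {
    enabled = false
    type    = "source_ip"
  }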

Make dev fails with go: extracting boundary/@v/v0.1.0.zip: malformed file path "website/pages/api-docs/[[...page]].jsx"

Make dev fails with: go: extracting github.com/hashicorp/boundary v0.1.0
-> unzip /home/ubuntu/go/pkg/mod/cache/download/github.com/hashicorp/boundary/@v/v0.1.0.zip: malformed file path "website/pages/api-docs/[[...page]].jsx": double dot
build command-line-arguments: cannot load github.com/hashicorp/boundary/api: unzip /home/ubuntu/go/pkg/mod/cache/download/github.com/hashicorp/boundary/@v/v0.1.0.zip: malformed file path "website/pages/api-docs/[[...page]].jsx": double dot
make: *** [Makefile:36: dev] Error 1

~/terraform-provider-boundary$ go version
go version go1.13.8 linux/amd64
~/terraform-provider-boundary$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"

Uncertain what to specify in public_cluster_addr for k8s running in private subnet

Hi, I'm looking at boundary-reference-architecture/deployment/kube/kubernetes/boundary_config.tf, and I'm curious what to specify for public_cluster_addr in the controller configuration, and for address, controllers, and public_addr in the worker configuration.

The configmap.yaml I'm using is below. I'm running my Kubernetes cluster in an AWS private subnet, and thus have no idea what to specify for public_cluster_addr on the controller. Also, I believe the example runs the controllers and workers in the same pod, so I thought the worker's address, controllers, and public_addr should be localhost. Is that correct? (By the way, I am using a Helm chart I made to implement the /kubernetes part, as the example is in Terraform. I prefer Helm.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: boundary-config
data:
  boundary.hcl: |
    disable_mlock = true
    controller {
        name = "kubernetes-controller"
        description = "A controller for a kubernetes demo!"
        database {
            url = "env://BOUNDARY_PG_URL"
        }
        public_cluster_addr = "boundary-controller.boundary.svc.cluster.local"
    }
    worker {
        name = "kubernete-worker"
        description = "A worker for a kubernetes demo"
        address = "localhost"
        controllers = ["localhost"]
        public_addr = "localhost"
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "api"
        tls_disable = true
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "cluster"
        tls_disable = true
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "proxy"
        tls_disable = true
    }
    kms "aead" {
        purpose = "root"
        aead_type = "aes-gcm"
        key = "sP1fnF5Xz85RrXyELHFeZg9Ad2qt4Z4bgNHVGtD6ung="
        key_id = "global_root"
    }
    kms "aead" {
        purpose = "worker-auth"
        aead_type = "aes-gcm"
        key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
        key_id = "global_worker-auth"
    }
    kms "aead" {
        purpose = "recovery"
        aead_type = "aes-gcm"
        key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
        key_id = "global_recovery"
    }

This configuration seems to be wrong, as I'm getting the connection error below when I try to access Redis using the example:

❯ boundary connect -exec redis-cli -target-id ttcp_er1Yy3ROiI -- -h http://boundary.dev.mydomain.cloud -p 80
Could not connect to Redis at http://boundary.dev.mydomain.cloud:80: nodename nor servname provided, or not known
not connected>
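
In case it helps: the worker's public_addr is the address clients must be able to reach on the proxy port (9202 by default), while controllers is what the worker itself dials on the cluster port (9201). With controller and worker in one pod, localhost is fine for the latter, but public_addr = "localhost" cannot work for an external client. A hedged sketch (the hostname is a placeholder for whatever Service or load balancer exposes 9202):

    worker {
        name        = "kubernetes-worker"
        controllers = ["localhost"]  # controller runs in the same pod
        # must be reachable by *clients*, e.g. the external hostname of a
        # LoadBalancer/NodePort Service for port 9202:
        public_addr = "boundary-worker.dev.mydomain.cloud"
    }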

Error parsing KMS HCL: At 6:19: Unknown token: 6:19 IDENT file

I am trying to test the Boundary high-availability example for GCP. I verified that the infrastructure deploys and works.

But I have issues with the Boundary configuration; I tried passing the key name and the key path directly, but I still get the same error:

provider "boundary" {
  addr             = var.url
  recovery_kms_hcl = <<EOT
  kms "gcpckms" {
    purpose     = "recovery"
    key_ring    = "${var.key_ring}"
    crypto_key  = ${var.recovery_key}"
    project     = "${var.gcp_project}"
    credentials = file("/PATH to .json file")
    region      = "global"
  }
  EOT
}

ERROR

│ Error: error reading wrappers from "recovery_kms_hcl": Error parsing KMS HCL: At 6:19: Unknown token: 6:19 IDENT file
│ 
│   with provider["registry.terraform.io/hashicorp/boundary"],
│   on main.tf line 10, in provider "boundary":
│   10: provider "boundary" {
│ 
VERSION

Terraform v1.0.0
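
The parse error points at the credentials line: the recovery_kms_hcl string is parsed by Boundary's KMS HCL parser, not by Terraform, so Terraform functions like file() are not valid inside it. The gcpckms block takes a plain path string for credentials. A sketch with that call removed (this also fixes the missing opening quote on crypto_key in the original):

provider "boundary" {
  addr             = var.url
  recovery_kms_hcl = <<EOT
  kms "gcpckms" {
    purpose     = "recovery"
    key_ring    = "${var.key_ring}"
    crypto_key  = "${var.recovery_key}"
    project     = "${var.gcp_project}"
    credentials = "/path/to/credentials.json"
    region      = "global"
  }
  EOT
}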

run all -> ERROR: Version in "./docker-compose.yml" is unsupported.

/boundary-reference-architecture/deployment/docker$ ./run all
~/boundary-reference-architecture/deployment/docker/compose ~/boundary-reference-
ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
docker version
Client:
Version: 19.03.8
API version: 1.40
Go version: go1.13.8
Git commit: afacb8b7f0
Built: Wed Oct 14 19:43:43 2020
OS/Arch: linux/amd64
Experimental: false
...

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"

$ docker-compose -v
docker-compose version 1.25.0, build unknown
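
This usually means the docker-compose binary is older than the schema version the compose file declares; the Ubuntu 20.04 package is quite stale. One hedged fix is upgrading docker-compose outside apt, e.g.:

sudo pip3 install --upgrade docker-compose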

Getting an error while following the Kubernetes deployment document

Error: error setting host sources on target: {"kind":"Unimplemented","message":"Method Not Allowed"}

│ with module.boundary.boundary_target.redis,
│ on boundary/targets.tf line 1, in resource "boundary_target" "redis":
│ 1: resource "boundary_target" "redis" {



│ Error: error setting host sources on target: {"kind":"Unimplemented","message":"Method Not Allowed"}

│ with module.boundary.boundary_target.postgres,
│ on boundary/targets.tf line 14, in resource "boundary_target" "postgres":
│ 14: resource "boundary_target" "postgres" {
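
(A hedged guess: "Method Not Allowed" on setting host sources suggests the Boundary provider is newer than the Boundary server image, since newer providers call the host-sources API that older servers don't implement. Aligning the provider and image versions may resolve it.)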

EKS deployment failed: Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK

Hello, I tried to deploy the Boundary controller to EKS (Kubernetes version 1.22), but the container does not have enough privileges to chown the /boundary directory:

chown: /boundary/..2022_07_29_07_35_20.877353490/controller.hcl: Read-only file system
chown: /boundary/..2022_07_29_07_35_20.877353490: Read-only file system
chown: /boundary/..2022_07_29_07_35_20.877353490: Read-only file system
chown: /boundary/controller.hcl: Read-only file system
chown: /boundary/..data: Read-only file system
chown: /boundary: Read-only file system
chown: /boundary: Read-only file system
Could not chown /boundary (may not have appropriate permissions)
Couldn't start Boundary with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK

The Boundary Docker image: 0.9

I modified the example resources a bit.
controller.tf:

resource "kubernetes_namespace" "boundary" {
  metadata {
    name = var.namespace
  }
}

resource "kubernetes_secret" "boundary_url" {
  depends_on = [
    kubernetes_namespace.boundary,
  ]
  metadata {
    name = "boundary-rds-url"
    labels = var.controller_labels
    namespace = var.namespace
  }
  data = {
    POSTGRESS_URL="postgresql://${var.database_username}:${var.database_password}@${var.database_address}:${var.database_port}/${var.database_name}"
  }
}

resource "kubernetes_deployment" "boundary" {
  depends_on = [
    kubernetes_namespace.boundary,
    kubernetes_secret.boundary_url
  ]
  metadata {
    name   = var.controller_deployment
    labels = var.controller_labels
    namespace = var.namespace
  }

  spec {
    replicas = 1

    selector {
      match_labels = var.controller_labels
    }

    template {
      metadata {
        labels = var.controller_labels
      }

      spec {
        volume {
          name = "controller-config"

          config_map {
            name = "controller-config"
          }
        }

        container {
          image = "hashicorp/boundary:${var.image_ver}"
          name  = "controller"

          image_pull_policy = var.image_pull_policy
          volume_mount {
            name       = "controller-config"
            mount_path = "/boundary"
            read_only  = false
          }

          args = [
            "server",
            "-config",
            "/boundary/controller.hcl"
          ]

          env {
            name  = "POSTGRESS_URL"
            value_from  {
              secret_key_ref {
                name = "boundary-rds-url"
                key  = "POSTGRESS_URL"
              }
            }
          }

          env {
            name  = "HOSTNAME"
            value = "controller"
          }

          port {
            container_port = 9200
          }
          port {
            container_port = 9201
          }
          port {
            container_port = 9202
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 9200
            }
          }

          readiness_probe {
            http_get {
              path = "/"
              port = 9200
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_config_map" "controller_config" {
  depends_on = [
    kubernetes_namespace.boundary,
  ]

  metadata {
    name = "controller-config"
    labels = var.controller_labels
    namespace = var.namespace
  }
  
  data = {
    "controller.hcl" = <<EOF

disable_mlok = true
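# NOTE: "disable_mlok" above is likely a typo for "disable_mlock"; without it
# Boundary attempts mlock, which triggers the IPC_LOCK warning in the logs.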

controller {
  name = "scylla-cloud-boundary"
  description = "Boundary controller" 
  database {
    url = "env://POSTGRESS_URL"
  }
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "api"
  tls_disable = true
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "cluster"
  tls_disable = true
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "proxy"
  tls_disable = true
}

kms "awskms" {
  purpose    = "root"
  kms_key_id = aws_kms_alias.root.kms_id
}

kms "awskms" {
  purpose = "worker-auth"
  kms_key_id = aws_kms_alias.worker_auth.kms_id
}

kms "awskms" {
  purpose = "recovery"
  kms_key_id = aws_kms_alias.recovery.kms_id
}
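# NOTE: the bare aws_kms_alias references in the three kms blocks above sit
# inside a heredoc, so Terraform does not interpolate them; Boundary receives
# the literal text. They would need "${aws_kms_alias.root.target_key_id}"-style
# interpolation (aws_kms_alias has no kms_id attribute).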
EOF
  }

}

resource "kubernetes_service" "boundary_controller" {
  depends_on = [
    kubernetes_namespace.boundary,
  ]
  metadata {
    name   = var.controller_deployment
    labels = var.controller_labels
    namespace = var.namespace
  }

  spec {
    type = "ClusterIP"
    selector = var.controller_labels

    port {
      name        = "api"
      port        = 9200
      target_port = 9200
    }
    port {
      name        = "cluster"
      port        = 9201
      target_port = 9201
    }
    port {
      name        = "data"
      port        = 9202
      target_port = 9202
    }
  }
}
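
Two hedged notes on the log itself: ConfigMap volumes are mounted read-only, so the image's chown of /boundary can never succeed there, and the IPC_LOCK warning can be silenced either by fixing disable_mlock in the config above or by granting the capability to the container. A sketch of the latter, placed inside the container block (Terraform kubernetes provider syntax):

          security_context {
            capabilities {
              add = ["IPC_LOCK"]
            }
          }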
