
terraform-google-vault's Introduction

Vault on GCE Terraform Module

Modular deployment of Vault on Google Compute Engine.

This module is versioned and released on the Terraform module registry. Look for the tag that corresponds to your version for the correct documentation.

  • Vault HA - Vault is configured to run in high availability mode with Google Cloud Storage. Choose a vault_min_num_servers greater than 0 to enable HA mode.

  • Production hardened - Vault is deployed according to applicable parts of the production hardening guide.

    • Traffic is encrypted with end-to-end TLS using self-signed certificates which can be generated or supplied (see Managing TLS below).

    • Vault is the main process on the VMs, and Vault runs as an unprivileged user (vault:vault) on the VMs under systemd.

    • Outgoing Vault traffic goes through a restricted NAT gateway using dedicated IPs for logging and monitoring. You can further restrict outbound access with additional firewall rules.

    • The Vault nodes are not publicly accessible. They do have SSH enabled, but require a bastion host on their dedicated network to access. You can disable SSH access entirely by setting ssh_allowed_cidrs to the empty list.

    • Swap is disabled (the default on all GCE VMs), reducing the risk that in-memory data will be paged to disk.

    • Core dumps are disabled.

    The following values do not represent Vault's best practices and you may wish to change their defaults:

    • Vault is publicly accessible from any IP through the load balancer. To limit the list of source IPs that can communicate with Vault nodes, set vault_allowed_cidrs to a list of CIDR blocks.

    • Auditing is not enabled by default, because an initial bootstrap requires you to initialize the Vault. Everything is pre-configured for when you're ready to enable audit logging, but it cannot be enabled before Vault is initialized.

  • Auto-unseal - Vault is automatically unsealed using the built-in Vault 1.0+ auto-unsealing mechanisms for Google Cloud KMS. The Vault servers are not automatically initialized, providing a clear separation.

  • Isolation - The Vault nodes are not exposed publicly. They live in a private subnet with a dedicated NAT gateway.

  • Audit logging - The system is set up to accept Vault audit logs with a single configuration command. Vault audit logs are not enabled by default because you have to initialize the system first.

Usage

  1. Add the module definition to your Terraform configurations:

    module "vault" {
      source         = "terraform-google-modules/vault/google"
      project_id     = var.project_id
      region         = var.region
      kms_keyring    = var.kms_keyring
      kms_crypto_key = var.kms_crypto_key
    }

    Make sure you are using version pinning to avoid unexpected changes when the module is updated.
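
    For example, a pinned variant of the block above might look like this (the version constraint shown is only illustrative; pin to the release you have tested):

    module "vault" {
      source         = "terraform-google-modules/vault/google"
      version        = "~> 3.0"
      project_id     = var.project_id
      region         = var.region
      kms_keyring    = var.kms_keyring
      kms_crypto_key = var.kms_crypto_key
    }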

  2. Execute Terraform:

    $ terraform apply
    
  3. Configure your local Vault binary to communicate with the Vault server:

    $ export VAULT_ADDR="$(terraform output vault_addr)"
    $ export VAULT_CACERT="$(pwd)/ca.crt"
    
  4. Wait for Vault to start. You can use the following script, or simply wait about two minutes:

    (count=0; while [[ $count -lt 60 && "$(vault status 2>&1)" =~ "connection refused" ]]; do ((count=count+1)) ; echo "$(date) $count: Waiting for Vault to start..." ; sleep 2; done && [[ $count -lt 60 ]])
    [[ $? -ne 0 ]] && echo "ERROR: Error waiting for Vault to start" && exit 1
    
  5. Initialize the Vault cluster, generating the initial root token and unseal keys:

    $ vault operator init \
        -recovery-shares 5 \
        -recovery-threshold 3
    

    The Vault servers will automatically unseal using the Google Cloud KMS key created earlier. The recovery shares are to be given to operators to unseal the Vault nodes in case Cloud KMS is unavailable in a disaster recovery. They can also be used to generate a new root token. Distribute these keys to trusted people on your team (like people who will be on-call and responsible for maintaining Vault).

    The output will look like this:

    Recovery Key 1: 2EWrT/YVlYE54EwvKaH3JzOGmq8AVJJkVFQDni8MYC+T
    Recovery Key 2: 6WCNGKN+dU43APJuGEVvIG6bAHA6tsth5ZR8/bJWi60/
    Recovery Key 3: XC1vSb/GfH35zTK4UkAR7okJWaRjnGrP75aQX0xByKfV
    Recovery Key 4: ZSvu2hWWmd4ECEIHj/FShxxCw7Wd2KbkLRsDm30f2tu3
    Recovery Key 5: T4VBvwRv0pkQLeTC/98JJ+Rj/Zn75bLfmAaFLDQihL9Y
    
    Initial Root Token: s.kn11NdBhLig2VJ0botgrwq9u
    

    Save this initial root token and do not clear your history. You will need this token to continue the tutorial.
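
    With the cluster initialized and auto-unsealed, you can authenticate your local client using the initial root token. A minimal sketch (vault login prompts for the token; vault status should then report the cluster as initialized and unsealed):

    $ vault login
    $ vault status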

Managing TLS

If, like many organizations, you manage your own self-signed TLS certificates, you likely will not want them managed by Terraform. Having Terraform manage the certificates also poses a security risk, since the private keys are stored in plaintext in the terraform.tfstate file. To use your own certificates, set manage_tls = false and have your certificates prepared before you apply this module. An example instantiation would look like this:

module "vault" {
  source  = "terraform-google-modules/vault/google"
  ...

  # Manage our own TLS Certs so the private keys don't
  # end up in Terraform state
  manage_tls        = false
  vault_tls_bucket  = google_storage_bucket.vault_tls.name
  vault_tls_kms_key = google_kms_crypto_key.vault_tls.self_link

  # These are the default values shown here for clarity
  vault_ca_cert_filename  = "ca.crt"
  vault_tls_cert_filename = "vault.crt"
  vault_tls_key_filename  = "vault.key.enc"
}

To store your keys, you'll need to create, at a minimum, the three files shown above at the root of the TLS bucket specified by vault_tls_bucket; their filenames and paths can be overridden using the vault_*_filename variables shown above.

  • CA Certificate. This file should be the PEM-formatted CA certificate from which the Vault server certificate is issued.
  • Vault Server Certificate. This file should correspond to the Vault private key, which is also stored on the Vault hosts to terminate TLS connections.
  • Vault Private Key. As the .enc extension denotes, this file is encrypted. When a Vault host spins up, it fetches the certificates and the key; for the key, it base64-decodes the file and then decrypts it with the specified TLS KMS key before storing the result on the filesystem.

Assuming these files have been generated locally by OpenSSL or another CA, you can upload them with the following commands:

# KMS_KEY must be the fully-qualified Cloud KMS key resource name
# (otherwise also pass --location and --keyring)
gcloud kms encrypt \
  --project=${PROJECT} \
  --key=${KMS_KEY} \
  --plaintext-file=vault.key \
  --ciphertext-file=- | base64 > "vault.key.enc"

for file in vault.key.enc ca.crt vault.crt; do
  gsutil cp $file gs://$TLS_BUCKET/$file
done

Inputs

Name Description Type Default Required
allow_public_egress Whether to create a NAT for external egress. If false, you must also specify an http_proxy to download required executables including Vault, Fluentd and Stackdriver bool true no
allow_ssh Allow external access to ssh port 22 on the Vault VMs. It is a best practice to set this to false, however it is true by default for the sake of backwards compatibility. bool true no
domain The domain name that will be set in the api_addr. Load Balancer IP used by default string "" no
host_project_id The project id of the shared VPC host project, when deploying into a shared VPC string "" no
http_proxy HTTP proxy for downloading agents and vault executable on startup. Only necessary if allow_public_egress is false. This is only used on the first startup of the Vault cluster and will NOT set the global HTTP_PROXY environment variable. i.e. If you configure Vault to manage credentials for other services, default HTTP routes will be taken. string "" no
kms_crypto_key The name of the Cloud KMS Key used for encrypting initial TLS certificates and for configuring Vault auto-unseal. Terraform will create this key. string "vault-init" no
kms_keyring Name of the Cloud KMS KeyRing for asset encryption. Terraform will create this keyring. string "vault" no
kms_protection_level The protection level to use for the KMS crypto key. string "software" no
load_balancing_scheme Options are INTERNAL or EXTERNAL. If EXTERNAL, the forwarding rule will be of type EXTERNAL and a public IP will be created. If INTERNAL the type will be INTERNAL and a random RFC 1918 private IP will be assigned string "EXTERNAL" no
manage_tls Set to false if you'd like to manage and upload your own TLS files. See Managing TLS for more details bool true no
network The self link of the VPC network for Vault. By default, one will be created for you. string "" no
network_subnet_cidr_range CIDR block range for the subnet. string "10.127.0.0/20" no
project_id ID of the project in which to create resources and add IAM bindings. string n/a yes
project_services List of services to enable on the project where Vault will run. These services are required in order for this Vault setup to function. list(string)
[
"cloudkms.googleapis.com",
"cloudresourcemanager.googleapis.com",
"compute.googleapis.com",
"iam.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com"
]
no
region Region in which to create resources. string "us-east4" no
service_account_name Name of the Vault service account. string "vault-admin" no
service_account_project_additional_iam_roles List of custom IAM roles to add to the project. list(string) [] no
service_account_project_iam_roles List of IAM roles for the Vault admin service account to function. If you need to add additional roles, update service_account_project_additional_iam_roles instead. list(string)
[
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
"roles/monitoring.viewer"
]
no
service_account_storage_bucket_iam_roles List of IAM roles for the Vault admin service account to have on the storage bucket. list(string)
[
"roles/storage.legacyBucketReader",
"roles/storage.objectAdmin"
]
no
service_label The service label to set on the internal load balancer. If not empty, this enables internal DNS for internal load balancers. By default, the service label is disabled. This has no effect on external load balancers. string null no
ssh_allowed_cidrs List of CIDR blocks to allow access to SSH into nodes. list(string)
[
"0.0.0.0/0"
]
no
storage_bucket_class Type of data storage to use. If you change this value, you will also need to choose a storage_bucket_location which matches this parameter type string "MULTI_REGIONAL" no
storage_bucket_enable_versioning Set to true to enable object versioning in the GCS bucket. You may want to define lifecycle rules if you want a finite number of old versions. string false no
storage_bucket_force_destroy Set to true to force deletion of backend bucket on terraform destroy string false no
storage_bucket_lifecycle_rules Vault storage lifecycle rules
list(object({
action = map(object({
type = string,
storage_class = string
})),
condition = map(object({
age = number,
created_before = string,
with_state = string,
is_live = string,
matches_storage_class = string,
num_newer_versions = number
}))
}))
[] no
storage_bucket_location Location for the Google Cloud Storage bucket in which Vault data will be stored. string "us" no
storage_bucket_name Name of the Google Cloud Storage bucket for the Vault backend storage. This must be globally unique across all of GCP. If left as the empty string, this will default to: '<project_id>-vault-data'. string "" no
subnet The self link of the VPC subnetwork for Vault. By default, one will be created for you. string "" no
tls_ca_subject The subject block for the root CA certificate.
object({
common_name = string,
organization = string,
organizational_unit = string,
street_address = list(string),
locality = string,
province = string,
country = string,
postal_code = string,
})
{
"common_name": "Example Inc. Root",
"country": "US",
"locality": "The Intranet",
"organization": "Example, Inc",
"organizational_unit": "Department of Certificate Authority",
"postal_code": "95559-1227",
"province": "CA",
"street_address": [
"123 Example Street"
]
}
no
tls_cn The TLS Common Name for the TLS certificates string "vault.example.net" no
tls_dns_names List of DNS names added to the Vault server self-signed certificate list(string)
[
"vault.example.net"
]
no
tls_ips List of IP addresses added to the Vault server self-signed certificate list(string)
[
"127.0.0.1"
]
no
tls_ou The TLS Organizational Unit for the TLS certificate string "IT Security Operations" no
tls_save_ca_to_disk Save the CA public certificate on the local filesystem. The CA is always stored in GCS, but this option also saves it to the filesystem. bool true no
tls_save_ca_to_disk_filename The filename or full path to save the CA public certificate on the local filesystem. Only applicable if tls_save_ca_to_disk is set to true. string "ca.crt" no
user_startup_script Additional user-provided code injected after Vault is setup string "" no
user_vault_config Additional user-provided vault config added at the end of standard vault config string "" no
vault_allowed_cidrs List of CIDR blocks to allow access to the Vault nodes. Since the load balancer is a pass-through load balancer, this must also include all IPs from which you will access Vault. The default is unrestricted (any IP address can access Vault). It is recommended that you reduce this to a smaller list. list(string)
[
"0.0.0.0/0"
]
no
vault_args Additional command line arguments passed to Vault server string "" no
vault_ca_cert_filename GCS object path within the vault_tls_bucket. This is the root CA certificate. string "ca.crt" no
vault_instance_base_image Base operating system image in which to install Vault. This must be a Debian-based system at the moment due to how the metadata startup script runs. string "debian-cloud/debian-10" no
vault_instance_labels Labels to apply to the Vault instances. map(string) {} no
vault_instance_metadata Additional metadata to add to the Vault instances. map(string) {} no
vault_instance_tags Additional tags to apply to the instances. Note 'allow-ssh' and 'allow-vault' will be present on all instances. list(string) [] no
vault_log_level Log level to run Vault in. See the Vault documentation for valid values. string "warn" no
vault_machine_type Machine type to use for Vault instances. string "e2-standard-2" no
vault_max_num_servers Maximum number of Vault server nodes to run at one time. The group will not autoscale beyond this number. string "7" no
vault_min_num_servers Minimum number of Vault server nodes in the autoscaling group. The group will not have less than this number of nodes. string "1" no
vault_port Numeric port on which to run and expose Vault. string "8200" no
vault_proxy_port Port on which to expose Vault's health status endpoint over HTTP on /. This is required for the health checks to verify Vault's status when using an external load balancer. Only the health status endpoint is exposed, and it is only accessible from Google's load balancer addresses. string "58200" no
vault_tls_bucket GCS Bucket override where Vault will expect TLS certificates are stored. string "" no
vault_tls_cert_filename GCS object path within the vault_tls_bucket. This is the vault server certificate. string "vault.crt" no
vault_tls_disable_client_certs Use client certificates when provided. You may want to disable this if users will not be authenticating to Vault with client certificates. string false no
vault_tls_key_filename Encrypted and base64 encoded GCS object path within the vault_tls_bucket. This is the Vault TLS private key. string "vault.key.enc" no
vault_tls_kms_key Fully qualified name of the KMS key, for example, vault_tls_kms_key = "projects/PROJECT_ID/locations/LOCATION/keyRings/KEYRING/cryptoKeys/KEY_NAME". This key should have been used to encrypt the TLS private key if Terraform is not managing TLS. The Vault service account will be granted the Cloud KMS Decrypter role on this key once it is created, so it can decrypt the key material it pulls from the vault_tls_bucket at boot time. This option is required when manage_tls is set to false. string "" no
vault_tls_kms_key_project Project ID where the KMS key is stored. By default, same as project_id string "" no
vault_tls_require_and_verify_client_cert Always use client certificates. You may want to disable this if users will not be authenticating to Vault with client certificates. string false no
vault_ui_enabled Controls whether the Vault UI is enabled and accessible. string true no
vault_update_policy_type Options are OPPORTUNISTIC or PROACTIVE. If PROACTIVE, the instance group manager proactively executes actions in order to bring instances to their target versions string "OPPORTUNISTIC" no
vault_version Version of vault to install. This version must be 1.0+ and must be published on the HashiCorp releases service. string "1.6.0" no

Outputs

Name Description
ca_cert_pem CA certificate used to verify Vault TLS client connections.
ca_key_pem Private key for the CA.
service_account_email Email for the vault-admin service account.
vault_addr Full protocol, address, and port (FQDN) pointing to the Vault load balancer. This is a drop-in value for VAULT_ADDR: export VAULT_ADDR="$(terraform output vault_addr)". You can then continue to use Vault commands as usual.
vault_lb_addr Address of the load balancer without port or protocol information. You probably want to use vault_addr.
vault_lb_port Port where Vault is exposed on the load balancer.
vault_nat_ips The NAT IPs that the Vault nodes will use to communicate with external services.
vault_network The network in which the Vault cluster resides
vault_storage_bucket GCS Bucket Vault is using as a backend/database
vault_subnet The subnetwork in which the Vault cluster resides

Additional permissions

The default installation includes the most minimal set of permissions to run Vault. Certain plugins may require more permissions, which you can grant to the service account using service_account_project_additional_iam_roles:

GCP auth method

The GCP auth method requires the following additional permissions:

roles/iam.serviceAccountKeyAdmin

GCP secrets engine

The GCP secrets engine requires the following additional permissions:

roles/iam.serviceAccountKeyAdmin
roles/iam.serviceAccountAdmin

GCP KMS secrets engine

The GCP KMS secrets engine permissions vary. There are examples in the secrets engine documentation.

Logs

The Vault server logs will automatically appear in Stackdriver under "GCE VM Instance" tagged as "vaultproject.io/server".

The Vault audit logs, once enabled, will appear in Stackdriver under "GCE VM Instance" tagged as "vaultproject.io/audit".
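
A hedged example of enabling an audit device once Vault is initialized and unsealed; the device type and file path below are assumptions, so use whichever destination your logging agent is configured to collect:

$ vault audit enable file file_path=/var/log/vault/audit.log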

Sandboxes & Terraform Cloud

When running in a sandbox such as Terraform Cloud, you need to disable writes to the local filesystem. You can do this by setting the following variable:

# terraform.tfvars
tls_save_ca_to_disk = false

FAQ

  • I see unhealthy Vault nodes in my load balancer pool!

    This is the expected behavior. Only the active Vault node is added to the load balancer to prevent redirect loops. If that node loses leadership, its health check will start failing and a standby node will take its place in the load balancer.

  • Can I connect to the Vault nodes directly?

    Connecting to the Vault nodes directly is not recommended, even from the same network. Always connect through the load balancer. You can alter the load balancer to be an internal-only load balancer if needed.
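
    For example, a hedged sketch using the load_balancing_scheme input documented above (the other inputs mirror the Usage section):

    module "vault" {
      source         = "terraform-google-modules/vault/google"
      project_id     = var.project_id
      region         = var.region
      kms_keyring    = var.kms_keyring
      kms_crypto_key = var.kms_crypto_key

      # Provision an internal (RFC 1918) forwarding rule instead of a public IP
      load_balancing_scheme = "INTERNAL"
    }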

terraform-google-vault's People

Contributors

apeabody, bharathkkb, boeboe, burninmedia, cloud-foundation-bot, danisla, daviey, dcaba, dlouvier, fmsbeekmans, frits-v, g-awmalik, ggprod, guhcampos, jai, jeffmccune, lbordowitz, lucasvianna, mark-00, mindlace, onetwopunch, raj-saxena, release-please[bot], renovate[bot], riptl, sethvargo, symbiont-tony-moulton, vovnguyen, yveoch


terraform-google-vault's Issues

Make the NAT optional

Many users have strict requirements about systems reaching out to the internet. Private Google Access alleviates some of these concerns, but in this module the NAT and external IP are created by default with no way to override them.

To address these concerns, we should:

  • Make the NAT, external IP and router optional using a flag variable, something like allow_public_egress
  • If there is no NAT, adjust the startup script so it does not make external curl calls, possibly by passing an http_proxy to the script (see the sketch below)
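
For reference, the inputs table above now lists allow_public_egress and http_proxy, so a hedged sketch of the requested behaviour would be (the proxy address is a placeholder):

module "vault" {
  source         = "terraform-google-modules/vault/google"
  project_id     = var.project_id
  region         = var.region
  kms_keyring    = var.kms_keyring
  kms_crypto_key = var.kms_crypto_key

  # Skip the NAT/external IP and fetch Vault, Fluentd, etc. through a proxy instead
  allow_public_egress = false
  http_proxy          = "http://proxy.internal:3128"
}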

seal-status: x509: certificate signed by unknown authority

Following solution guide https://cloud.google.com/solutions/using-vault-for-secret-management, but I got the following error when running the vault unseal command.

Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: x509: certificate signed by unknown authority

I made sure these variables were set, but I must be missing something.

export VAULT_ADDR=https://127.0.0.1:8200
export VAULT_CACERT=/etc/vault/vault-server.ca.crt.pem
export VAULT_CLIENT_CERT=/etc/vault/vault-server.crt.pem
export VAULT_CLIENT_KEY=/etc/vault/vault-server.key.pem

Accessing the vault server from another service

I followed the example to set up the Vault server as is. Now I want to expose it to a particular service to store and retrieve secrets. How do I go about it? What configuration should I change?

Thank you in advance

resource "google_project_iam_policy" causes deleting iam policy

{resource "google_project_iam_policy" "vault" { project = "${var.project_id}" policy_data = "${data.google_iam_policy.vault.policy_data}" }

This causes a problem: when you destroy Vault using Terraform, it restores the project IAM policy to the state it was in before the apply.

That means all the IAM policy bindings created between the time Vault was created by Terraform and the time it was destroyed are gone!

Permission problem while vault-admin has Owner permission

Not working!

module.vault.google_service_account_key.vault-admin: google_service_account_key.vault-admin: Error reading Service Account Key "projects/xxxxxxx/serviceAccounts/[email protected]/keys/bcb59907af2222638ccc9d35addccdf7b0d1eb44": googleapi: Error 403: Permission iam.serviceAccountKeys.get is required to perform this operation on service account key projects/xxxxxxx/serviceAccounts/[email protected]/keys/bcb59907af2222638ccc9d35addccdf7b0d1eb44., forbidden

While the vault-admin service account has Owner permissions.

Has anyone got this project working?

Terraform Cloud Gotchas

Hi,

This is more of a documentation issue perhaps?

I think it's useful to note that this module does not work with Terraform Cloud, because it uses the local_file provider. Your run on TF Cloud will fail at the very last step, which consists of downloading the generated certificates to the local machine.

There is no workaround; my suggestion would be to change the module to use a GCS bucket instead of a local file to save the resulting certificate.

If you - like me - only found this out at the last minute, you can set the execution mode of your Terraform workspace to local, so you'll at least be able to keep using the shared state from the remote backend and not have to destroy everything.

Missing Permissions and Service Account keys

I've been trying to get vault-on-gce running in a brand new project I am the owner of. However, I've run into several issues. For instance, I've had to add several roles to the created service account (vault-admin). Additionally, because of the lack of dependencies in the vault-server module, the startup script for the GCE instance fails because the service account used on that instance does not yet have the permissions to authenticate and upload to Google Cloud Storage. So every time I've run this, I've had to perform a rolling restart of the instance group.

Similarly, in the steps laid out here, I've found that I cannot run this command:

JWT_TOKEN=$(gcloud beta iam service-accounts sign-jwt login_request.json \
    signed_jwt.json \
    --iam-account=${SERVICE_ACCOUNT} && cat signed_jwt.json)

without running gcloud auth login and giving myself the Service Account Token Creator role.

After resolving that, this command fails:

vault write -field=token auth/gcp/login role=dev-role jwt=${JWT_TOKEN} > ~/.vault-token

with an error stating the key used to sign the JWT Token does not exist. It looks like the JWT was signed using a service account key that doesn't exist and is not visible on the Service Account UI in the cloud console.

This is the redacted error:

* service account key 'projects/{PROJECT}/serviceAccounts/vault-admin@{PROJECT}.iam.gserviceaccount.com/keys/{NON_EXISTANT_KEY_ID}' does not exist: googleapi: Error 403: Permission iam.serviceAccountKeys.get is required to perform this operation on service account key projects/{PROJECT}/serviceAccounts/vault-admin@{PROJECT}.iam.gserviceaccount.com/keys/{NON_EXISTANT_KEY_ID}., forbidden

How can I resolve this issue?

Defining provider in the module makes it impossible to remove the module from the project.

While experimenting with this module, I've noticed the following issue:

after removing the module from my Terraform config and the state, terraform apply fails with a bunch of errors that look like the one below.

In the Terraform state, all GCP resources reference the provider defined in the module, which no longer exists after the module is deleted. The only way to remove the resources was to use terraform destroy -target=... and list the Vault resources one by one.

Error: Provider configuration not present

To work with module.vault.module.vault.google_kms_crypto_key_iam_member.ck-iam
its original provider configuration at
module.vault.module.vault.provider.google is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.vault.module.vault.google_kms_crypto_key_iam_member.ck-iam, after which
you can remove the provider configuration again.

Add ability to specify my own certificate for client connection, but let everything else be managed by module

For instance, I have generated key and certificate via ACME provider:

provider "acme" {
  server_url = "https://acme-v02.api.letsencrypt.org/directory"
  version    = "~> 1.5"
}

resource "tls_private_key" "letsencrypt" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "acme_registration" "letsencrypt" {
  account_key_pem = tls_private_key.letsencrypt.private_key_pem
  email_address   = "[email protected]"
}

resource "acme_certificate" "vault" {
  account_key_pem = acme_registration.letsencrypt.account_key_pem
  common_name     = "vault.example.com"

  recursive_nameservers = [
    "8.8.8.8:53",
    "8.8.4.4:53",
  ]

  key_type = 4096

  dns_challenge {
    provider = "gcloud"
    config = {
      GCE_PROJECT = "myproject"
    }
  }
}

For sure, I can specify it by setting manage_tls = false and doing all the TLS-related work myself, but it would be a lot handier if you added the ability to pass my own key and certificate to the module and let the module do the rest: encrypt with KMS and upload the files to the bucket.

Why my startup.sh.tpl does not run

when i run "terraform apply" and it come complete, but startup.sh.tpl contents has not run, and in VM instance detail page, I can see startup-script with the startup.sh.tpl contents, I don't know why startup.sh.tpl doesn't run. Can anyone help me with this issue, thanks!

Feature Request: Auto scaling group

I was wondering what extra config would be required to add a load balancer and an ASG that would scale out automatically based on CPU usage?

instance is unhealthy behind the load balancer and local vault client is unable to connect to it

I'm running the module using the following arguments:

# project configurations
project_id = "<project_id>"
region = "europe-west1"
# KMS configuration
kms_location = "europe-west1"
# storage location
storage_bucket_class = "REGIONAL"
storage_bucket_location = "europe-west1"
# version
vault_version = "1.2.2"

I find myself in the interesting position where, because the instance is unhealthy behind the load balancer, the steps in the readme:

$ export VAULT_ADDR="$(terraform output vault_addr)"
$ export VAULT_CACERT="$(pwd)/ca.crt"
$ vault operator init \
    -recovery-shares 5 \
    -recovery-threshold 3

cannot be executed, because I do not have access to any of the instances outside of the LB.

No worries, I thought. SSH to the rescue, but using the same credentials, I am unable to even get the vault status without getting a 404 of this form:

Error checking seal status: Error making API request.
URL: GET http://127.0.0.1/v1/sys/seal-status
Code: 404. Raw Message:
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.10.3</center>
</body>
</html>

Is there some simple step I am missing to be able to perform the vault operator init .. step?

Disable SSH access by default

We should not be encouraging SSH onto the Vault nodes in general, and allowing SSH from 0.0.0.0/0 by default seems especially risky since a single change to the forwarding rule ports could make Vault SSH public. Since #81 is merged and users can address the Vault network via outputs, it makes sense to deny SSH by default and let users add that firewall rule themselves if they need access for testing; otherwise we should only encourage configuring via HTTP.
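
Until the default changes, a hedged sketch of opting out with the existing inputs (allow_ssh and ssh_allowed_cidrs are documented in the inputs table above):

module "vault" {
  source         = "terraform-google-modules/vault/google"
  project_id     = var.project_id
  region         = var.region
  kms_keyring    = var.kms_keyring
  kms_crypto_key = var.kms_crypto_key

  # Disable SSH ingress to the Vault nodes entirely
  allow_ssh         = false
  ssh_allowed_cidrs = []
}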

Service account keys are regenerated on every boot.

The instance startup script runs the gcloud command to generate and save the service account JSON for the GCP auth plugin. If the instance is recreated, the boot disk is lost and this action runs again.

Terraform should instead generate the service account key JSON through an external resource, encrypt it, and store it in the assets bucket, the same way the TLS certs are handled.

Input variables listed in the README.md aren't used

network and subnetwork are listed as input variables for terraform-google-vault, but neither of them is used. When attempting to use them, they are ignored:

  source = "github.com/GoogleCloudPlatform/terrform-google-vault"
  project_id           = "${var.project_id}"
  region               = "${var.region}"
  zone                 = "${var.zone}"
  storage_bucket       = "${var.storage_bucket}"
  kms_keyring_name     = "${var.kms_keyring_name}"
  network              = "foo"
  subnetwork           = "bar"
}

Example Points to Wrong Repo?

What source should I use on this example? The original is commented out and when I put it back in I get this error:

Downloading github.com/GoogleCloudPlatform/terraform-google-vault for vault...
- vault in .terraform/modules/vault
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Invalid default value for variable

  on .terraform/modules/vault/variables.tf line 272, in variable "tls_ca_subject":
 272:   default = {
 273:     common_name         = "Example Inc. Root"
 274:     organization        = "Example, Inc"
 275:     organizational_unit = "Department of Certificate Authority"
 276:     street_address      = ["123 Example Street"]
 277:     locality            = "The Intranet"
 278:     province            = "CA"
 279:     country             = "US"
 280:     postal_code         = "95559-1227"
 281:   }

This default value is not compatible with the variable's type constraint: all
map elements must have the same type.

Spin up Vault cluster on custom network

I'm not sure if there are any security considerations or other reasons behind the creation of a dedicated network/subnetwork in this module. I would like to know the reasons if there are any. Otherwise, we can probably work on adding the ability to choose between the dedicated network and a custom network.

terraform init failing with Invalid default value for variable "tls_ca_subject"

terraform plan
Error: Invalid default value for variable
on ../../variables.tf line 76, in variable "tls_ca_subject":
76: default = {
77: common_name = "Example Inc. Root"
78: organization = "Example, Inc"
79: organizational_unit = "Department of Certificate Authority"
80: street_address = ["123 Example Street"]
81: locality = "The Intranet"
82: province = "CA"
83: country = "US"
84: postal_code = "95559-1227"
85: }
This default value is not compatible with the variable's type constraint: all
map elements must have the same type.

Changing the street_address type to a string fixes the issue.

Output the NAT-ips

Outputting the NAT IPs would help configure other Terraform-managed resources that depend on this value.

For example, the NAT IPs can be whitelisted in a firewall to allow Vault to reach, say, databases when managing the database secrets engine.

Right now, we have to manually fetch the IPs from the Google Cloud Console and insert them into the Terraform files.
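
For reference, the module now exposes a vault_nat_ips output (see Outputs above). A hedged sketch of consuming it in a firewall rule, assuming vault_nat_ips is a list of plain IP addresses; the network, port and resource name are placeholders:

resource "google_compute_firewall" "allow-vault-nat-ips" {
  name    = "allow-vault-nat-ips"
  network = "my-database-network"

  allow {
    protocol = "tcp"
    ports    = ["5432"]
  }

  # Convert the bare NAT IPs into /32 CIDR blocks for the source ranges
  source_ranges = [for ip in module.vault.vault_nat_ips : "${ip}/32"]
}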

examples/vault-on-gcp failing now

I ran the terraform scripts previously 2 weeks ago and everything worked fine. Running them again today I end up with variations of

* module.vault.data.external.sa-key: data.external.sa-key: failed to execute "/home/robert/terraform-google-vault/examples/vault-on-gce/.terraform/modules/85d849467b2166b96780a48cb9f6e5d7/get_sa_key.sh": fork/exec /home/robert/terraform-google-vault/examples/vault-on-gce/.terraform/modules/85d849467b2166b96780a48cb9f6e5d7/get_sa_key.sh: no such file or directory

I've verified that these scripts exist at the exact location specified. Any idea what's wrong?

Can't run from OSX

OSX doesn't have the md5sum tool by default so the gcpkms-encrypt.sh script fails to execute successfully.

The tool can be easily installed using brew install md5sha1sum but there's no mention of this anywhere.

Internal Loadbalancer support

The README.md states "You can alter the load balancer to be an internal-only load balancer if needed" (https://github.com/terraform-google-modules/terraform-google-vault/blob/master/README.md#faq). However, there is no guidance on how this would be done.

Looking at the module, it appears to be hard-coded, which implies that having an internal-only load balancer requires forking the module. This is less than ideal:

load_balancing_scheme = "EXTERNAL"

If I am mistaken, is it possible to include some guidance on how to achieve this? If it isn't currently supported, can support for an internal LB be added? Thanks

Changing project_id of a deployed vault doesn't cause google_compute_region_instance_group_manager to be recreated

Deploying the latest vault module (Master):

module "vault" {
  source = "github.com/terraform-google-modules/terraform-google-vault?ref=810e6d1559b68023a84379f518dbfa5c0f15253a"

  project_id = "old"
...

Run plan/apply, then change the project_id to something else, such as 'new'; plan+apply then causes this error:

1 error occurred:
	* module.vault.google_compute_region_instance_group_manager.vault: 1 error occurred:
	* google_compute_region_instance_group_manager.vault: Error waiting for Creating InstanceGroupManager: The user does not have access to service account '[email protected]'.  User: 'terraform@<redacted>.iam.gserviceaccount.com'.  Ask a project owner to grant you the iam.serviceAccountUser role on the service account

The expected behaviour is that the resource would be changed or recreated to match the new project_id.

Fields for google provider [REMOVED]

Note this issue is related to #34 but I am getting a slightly different error.

Problem: When attempting to use the vault-on-gce example I get the following errors after executing terraform plan.

Error: module.vault.module.vault-server.google_compute_instance_group_manager.default: "auto_healing_policies.0.initial_delay_sec": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.vault.module.vault-server.google_compute_instance_group_manager.default: "rolling_update_policy": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.vault.module.vault-server.google_compute_instance_template.default: "network_interface.0.address": [REMOVED] Please use network_ip

Attempted fixes:

  1. I attempted to execute the terraform execution in the context of google cloud shell to see if there was perhaps something unique about my local environment but that did not change the outcome.

  2. Reading the error and associated documentation link it appears to me that I should be able to switch from the google provider to the google-beta provider to resolve this issue. I edited the following line to be google-beta.

This did not resolve the issue. I still see the google provider being provisioned during terraform init. I am fairly new to terraform and in reading about the plugin system it seems there isn't a straightforward way to remove old providers.

To help mitigate the potential of an old google version of the provider being the issue I started over again in a new folder, as terraform will provision providers for only the current working directory. However when I execute terraform init I still see the google provider being initialized.

* provider.google: version = "~> 2.0"
* provider.google-beta: version = "~> 2.0"

Attempting terraform plan after the initialization in a new folder does still trigger the same error.

I searched through the remainder of the codebase in the hope that I would find a call to the original google provider but I haven't been able to find anything. This is literally my first time using terraform, I very well may be chasing the wrong issue.

System information:

  • osx: 10.14.3
  • Terraform: v0.11.11

Getting a perpetual diff for google_storage_bucket_object.vault-private-key

If I run a plan after a clean apply, I get this diff:

  # module.vault.data.external.vault-tls-key-encrypted[0] will be read during apply
  # (config refers to values not yet known)
 <= data "external" "vault-tls-key-encrypted"  {
      + id      = (known after apply)
      + program = [
          + ".terraform/modules/vault/terraform-google-modules-terraform-google-vault-88211c5/scripts/gcpkms-encrypt.sh",
        ]
      + query   = {
          + "data"     = "[redacted]"
          + "key"      = "vault-init"
          + "keyring"  = "vault"
          + "location" = "europe-west2"
          + "project"  = "[redacted]"
          + "root"     = ".terraform/modules/vault/terraform-google-modules-terraform-google-vault-88211c5"
        }
      + result  = (known after apply)
    }

  # module.vault.google_storage_bucket_object.vault-private-key[0] must be replaced
-/+ resource "google_storage_bucket_object" "vault-private-key" {
        bucket         = "[redacted]-vault-data"
      ~ content        = (sensitive value)
      ~ content_type   = "text/plain; charset=utf-8" -> (known after apply)
      ~ crc32c         = "[redacted]" -> (known after apply)
      ~ detect_md5hash = "[redacted]" -> "different hash" # forces replacement
      ~ id             = "[redacted]-vault-data-vault.key.enc" -> (known after apply)
      ~ md5hash        = "[redacted (same as detect_md5hash)]==" -> (known after apply)
        name           = "vault.key.enc"
      ~ output_name    = "vault.key.enc" -> (known after apply)
      ~ self_link      = "https://www.googleapis.com/storage/v1/b/[redacted]-vault-data/o/vault.key.enc" -> (known after apply)
      ~ storage_class  = "MULTI_REGIONAL" -> (known after apply)
    }

This is my module config:

module "vault" {
  source         = "terraform-google-modules/vault/google"
  version        = "3.0.0"
  project_id     = var.google_project
  region         = var.google_region
  storage_bucket_location = "eu"

  tls_ca_subject = {[redacted]}

  tls_cn         = "vault.[redacted]"
  tls_dns_names  = ["vault.[redacted]"]
}

(Let me know if I over-redacted. There's no interpolation in any of the values; they are all just static string literals.)

An error following terraform plan: "project: required field is not set"

I'm getting this error message after a terraform plan

Error: Error refreshing state: 1 error(s) occurred:

* module.vault.module.vault-server.data.google_compute_instance_group.zonal: 1 error(s) occurred:

* module.vault.module.vault-server.data.google_compute_instance_group.zonal: data.google_compute_instance_group.zonal: project: required field is not set

deprecations

Warning: module.vault.module.vault-server.google_compute_instance_group_manager.default: "auto_healing_policies": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.vault.module.vault-server.google_compute_instance_group_manager.default: "rolling_update_policy": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.vault.module.vault-server.google_compute_instance_template.default: "network_interface.0.address": [DEPRECATED] Please use network_ip

output -module=vault is unsupported

In the documentation it states that the address can be output using the following call:

terraform output -module=vault vault_addr

However, it returns an error:

Error: Unsupported option

The -module option is no longer supported since Terraform 0.12, because now
only root outputs are persisted in the state.
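
Since Terraform 0.12 only persists root module outputs, one hedged workaround is to re-export the module output from your root configuration and read that instead:

output "vault_addr" {
  value = module.vault.vault_addr
}

After applying, terraform output vault_addr (without -module) returns the address.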

Release?

There are some more or less important changes and fixes since the last 3.0.3 release. Maybe it's time for the next release?

google_storage_bucket_object Errors

Terraform version: v0.10.8

I experienced this issue before and thought the fix was related to my values:

...
kms_keyring_name = "vault"
kms_key_name = "vault-init"
...

However, I'm doing a fresh deployment and came across the issue once again. I double-checked my terraform.tfvars file to make sure I have the correct values and, as you can see, they are correct.

Is anyone else having this issue? What is it related to? Did I miss a step?

Error: Error applying plan:

4 error(s) occurred:

* module.vault.google_storage_bucket_object.vault-tls-key: 1 error(s) occurred:

* google_storage_bucket_object.vault-tls-key: Error, either "content" or "source" must be specified
* module.vault.google_storage_bucket_object.vault-sa-key: 1 error(s) occurred:

* google_storage_bucket_object.vault-sa-key: Error, either "content" or "source" must be specified
* module.vault.google_storage_bucket_object.vault-ca-cert: 1 error(s) occurred:

* google_storage_bucket_object.vault-ca-cert: Error, either "content" or "source" must be specified
* module.vault.google_storage_bucket_object.vault-tls-cert: 1 error(s) occurred:

* google_storage_bucket_object.vault-tls-cert: Error, either "content" or "source" must be specified

downloading plugin for provider null

  • Downloading plugin for provider "null" (2.0.0)...

The expectation was that something other than null would be returned while running terraform init.

4 errors with examples/vault-on-gce

I'm using the vault-on-gce example and I've been getting the following errors during terraform apply. Is anyone else experiencing this?

Error: Error applying plan:

4 error(s) occurred:

* module.vault.google_storage_bucket.vault: 1 error(s) occurred:

* google_storage_bucket.vault: project: required field is not set
* module.vault.google_storage_bucket.vault-assets: 1 error(s) occurred:

* google_storage_bucket.vault-assets: project: required field is not set
* module.vault.module.vault-server.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: project: required field is not set
* module.vault.google_service_account.vault-admin: 1 error(s) occurred:

* google_service_account.vault-admin: project: required field is not set

Debian 10 upgrade breaks startup script

After a successful terraform apply, Vault is never installed on the created machine. Inspecting the logs reveals the root problem:

vault-us-east1-jvv9 login: Oct 1 18:29:06 vault-us-east1-jvv9 startup-script: INFO startup-script: E: The repository 'http://packages.cloud.google.com/apt google-cloud-monitoring-buster Release' does not have a Release file.

Taking a look at Google's repository, I can see entries for jessie and stretch, but not buster, so perhaps a downgrade is necessary.
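
A hedged workaround while the buster packages are missing would be to pin the base image via the vault_instance_base_image input documented above (debian-cloud/debian-9 is the stretch image family):

module "vault" {
  source         = "terraform-google-modules/vault/google"
  project_id     = var.project_id
  region         = var.region
  kms_keyring    = var.kms_keyring
  kms_crypto_key = var.kms_crypto_key

  # Fall back to Debian 9 (stretch), for which the agent repositories exist
  vault_instance_base_image = "debian-cloud/debian-9"
}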

Enabling the UI Doesn't Work

Setting vault_ui_enabled to true does not enable the UI for me. No matter what value I try setting it to the instance template sets it to false. I have a workaround/fix and would be happy to submit a merge request, but first wanted to verify this is not a problem with the way I'm doing things.

Versions:

(gcp-vault) C02VJ7T0HTD6:gcp-vault cworden$ terraform version
Terraform v0.11.11
+ provider.external v1.0.0
+ provider.google v2.1.0
+ provider.local v1.1.0
+ provider.template v2.1.0
+ provider.tls v1.2.0
{
  "Modules": [
    {
      "Source": "terraform-google-modules/vault/google",
      "Key": "1.vault;terraform-google-modules/vault/google.2.0.0",
      "Version": "2.0.0",
      "Dir": ".terraform/modules/61e6366303a658a4103537a91df0988a",
      "Root": "terraform-google-modules-terraform-google-vault-c403901"
    }
  ]
}

Configuration:

main.tf:

module "vault" {
  source  = "terraform-google-modules/vault/google"
  version = "2.0.0"

  project_id       = "${var.project_id}"
  kms_keyring = "${var.kms_keyring}"
  network_subnet_cidr_range = "${var.network_subnet_cidr_range}"
  project_services = "${var.project_services}"
  region           = "${var.region}"
  ssh_allowed_cidrs = "${var.ssh_allowed_cidrs}"
  storage_bucket_force_destroy = "${var.storage_bucket_force_destroy}"
  storage_bucket_name   = "${var.storage_bucket_name}"
  tls_ca_subject = "${var.tls_ca_subject}"
  tls_cn = "${var.tls_cn}"
  tls_dns_names = "${var.tls_dns_names}"
  tls_ips = "${var.tls_ips}"
  tls_ou = "${var.tls_ou}"
  vault_allowed_cidrs = "${var.vault_allowed_cidrs}"
  vault_args = "${var.vault_args}"
  vault_instance_labels = "${var.vault_instance_labels}"
  vault_instance_metadata = "${var.vault_instance_metadata}"
  vault_instance_tags = "${var.vault_instance_tags}"
  vault_log_level = "${var.vault_log_level}"
  vault_machine_type = "${var.vault_machine_type}"
  vault_max_num_servers = "${var.vault_max_num_servers}"
  vault_min_num_servers = "${var.vault_min_num_servers}"
  vault_ui_enabled = "${var.vault_ui_enabled}"
  vault_version = "${var.vault_version}"
  vault_tls_disable_client_certs = "${var.vault_tls_disable_client_certs}"

}

Relevant line of vars.tf:

variable "vault_ui_enabled" {
  default = true
}

Expected Results:

An instance template is created in GCP with the configuration setting ui set to true

startup-script snippet:

# Vault config
mkdir -p /etc/vault.d
cat <<"EOF" > /etc/vault.d/config.hcl
# Run Vault in HA mode. Even if there's only one Vault node, it doesn't hurt to
# have this set.
api_addr = "https://XXXXXX:8200"
cluster_addr = "https://LOCAL_IP:8201"

# Set debugging level
log_level = "warn"

# Enable the UI
ui = true

# Enable auto-unsealing with Google Cloud KMS
seal "gcpckms" {
  project    = "XXXXXX"
  region     = "us-west1"
  key_ring   = "vault_keyring1"
  crypto_key = "vault-init"
}

Actual Results:

An instance template is created in GCP with the configuration setting ui set to false

startup-script snippet:

# Vault config
mkdir -p /etc/vault.d
cat <<"EOF" > /etc/vault.d/config.hcl
# Run Vault in HA mode. Even if there's only one Vault node, it doesn't hurt to
# have this set.
api_addr = "https://XXXXXX:8200"
cluster_addr = "https://LOCAL_IP:8201"

# Set debugging level
log_level = "warn"

# Enable the UI
ui = false

# Enable auto-unsealing with Google Cloud KMS
seal "gcpckms" {
  project    = "XXXXXX"
  region     = "us-west1"
  key_ring   = "vault_keyring1"
  crypto_key = "vault-init"
}

Problem Code

I played around with this for a while and found that the ternary in config.hcl.tpl always results in the second value (false), no matter the value of vault_ui_enabled. If I switched true and false in the ternary, the resulting startup-script would have ui = true, again no matter the value of vault_ui_enabled.

Relevant snippet from config.hcl.tpl:

# Enable the UI
ui = ${vault_ui_enabled == 1 ? true : false}

Workaround/fix

If I move the ternary to the data "template_file" section of main.tf everything works as expected for me.

So config.hcl.tpl becomes:

# Run Vault in HA mode. Even if there's only one Vault node, it doesn't hurt to
# have this set.
api_addr = "https://${lb_ip}:${vault_port}"
cluster_addr = "https://LOCAL_IP:8201"

# Set debugging level
log_level = "${vault_log_level}"

# Enable the UI
ui = ${vault_ui_enabled}

# Enable auto-unsealing with Google Cloud KMS
seal "gcpckms" {
  project    = "${kms_project}"
  region     = "${kms_location}"
  key_ring   = "${kms_keyring}"
  crypto_key = "${kms_crypto_key}"
}
...

and main.tf becomes:

...
# Compile the Vault configuration.
data "template_file" "vault-config" {
  template = "${file("${format("%s/scripts/config.hcl.tpl", path.module)}")}"

  vars {
    kms_project    = "${var.project_id}"
    kms_location   = "${google_kms_key_ring.vault.location}"
    kms_keyring    = "${google_kms_key_ring.vault.name}"
    kms_crypto_key = "${google_kms_crypto_key.vault-init.name}"

    lb_ip          = "${google_compute_address.vault.address}"
    storage_bucket = "${google_storage_bucket.vault.name}"

    vault_log_level                = "${var.vault_log_level}"
    vault_port                     = "${var.vault_port}"
    vault_tls_disable_client_certs = "${var.vault_tls_disable_client_certs}"
    vault_ui_enabled = "${var.vault_ui_enabled == 1 ? true : false}"
  }
}

No health check for auto-healing on managed instance group.

Managed instance group for Vault server does not have a health check. If the instance is non-responsive, it's not automatically restarted.

We need an http -> https proxy to support GCE instance HTTP health checks against the Vault REST API.

Example proxy upstream URI:

https://127.0.0.1:8200/v1/sys/health?standbyok=true&sealedcode=200

Getting a project: required field is not set error

Hi

I'm setting the Terraform variables on the plan and apply, but getting the following error message. Using the default settings in the example repo with the hard-coded US-based regions and zone works, but setting the location variables to europe-west2 seems to be causing issues:

Error applying plan:

4 error(s) occurred:

  • google_storage_bucket.vault: 1 error(s) occurred:

  • google_storage_bucket.vault: project: required field is not set

  • google_service_account.vault-admin: 1 error(s) occurred:

  • google_service_account.vault-admin: project: required field is not set

  • google_storage_bucket.vault-assets: 1 error(s) occurred:

  • google_storage_bucket.vault-assets: project: required field is not set

  • module.vault-server.google_compute_firewall.default-ssh: 1 error(s) occurred:

  • google_compute_firewall.default-ssh: project: required field is not set

Identity Aware proxy with ssh tunnel

An alternative to a bastion could be Identity-Aware Proxy with an SSH tunnel.

The Vault nodes are not publicly accessible. They do have SSH enabled, but require a bastion host on their dedicated network to access. You can disable SSH access entirely by setting ssh_allowed_cidrs to the empty list.

Recommended 0.11 compatible version?

Hi, I'm curious what the recommended version to use for Terraform 0.11 is? I've not been able to move to 0.12 due to blocks in other modules I'm reliant upon, but the README.md doesn't have any guidance on what version to choose.

I took a quick poke around the commits, and I'm guessing 2.1.0, but wanted to get your feel for it before committing.

tls.tf missing resource instance key

Hi there,

In an attempt to run terraform apply/plan, an error message indicates a missing resource instance key in the code, specifically the following:

Error: Missing resource instance key

  on tls.tf line 124, in resource "google_storage_bucket_object" "vault-private-key":
 124:   content = data.google_kms_secret_ciphertext.vault-tls-key-encrypted.ciphertext

Because data.google_kms_secret_ciphertext.vault-tls-key-encrypted has "count"
set, its attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    data.google_kms_secret_ciphertext.vault-tls-key-encrypted[count.index]


The offending line references the ciphertext attribute on a data source that has count set, without an instance index:

# Encrypt server key with GCP KMS
data "google_kms_secret_ciphertext" "vault-tls-key-encrypted" {
  count      = local.manage_tls_count
  crypto_key = google_kms_crypto_key.vault-init.self_link
  plaintext  = tls_private_key.vault-server[0].private_key_pem
}

resource "google_storage_bucket_object" "vault-private-key" {
  count = local.manage_tls_count

  name    = var.vault_tls_key_filename
  content = data.google_kms_secret_ciphertext.vault-tls-key-encrypted.ciphertext
  bucket  = local.vault_tls_bucket

  depends_on = [google_storage_bucket.vault]

  lifecycle {
    ignore_changes = [
      content,
    ]
  }
}

As this looks to be a 0.12+ requirement, is there any way around this?
Thanks for the work!
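
A hedged sketch of the fix the error message itself suggests, indexing the counted data source (only the content argument changes; everything else stays as in the snippet above):

resource "google_storage_bucket_object" "vault-private-key" {
  count = local.manage_tls_count

  name    = var.vault_tls_key_filename
  content = data.google_kms_secret_ciphertext.vault-tls-key-encrypted[count.index].ciphertext
  bucket  = local.vault_tls_bucket
}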
