
terraform-docs-samples's People

Contributors

apeabody, askubis, aston-github, betsy-lichtenberg, bharathkkb, c2thorn, camiekim, ccarpentiere, dependabot[bot], feng-zhe, ferrarimarco, gericdong, glasnt, gleichda, ibhaskar2, minyu19, modular-magician, msampathkumar, na2047, nevzheng, pattishin, pay20, renovate[bot], rogerthatdev, ronnieplooker, sahanacp-hub, savijatv, seviet, thomasmaclean, thomasmburke

terraform-docs-samples's Issues

bug: PR request blocked by Error 400 for google_cloudfunctions2_function

TL;DR

PR blocked because a CI test failed with a Cloud Functions error.

Expected behavior

No response

Observed behavior

Blocked PR: #485

Terraform File: https://github.com/terraform-google-modules/terraform-docs-samples/blob/1ea28834510310126196331d5951da96a24bb047/functions/basic_audit_logs/main.tf

Error Logs:

Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185: Error: Error creating function: googleapi: Error 400: Validation failed for trigger projects/ci-tf-samples-0-96df/locations/us-central1/triggers/gcf-function-032436: Invalid resource state for "": Permission denied while using the Eventarc Service Agent. If you recently started to use Eventarc, it may take a few minutes before all necessary permissions are propagated to the Service Agent. Otherwise, verify that it has Eventarc Service Agent role.
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185:
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185: with google_cloudfunctions2_function.default,
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185: on main.tf line 91, in resource "google_cloudfunctions2_function" "default":
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185: 91: resource "google_cloudfunctions2_function" "default" {
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z command.go:185:
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z retry.go:144: 'terraform [apply -input=false -auto-approve -no-color -lock=false]' failed with the error 'error while running command: exit status 1;
Step #2 - "test-samples-0": Error: Error creating function: googleapi: Error 400: Validation failed for trigger projects/ci-tf-samples-0-96df/locations/us-central1/triggers/gcf-function-032436: Invalid resource state for "": Permission denied while using the Eventarc Service Agent. If you recently started to use Eventarc, it may take a few minutes before all necessary permissions are propagated to the Service Agent. Otherwise, verify that it has Eventarc Service Agent role.
Step #2 - "test-samples-0":
Step #2 - "test-samples-0": with google_cloudfunctions2_function.default,
Step #2 - "test-samples-0": on main.tf line 91, in resource "google_cloudfunctions2_function" "default":
Step #2 - "test-samples-0": 91: resource "google_cloudfunctions2_function" "default" {
Step #2 - "test-samples-0": ' but this error was expected and warrants a retry. Further details: Eventarc Service Agent IAM is eventually consistent
Step #2 - "test-samples-0":
Step #2 - "test-samples-0": TestSamples/0/basic_audit_logs 2023-08-28T12:36:47Z retry.go:103: terraform [apply -input=false -auto-approve -no-color -lock=false] returned an error: error while running command: exit status 1;
Step #2 - "test-samples-0": Error: Error creating function: googleapi: Error 400: Validation failed for trigger projects/ci-tf-samples-0-96df/locations/us-central1/triggers/gcf-function-032436: Invalid resource state for "": Permission denied while using the Eventarc Service Agent. If you recently started to use Eventarc, it may take a few minutes before all necessary permissions are propagated to the Service Agent. Otherwise, verify that it has Eventarc Service Agent role.
Step #2 - "test-samples-0":
Step #2 - "test-samples-0": with google_cloudfunctions2_function.default,
Step #2 - "test-samples-0": on main.tf line 91, in resource "google_cloudfunctions2_function" "default":
Step #2 - "test-samples-0": 91: resource "google_cloudfunctions2_function" "default" {
Step #2 - "test-samples-0": . Sleeping for 1m0s and will try again.
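
The retry note in the logs points at the usual fix: pre-granting the Eventarc Service Agent role before the function is created, so trigger validation does not race IAM propagation. A hedged sketch — the resource names and the use of a project data source are illustrative assumptions, not the repository's actual fix:

```hcl
# Grant the Eventarc Service Agent role explicitly; the function resource
# can then depends_on this binding to serialize creation.
data "google_project" "project" {}

resource "google_project_iam_member" "eventarc_service_agent" {
  project = data.google_project.project.project_id
  role    = "roles/eventarc.serviceAgent"
  member  = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-eventarc.iam.gserviceaccount.com"
}
```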

Complete Error Logs link

Terraform Configuration

NA

Terraform Version

NA

Additional information

No response

Need to update comments in the terraform sample

  • The comment at line 47 should describe the network rather than "secret default". Please review and change accordingly.
  • The comment at line 53 needs more context; explain why scopes are specified there. Something like: "To avoid embedding secret keys or user credentials in the instances, Google recommends that you use custom service accounts with the following access scopes."
  • Should we specify the `email` field in the service account block?
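
For the third bullet, a minimal sketch of what an explicit service_account block with an email field could look like (the referenced sample's actual resource names are assumptions here):

```hcl
resource "google_compute_instance" "default" {
  # ... other arguments elided ...

  # Google recommends custom service accounts with minimal access scopes
  # instead of embedding secret keys or user credentials in instances.
  service_account {
    email  = google_service_account.default.email
    scopes = ["cloud-platform"]
  }
}
```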

feat: Request to clean up Cloud SQL folder samples

TL;DR

Request to clean up the Cloud SQL sample folders.

This clean-up does not involve modifying code; it is about reorganising the locations and folder names of some Cloud SQL samples.

The following samples need this clean-up:

  1. cloud_sql/database_basic/main.tf
  2. cloud_sql/database_instance_my_sql/main.tf
  3. cloud_sql/database_instance_postgres/main.tf
  4. cloud_sql/database_instance_sqlserver/main.tf
  5. cloud_sql/instance_cmek/main.tf
  6. cloud_sql/instance_ha/main.tf
  7. cloud_sql/instance_iam_condition/main.tf
  8. cloud_sql/instance_labels/main.tf
  9. cloud_sql/instance_pitr/main.tf
  10. cloud_sql/instance_ssl_cert/main.tf

Clean up tasks (What to do?)

Common issues

  1. Unlike the other samples in cloud_sql, these folders don't have prefixes like mysql_, postgres_ & sqlserver_.
  2. Many of these samples (instance_*/main.tf) contain mysql, postgres & sqlserver snippets in a single file. Such files should be split into multiple files.

For example, instance_ha should be split into mysql_instance_ha, postgres_instance_ha & sqlserver_instance_ha.

The following require folder rename change

  1. database_instance_my_sql to mysql_database_instance
  2. database_instance_postgres to postgres_database_instance
  3. database_instance_sqlserver to sqlserver_database_instance

Why

As a user, when I open the cloud_sql folder, the samples related to MySQL have a mysql prefix. This makes them easy to spot and use, but folders that begin with instance or database do not provide this benefit.

Terraform Resources

No response

Detailed design

No response

Additional information

No response

Setting static IP for google_notebooks_instance

TL;DR

How to set static IP for vertex_ai_notebooks_instance?

Terraform Resources

https://github.com/terraform-google-modules/terraform-docs-samples/tree/main/vertex_ai/user_managed_notebooks_instance

Detailed design

I checked all the documentation about creating a notebooks_instance via Terraform.
But I found that Vertex AI Notebooks get dynamic IPs, which complicates restricting access by IP. As a workaround, you can manually create a static IP and bind it to the notebook VM itself.
But I need to be able to do this through Terraform.
Since this works manually, it should be possible via Terraform as well.
Do you have a working sample that sets a static IP?
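
One partial workaround, sketched under the assumption that the notebook lives in a known subnet: reserve the static address in Terraform, even if binding it to the notebook instance still has to happen out of band. Names below are illustrative, and whether google_notebooks_instance can consume the address directly is exactly the open question of this issue.

```hcl
# Reserve a static internal address in the notebook's subnet.
resource "google_compute_address" "notebook_static_ip" {
  name         = "notebook-static-ip"
  address_type = "INTERNAL"
  region       = "us-central1"
  subnetwork   = google_compute_subnetwork.default.id
}
```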

Additional information

No response

Error creating ForwardingRule:

TL;DR

Error: Error creating ForwardingRule: googleapi: Error 400: All the backends of the BackendService must be in the same region as the forwarding rule since the forwarding rule is in STANDARD network tier., badRequest

│ with google_compute_forwarding_rule.default,
│ on main.tf line 188, in resource "google_compute_forwarding_rule" "default":
│ 188: resource "google_compute_forwarding_rule" "default" {
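
The error says that on the STANDARD network tier, all backends of the backend service must be in the same region as the forwarding rule, so the likely fix is aligning the two regions (or changing the tier). An illustrative fragment, not the sample's actual code:

```hcl
resource "google_compute_forwarding_rule" "default" {
  name = "l7-xlb-forwarding-rule"
  # Must match the region of every backend in the backend service
  # when network_tier is STANDARD.
  region       = "us-west1"
  network_tier = "STANDARD"
  # ... target, port_range, etc. elided ...
}
```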

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

https://github.com/terraform-google-modules/terraform-docs-samples/blob/main/regional-external-http-load-balancer/main.tf

Terraform Version

Terraform v1.2.8
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v4.37.0
+ provider registry.terraform.io/hashicorp/google-beta v4.37.0



Your version of Terraform is out of date! The latest version
is 1.2.9. You can update by downloading from https://www.terraform.io/downloads.html

Additional information

No response

meaningful resource names for cloud_run_service_interservice sample

TL;DR

The cloud run services in terraform-docs-samples/cloud_run_service_interservice/main.tf sample should be changed to have meaningful resource names.

Terraform Resources

google_cloud_run_service

Detailed design

`google_cloud_run_service.default` should be renamed to something like `public_service`, and `google_cloud_run_service.default_private` should be renamed to something like `private_service`.
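
If these samples are ever applied against existing state, the rename can be done without recreating the services via `moved` blocks (Terraform >= 1.1); a sketch:

```hcl
# Record the renames so Terraform updates state instead of
# destroying and recreating the Cloud Run services.
moved {
  from = google_cloud_run_service.default
  to   = google_cloud_run_service.public_service
}

moved {
  from = google_cloud_run_service.default_private
  to   = google_cloud_run_service.private_service
}
```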

Additional information

Per Google Cloud published best practices.

Can't run integration tests described in CONTRIBUTING.md

TL;DR

When I try to run the integration tests described at https://github.com/terraform-google-modules/terraform-docs-samples/blob/main/test/setup/main.tf, the output says the following:

testing: warning: no tests to run
PASS

So it doesn't seem like the tests are running.

Expected behavior

I expected to see something about a test passing.

Observed behavior

The output from make -s docker_test_sample SAMPLE=networkconnectivity_create_service_connection_policy contains the following:

testing: warning: no tests to run
PASS

Terraform Configuration

/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

# [START networkconnectivity_create_service_connection_policy]
# Create a VPC network
resource "google_compute_network" "default" {
  name                    = "consumer-network"
  auto_create_subnetworks = false
}

# Create a subnetwork
resource "google_compute_subnetwork" "default" {
  name          = "consumer-subnet"
  ip_cidr_range = "10.0.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.default.id
}

# Create a service connection policy
resource "google_network_connectivity_service_connection_policy" "default" {
  name          = "my-service-connection-policy"
  location      = "us-central1"
  service_class = "gcp-memorystore-redis"
  description   = "Description"
  network       = google_compute_network.default.id
  psc_config {
    subnetworks = [google_compute_subnetwork.default.id]
    limit       = 2
  }
}
# [END networkconnectivity_create_service_connection_policy]

Terraform Version

Terraform v1.5.7
on linux_amd64

Additional information

Please let me know if you'd like to provide output from make docker_test_prepare or make -s docker_test_sample SAMPLE=${SAMPLE_NAME}. The content is too long to post here.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: getNewValue error

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/periodic-reporter.yaml
  • actions/github-script v7
gomod
test/integration/go.mod
  • go 1.22
  • go 1.22.5
  • github.com/GoogleCloudPlatform/cloud-foundation-toolkit/infra/blueprint-test v0.16.0
  • github.com/stretchr/testify v1.9.0
npm
functions/basic/functions/hello-world/package.json
  • @google-cloud/functions-framework ^3.0.0
functions/basic_audit_logs/function-source/package.json
  • @google-cloud/functions-framework ^3.0.0
functions/basic_gcs/function-source/package.json
  • @google-cloud/functions-framework ^3.0.0
functions/pubsub/function-source/package.json
  • @google-cloud/functions-framework ^3.0.0
regex
Makefile
  • cft/developer-tools 1.21
build/int.cloudbuild.yaml
  • cft/developer-tools 1.21
build/lint.cloudbuild.yaml
  • cft/developer-tools 1.21
terraform
application_integration/create_auth_config/main.tf
functions/basic/main.tf
  • google >= 4.34.0
functions/basic_audit_logs/main.tf
  • google >= 4.34.0
functions/basic_gcs/main.tf
  • google >= 4.34.0
functions/pubsub/main.tf
  • google >= 4.34.0
gke/autopilot/reservation/main.tf
gke/quickstart/autopilot/app.tf
  • us-docker.pkg.dev/google-samples/containers/gke/hello-app 2.0
gke/quickstart/multitenant/main.tf
gke/standard/regional/loadbalancer/main.tf
  • us-docker.pkg.dev/google-samples/containers/gke/hello-app 2.0
network_connectivity/interconnect/ha_vpn_over_interconnect_10GB_attach/main.tf
network_connectivity/interconnect/ha_vpn_over_interconnect_5GB_attach/main.tf
privateca/quickstart/main.tf
run/connect_cloud_sql/main.tf
  • google ~> 5.13.0
run/ingress/main.tf
run/jobs_create/main.tf
run/jobs_execute_jobs_on_schedule/main.tf
run/jobs_max_retries_create/main.tf
run/jobs_task_parallelism_create/main.tf
run/jobs_task_timeout_create/main.tf
run/secret_manager/main.tf
  • google ~> 5.13.0
run/static_outbound/main.tf
test/setup/main.tf
  • terraform-google-modules/project-factory/google ~> 15.0
vertex_ai/endpoint/main.tf

  • Check this box to trigger a request for Renovate to run again on this repository

Periodic Test: Looker Instance Quota

TL;DR

The periodic test is failing on a Looker instance quota error:

Error creating Instance: googleapi: Error 429: Quota limit 'StandardInstancesPerProjectPerRegion' has been exceeded. Limit: 0 in region us-central1.

Ref: #609

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

CI

Terraform Version

CI

Additional information

No response

privateca root ca example is invalid for a standard compliant root ca

TL;DR

The sample in privateca/certificate_authority_basic/main.tf looks like it's a copy of the subordinate setup and not for the root.

Expected behavior

Sample should be somewhat compliant to RFC 5280 and CA/B Baseline Requirements.

Observed behavior

  • A SAN on a root CA does not make any sense.
  • pathLen on a root is not forbidden, but according to the RFC it is not evaluated, and the CA/B BR do not recommend it.
  • extendedKeyUsage on a root is forbidden by the CA/B BR.

Terraform Configuration

does not apply

Terraform Version

does not apply

Additional information

No response

Fix Proxy Subnet Naming

TL;DR

The recommended purpose for proxy subnets changed from INTERNAL_HTTPS_LOAD_BALANCER to REGIONAL_MANAGED_PROXY. Reflect this in our documentation.

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

N/A

Terraform Version

N/A

Additional information

No response

feat: Request to add `conventional-commit-lint` automated check for terraform-docs-samples

TL;DR

Life is short; use automation. This is a feature request to add/enable the conventional-commit-lint check on this terraform-docs-samples repository's PRs & issues.

This convention can be used to facilitate other automations, such as automated CHANGELOG.md updates and automated versioning.

Terraform Resources

None

Detailed design

Please follow the instructions provided at

Additional information

No response

feat: Enable Github Traffic Monitoring

TL;DR

Request to enable an analytics feature for this repository.

Terraform Resources

None

Detailed design

Please check and enable  https://github.com/marketplace/actions/repository-traffic

Additional information

No response

feat: Add snippet bot

TL;DR

There is no snippet bot check to ensure that region tags are healthy.

Currently there is no check to ensure safe deletion or addition of region tags in code samples. This can potentially break the GitHub links shown on Cloud Docs pages.

Example: Check snippet-bot.yml in https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/.github

Terraform Resources

NA

Detailed design

Adding snippet bot

Additional information

No response

style: default values for `name` parameter in Cloud Run samples

TL;DR

Currently, in Cloud Run samples, the google_cloud_run_service resource uses various generic values for the name, such as `cloudrun-srv` or `cloud-run-service-name`. For the sake of consistency, these names should either be meaningful to their intended function (example) or have a default generic value (something like `my-cloud-run-service`).

Terraform Resources

google_cloud_run_service

Detailed design

No response

Additional information

No response

Variables provisioning_model and instance_termination_action in spot_instance_basic module

TL;DR

The provisioning_model and instance_termination_action arguments in the spot_instance_basic sample do not work with provider hashicorp/google v3.90.1.

Expected behavior

When I run the `terraform validate` command, I expect it to report "Success! The configuration is valid."

Example:

$ terraform validate
Success! The configuration is valid.

Observed behavior

When I run the `terraform validate` command, I receive the following output instead:

$ terraform validate

│ Error: Unsupported argument

│ on main.tf line 15, in resource "google_compute_instance" "spot_vm_instance":
│ 15: provisioning_model = "SPOT"

│ An argument named "provisioning_model" is not expected here.


│ Error: Unsupported argument

│ on main.tf line 16, in resource "google_compute_instance" "spot_vm_instance":
│ 16: instance_termination_action = "STOP"

│ An argument named "instance_termination_action" is not expected here.

Terraform Configuration

# main.tf
resource "google_compute_instance" "spot_vm_instance" {
  name         = "spot-instance-name"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  scheduling {
      preemptible = true
      automatic_restart = false
      provisioning_model = "SPOT"
      instance_termination_action = "STOP"
  }

  network_interface {
    # A default network is created for all GCP projects
    network = "default"
    access_config {
    }
  }
}

# provider.tf
terraform {
  required_version = ">= 0.13.1"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~>3.85"
    }
  }
}
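
A likely fix, since the Spot VM arguments were introduced in the 4.x provider line and are absent from 3.x releases such as v3.90.1 (the exact minimum version below is an assumption; check the provider changelog):

```hcl
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      # provisioning_model and instance_termination_action are not
      # recognized by 3.x providers; require a 4.x release.
      version = ">= 4.21"
    }
  }
}
```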

Terraform Version

$ terraform version
Terraform v1.3.1
on darwin_amd64
+ provider registry.terraform.io/hashicorp/google v3.90.1

Additional information

No more comments.

Support for version selection in Workbench Instances

TL;DR

Hello,

Currently, both the UI and gcloud allow selecting a specific version when provisioning Workbench Instances. However, the capability to create a Workbench Instance with a specific version via Terraform seems absent. Attempting to set a version through the metadata configuration results in GCP overriding these parameters and initiating an instance with the most recent version, because the version metadata is designated as a protected key.

Is there an alternative method or workaround to achieve the creation of a Workbench Instance with a specified version using Terraform?

Terraform Resources

https://registry.terraform.io/providers/hashicorp/google/5.22.0/docs/resources/workbench_instance#example-usage---workbench-instance-full

Detailed design

No response

Additional information

https://cloud.google.com/vertex-ai/docs/workbench/instances/manage-metadata#protected

Bi-Weekly periodic check

TL;DR

This GH repo is now home to many samples, and over time it is likely to grow even further.

To ensure that all samples adhere to new changes, I propose bi-weekly automated checks.

Terraform Resources

No response

Detailed design

Using a periodic scheduler as a trigger, run the CICD tests for all samples.

Example PR: https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit/pull/1562

Additional information

No response

metadata_startup_script

TL;DR

Commands in metadata_startup_script are not properly executed once the compute instance is launched.

Expected behavior

Flask and the dependencies should be installed.

Observed behavior

  1. terraform apply
  2. go to VM instances and log in with SSH in the browser.
xyz@flask-vm:~$ pip list
Package             Version
------------------- -------
dbus-python         1.2.16
distro-info         1.0
pip                 20.3.4
python-apt          2.2.1
setuptools          52.0.0
unattended-upgrades 0.1
wheel               0.34.2

  3. No Flask is installed.
  4. If I install it manually, create the app, and launch it, it works as expected.

Terraform Configuration

provider "google" {
  credentials = file("credentials.json")
  project = "NAME_OF_PROJECT"
  region  = "us-east1"
  zone    = "us-east1-b"
}

# [START compute_flask_quickstart_vpc]
resource "google_compute_network" "vpc_network" {
  name                    = "my-custom-mode-network"
  auto_create_subnetworks = false
  mtu                     = 1460
}

resource "google_compute_subnetwork" "default" {
  name          = "my-custom-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-east1"
  network       = google_compute_network.vpc_network.id
}
# [END compute_flask_quickstart_vpc]

# [START compute_flask_quickstart_vm]
# Create a single Compute Engine instance
resource "google_compute_instance" "default" {
  name         = "flask-vm"
  machine_type = "f1-micro"
  zone         = "us-east1-b"
  tags         = ["ssh"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  # Install Flask
  metadata_startup_script = "sudo apt-get update; sudo apt-get install -yq build-essential python3-pip rsync; pip install flask"

  network_interface {
    subnetwork = google_compute_subnetwork.default.id

    access_config {
      # Include this section to give the VM an external IP address
    }
  }
}
# [END compute_flask_quickstart_vm]

# [START vpc_flask_quickstart_ssh_fw]
resource "google_compute_firewall" "ssh" {
  name = "allow-ssh"
  allow {
    ports    = ["22"]
    protocol = "tcp"
  }
  direction     = "INGRESS"
  network       = google_compute_network.vpc_network.id
  priority      = 1000
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["ssh"]
}
# [END vpc_flask_quickstart_ssh_fw]


# [START vpc_flask_quickstart_5000_fw]
resource "google_compute_firewall" "flask" {
  name    = "flask-app-firewall"
  network = google_compute_network.vpc_network.id

  allow {
    protocol = "tcp"
    ports    = ["5000"]
  }
  source_ranges = ["0.0.0.0/0"]
}
# [END vpc_flask_quickstart_5000_fw]

# Create new multi-region storage bucket in the US
# with versioning enabled

# [START storage_bucket_tf_with_versioning]
resource "random_id" "bucket_prefix" {
  byte_length = 8
}

resource "google_storage_bucket" "default" {
  name          = "${random_id.bucket_prefix.hex}-bucket-tfstate"
  force_destroy = false
  location      = "US"
  storage_class = "STANDARD"
  versioning {
    enabled = true
  }
}
# [END storage_bucket_tf_with_versioning]
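
One hedged diagnosis: on the debian-11 image, `python3-pip` provides the `pip3` command, and the startup script runs as root before any login session, so the quoted script may fail on the bare `pip` call or may still be running when you first SSH in. An illustrative variant of just the one line (sudo is redundant inside a startup script, which already runs as root):

```hcl
  # Use pip3 explicitly; inspect startup-script output via the serial
  # console or /var/log/syslog if Flask is still missing afterward.
  metadata_startup_script = "apt-get update; apt-get install -yq build-essential python3-pip rsync; pip3 install flask"
```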

Terraform Version

Terraform v1.3.7

Additional information

No response

KMS KeyRing Issues

TL;DR

google_kms_key_ring resources fail CICD tests when their names are not unique.

For example:

resource "google_kms_key_ring" "keyring" {
  name     = "keyring-name"
  location = "us-central1"
}

will fail CICD tests, as seen in #503 (comment), with the KMS error: keyring-name already exists

During CICD, a keyring is created and deleted multiple times. When a resource is archived or its name stays reserved, this error can be expected.

https://github.com/search?q=repo%3Aterraform-google-modules%2Fterraform-docs-samples+resource+%22google_kms_key_ring%22&type=code

Expected behavior

There are two ways this issue can be fixed:

  1. Using a random prefix or suffix

     resource "random_id" "default" {
       byte_length = 8
     }

     resource "google_kms_key_ring" "keyring" {
       name     = "keyring-name-${random_id.default.hex}"
       location = "us-central1"
     }
    
  2. Skipping CICD tests

Files to update

Observed behavior

No response

Terraform Configuration

NA

Terraform Version

NA

Additional information

No response

Excessive permissions in Scheduled Job example

TL;DR

The "execute jobs on schedule" Cloud Run example creates permissions that are not needed, and binds project-level permissions where job-level binding would do.

Expected behavior

The example demonstrates the minimum permissions required to achieve the goal.

Observed behavior

It is unclear to the reader which permissions are required, or what they are used for.

Terraform Configuration

resource "google_cloud_run_v2_job_iam_binding" "run_invoker_binding" {
  project  = google_cloud_run_v2_job.default.project
  location = google_cloud_run_v2_job.default.location
  name     = google_cloud_run_v2_job.default.name
  role     = "roles/run.invoker"
  members  = ["serviceAccount:${google_service_account.cloud_run_invoker_sa.email}"]
}

Terraform Version

❯ terraform version
Terraform v1.5.5
on darwin_arm64
+ provider registry.terraform.io/hashicorp/google v4.80.0
+ provider registry.terraform.io/hashicorp/google-beta v4.80.0

Additional information

I also needed roles/iam.serviceAccountUser for the account that actually applies the Terraform, but all examples seem to imply owner permissions on the project, so it does not need to be included in the example.

enforce SSL/TLS encryption to mysql_instance

TL;DR

I am trying to enforce SSL/TLS encryption on my mysql_instance. I followed the instance_ssl_cert code snippet, and terraform fails in all stages (plan, validate & apply).

Expected behavior

No response

Observed behavior

Step #2 - "Validate": │ Error: Unsupported block type
Step #2 - "Validate": │
Step #2 - "Validate": │ on main.tf line 99, in module "XXXXX":
Step #2 - "Validate": │ 99: settings {
Step #2 - "Validate": │
Step #2 - "Validate": │ Blocks of type "settings" are not expected here.

Terraform Configuration

module "mysql-test" {
  source  = "......./modules/safer_mysql"
  version = "~> 9.0.0"

  name              = "mysql-test"
  project_id        = module.project.project_id
  region            = "us-central1"
  zone              = "us-central1-a"
  availability_type = "REGIONAL"
  database_version  = "MYSQL_8_0_26"
  vpc_network       = "test-network"
  
   settings {
    tier = "db-n1-standard-1"
    ip_configuration {
      require_ssl = "false"
    }
  }
}
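
Terraform modules accept input variables rather than nested resource blocks, which is what the "Blocks of type "settings" are not expected here" error is saying. A sketch of the shape the call likely needs; the variable names below are assumptions, so check the safer_mysql module's documented inputs:

```hcl
module "mysql-test" {
  source  = "......./modules/safer_mysql"  # path elided as in the issue
  version = "~> 9.0.0"

  name       = "mysql-test"
  project_id = module.project.project_id
  region     = "us-central1"
  zone       = "us-central1-a"

  # Pass instance settings as module input variables,
  # not as a nested settings block.
  tier = "db-n1-standard-1"
}
```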

Terraform Version

Terraform v0.14.8

Additional information

No response

Fix: Enabling APIs in Code Samples

TL;DR

Enabling APIs in code samples is time-consuming, and it also complicates the samples with depends_on. Please fix this by maintaining a separate .tf file just for enabling APIs.

Terraform Resources

No response

Detailed design

In the `build` folder, keep a file like `project_config/main.tf` with all the TF code for enabling the APIs the samples need.

1. In general/regular CICD executions, just enable all the APIs required for the testing/CICD pipelines.
2. In the weekly CICD run, **enable** & **disable** all the enabled APIs to ensure that the GCP environment gets a proper cleanup.
3. Code to clean up: https://github.com/terraform-google-modules/terraform-docs-samples/search?q=google_project_service+depends_on
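
A sketch of what the proposed `project_config/main.tf` could contain; the service list is illustrative, not the repository's actual requirements:

```hcl
locals {
  required_apis = [
    "cloudfunctions.googleapis.com",
    "run.googleapis.com",
    "eventarc.googleapis.com",
  ]
}

resource "google_project_service" "required" {
  for_each = toset(local.required_apis)
  service  = each.value

  # Keep APIs enabled between runs; the weekly cleanup
  # job can disable them explicitly.
  disable_on_destroy = false
}
```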

Additional information

No response

Instead of adding the functions inside of a zip file, could we instead add the files in a sub-directory and then use the `archive_file` terraform resource?

Re: function-source.zip - instead of adding the functions inside of a zip file, could we instead add the files in a sub-directory and then use the archive_file terraform resource?

For example, move index.js and package.json to functions/hello-world/ and then:

data "archive_file" "archive" {
  type        = "zip"
  output_path = "/tmp/function-source.zip"
  source_dir  = "functions/hello-world"
}
resource "google_storage_bucket_object" "object" {
  name   = "function-source.zip"
  bucket = google_storage_bucket.bucket.name
  source = data.archive_file.archive.output_path  # Add path to the zipped function source code
}

It adds a little more complexity to the sample but could be useful for showing one method of including those functions in source control along side their terraformed cloud function resources.

Originally posted by @rogerthatdev in #143 (comment)

CICD Bugs

TL;DR

PTAL #510 (comment)

CICD is failing for unrelated errors.

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

NA

Terraform Version

NA

Additional information

No response

Adding an "idle-timeout-seconds" Terraform variable for the vertex_workbench_notebook pipeline

TL;DR

Hello,

We would like to add a new Terraform variable (idle-timeout-seconds) to control idle Vertex AI notebooks for vertex_workbench_notebook. It probably needs to be added to "vertex.tf".
Could you please help us?
Thanks,
İmran

Terraform Resources

https://cloud.google.com/vertex-ai/docs/workbench/instances/idle-shutdown
https://github.com/terraform-google-modules/terraform-docs-samples/blob/HEAD/vertex_ai/workbench_idle_shutdown/main.tf
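
For reference, the linked workbench_idle_shutdown sample sets the timeout through instance metadata; a minimal sketch in that style (the image family and the exact metadata key are assumptions; verify against the sample):

```hcl
resource "google_notebooks_instance" "default" {
  name         = "workbench-instance"
  location     = "us-central1-a"
  machine_type = "e2-medium"

  vm_image {
    project      = "deeplearning-platform-release"
    image_family = "tf-latest-cpu"
  }

  # Shut the notebook down after 3 hours of inactivity.
  metadata = {
    idle-timeout-seconds = "10800"
  }
}
```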

Detailed design

No response

Additional information

No response

Feature Request: Please provide `do not submit` automation bot

TL;DR

For this repo, enable Do Not Submit bot

More details @ https://github.com/googleapis/repo-automation-bots

Terraform Resources

NA

Detailed design

General steps
1. Go to https://github.com/apps/do-not-merge-gcf
2. Select https://github.com/terraform-google-modules/terraform-docs-samples
3. Save/Enable

Additional information

terraform-docs-samples provides PR reviewers with an auto-merge option for PRs. This enables automation to merge a PR once all the CICD tests have run successfully. This is very convenient when all the CICD tests pass, but not when there are new changes on the main branch.

Why a do-not-merge bot?
As seen in #484, sometimes PRs need to be kept on hold until some CLs or other updates land; sometimes this takes over 1-2 weeks.

As PR reviewers work in rotation, a do-not-merge automation helps us prevent any accidental auto-merge.

workflows/cloud_run_job CI failure

TL;DR

workflows/cloud_run_job is failing during periodic CI due to inability to access the us-central1-docker.pkg.dev/ci-tf-samples-3/my-repo/parallel-job:latest image
https://github.com/terraform-google-modules/terraform-docs-samples/blob/main/workflows/cloud_run_job/main.tf#L137

Expected behavior

No response

Observed behavior

Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185: google_eventarc_trigger.default: Creation complete after 16s [id=projects/ci-tf-samples-3-drrldvhz/locations/us-central1/triggers/cloud-run-job-trigger]
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185: 
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185: Error: Error waiting to create Job: Error waiting for Creating Job: Error code 13, message: Google Cloud Run Service Agent service-{redacted}@serverless-robot-prod.iam.gserviceaccount.com must have permission to read the image, us-central1-docker.pkg.dev/ci-tf-samples-3/my-repo/parallel-job:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that the image is from project [ci-tf-samples-3], which is not the same as this project [ci-tf-samples-3-drrldvhz]. Permission must be granted to the Google Cloud Run Service Agent service-949089809644@serverless-robot-prod.iam.gserviceaccount.com from this project.
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185: 
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185:   with google_cloud_run_v2_job.default,
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185:   on main.tf line 129, in resource "google_cloud_run_v2_job" "default":
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185:  129: resource "google_cloud_run_v2_job" "default" {
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z command.go:185: 
Step #6 - "test-samples-3": TestSamples/3/cloud_run_job 2024-02-23T06:54:07Z retry.go:99: Returning due to fatal error: FatalError{Underlying: error while running command: exit status 1; 
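One possible fix, sketched below under the assumption that the image stays in the ci-tf-samples-3 project: grant the consuming project's Cloud Run service agent (the account named in the error message) reader access on the Artifact Registry repository. This is a sketch, not the repo's adopted fix:

```hcl
# Sketch: let the test project's Cloud Run service agent pull the image
# from the Artifact Registry repo in the ci-tf-samples-3 project.
resource "google_artifact_registry_repository_iam_member" "run_agent_reader" {
  project    = "ci-tf-samples-3"
  location   = "us-central1"
  repository = "my-repo"
  role       = "roles/artifactregistry.reader"
  member     = "serviceAccount:service-949089809644@serverless-robot-prod.iam.gserviceaccount.com"
}
```

Alternatively, pushing the image into a repository inside the per-test project would avoid the cross-project permission entirely.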

Terraform Configuration

CI

Terraform Version

CI

Additional information

Ref: #605

Pubsub manipulation

TL;DR

After applying the Eventarc Terraform script, a new Pub/Sub topic gets created with a random name. How can I output the Pub/Sub topic name after applying the script? I need the topic name to create an alert using Terraform.
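The Eventarc trigger resource exports the auto-created topic under its `transport` attribute, so an output block along these lines should work (the trigger name `default` is a placeholder for whatever the script uses):

```hcl
# Sketch: surface the auto-created Pub/Sub topic of an Eventarc trigger.
output "eventarc_pubsub_topic" {
  value = google_eventarc_trigger.default.transport[0].pubsub[0].topic
}
```

The output can then be referenced when defining the alert in the same configuration.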

Terraform Resources

No response

Detailed design

No response

Additional information

No response

add section for local SSD

TL;DR

Create a VM with an attached local SSD by specifying:
disk_type - (Optional) The GCE disk type. Such as "pd-ssd", "local-ssd", "pd-balanced" or "pd-standard".
disk_size_gb - (Optional) The size of the image in gigabytes. If not specified, it will inherit the size of its base image. For SCRATCH disks, the size must be exactly 375GB.
type - (Optional) The type of GCE disk, can be either "SCRATCH" or "PERSISTENT".

```hcl
// Local SSD disk
scratch_disk {
  interface = "SCSI"
}
```
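A fuller hedged sketch of the requested sample, with hypothetical names and the constraint that local SSDs are always SCRATCH disks of exactly 375 GB:

```hcl
# Sketch only: a VM with one 375 GB local SSD attached.
resource "google_compute_instance" "local_ssd_vm" {
  name         = "local-ssd-vm" # hypothetical name
  machine_type = "n2-standard-2"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  # Each scratch_disk block attaches one 375 GB local SSD.
  scratch_disk {
    interface = "NVME" # or "SCSI"
  }

  network_interface {
    network = "default"
  }
}
```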

Terraform Resources

No response

Detailed design

No response

Additional information

No response

Project Number is used instead of project_id in run/jobs_execute_jobs_on_schedule/main.tf

TL;DR

https://github.com/terraform-google-modules/terraform-docs-samples/blob/902a30c60089ac90b8afce0f5985f067c0ae0f66/run/jobs_execute_jobs_on_schedule/main.tf#L113C131-L113C165

Incorrect code:
uri = "https://${google_cloud_run_v2_job.default.location}-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/${data.google_project.project.number}/jobs/${google_cloud_run_v2_job.default.name}:run"

Corrected code:
uri = "https://${google_cloud_run_v2_job.default.location}-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/${data.google_project.project.project_id}/jobs/${google_cloud_run_v2_job.default.name}:run"

Expected behavior

Creating the trigger would work and would show up under triggers

Observed behavior

The trigger is created and works using the project number, but it doesn't show up as a configured trigger.

When created via the GUI, the project_id is used in the trigger; using the project_id in the Terraform trigger makes it work correctly.

Terraform Configuration

https://github.com/terraform-google-modules/terraform-docs-samples/blob/902a30c60089ac90b8afce0f5985f067c0ae0f66/run/jobs_execute_jobs_on_schedule/main.tf#L113C131-L113C165

Terraform Version

Any

Additional information

No response

sqlserver failed TF samples is blocking other PRs

          > Hi @msampathkumar, thanks for reviewing the change. I saw the above `terraform-docs-samples-int-trigger (cloud-foundation-cicd)` failed with `--- FAIL: TestSamples/2/sqlserver_instance_private_ip (603.18s)`. I feel my change should not impact it. Please advice. Thanks.

Sorry for the trouble. The CI/CD is in the process of being upgraded, and we will have to wait for it to be completed or for this bug to be cleared up.

Originally posted by @msampathkumar in #538 (comment)
