pacovk / tapir

A Private Terraform Registry

Home Page: https://pascal.euhus.dev/tapir/

License: Apache License 2.0

Ruby 0.03% Makefile 0.25% HCL 0.12% Java 73.05% HTML 0.22% TypeScript 24.86% CSS 0.11% Dockerfile 0.38% Shell 0.02% Roff 0.96%
hashicorp hashicorp-terraform registry terraform terraform-modules cloud infrastructure-as-code terraform-provider hacktoberfest

tapir's Introduction

Tapir


A Private Terraform Registry


Tapir overview

Tapir is the registry you always wanted if you are using Terraform at enterprise scale. Tapir's core values are to provide

  • visibility
  • transparency
  • an increased adoption rate
  • security
  • quality

for your Terraform modules.

Feedback

You can send feedback and feature requests via GitHub issues. Either vote on existing issues or feel free to raise a new one.

Why?

Modules

Terraform modules are reusable parts of infrastructure code. The most crucial parts of reusability are transparency and visibility. While Terraform supports Git-based modules, this capability comes with several disadvantages:

  • Access to Git repositories is often granted at team level, with no access for others by default.
  • Search capabilities are very limited when you are looking for specific Terraform modules.
  • You may not get insights into the code's quality and security measures.
  • Module versioning is not enforced.
  • Documentation formats vary, or docs are missing altogether.

This is where Tapir jumps in (a rough example of registry-based module consumption follows below).
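
For illustration only, this is roughly what consuming a module from a private registry looks like; the hostname, namespace, module name and the cidr_block input below are placeholders, not values taken from this project.

module "vpc" {
  # Registry source addresses have the form <HOSTNAME>/<NAMESPACE>/<NAME>/<PROVIDER>;
  # "registry.example.com/platform-team/vpc/aws" is a placeholder.
  source  = "registry.example.com/platform-team/vpc/aws"
  version = "~> 1.2" # registry sources support version constraints

  cidr_block = "10.0.0.0/16" # example module input
}

Because the source is a registry address rather than a Git URL, discovery and version resolution are handled by the registry.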

Providers

If you make use of custom providers, or just want to have them mirrored, you otherwise need an artifact store such as Artifactory for the binaries. Additionally, users need to break out of the toolchain, set up the providers manually and copy them into the global provider directory. By supporting Terraform providers, Tapir not only makes your providers visible, it also keeps users within the Terraform toolchain. That means:

  • Build providers with the same process and pipeline and make use of the official HashiCorp provider project template.
  • Increase security by enforcing GPG-signed providers. Running terraform init will check that the SHA256SUMS are valid before downloading the actual provider binary.
  • Help your users focus on the infrastructure code rather than the setup. Tapir provides ready-to-copy code with a proper provider configuration example (a sketch of such a configuration follows this list).
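
As a rough sketch of the kind of provider configuration Tapir serves ready to copy, the following uses placeholder values for the registry hostname and the provider address; the actual values come from your Tapir instance.

terraform {
  required_providers {
    # "registry.example.com/myorg/custom" is a placeholder of the form
    # <HOSTNAME>/<NAMESPACE>/<TYPE> for a provider hosted in the private registry.
    custom = {
      source  = "registry.example.com/myorg/custom"
      version = ">= 3.0.0"
    }
  }
}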

About Tapir

Tapir is an implementation of the official Terraform registry protocol. You can easily run an instance on your own with the full flexibility and power a central registry has.

  • It provides a simple but powerful UI to search for modules and providers available across your organization.
  • It implements the official Terraform registry protocols
    • modules and providers supported
  • It scans the module source code on push, giving you insights into code quality and security measures
    • Tapir integrates Trivy for that purpose
  • It generates documentation and stats for the module
    • See module dependencies, inputs, outputs and resources that will be generated
    • Tapir integrates terraform-docs for that purpose
  • It provides several storage adapters
    • currently S3, AzureBlob and Local
  • It provides several database adapters for the data
    • currently Dynamodb (default), Elasticsearch, CosmosDb
  • It provides a REST-API for custom integrations and further automation
  • If you run Tapir with local storage, it can even be operated in an air-gapped environment with no internet access

Tapir is built on Quarkus and ReactJS. You can run Tapir wherever you can run Docker images.

Apart from the above, this is what Wikipedia knows about Tapirs.

Overview

Deployment

NOTE: Starting with version 0.6.0, authentication is required. Hence, you need an OIDC IDP to run Tapir. Read more about authentication below.

You can run Tapir wherever you can run Docker images. Images are available on DockerHub pacovk/tapir and AWS Elastic Container Registry public.ecr.aws/pacovk/tapir. There are samples with Terraform in examples/.

Other deployment options available are:

Configure

Tapir is configured via environment variables. You can learn how to set up Tapir here.

How-to

To see how to use Tapir, please read the usage docs.

Troubleshoot

See troubleshooting docs

Roadmap

  • Add more storage adapters
    • GCP
  • Add more database adapters
    • PostgreSQL
  • Provide a GitHub/GitLab integration to crawl for additional code metrics and ownership information

Contribution

If you want to contribute to this project, please read the contribution guidelines. A detailed How-to guide on local development can be found in the docs.

Actively searching for contributors.
Feedback is always appreciated 🌈
Feel free to open an issue (bug or feature request) or provide a pull request. πŸ”§

Contributors ✨

Thanks go to these wonderful people (emoji key):

  • PacoVK: πŸ‘€ πŸ“† 🚧 πŸ’‘ πŸ’» πŸ“–
  • Andrea Defraia: πŸ’‘
  • Wmxs: πŸ› πŸ€”
  • Jonasz Łasut-Balcerzak: πŸ’‘ πŸ’»
  • Tim Chaffin: πŸ‘€ πŸ“–
  • Tom Beckett: πŸ’‘ πŸ’»
  • Oleksandr Kuzminskyi: πŸ›
  • GrzegorzHejman: πŸ›
  • CΓ©dric Braekevelt: πŸ›

tapir's People

Contributors

allcontributors[bot], jonasz-lasut, pacovk, renovate[bot], tim-chaffin, tombeckett


tapir's Issues

Issues with Microsoft Entra ID (formerly Azure AD) authentication

Has anyone tried to use Microsoft Entra ID (formerly Azure AD) as an OIDC IDP? When I try to configure it, it seems to work OK (Tapir gets the correct token), except I get this token verification error:

ERROR [io.qua.oid.run.CodeAuthenticationMechanism] (vert.x-eventloop-thread-1) ID token verification has failed: JWT rejected due to invalid signature.

I've found this information related to the same or very similar issue:

quarkusio/quarkus#32701

And it seems that this PR enables verification customization to work with Azure AD tokens:

quarkusio/quarkus#33319

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Package lookup failures

Warning

These dependencies are deprecated:

Datasource: npm
Name: @babel/plugin-proposal-private-property-in-object
Replacement PR: Unavailable

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • chore(deps): update surefire-plugin.version to v3.4.0 (org.apache.maven.plugins:maven-failsafe-plugin, org.apache.maven.plugins:maven-surefire-plugin)

Warning

Renovate failed to look up the following dependencies: Failed to look up terraform-module package 127.0.0.1:8443/paco/testme/aws.

Files affected: dev/example/main.tf


Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
docker-compose.yml
  • quay.io/keycloak/keycloak 22.0
  • opensearchproject/opensearch 2.15.0
dockerfile
Dockerfile
  • aquasec/trivy 0.54.1
  • amazoncorretto 21-alpine3.19-jdk
github-actions
.github/workflows/build.yml
  • actions/checkout v4
  • actions/setup-java v4
  • actions/upload-artifact v4
  • actions/checkout v4
.github/workflows/deploy.yml
  • actions/checkout v4
  • docker/setup-qemu-action v3
  • docker/setup-buildx-action v3
  • dawidd6/action-download-artifact v6
  • docker/login-action v3
  • aws-actions/configure-aws-credentials v4
  • aws-actions/amazon-ecr-login v2
  • docker/metadata-action v5
  • docker/build-push-action v6
maven
pom.xml
  • io.quarkus.platform:quarkus-bom 3.13.0
  • io.quarkus.platform:quarkus-amazon-services-bom 3.13.0
  • com.azure:azure-sdk-bom 1.2.26
  • io.quarkiverse.quinoa:quarkus-quinoa 2.4.5
  • org.apache.maven:maven-artifact 3.9.7
  • io.rest-assured:rest-assured 5.5.0
  • io.quarkus.platform:quarkus-maven-plugin 3.13.0
  • org.apache.maven.plugins:maven-compiler-plugin 3.13.0
  • org.apache.maven.plugins:maven-surefire-plugin 3.3.1
  • org.apache.maven.plugins:maven-failsafe-plugin 3.3.1
  • org.apache.maven.plugins:maven-checkstyle-plugin 3.4.0
  • org.jacoco:jacoco-maven-plugin 0.8.12
  • org.graalvm.buildtools:native-maven-plugin 0.10.2
maven-wrapper
.mvn/wrapper/maven-wrapper.properties
  • maven 3.9.8
npm
src/main/webui/package.json
  • @emotion/react ^11.10.5
  • @emotion/styled ^11.10.5
  • @fontsource/roboto ^5.0.0
  • @mui/icons-material ^5.15.19
  • @mui/material ^5.15.19
  • @testing-library/jest-dom ^6.1.3
  • @testing-library/react ^16.0.0
  • @testing-library/user-event ^14.0.0
  • @types/jest ^29.5.6
  • @types/node ^20.8.9
  • @types/react ^18.2.32
  • @types/react-dom ^18.2.14
  • @types/react-syntax-highlighter ^15.5.9
  • lodash.debounce ^4.0.8
  • react ^18.2.0
  • react-dom ^18.2.0
  • react-router-dom ^6.17.0
  • react-scripts 5.0.1
  • react-syntax-highlighter ^15.5.0
  • typescript ^5.2.2
  • web-vitals ^4.0.0
  • @babel/plugin-proposal-private-property-in-object ^7.21.11
  • @testing-library/dom ^10.1.0
  • prettier 3.3.3
terraform
dev/example/main.tf
  • 127.0.0.1:8443/paco/testme/aws 1.4.0

  • Check this box to trigger a request for Renovate to run again on this repository

How can I get more information to help me deploy tapir?

When I try to upload the provider, it returns {"errorId":"41ede21f-72c9-4df3-a33b-e9f75eea3965","errors":[{"message":"An unexpected error has occurred. Please raise an issue if you think this is a bug."}]}
But based on the existing documentation, I haven't been able to find out where the problem lies.

Console output of Tapir: (screenshot attached; no further information)

Error trying to use Elasticsearch Backend

Image: pacovk/tapir:0.6.3

Issue: Trying to set up Tapir using a Helm chart, I'm receiving an error from the application at the step where it's supposed to connect to Elasticsearch:

2024-06-04 04:47:20,899 INFO  [io.quarkus] (main) Installed features: [amazon-dynamodb, amazon-s3, amazon-sts, cdi, config-yaml, elasticsearch-rest-client, hibernate-validator, oidc, quinoa, resteasy-reactive, resteasy-reactive-jackson, security, smallrye-context-propagation, smallrye-openapi, vertx]
2024-06-04 04:47:20,904 INFO  [cor.Bootstrap] (main) Validate GPG key configuration provided
2024-06-04 04:47:20,905 INFO  [cor.Bootstrap] (main) Start to bootstrap registry database [elasticsearch]
2024-06-04 04:47:21,224 ERROR [io.qua.run.Application] (main) Failed to start application (with profile [prod]): org.apache.http.ConnectionClosedException: Connection is closed
	at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:920)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288)
	at core.backend.elasticsearch.ElasticSearchRepository.createIndexIfNotExists(ElasticSearchRepository.java:310)
	at core.backend.elasticsearch.ElasticSearchRepository.bootstrap(ElasticSearchRepository.java:209)
	at core.backend.elasticsearch.ElasticSearchRepository_ClientProxy.bootstrap(Unknown Source)
	at core.Bootstrap.bootstrap(Bootstrap.java:33)
	at Main$TerraformRegistry.run(Main.java:22)
	at Main_TerraformRegistry_ClientProxy.run(Unknown Source)
	at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:132)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:71)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:44)
	at Main.main(Main.java:9)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:61)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:32)
Caused by: org.apache.http.ConnectionClosedException: Connection is closed

I've tried other ports in the Elastic definitions, but looking into the code I wasn't able to find any authentication method for Elastic. What does it expect in order to accept connections? The default behavior is to close connections without any auth method. This is an issue for me because when deploying Elastic via ECK, it isn't possible to deactivate authentication in Elasticsearch.

Allow deployment on AWS EKS using Service Account & IRSA

I've deployed Tapir on AWS EKS using Terraform.
Assuming you have a cluster, this is the needed code (it might be useful to add this as an example).

The code below will work on a "standard" cluster, but the service account gets ignored.
To be clear: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

The service account is not used; the pod ends up using the EKS node IAM role instead.
Adding the needed permissions to the node IAM role allows the deployment to run without issues, but I think it should be allowed/encouraged to use IRSA instead.

Providers:

provider "aws" {
  region = "eu-west-1"
}

resource "random_pet" "pet" {
}

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_eks_cluster" "eks" {
  name = module.eks.name
}

provider "kubernetes" {
config_path = "./my-kubeconfig-file"
}

Tapir:

resource "kubernetes_namespace_v1" "tapir" {
  metadata {
    name = "tapir"
    labels = merge(
      module.labels.labels,
      { name = "tapir" }
    )
  }
  lifecycle {
    ignore_changes = [metadata[0].annotations]
  }
}

resource "kubernetes_deployment_v1" "tapir" {
  metadata {
    namespace = kubernetes_namespace_v1.tapir.id
    name      = "tapir"
    labels = merge(
      module.labels.labels,
      { name = "tapir" }
    )
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.tapir.arn
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        name = "tapir"
      }
    }

    template {
      metadata {
        labels = {
          name = "tapir"
        }
        annotations = {
          "eks.amazonaws.com/role-arn" = aws_iam_role.tapir.arn
        }
      }

      spec {
        service_account_name = "tapir"
        container {
          image = "pacovk/tapir"
          name  = "tapir"

          port {
            container_port = 8080
          }

          env {
            name  = "S3_STORAGE_BUCKET_NAME"
            value = module.tapir_s3.id
          }
          env {
            name  = "S3_STORAGE_BUCKET_REGION"
            value = data.aws_region.current.name
          }
          env {
            name  = "REGISTRY_HOSTNAME"
            value = "tapir.<mydomain>"
          }
          env {
            name  = "REGISTRY_PORT"
            value = 443
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "tapir" {
  metadata {
    name      = "tapir"
    namespace = kubernetes_namespace_v1.tapir.id
  }
  spec {
    selector = {
      name = "tapir"
    }
    port {
      port        = 443
      target_port = 8080
    }

    type = "ClusterIP"
  }
}

resource "kubernetes_ingress_v1" "tapir" {
  metadata {
    name      = "tapir"
    namespace = kubernetes_namespace_v1.tapir.id
    labels = {
      service = "tapir"
    }
    # Values here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/
    annotations = {
      "kubernetes.io/ingress.class"                        = "alb"
      "alb.ingress.kubernetes.io/group.name"               = "default"
      "alb.ingress.kubernetes.io/scheme"                   = "internet-facing"
      "alb.ingress.kubernetes.io/backend-protocol"         = "HTTPS"
      "alb.ingress.kubernetes.io/target-type"              = "ip"
      "alb.ingress.kubernetes.io/listen-ports"             = "[{\"HTTPS\": 443}]"
      "alb.ingress.kubernetes.io/ssl-policy"               = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
      "alb.ingress.kubernetes.io/certificate-arn"          = module.cert.arn
    }
  }
  spec {
    rule {
      host = "tapir.<mydomain>"
      http {
        path {
          path      = "/*"
          path_type = "ImplementationSpecific"
          backend {
            service {
              name = "tapir"
              port {
                number = 443
              }
            }
          }
        }
      }
    }
  }
}
resource "kubernetes_service_account_v1" "tapir" {
  metadata {
    name      = "tapir"
    namespace = kubernetes_namespace_v1.tapir.id
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.tapir.arn
    }
  }
  automount_service_account_token = false
}

resource "kubernetes_cluster_role" "tapir" {
  metadata {
    name = "tapir"
  }

  rule {
    api_groups = [""]
    resources  = ["services", "endpoints", "pods"]
    verbs      = ["get", "watch", "list"]
  }

  rule {
    api_groups = ["extensions", "networking.k8s.io"]
    resources  = ["ingresses"]
    verbs      = ["get", "watch", "list"]
  }

  rule {
    api_groups = [""]
    resources  = ["nodes"]
    verbs      = ["list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "tapir" {
  metadata {
    name = "tapir"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.tapir.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = "tapir"
    namespace = kubernetes_namespace_v1.tapir.id
  }
}


resource "aws_iam_role" "tapir" {
  name               = "tapir"
  description        = "Role assumed by EKS ServiceAccount tapir"
  assume_role_policy = data.aws_iam_policy_document.tapir_sa.json
  tags = merge(
    module.labels.labels,
    {
      "Resource" = "aws_iam_role.tapir"
    }
  )
}

data "aws_iam_policy_document" "tapir_sa" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type = "Federated"
      identifiers = [
        "arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}"
      ]
    }
    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${kubernetes_namespace_v1.tapir.id}:tapir"]
    }
  }
}

resource "aws_iam_role_policy" "tapir" {
  role   = aws_iam_role.tapir.id
  policy = data.aws_iam_policy_document.tapir.json
}

data "aws_iam_policy_document" "tapir" {
  statement {
    sid    = "S3Access"
    effect = "Allow"
    resources = [
      "${module.tapir_s3.arn}/*",
      module.tapir_s3.arn
    ] #tfsec:ignore:aws-iam-no-policy-wildcards
    actions = [
      "s3:Describe*",
      "s3:List*",
      "s3:Get*",
      "s3:Put*"
    ]
  }

  statement {
    sid    = "DynamoDbAccess"
    effect = "Allow"
    resources = [
      "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/Modules",
      "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/Providers",
      "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/Reports"
    ]
    actions = [
      "dynamodb:DescribeLimits",
      "dynamodb:DescribeTimeToLive",
      "dynamodb:ListTagsOfResource",
      "dynamodb:DescribeReservedCapacityOfferings",
      "dynamodb:DescribeReservedCapacity",
      "dynamodb:ListTables",
      "dynamodb:BatchGetItem",
      "dynamodb:BatchWriteItem",
      "dynamodb:CreateTable",
      "dynamodb:DeleteItem",
      "dynamodb:GetItem",
      "dynamodb:GetRecords",
      "dynamodb:PutItem",
      "dynamodb:Query",
      "dynamodb:UpdateItem",
      "dynamodb:Scan",
      "dynamodb:DescribeTable"
    ]
  }

  statement {
    sid       = "KMSAccess"
    effect    = "Allow"
    resources = [module.kms.arn]
    actions = [
      "kms:Decrypt",
      "kms:Describe*",
      "kms:Encrypt",
      "kms:GenerateDataKey*",
      "kms:List*",
      "kms:ReEncrypt*"
    ]
  }
}


module "tapir_s3" {
  source        = "../../../modules/s3"
  labels        = module.labels.labels
  kms_key       = module.kms.arn
  name          = "${random_pet.pet.id}-tapir"
  force_destroy = true
}

Tapir pod logs:
2023-09-04 09:59:56,270 INFO [io.quarkus] (main) Installed features: [amazon-dynamodb, amazon-s3, cdi, config-yaml, elasticsearch-rest-client, hibernate-validator, quinoa, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, smallrye-openapi, vertx]
2023-09-04 09:59:56,275 INFO [cor.Bootstrap] (main) Validate GPG key configuration provided
2023-09-04 09:59:56,277 INFO [cor.Bootstrap] (main) Start to bootstrap registry database [dynamodb]
2023-09-04 09:59:56,884 WARN [sof.ama.aws.aut.cre.int.WebIdentityCredentialsUtils] (main) To use web identity tokens, the 'sts' service module must be on the class path.
2023-09-04 09:59:57,329 ERROR [io.qua.run.Application] (main) Failed to start application (with profile [prod]): software.amazon.awssdk.services.dynamodb.model.DynamoDbException: User: arn:aws:sts::123456789101:assumed-role/safe-drake-test-cluster20230904091517423300000010/i-0bde33149ce682d09 is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-west-1:123456789101:table/Modules because no identity-based policy allows the dynamodb:CreateTable action (Service: DynamoDb, Status Code: 400, Request ID: QQP3NPAQJ9U1UJILGST8IMFDVVVV4KQNSO5AEMVJF66Q9ASUAAJG)

User: arn:aws:sts::123456789101:assumed-role/safe-drake-test-cluster20230904091517423300000010/i-0bde33149ce682d09 is not authorized to perform...
Pod should instead be using the Service Account IAM Role specified in Annotations: "eks.amazonaws.com/role-arn" = aws_iam_role.tapir.arn

Uploading a new provider version fails when using CosmosDB as a backend

Version: latest docker image/0.7.1

I've noticed that uploading a new version of a provider fails if CosmosDB is being used as a backend, with Cosmos returning a ConflictException with the message: Resource with specified id or name already exists.

Looking at the stack trace, it might be caused by the ingestProviderData method using createItem when creating/updating the provider within the Cosmos container:

providerContainer.createItem(
    providerToIngest,
    new PartitionKey(provider.getId()),
    new CosmosItemRequestOptions()
);

While ingestModuleData uses upsertItem:

modulesContainer.upsertItem(
    moduleToIngest,
    new PartitionKey(module.getId()),
    new CosmosItemRequestOptions()
);

Stack Trace:

2024-05-30T13:15:56.514889052Z 	Suppressed: java.lang.Exception: #block terminated with an error
2024-05-30T13:15:56.514895805Z 		at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:100)
2024-05-30T13:15:56.514902477Z 		at reactor.core.publisher.Mono.block(Mono.java:1742)
2024-05-30T13:15:56.514909220Z 		at com.azure.cosmos.CosmosContainer.blockItemResponse(CosmosContainer.java:270)
2024-05-30T13:15:56.514915802Z 		at com.azure.cosmos.CosmosContainer.createItem(CosmosContainer.java:207)
2024-05-30T13:15:56.514922505Z 		at core.backend.azure.cosmosdb.CosmosDbRepository.ingestProviderData(CosmosDbRepository.java:195)
2024-05-30T13:15:56.514929297Z 		at core.backend.azure.cosmosdb.CosmosDbRepository_ClientProxy.ingestProviderData(Unknown Source)
2024-05-30T13:15:56.514935849Z 		at core.service.ProviderService.ingestProviderData(ProviderService.java:48)
2024-05-30T13:15:56.514948022Z 		at core.service.ProviderService_ClientProxy.ingestProviderData(Unknown Source)
2024-05-30T13:15:56.514954694Z 		at core.upload.service.UploadService.uploadProvider(UploadService.java:67)
2024-05-30T13:15:56.514961237Z 		at core.upload.service.UploadService_ClientProxy.uploadProvider(Unknown Source)
2024-05-30T13:15:56.514967979Z 		at api.Providers.uploadProvider(Providers.java:73)
2024-05-30T13:15:56.514974481Z 		at api.Providers$quarkusrestinvoker$uploadProvider_b68a044ef86b0f23fd44d8fa5cae7bb31d1e67e2.invoke(Unknown Source)
2024-05-30T13:15:56.514981394Z 		at org.jboss.resteasy.reactive.server.handlers.InvocationHandler.handle(InvocationHandler.java:29)
2024-05-30T13:15:56.514988287Z 		at io.quarkus.resteasy.reactive.server.runtime.QuarkusResteasyReactiveRequestContext.invokeHandler(QuarkusResteasyReactiveRequestContext.java:141)
2024-05-30T13:15:56.514995059Z 		at org.jboss.resteasy.reactive.common.core.AbstractResteasyReactiveContext.run(AbstractResteasyReactiveContext.java:147)
2024-05-30T13:15:56.515001842Z 		at io.quarkus.vertx.core.runtime.VertxCoreRecorder$14.runWith(VertxCoreRecorder.java:599)
2024-05-30T13:15:56.515008444Z 		at org.jboss.threads.EnhancedQueueExecutor$Task.doRunWith(EnhancedQueueExecutor.java:2516)
2024-05-30T13:15:56.515015207Z 		at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2495)
2024-05-30T13:15:56.515021759Z 		at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1521)
2024-05-30T13:15:56.515034553Z 		at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:11)
2024-05-30T13:15:56.515041475Z 		at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:11)
2024-05-30T13:15:56.515048258Z 		... 2 more
2024-05-30T13:15:56.515080631Z 2024-05-30 13:15:56,504 SEVERE [api.map.exc.ThrowableMapper] (executor-thread-5) errorId 730998d6-0a3e-4cfb-afd2-fbe0c8b02266: {"ClassName":"ConflictException","userAgent":"azsdk-java-cosmos/4.61.0 Linux/5.15.153.1-2.cm2 JRE/21.0.3","statusCode":409,"resourceAddress":"rntbd://cdb-ms-prod-uksouth1-be17.documents.azure.com:14153/apps/.../services/.../partitions/.../replicas/133598370177412741p/","error":"{\"Errors\":[\"Resource with specified id or name already exists.\"]}","innerErrorMessage":"[\"Resource with specified id or name already exists.\"]","causeInfo":null,"responseHeaders":"{x-ms-current-replica-set-size=4, x-ms-last-state-change-utc=Wed, 29 May 2024 02:24:47.171 GMT, x-ms-request-duration-ms=0.813, x-ms-session-token=0:-1#7, lsn=7, x-ms-request-charge=5.48, x-ms-schemaversion=1.17, x-ms-transport-request-id=54, x-ms-number-of-read-regions=0, x-ms-current-write-quorum=3, x-ms-cosmos-quorum-acked-llsn=7, x-ms-quorum-acked-lsn=7, x-ms-activity-id=22868e67-358f-41f7-abd8-8fb3c3bc3c4a, x-ms-xp-role=1, x-ms-global-Committed-lsn=7, x-ms-cosmos-llsn=7, x-ms-serviceversion= version=2.14.0.0}","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.61.0 Linux/5.15.153.1-2.cm2 JRE/21.0.3","activityId":"22868e67-358f-41f7-abd8-8fb3c3bc3c4a","requestLatencyInMs":4,"requestStartTimeUTC":"2024-05-30T13:15:56.496358087Z","requestEndTimeUTC":"2024-05-30T13:15:56.501189160Z","responseStatisticsList":[{"storeResult":{"storePhysicalAddress":"rntbd://cdb-ms-prod-uksouth1-be17.documents.azure.com:14153/apps/.../services/.../partitions/.../replicas/133598370177412741p/","lsn":7,"globalCommittedLsn":7,"partitionKeyRangeId":"0","isValid":true,"statusCode":409,"subStatusCode":0,"isGone":false,"isNotFound":false,"isInvalidPartition":false,"isThroughputControlRequestRateTooLarge":false,"requestCharge":5.48,"itemLSN":-1,"sessionToken":"0:-1#7","backendLatencyInMs":0.813,"retryAfterInMs":null,"exceptionMessage":"[\"Resource with specified id or name already exists.\"]","exceptionResponseHeaders":"{x-ms-current-replica-set-size=4, x-ms-last-state-change-utc=Wed, 29 May 2024 02:24:47.171 GMT, x-ms-request-duration-ms=0.813, x-ms-session-token=0:-1#7, lsn=7, x-ms-request-charge=5.48, x-ms-schemaversion=1.17, x-ms-transport-request-id=54, x-ms-number-of-read-regions=0, x-ms-current-write-quorum=3, x-ms-cosmos-quorum-acked-llsn=7, x-ms-quorum-acked-lsn=7, x-ms-activity-id=22868e67-358f-41f7-abd8-8fb3c3bc3c4a, x-ms-xp-role=1, x-ms-global-Committed-lsn=7, x-ms-cosmos-llsn=7, x-ms-documentdb-partitionkeyrangeid=0, x-ms-serviceversion= 
version=2.14.0.0}","replicaStatusList":["14153:Connected"],"transportRequestTimeline":[{"eventName":"created","startTimeUTC":"2024-05-30T13:15:56.498202070Z","durationInMilliSecs":0.031148},{"eventName":"queued","startTimeUTC":"2024-05-30T13:15:56.498233218Z","durationInMilliSecs":6.01E-4},{"eventName":"channelAcquisitionStarted","startTimeUTC":"2024-05-30T13:15:56.498233819Z","durationInMilliSecs":0.194},{"eventName":"pipelined","startTimeUTC":"2024-05-30T13:15:56.498427819Z","durationInMilliSecs":0.309159},{"eventName":"transitTime","startTimeUTC":"2024-05-30T13:15:56.498736978Z","durationInMilliSecs":1.614457},{"eventName":"decodeTime","startTimeUTC":"2024-05-30T13:15:56.500351435Z","durationInMilliSecs":0.100136},{"eventName":"received","startTimeUTC":"2024-05-30T13:15:56.500451571Z","durationInMilliSecs":0.604603},{"eventName":"completed","startTimeUTC":"2024-05-30T13:15:56.501056174Z","durationInMilliSecs":0.009998}],"rntbdRequestLengthInBytes":4628,"rntbdResponseLengthInBytes":275,"requestPayloadLengthInBytes":4198,"responsePayloadLengthInBytes":275,"channelStatistics":{"channelId":"c47981a1","channelTaskQueueSize":1,"pendingRequestsCount":0,"lastReadTime":"2024-05-30T13:15:56.494839812Z","waitForConnectionInit":false},"serviceEndpointStatistics":{"availableChannels":1,"acquiredChannels":0,"executorTaskQueueSize":0,"inflightRequests":1,"lastSuccessfulRequestTime":"2024-05-30T13:15:56.496Z","lastRequestTime":"2024-05-30T13:15:56.491Z","createdTime":"2024-05-30T12:58:11.658099847Z","isClosed":false,"cerMetrics":{}}},"requestResponseTimeUTC":"2024-05-30T13:15:56.501189160Z","requestStartTimeUTC":"2024-05-30T13:15:56.498202070Z","requestResourceType":"Document","requestOperationType":"Create","requestSessionToken":null,"e2ePolicyCfg":null,"excludedRegions":null,"sessionTokenEvaluationResults":[]}],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":["uk south"],"retryContext":{"statusAndSubStatusCodes":null,"retryLatency":0,"retryCount":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":[{"serializationType":"ITEM_SERIALIZATION","startTimeUTC":"2024-05-30T13:15:56.496516451Z","endTimeUTC":"2024-05-30T13:15:56.497166698Z","durationInMilliSecs":0.650247}]},"gatewayStatisticsList":[],"samplingRateSnapshot":1.0,"bloomFilterInsertionCountSnapshot":0,"systemInformation":{"usedMemory":"103909 KB","availableMemory":"944667 KB","systemCpuLoad":"(2024-05-30T13:15:30.917723281Z 0.3%), (2024-05-30T13:15:35.917718511Z 0.4%), (2024-05-30T13:15:40.917719735Z 0.3%), (2024-05-30T13:15:45.917729722Z 0.3%), (2024-05-30T13:15:50.917720638Z 0.4%), (2024-05-30T13:15:55.917768297Z 20.9%)","availableProcessors":2},"clientCfgs":{"id":1,"machineId":"uuid:2c59872a-0835-43d5-902e-bf79deb869cf","connectionMode":"DIRECT","numberOfClients":1,"excrgns":"[]","clientEndpoints":{"https://<COSMOSDB>.documents.azure.com":1},"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:true)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false, rv: true)"},"consistencyCfg":"(consistency: Eventual, mm: true, prgns: [])","proactiveInitCfg":"","e2ePolicyCfg":"","sessionRetryCfg":""}}}

Error loading module if security finding is present

We've noticed that if there is a security finding present, the modules screen will not load.

To resolve this, we've had to remove the report data from our DB for the screen to load.

Version: 0.7.1 and 0.7.0.
Stack Trace:

2024-05-16T15:42:16.370174603Z 2024-05-16 15:42:16,369 DEBUG [org.jbo.res.rea.com.cor.AbstractResteasyReactiveContext] (executor-thread-6) Restarting handler chain for exception exception: java.lang.IllegalStateException: Unable to parse JSON {"id":"test-foo-azurerm-1.0.0","moduleName":"foo","moduleVersion":"1.0.0","moduleNamespace":"test","provider":"azurerm","securityReport":{"main.tf":[{"id":"AVD-AZU-0014","qualifiedId":"AVD-AZU-0014","provider":"Azure","service":"keyvault","impact":"Expiration Date is an optional Key Vault Key behavior and is not set by default.\n\nSet when the resource will be become inactive.","resolution":"Set an expiration date on the vault key","links":[https://docs.microsoft.com/en-us/powershell/module/az.keyvault/update-azkeyvaultkey?view=azps-5.8.0#example-1--modify-a-key-to-enable-it--and-set-the-expiration-date-and-tags,https://avd.aquasec.com/misconfig/avd-azu-0014],"description":"Ensure that the expiration date is set on all keys","severity":"MEDIUM","warning":false,"status":0,"resource":"azurerm_key_vault_key.mykey","location":{"fileName":"main.tf","start_line":1,"end_line":9},"rule_description":"Key should have an expiry date specified."}]},"documentation":{"inputs":[],"modules":[],"outputs":[],"providers":[{"name":"azurerm"}],"resources":[{"name":"mykey","type":"key_vault_key","source":"hashicorp/azurerm","mode":"managed","version":"latest"}]},"_rid":"Vb0UAL75uyoFAAAAAAAAAA==","_self":"dbs/Vb0UAA==/colls/Vb0UAL75uyo=/docs/Vb0UAL75uyoFAAAAAAAAAA==/","_etag":"\"0b006deb-0000-1100-0000-664627120000\"","_attachments":"attachments/","_ts":1715873554}
2024-05-16T15:42:16.370312593Z at com.azure.cosmos.implementation.ItemDeserializer$JsonDeserializer.convert(ItemDeserializer.java:38)
2024-05-16T15:42:16.370338181Z at com.azure.cosmos.implementation.Utils.parse(Utils.java:563)
2024-05-16T15:42:16.370354351Z at com.azure.cosmos.models.CosmosItemResponse.getItem(CosmosItemResponse.java:144)
2024-05-16T15:42:16.370362226Z at core.backend.azure.cosmosdb.CosmosDbRepository.getReportByModuleVersion(CosmosDbRepository.java:227)
2024-05-16T15:42:16.370384709Z at core.backend.azure.cosmosdb.CosmosDbRepository_ClientProxy.getReportByModuleVersion(Unknown Source)
2024-05-16T15:42:16.370402071Z at api.Reports.getSecurityReportForModuleVersion(Reports.java:34)
2024-05-16T15:42:16.370409696Z at api.Reports$quarkusrestinvoker$getSecurityReportForModuleVersion_dd50841e3b0db64ee97258dee0aa43a88eba0668.invoke(Unknown Source)
2024-05-16T15:42:16.370417029Z at org.jboss.resteasy.reactive.server.handlers.InvocationHandler.handle(InvocationHandler.java:29)
2024-05-16T15:42:16.370424394Z at io.quarkus.resteasy.reactive.server.runtime.QuarkusResteasyReactiveRequestContext.invokeHandler(QuarkusResteasyReactiveRequestContext.java:141)
2024-05-16T15:42:16.370431928Z at org.jboss.resteasy.reactive.common.core.AbstractResteasyReactiveContext.run(AbstractResteasyReactiveContext.java:147)
2024-05-16T15:42:16.370439141Z at io.quarkus.vertx.core.runtime.VertxCoreRecorder$14.runWith(VertxCoreRecorder.java:582)
2024-05-16T15:42:16.370446024Z at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2513)
2024-05-16T15:42:16.370453348Z at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1538)
2024-05-16T15:42:16.370460361Z at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
2024-05-16T15:42:16.370467464Z at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
2024-05-16T15:42:16.370474628Z at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
2024-05-16T15:42:16.370481771Z at java.base/java.lang.Thread.run(Thread.java:1583)
2024-05-16T15:42:16.370489135Z Caused by: com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `extensions.security.report.SecurityFinding` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator)
2024-05-16T15:42:16.370496579Z  at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: extensions.core.Report["securityReport"]->java.util.LinkedHashMap["main.tf"]->java.util.ArrayList[0])
2024-05-16T15:42:16.370504174Z at com.fasterxml.jackson.databind.exc.InvalidDefinitionException.from(InvalidDefinitionException.java:67)
2024-05-16T15:42:16.370511317Z at com.fasterxml.jackson.databind.DeserializationContext.reportBadDefinition(DeserializationContext.java:1887)
2024-05-16T15:42:16.370518481Z at com.fasterxml.jackson.databind.DatabindContext.reportBadDefinition(DatabindContext.java:414)
2024-05-16T15:42:16.370525634Z at com.fasterxml.jackson.databind.DeserializationContext.handleMissingInstantiator(DeserializationContext.java:1375)
2024-05-16T15:42:16.370536474Z at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1508)
2024-05-16T15:42:16.370543768Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
2024-05-16T15:42:16.370550892Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
2024-05-16T15:42:16.370558005Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:359)
2024-05-16T15:42:16.370565179Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
2024-05-16T15:42:16.370572482Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
2024-05-16T15:42:16.370579616Z at com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringKeyMap(MapDeserializer.java:623)
2024-05-16T15:42:16.370586769Z at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:449)
2024-05-16T15:42:16.370593823Z at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:32)
2024-05-16T15:42:16.370607989Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
2024-05-16T15:42:16.370615494Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:310)
2024-05-16T15:42:16.370623078Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
2024-05-16T15:42:16.370630812Z at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342)
2024-05-16T15:42:16.370638046Z at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4875)
2024-05-16T15:42:16.370645259Z at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3033)
2024-05-16T15:42:16.370652323Z at com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:3497)
2024-05-16T15:42:16.370659166Z at com.azure.cosmos.implementation.ItemDeserializer$JsonDeserializer.convert(ItemDeserializer.java:36)
2024-05-16T15:42:16.370666309Z ... 16 more

Helm Chart Deployment Option

Feature request.
It'd be excellent if we could reference a Helm chart from a Helm chart registry. That way we could use the HashiCorp Terraform Helm provider to deploy Tapir into a Kubernetes cluster (see the hypothetical sketch below).
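
To illustrate the request, this is roughly what such a deployment could look like with the Terraform Helm provider. It is purely hypothetical: no such chart exists yet, and the repository URL, chart name and values below are placeholders.

resource "helm_release" "tapir" {
  name             = "tapir"
  repository       = "https://charts.example.com" # hypothetical chart repository
  chart            = "tapir"                      # hypothetical chart name
  namespace        = "tapir"
  create_namespace = true

  # Chart values would presumably map to Tapir's environment-variable configuration.
  set {
    name  = "image.repository"
    value = "pacovk/tapir"
  }
}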

Management and Providers tabs throw 404 Error on 0.7.1

On the latest 0.7.1 release we receive the following error:

{"errorId":"037e63fc-536a-422d-81ed-2d0033f69e1f","errors":[{"message":"An unexpected error has occurred. Please raise an issue if you think this is a bug."}]}

This happens when viewing the /management or providers tabs.

How do we mask sensitive config information?

While reading the https://github.com/PacoVK/tapir/blob/main/docs/configuration.md file we are asked to set values which are sensitive by nature, such as BACKEND_AZURE_MASTER_KEY, AZURE_BLOB_CONNECTION_STRING, AUTH_CLIENT_SECRET and so on.

It's not clear in the docs how best to mask or secure these secrets. We could be using a Kubernetes Opaque Secret in the namespace, or something similar, but I'm unsure what works best with the container when running in Kubernetes.

If you can help me understand how to implement a secret with Tapir, I'd be happy to add that to the docs as well.
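
One common pattern, offered here only as a sketch and not as an official Tapir recommendation, is to keep the sensitive values in a Kubernetes Secret and inject them into the container as environment variables. The secret name, namespace and variable below are assumptions; only the AUTH_CLIENT_SECRET variable name is taken from the configuration docs.

variable "auth_client_secret" {
  type      = string
  sensitive = true
}

resource "kubernetes_secret_v1" "tapir" {
  metadata {
    name      = "tapir-secrets" # placeholder name
    namespace = "tapir"         # placeholder namespace
  }
  type = "Opaque"
  data = {
    AUTH_CLIENT_SECRET = var.auth_client_secret
  }
}

# Inside the container block of the Tapir deployment, reference the secret
# instead of putting the value inline:
#
# env {
#   name = "AUTH_CLIENT_SECRET"
#   value_from {
#     secret_key_ref {
#       name = kubernetes_secret_v1.tapir.metadata[0].name
#       key  = "AUTH_CLIENT_SECRET"
#     }
#   }
# }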

Publishing provider to tapir

Hi, I am trying to publish an existing provider to Tapir. I added my GPG keys to the Tapir envs, then created a zip archive of the provider:

gpg --generate-key
shasum -a 256 dev-modules_3.0.2_linux_amd64.zip >  dev-modules_3.0.2_linux_amd64_SHA256SUMS
gpg --detach-sign dev-modules_3.0.2_linux_amd64_SHA256SUMS

Archive:  archive.zip
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
       0  Stored        0   0% 2023-09-25 16:10 00000000  archive/
     100  Defl:N       91   9% 2023-09-25 16:10 a822df4d  archive/dev-modules_3.0.2_linux_amd64_SHA256SUMS
     438  Stored      438   0% 2023-09-25 16:10 50cb76e2  archive/dev-modules_3.0.2_linux_amd64_SHA256SUMS.sig
 7019352  Stored  7019352   0% 2023-09-25 16:10 e1f917b2  archive/dev-modules_3.0.2_linux_amd64.zip
--------          -------  ---                            -------
 7019890          7019881   0%                            4 files

But when I try to POST the provider to Tapir, I get an error:

curl -X POST --fail-with-body -F [email protected] https://tapir.dev.local/terraform/providers/v1/dev/modules/3.0.2
curl: (22) The requested URL returned error: 500
{"errorId":"b5985a8b-d172-4429-a706-9682a7208edc","errors":[{"message":"An unexpected error has occurred. Please raise an issue if you think this is a bug."}]}

The application works behind nginx with valid certificates.
What am I doing wrong?
Thanks!

I would like to have more detailed examples of file structures during the upload process!

When uploading a provider, we already know that we need to prepare a .zip file containing all the files described in the 'how to prepare release' documentation. However, when uploading a module, the documentation does not explicitly specify how the directory structure should be laid out for the upload. I tried uploading a downloaded module and encountered the following errors.
Case 1:

Error: (screenshot)

My directory structure: (screenshot)

Command used:
curl -XPOST -H 'x-api-key:eNsZxQ1vQsY0QqEGgWMHXsKE' --fail-with-body -F [email protected] "https://example.com/terraform/modules/v1/fc/vpc/aws/0.0.2"

Case 2:

Error: (screenshot)

My directory structure: (screenshot)

Command used:
curl -XPOST -H 'x-api-key:eNsZxQ1vQsY0QqEGgWMHXsKE' --fail-with-body -F [email protected] "https://example.com/terraform/modules/v1/fc/vpc/aws/0.0.2"

CORS error prevents creating a deployment key

Observed on version 0.7.0.

The server responds with a 403 when I try to create a deployment key.
request:

POST /management/deploykey/infrahouse-bookstack-aws HTTP/1.1
X-Forwarded-For: 23.123.142.164
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: registry.infrahouse.com
X-Amzn-Trace-Id: Root=1-664fd265-7f9d22b3739dae9e74f069e0
Content-Length: 0
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0
accept: */*
accept-language: en-US,en;q=0.5
accept-encoding: gzip, deflate, br, zstd
referer: https://registry.infrahouse.com/management
origin: https://registry.infrahouse.com/
dnt: 1
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-origin
priority: u=1
cookie: q_auth_742...

response

HTTP/1.1 403 CORS Rejected - Invalid origin
content-length: 0

The behavior is the same regardless of the options

-Dquarkus.http.cors=false

or

-Dquarkus.http.cors=true -Dquarkus.http.cors.origins=https://registry.infrahouse.com

Deploykeys not showing

Hi @PacoVK ,

I'm using CosmosDB with version 0.8.0. When I create a deploy key, it works correctly, but it is not shown in the GUI (screenshot). However, the keys are being created in CosmosDB (screenshot).

(This was also the case for me in 0.7.0.)

Terraform login provider

Hi Paco,

As I'm testing right now, only the UI needs authentication and, as documented in the wiki, terraform login is not (yet) required.
So our modules are public as it stands. Is there any timeline for when terraform login support will be implemented?

Kind regards,
CΓ©dric
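
For context, once terraform login (or static token auth) is supported for the registry protocol, the CLI side would typically be configured with a credentials block in the Terraform CLI configuration. This is a generic Terraform CLI sketch with a placeholder hostname and token, not a feature Tapir provides today.

# ~/.terraformrc (terraform.rc on Windows)
credentials "tapir.example.com" {
  token = "REPLACE_WITH_A_REGISTRY_TOKEN" # placeholder token
}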

Push both amd64 & arm64 to ECR public

Only amd64 is pushed to public ECR.

I think that editing the current build like this would push both archs to both registries in one go, without having to pull and push:

      - name: Login to Amazon ECR Public
        id: login-ecr-public
        env:
          AWS_REGION: us-east-1
          AWS_DEFAULT_REGION: us-east-1
        uses: aws-actions/amazon-ecr-login@v1
        with:
          registry-type: public
      - name: Build and push
        uses: docker/build-push-action@v4
        env:
          AWS_REGION: us-east-1
          AWS_DEFAULT_REGION: us-east-1
          REGISTRY: ${{ steps.login-ecr-public.outputs.registry }}
          REGISTRY_ALIAS: o7d8p0l2
          REPOSITORY: tapir
          IMAGE_TAG: latest
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: |
            ${{ steps.meta.outputs.tags }}
            $REGISTRY/$REGISTRY_ALIAS/$REPOSITORY:$IMAGE_TAG
          labels: ${{ steps.meta.outputs.labels }}

I haven't tested this but AFAIK it works.
Source:
https://docs.docker.com/build/ci/github-actions/push-multi-registries/
docker/build-push-action#901

Also, there is no CONTRIBUTING file, should this kind of proposal come from an issue or a PR directly?

Can we reuse deploy key

We are starting to use Tapir; it looks very promising.

We have about 100 modules that are currently deployed via CI/CD using Git tags. We'd like to transition them to use Tapir.

But it seems cumbersome to create, store, and manage that many deploy keys.

Is it possible to either:

  • Reuse a deploy key for multiple modules
  • Create and manage them using a terraform provider?

OpenTofu Support

Hi :)

I have a little question: are there plans to support both Terraform and OpenTofu in the future? Currently OpenTofu is a drop-in replacement, but this may change in the future.
