consul-api-gateway's Issues

Namespace selector for `allowedRoutes` is applied to route instead of namespace

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Overview of the Issue

The Gateway API spec states, for RouteNamespaces.selector, that

Selector must be specified when From is set to “Selector”. In that case, only Routes in Namespaces matching this Selector will be selected by this Gateway.

This means that the label selector must match the namespace of the route, not the route itself.

The current implementation of the controller compares the labels on the route itself, where it should compare against the labels on the namespace containing the route:

case gw.NamespacesFromSelector:
    ns, err := metav1.LabelSelectorAsSelector(namespaceSelector.Selector)
    if err != nil {
        return false, fmt.Errorf("error parsing label selector: %v", err)
    }
    return ns.Matches(toNamespaceSet(route.GetNamespace(), route.GetLabels())), nil
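
For reference, here is a minimal sketch of the behavior the spec calls for: the selector is matched against the labels on the route's Namespace object rather than against the route itself. This is not the project's actual fix; the function name and parameters are illustrative, and it assumes the caller has already fetched the route's Namespace from the Kubernetes API.

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/labels"
)

// routeNamespaceMatchesSelector reports whether the Namespace containing a route
// matches the listener's allowedRoutes.namespaces.selector.
func routeNamespaceMatchesSelector(routeNamespace *corev1.Namespace, selector *metav1.LabelSelector) (bool, error) {
    sel, err := metav1.LabelSelectorAsSelector(selector)
    if err != nil {
        return false, fmt.Errorf("error parsing label selector: %w", err)
    }
    // Match against the labels on the Namespace object, not the route's labels.
    return sel.Matches(labels.Set(routeNamespace.GetLabels())), nil
}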

Reproduction Steps

  1. Create a Gateway in one namespace with a selector for namespaces with a given label (code)

  2. Create an HTTPRoute (code) in a different Namespace (code) whose labels match the selector on the Gateway

  3. Observe that the route never attaches to the gateway:

    kubectl get gateway other-namespace -o yaml

Logs

2022-03-08T20:17:46.926Z [TRACE] memory/store.go:191: consul-api-gateway-server.state: detected route state change: id=http-other-namespace/other-namespace-backend-route
2022-03-08T20:17:46.926Z [TRACE] memory/gateway.go:53: consul-api-gateway-server.state: checking if route can bind to gateway: gateway.consul.namespace="" gateway.consul.service=other-namespace route=http-other-namespace/other-namespace-backend-route
2022-03-08T20:17:46.926Z [TRACE] memory/gateway.go:58: consul-api-gateway-server.state: checking if route can bind to listener: gateway.consul.namespace="" gateway.consul.service=other-namespace listener=http route=http-other-namespace/other-namespace-backend-route
2022-03-08T20:17:46.926Z [TRACE] reconciler/listener.go:320: consul-api-gateway-server.k8s.Reconciler.gateway.listener: checking route parent ref: listener=http name=other-namespace namespace=consul name=other-namespace
2022-03-08T20:17:46.926Z [TRACE] reconciler/listener.go:323: consul-api-gateway-server.k8s.Reconciler.gateway.listener: checking gateway match: listener=http name=other-namespace namespace=consul expected=consul/other-namespace found=consul/other-namespace
2022-03-08T20:17:46.926Z [TRACE] reconciler/listener.go:344: consul-api-gateway-server.k8s.Reconciler.gateway.listener: checking listener match: listener=http name=other-namespace namespace=consul expected=http found=<nil>
2022-03-08T20:17:46.926Z [TRACE] reconciler/listener.go:362: consul-api-gateway-server.k8s.Reconciler.gateway.listener: route not allowed because of listener namespace policy: listener=http name=other-namespace namespace=consul route=http-other-namespace/other-namespace-backend-route
2022-03-08T20:17:46.940Z [TRACE] reconciler/gateway.go:416: consul-api-gateway-server.k8s.Reconciler.gateway: created or updated gateway service: name=other-namespace namespace=consul

Expected behavior

HTTPRoute should successfully attach to Gateway

Environment details

Additional Context

When a deployment/service is deleted, the gateway listener is removed after n hours and never re-added

Overview of the Issue

I have an HTTPRoute that points at two services. When I delete the services, the listener is still configured on the gateway. After a certain time (12h?), Kubernetes triggers a re-resolve on the HTTPRoute and determines that the route is invalid since it targets non-running services; this causes the listener to be deleted.

If I then restart my services, the listener isn't added back. I have to restart the api gateway controller to get the listener back.

Incorrect IP is added to k8s `Gateway` resource


Overview of the Issue

The gateway pod's IP is assigned to the Gateway resource in k8s (here); however, this is incorrect when a Service of type LoadBalancer, NodePort, or ClusterIP is used. The IP on the Gateway resource needs to be reachable by clients, so it should be the Service's external IP, the node IP, or the cluster IP, respectively. If no Service is created, then we can continue assigning the pod's IP as we do today.
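
For illustration, a hedged sketch of how the address could be chosen per Service type; this is not the project's implementation, and the function name is hypothetical. NodePort handling is elided because it requires discovering a node IP separately.

import corev1 "k8s.io/api/core/v1"

// gatewayAddress picks an externally reachable address for the Gateway based on
// the fronting Service, falling back to the pod IP when no Service exists.
func gatewayAddress(svc *corev1.Service, podIP string) string {
    if svc == nil {
        // No Service created: keep assigning the pod's IP as today.
        return podIP
    }
    switch svc.Spec.Type {
    case corev1.ServiceTypeLoadBalancer:
        for _, ingress := range svc.Status.LoadBalancer.Ingress {
            if ingress.IP != "" {
                return ingress.IP
            }
            if ingress.Hostname != "" {
                return ingress.Hostname
            }
        }
    case corev1.ServiceTypeClusterIP:
        return svc.Spec.ClusterIP
    case corev1.ServiceTypeNodePort:
        // A node IP would have to be looked up separately; omitted in this sketch.
    }
    return podIP
}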

Reproduction Steps

  1. Install to a Kubernetes cluster (tested on GKE) using serviceType: LoadBalancer in the Consul Helm chart

  2. Check IP address on Gateway resource:

    $ kubectl get gateway <gatewayName> -o yaml
  3. Check external IP of Service resource:

    $ kubectl get service <serviceName>
  4. Observe that they do not match

Logs

Expected behavior

IP address on Gateway should match the external IP of the Service

Environment details

Additional Context

Controller crashes when certificates are handled by Vault PKI

Overview of the Issue

I configured a Gateway with a TLS certificate that is generated by the Vault PKI secrets engine. The gateway comes up successfully, but when I create an HTTPRoute to add an upstream, the API Gateway controller throws an error and fails to add the route because it cannot validate the certificate's SPIFFE URL.

Reproduction Steps

  1. Create three self-signed root CAs and configure each with a
    Vault PKI secrets engine with two levels of intermediate certificates.

    • Cluster certificate (self-signed)
    • Connect CA (self-signed)
    • Consul API Gateway certificate (self-signed)
  2. Create a Consul cluster that uses Vault PKI Secrets Engine.

    global:
      datacenter: "${CONSUL_DATACENTER}"
      name: consul
      secretsBackend:
        vault:
          enabled: true
          consulServerRole: ${CONSUL_SERVER_ROLE}
          consulClientRole: ${CONSUL_CLIENT_ROLE}
          consulCARole: ${CONSUL_CA_ROLE}
          manageSystemACLsRole: ${SERVER_ACL_INIT_ROLE}
          agentAnnotations: |
            "vault.hashicorp.com/namespace": "${VAULT_NAMESPACE}"
          connectCA:
            address: ${VAULT_ADDR}
            rootPKIPath: ${CONSUL_CONNECT_PKI_PATH_ROOT}
            intermediatePKIPath: ${CONSUL_CONNECT_PKI_PATH_INT}
            authMethodPath: ${KUBERNETES_AUTH_METHOD_PATH}
            additionalConfig: '"{"connect": [{ "ca_config": [{ "namespace": "${VAULT_NAMESPACE}"}]}]}"'
      tls:
        enabled: true
        enableAutoEncrypt: true
        caCert:
          secretName: "${CONSUL_PKI_PATH}/cert/ca"
        caKey:
          secretName: "${CONSUL_PKI_PATH}/issue/${CONSUL_SERVER_ROLE}"
          secretKey: private_key
      acls:
        manageSystemACLs: true
        bootstrapToken:
          secretName: "${CONSUL_STATIC_PATH}/data/bootstrap"
          secretKey: token
      gossipEncryption:
        secretName: ${CONSUL_STATIC_PATH}/data/gossip
        secretKey: key
    
    server:
      replicas: 1
      serverCert:
        secretName: "${CONSUL_PKI_PATH}/issue/${CONSUL_SERVER_ROLE}"
    
    connectInject:
      replicas: 1
      enabled: true
    
    controller:
      enabled: true
    
    terminatingGateways:
      enabled: true
      defaults:
        replicas: 1
    
    apiGateway:
      enabled: true
      logLevel: trace
      image: "hashicorp/consul-api-gateway:0.2.1"
      managedGatewayClass:
        serviceType: LoadBalancer
    
    ui:
      enabled: true
      service:
        enabled: true
        type: LoadBalancer
  3. Deploy a gateway with a TLS certificate.

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: Gateway
    metadata:
      name: api-gateway
      namespace: default
    spec:
      gatewayClassName: consul-api-gateway
      listeners:
      - allowedRoutes:
          namespaces:
            from: Same
        name: https
        port: 8443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: consul-api-gateway-cert
          mode: Terminate

    The gateway comes up:

    $ kubectl get pods
    NAME                                             READY   STATUS    RESTARTS       AGE
    api-gateway-5d5dd555b5-9kxqh                     1/1     Running   0              8m35s
    consul-api-gateway-controller-6489bfb4dc-rn8rw   2/2     Running   18 (23m ago)   85m
  4. Deploy an HTTPRoute.

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: HTTPRoute
    metadata:
      name: hashicups
    spec:
      parentRefs:
      - name: api-gateway
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /
        backendRefs:
        - kind: Service
          name: nginx
          namespace: default
          port: 80

    The gateway throws an error and restarts:

    $ kubectl get pods
    NAME                                             READY   STATUS    RESTARTS       AGE
    api-gateway-5d5dd555b5-9kxqh                     1/1     Running   0              10m
    consul-api-gateway-controller-6489bfb4dc-rn8rw   1/2     Error     20 (19s ago)   87m

Logs

2022-06-03T16:39:26.260Z [INFO]  manager/internal.go:383: consul-api-gateway-server.controller-runtime: starting metrics server: path=/metrics
2022-06-03T16:39:26.260Z [TRACE] envoy/secrets.go:300: consul-api-gateway-server.sds-server.secret-manager: running secrets manager
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0xafb0b1]

goroutine 324 [running]:
github.com/hashicorp/consul-api-gateway/internal/envoy.verifySPIFFE({0x1c56bf8, 0xc0007bbe60}, {0x1c987b0, 0xc00015b280}, 0x0, {0x1c3a9b8, 0xc00080a3c0})
        /home/runner/work/consul-api-gateway/consul-api-gateway/internal/envoy/middleware.go:84 +0x1d1
github.com/hashicorp/consul-api-gateway/internal/envoy.SPIFFEStreamMiddleware.func1({0x198c780, 0xc0004fa090}, {0x1c732b0, 0xc00068ec00}, 0x167aec0, 0x1abe690)
        /home/runner/work/consul-api-gateway/consul-api-gateway/internal/envoy/middleware.go:68 +0xc5
google.golang.org/grpc.(*Server).processStreamingRPC(0xc0003cd6c0, {0x1c85848, 0xc000017500}, 0xc0000c9b00, 0xc0004fa120, 0x2ae5940, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@<version>/server.go:1557 +0xe9a
google.golang.org/grpc.(*Server).handleStream(0xc0003cd6c0, {0x1c85848, 0xc000017500}, 0xc0000c9b00, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@<version>/server.go:1630 +0x9e5
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        /home/runner/go/pkg/mod/google.golang.org/grpc@<version>/server.go:941 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/runner/go/pkg/mod/google.golang.org/grpc@<version>/server.go:939 +0x294

Expected behavior

I expected to have the HTTPRoute add an upstream to my service and be able to access the service over HTTPS.

Environment details

  • consul-api-gateway version: v0.2.1
  • Kubernetes version: v1.22.9-eks-a64ea69
  • Consul Server version: v1.12.0
  • Consul-K8s version: v0.44.0
  • Cloud Provider: EKS v1.22.9

Additional Context

You can find the full deployment (including Vault PKI secrets engine setup and certificate generation) at joatmon08/hashicorp-stack-demoapp.

Tight loop spamming config entry delete errors after deleting gateway


Overview of the Issue

Reproduction Steps

  1. Followed learn guide (https://learn-git-consul-er-api-gw-hashicorp.vercel.app/tutorials/consul/kubernetes-api-gateway)

  2. Then deleted the gateway: k delete gateway test-gateway

  3. Controller is now spamming:

2022-01-27T22:20:02.013Z [WARN]  gatewayclient/middleware.go:39: consul-api-gateway-server.k8s.Gateway: received non-Kubernetes error, delaying requeue:
  error=
  | error removing service defaults config entries: 1 error occurred:
  | 	* Unexpected response code: 500 (rpc error making call: service "test-gateway-merged-9b9265b" has protocol "tcp", which does not match defined listener protocol "http")
  |

API Gateway doesn't attach cross-namespace lookup policy on namespace creation

Overview of the Issue

This issue impacts API Gateway instances created in Kubernetes, with Consul Enterprise clusters leveraging namespace mirroring.

When an API Gateway instance is launched in a Kubernetes namespace whose equivalent does not yet exist in Consul, the controller creates a matching Consul namespace automatically. This mimics the behavior of consul connect-inject in the same scenario.

The created namespace needs a default policy assigned that permits cross-namespace service lookup, but the API Gateway controller currently skips this step. Without this policy, the ACL token given to the API Gateway pods cannot discover the appropriate endpoints for any target services that are not in the same namespace as the gateway.
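
A minimal sketch of what assigning the policy at namespace-creation time could look like, using the official Consul API client. The policy name cross-namespace-policy matches the policy consul-k8s creates at cluster initialization; the function name and error handling are illustrative, not the controller's actual code.

import capi "github.com/hashicorp/consul/api"

// ensureNamespaceWithCrossNSPolicy creates a Consul namespace with the
// cross-namespace read policy attached as a default, so tokens scoped to it
// can discover service endpoints in other namespaces.
func ensureNamespaceWithCrossNSPolicy(client *capi.Client, name string) error {
    ns := &capi.Namespace{
        Name: name,
        ACLs: &capi.NamespaceACLConfig{
            PolicyDefaults: []capi.ACLLink{{Name: "cross-namespace-policy"}},
        },
    }
    _, _, err := client.Namespaces().Create(ns, nil)
    return err
}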

Reproduction Steps

  • Deploy Consul Enterprise to a kubernetes cluster with the following features enabled:
global:
  image: "hashicorp/consul-enterprise:1.14.3-ent"
  enableConsulNamespaces: true
  acls:
    manageSystemACLs: true
connectInject:
  enabled: true
  consulNamespaces:
    mirroringK8S: true
apiGateway:
  enabled: true
  image: hashicorp/consul-api-gateway:0.5.1
  imageEnvoy: envoyproxy/envoy:v1.23.1
  • Create a new kubernetes namespace, and deploy a Gateway resource to that namespace:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: apigw
  namespace: apigateway
spec:
  gatewayClassName: consul-api-gateway
  listeners:
  - protocol: HTTP
    port: 8080
    name: http
    allowedRoutes:
      namespaces:
        from: All
  • Create a different kubernetes namespace, and deploy a reference connect-injected application and service:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: demoapp
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: demoapp
spec:
  selector:
    app: dashboard
  ports:
  - port: 9002
    targetPort: 9002
    name: dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashboard
  name: dashboard
  namespace: demoapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'counting:9001'
      labels:
        app: dashboard
    spec:
      serviceAccountName: dashboard
      containers:
      - name: dashboard
        image: hashicorp/dashboard-service:0.0.4
        ports:
        - containerPort: 9002
        env:
        - name: COUNTING_SERVICE_URL
          value: 'http://127.0.0.1:9001'
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: dashboard
  namespace: demoapp
spec:
  protocol: http
  • Define an HTTPRoute/TCPRoute linking the API Gateway and target service:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: dashboard
  namespace: demoapp
spec:
  parentRefs:
    - name: apigw
      namespace: apigateway
  rules:
    - backendRefs:
      - kind: Service
        name: dashboard
        namespace: demoapp
        port: 9002
  • The API Gateway will accept the route, and the backing ingress-gateway config entry will be updated to appropriately reference the target service. However, inspecting the envoy clusters endpoint for an individual API Gateway instance will show no known upstreams.

  • Updating the newly-created namespace with the cross-namespace-policy automatically created by consul-k8s at cluster initialization will allow the API gateway's token to begin discovering service endpoints in other namespaces: consul namespace update -name=apigateway -default-policy-name=cross-namespace-policy (Note, this must be done via CLI or the HTTP API. The Consul UI seems to have a bug associating a new default policy to an existing namespace created in this manner. Perhaps due to the external-source meta applied to the namespace?)

Expected behavior

The API Gateway controller should automatically apply cross-namespace-policy to any namespaces it creates when ACLs and Consul namespaces are in use.

Controller: multiple replicas fail: invalid spiffe path

Overview of the Issue

Running the consul-api-gateway-controller with more than 1 replica and deploying a Gateway with spec.listeners[*].protocol = "HTTPS" causes the following error on all consul-api-gateway-controller replicas that have not acquired the leader lease:

2022-10-13T14:59:47.784Z [WARN]  envoy/middleware.go:101: consul-api-gateway-server.sds-server: gateway not found: namespace="" service=consul-api-gateway
2022-10-13T14:59:47.784Z [ERROR] envoy/middleware.go:89: consul-api-gateway-server.sds-server: error parsing spiffe path, skipping: error="invalid spiffe path" path=""
2022-10-13T15:00:00.885Z [WARN]  envoy/middleware.go:101: consul-api-gateway-server.sds-server: gateway not found: namespace="" service=consul-api-gateway
2022-10-13T15:00:00.885Z [ERROR] envoy/middleware.go:89: consul-api-gateway-server.sds-server: error parsing spiffe path, skipping: error="invalid spiffe path" path=""

Additionally, all consul-api-gateway replicas that are connected to the failing consul-api-gateway-controller replicas reject ingress traffic.

  1. When creating a gateway with the following configuration:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: consul-api-gateway
  namespace: consul
spec:
  gatewayClassName: consul-api-gateway
  listeners:
    - allowedRoutes:
        kinds:
          - kind: HTTPRoute
        namespaces:
          from: Selector
          selector:
            matchLabels:
              shared-gateway-access: "true"
      name: prod-api
      port: 8080
      protocol: HTTPS
      tls:
        certificateRefs:
          - kind: Secret
            name: consul-api-gateway
        mode: Terminate
  2. View error in consul-api-gateway-controller
2022-10-13T15:00:00.885Z [WARN]  envoy/middleware.go:101: consul-api-gateway-server.sds-server: gateway not found: namespace="" service=consul-api-gateway
2022-10-13T15:00:00.885Z [ERROR] envoy/middleware.go:89: consul-api-gateway-server.sds-server: error parsing spiffe path, skipping: error="invalid spiffe path" path=""


Logs
2022-10-13T14:53:24.540Z [INFO]  grpc/logging.go:40: consul-api-gateway-server.sds-server: [core][Server #1] Server created
2022-10-13T14:53:24.540Z [INFO]  grpc/logging.go:40: consul-api-gateway-server.sds-server: [core][Server #1 ListenSocket #2] ListenSocket created
2022-10-13T14:53:24.540Z [INFO]  k8s/logger.go:30: consul-api-gateway-server.controller-runtime: Starting server: addr=[::]:8081 kind="health probe" info="Starting server"
2022-10-13T14:53:24.541Z [INFO]  k8s/logger.go:30: consul-api-gateway-server.kubernetes-client: attempting to acquire leader lease consul/consul-api-gateway.consul.hashicorp.com...
:
  info=
  | attempting to acquire leader lease consul/consul-api-gateway.consul.hashicorp.com...
  
2022-10-13T14:53:24.541Z [INFO]  k8s/logger.go:30: consul-api-gateway-server.controller-runtime: Starting server: addr=[::]:8080 kind=metrics path=/metrics info="Starting server"
2022-10-13T14:54:00.671Z [WARN]  envoy/middleware.go:101: consul-api-gateway-server.sds-server: gateway not found: namespace="" service=consul-api-gateway
2022-10-13T14:54:00.671Z [ERROR] envoy/middleware.go:89: consul-api-gateway-server.sds-server: error parsing spiffe path, skipping: error="invalid spiffe path" path=""
2022-10-13T14:54:00.985Z [WARN]  envoy/middleware.go:101: consul-api-gateway-server.sds-server: gateway not found: namespace="" service=consul-api-gateway
2022-10-13T14:54:00.985Z [ERROR] envoy/middleware.go:89: consul-api-gateway-server.sds-server: error parsing spiffe path, skipping: error="invalid spiffe path" path=""

Environment details

  • consul-api-gateway version: v0.4.0
  • Kubernetes version: v1.23.x
  • Consul Server version: v1.13.x

Controller certificate watch fails in secondary datacenter

Overview of the Issue

This issue is a contributing factor to #300.

The controller fails to initialize a watch on root certificates when starting up in a secondary datacenter. This is because the primary datacenter needs to be included when creating the watch but is not today.

rootWatch, err := watch.Parse(map[string]interface{}{
    "type": "connect_roots",
})
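
A hedged sketch of the change described above, not the project's actual fix; primaryDatacenter is an illustrative variable that would have to come from the controller's configuration.

rootWatch, err := watch.Parse(map[string]interface{}{
    "type":       "connect_roots",
    "datacenter": primaryDatacenter, // route the query to the primary datacenter
})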

Reproduction Steps

  1. Create a federated setup by following the guide Federation Between Kubernetes Clusters. When installing, ensure API Gateway is enabled in both the primary and secondary clusters (apiGateway.enabled: true in values.yaml).
  2. View logs of the API gateway controller in the secondary datacenter (kubectl logs ...)

Logs

Logs
2022-09-06T19:58:26.971Z [INFO]  k8s/logger.go:30: consul-api-gateway-server.controller-runtime: Starting workers: controller=gateway controllerGroup=gateway.networking.k8s.io controllerKind=Gateway info="Starting workers" worker count=1
2022-09-06T19:58:26.971Z [INFO]  k8s/logger.go:30: consul-api-gateway-server.controller-runtime: Starting workers: controller=tcproute controllerGroup=gateway.networking.k8s.io controllerKind=TCPRoute info="Starting workers" worker count=1
2022-09-06T20:08:30.888Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T20:18:51.442Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T20:39:13.030Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T20:49:28.020Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T20:59:41.606Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T21:40:44.984Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: i/o deadline reached)" retry=5s
2022-09-06T21:51:03.780Z [ERROR] watch/plan.go:95: consul-api-gateway-server.cert-manager.watch: Watch errored: type=connect_roots error="Unexpected response code: 500 (rpc error making call: rpc error making call: i/o deadline reached)" retry=5s

Expected behavior

The controller starts up successfully, including the certificate watch initiated as part of startup.

Environment details


  • consul-api-gateway version: v0.4.0
  • configuration used to deploy the gateway controller:
apiGateway:
  enabled: true
  image: "hashicorp/consul-api-gateway:0.4.0"


  • Consul Server version: v1.13.1
  • Consul-K8s version: 0.47.1
  • Cloud Provider (If self-hosted, the Kubernetes provider utilized): GKE

Additional Context

Consul API Gateway pods keep being re-created

Hello :)

Overview of the Issue

When deploying the Consul API Gateway (https://www.consul.io/docs/api-gateway/api-gateway-usage#installation), the related Deployment's Pods api-gateway-xxxxxxxxx-xxxxx keep being terminated and re-created.

Reproduction Steps

I followed the tutorial: https://developer.hashicorp.com/consul/tutorials/kubernetes/kubernetes-api-gateway (self-managed, Helm).

  1. Install Consul:

With the apiGateway stanza commented out in values.yaml:

(values.yaml)

global:
  enabled: true
  name: consul
  datacenter: dc1
  tls:
    enabled: true
  acls:
    manageSystemACLs: true
server:
  enabled: true
  replicas: 1
ui:
  enabled: true
  service:
    type: NodePort
# apiGateway:
#   enabled: true
#   image: "hashicorp/consul-api-gateway:0.5.1"
#   managedGatewayClass:
#     serviceType: NodePort
#     useHostPorts: true
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update hashicorp

helm install --values consul/values.yaml consul hashicorp/consul --create-namespace --namespace consul

I initially comment out the apiGateway stanza because I get the following error message otherwise:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "consul-api-gateway" namespace: "" from "": no matches for kind "GatewayClass" in version "gateway.networking.k8s.io/v1alpha2"
ensure CRDs are installed first, resource mapping not found for name: "consul-api-gateway" namespace: "" from "": no matches for kind "GatewayClassConfig" in version "api-gateway.consul.hashicorp.com/v1alpha1"
ensure CRDs are installed first]
  2. Create the CRDs:

kubectl apply --kustomize "github.com/hashicorp/consul-api-gateway/config/crd?ref=v0.5.1"

  3. Uncomment the apiGateway stanza in values.yaml and upgrade the release:

helm upgrade --values consul/values.yaml consul hashicorp/consul --create-namespace --namespace consul

  4. Deploy Hashicups and Echo apps:

kubectl apply --filename two-services/

  5. Deploy the Consul API Gateway:

kubectl apply --filename api-gw/consul-api-gateway.yaml

Logs

kubectl logs on the API Gateway controller pod displays:

[ERROR] k8s/logger.go:35: consul-api-gateway-server.controller-runtime: Reconciler error: Gateway=consul/api-gateway controller=gateway controllerGroup=gateway.networking.k8s.io controllerKind=Gateway name=api-gateway namespace=consul reconcileID=62f3ae49-ed3f-4f5a-908f-2879f94ba910
  error=
  | 1 error occurred:
  | \t* Operation cannot be fulfilled on gateways.gateway.networking.k8s.io "api-gateway": the object has been modified; please apply your changes to the latest version and try again
  |

Expected behavior

A Consul API Gateway Service ready and operational.

Environment details


  • consul-api-gateway version: 0.5.1
  • configuration used to deploy the gateway controller:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
  namespace: consul
spec:
  gatewayClassName: consul-api-gateway
  listeners:
  - protocol: HTTPS
    port: 8443
    name: https
    allowedRoutes:
      namespaces:
        from: All
    tls:
      certificateRefs:
        - name: consul-server-cert


  • Kubernetes version: v1.27.3
  • Consul Server version: v1.16.0
  • Consul-K8s version: 1.2.0
  • Minikube: 1.31.1 (default configuration)

Thank you for the help :)

API Gateway deployment in AKS using Helm

Overview of the Issue

unable to recognize "": no matches for kind "GatewayClass" in version "gateway.networking.k8s.io/v1alpha2", unable to recognize "": no matches for kind "GatewayClassConfig" in version "api-gateway.consul.hashicorp.com/v1alpha1"]

Reproduction Steps

Logs

Expected behavior

Environment details

Additional Context

Add support for new Gateway infrastructure labels, possibly deprecate GatewayClassConfig copyAnnotations

Is your feature request related to a problem? Please describe.

The GatewayClassConfig copyAnnotations field currently enables similar functionality (and was a necessary workaround prior to this functionality landing upstream), but the UX is awkward: populating these values requires both configuring annotations on the Gateway resource itself and setting a global configuration, in a resource referenced by the GatewayClass, that selects which annotation keys to copy. This is several steps removed from the resource a user actually wants to configure (the underlying Service provisioned for a user-configured Gateway resource) and can be a source of confusion.

Feature Description

Add support for the new infrastructure field as described in https://kubernetes.io/blog/2023/11/28/gateway-api-ga/#gateway-infrastructure-labels, which allows a user to specify labels and annotations on a Gateway resource that should be propagated to any underlying resources provisioned by the Gateway API controller implementation.
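
For illustration, a hedged sketch in Go of what honoring that field could look like when the controller builds the Service for a Gateway: labels and annotations declared under spec.infrastructure are copied onto the generated Service. The function name is hypothetical, and the maps are plain strings rather than the actual gateway-api types.

import corev1 "k8s.io/api/core/v1"

// applyInfrastructureMetadata copies Gateway infrastructure labels and
// annotations onto the Service provisioned for that Gateway.
func applyInfrastructureMetadata(svc *corev1.Service, infraLabels, infraAnnotations map[string]string) {
    if svc.Labels == nil {
        svc.Labels = map[string]string{}
    }
    if svc.Annotations == nil {
        svc.Annotations = map[string]string{}
    }
    for k, v := range infraLabels {
        svc.Labels[k] = v
    }
    for k, v := range infraAnnotations {
        svc.Annotations[k] = v
    }
}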

Use Case(s)

This will enable a more direct UX for configuring labels and annotations for the underlying load balancer Service resources provisioned by Gateways for use cases such as customizing cloud load balancer types or timeouts as described in the linked Discuss post.

References

API-Gateway-Controller fails to resolve Vault Service registered in Consul

Overview of the Issue

The TL;DR:
The api-gateway-controller fails to resolve the consul service registration on deployment of httproutes for vault when deploying Vault with a consul HA backend.

The longer version:
I have a GKE cluster on which I'm running Vault with Consul serving as the HA backend. I've installed Vault and Consul via Helm charts, and everything between those two installations appears to be playing nice. I'm now attempting to set up Consul API Gateway with that Consul cluster to handle some (internal) ingress traffic for a vault-injector living in another cluster. Everything works together flawlessly until it comes to setting up the HTTPRoute; when doing so, the api-gateway-controller fails to resolve the Consul service registration for Vault.

Reproduction Steps

Here is everything we've got set up for the installations of Vault + Consul + API Gateway:

Consul helm overrides:

global:
  name: consul
  gossipEncryption: 
    autoGenerate: true
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
    httpsOnly: false
  acls:
    enabled: true
    manageSystemACLs: true
    default_policy: "allow"
    enable_token_persistence: true
ui:
  enabled: true
  type: "LoadBalancer"
client:
  enabled: true
connectInject:
  enabled: true
controller:
  enabled: true
terminatingGateways:
  enabled: false
ingressGateways:
  enabled: false
apiGateway:
  enabled: true
  image: hashicorp/consul-api-gateway:0.3.0
  managedGatewayClass:
    copyAnnotations:
      service: 
        annotations: |
          - 'networking.gke.io/load-balancer-type'

Vault helm overrides:

global:
  enabled: true
  tlsDisable: true
injector:
  enabled: false
  logLevel: debug
  webhook:
    failurePolicy: Fail
  image:
    repository: hashicorp/vault-k8s
    tag: latest
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
server:
  resources:
    requests:
      memory: 8Gi
      cpu: 2000m
    limits:
      memory: 16Gi
      cpu: 2000m
  readinessProbe:
    enabled: true
    path: /v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204
  livenessProbe:
    enabled: true
    path: /v1/sys/health?standbyok=true
    initialDelaySeconds: 60
  auditStorage:
    enabled: true
  standalone:
    enabled: false
  ha:
    enabled: true
    replicas: 5
    config: |

      ui = true

      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"    
      }

      storage "consul" {
          path = "vault/"
          address = "<consul_addr>"
          token = "<consul_token>"
          scheme = "https"
          tls_skip_verify = true
      }

API Gateway resources: (I'm including the TLS block in case it turns out to be a clue, but I have validated that the api gateway at least receives traffic ok over ssl)

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: vault-gateway
  namespace: vault
  annotations:
    networking.gke.io/load-balancer-type: 'Internal'
spec:
  gatewayClassName: consul-api-gateway
  listeners:
    - protocol: HTTPS
      hostname: internal.vault.hostname
      port: 443
      name: https
      allowedRoutes:
        namespaces:
          from: Same
      tls:
        certificateRefs:
          - name: vault-ingress-certificate
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: vault-route
  namespace: vault
spec:
  parentRefs:
    - name: vault-gateway
  rules:
    - backendRefs:
        - kind: Service
          name: vault
          namespace: vault
          port: 8200
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy
metadata:
  name: vault-ref-policy-route
  namespace: vault
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: vault
  to:
    - group: ""
      kind: Service
      name: vault

Logs

2022-08-04T20:42:28.917Z [ERROR] service/resolver.go:249: consul-api-gateway-server.k8s.Reconciler: could not resolve consul service: error="consul service vault/vault not found"
2022-08-04T20:43:00.370Z [ERROR] service/resolver.go:249: consul-api-gateway-server.k8s.Reconciler: could not resolve consul service: error="consul service vault/vault not found"
2022-08-04T20:43:31.315Z [ERROR] service/resolver.go:249: consul-api-gateway-server.k8s.Reconciler: could not resolve consul service: error="consul service vault/vault not found"

The HTTPRoute registration

apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: HTTPRoute
  metadata:
    name: vault-route
    namespace: vault
  spec:
    parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: vault-gateway
    rules:
    - backendRefs:
      - group: ""
        kind: Service
        name: vault
        port: 8200
        weight: 1
      matches:
      - path:
          type: PathPrefix
          value: /
  status:
    parents:
    - conditions:
      - lastTransitionTime: "2022-08-04T20:42:28Z"
        message: Route accepted.
        observedGeneration: 3
        reason: Accepted
        status: "True"
        type: Accepted
      - lastTransitionTime: "2022-08-04T20:42:28Z"
        message: 'consul: consul service vault/vault not found'
        observedGeneration: 3
        reason: ConsulServiceNotFound
        status: "False"
        type: ResolvedRefs
      controllerName: hashicorp.com/consul-api-gateway-controller
      parentRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: vault-gateway
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Expected behavior

The HTTPRoute resolves both the Kubernetes and Consul service registrations for Vault and allows the API gateway to route traffic to Vault.

Environment details

  • consul-api-gateway version: 0.3.0
  • configuration used to deploy the gateway controller: Listed above as part of the consul helm chart overrides
  • Kubernetes version: v1.21.12-gke.1700 (couldn't find the exact k8s version this ties back to, not sure if it's a 1 to 1)
  • Consul Server version: v1.10.1
  • Cloud Provider (If self-hosted, the Kubernetes provider utilized): GKE,
  • Networking CNI plugin in use: None (GKE is using default kubenet as far as I can tell)

Additional Context

In case this helps, below is also a listing of the vault service registered in consul.

{
  "vault:10.4.0.59:8200": {
    "ID": "vault:10.4.0.59:8200",
    "Service": "vault",
    "Tags": [
      "standby",
      "initialized"
    ],
    "Meta": {
      "external-source": "vault"
    },
    "Port": 8200,
    "Address": "10.4.0.59",
    "TaggedAddresses": {
      "lan_ipv4": {
        "Address": "10.4.0.59",
        "Port": 8200
      },
      "wan_ipv4": {
        "Address": "10.4.0.59",
        "Port": 8200
      }
    },
    "Weights": {
      "Passing": 1,
      "Warning": 1
    },
    "EnableTagOverride": false,
    "Datacenter": "dc1"
  },
  "vault:10.4.1.89:8200": {
    "ID": "vault:10.4.1.89:8200",
    "Service": "vault",
    "Tags": [
      "active",
      "initialized"
    ],
    "Meta": {
      "external-source": "vault"
    },
    "Port": 8200,
    "Address": "10.4.1.89",
    "TaggedAddresses": {
      "lan_ipv4": {
        "Address": "10.4.1.89",
        "Port": 8200
      },
      "wan_ipv4": {
        "Address": "10.4.1.89",
        "Port": 8200
      }
    },
    "Weights": {
      "Passing": 1,
      "Warning": 1
    },
    "EnableTagOverride": false,
    "Datacenter": "dc1"
  },
  "vault:10.4.3.5:8200": {
    "ID": "vault:10.4.3.5:8200",
    "Service": "vault",
    "Tags": [
      "standby",
      "initialized"
    ],
    "Meta": {
      "external-source": "vault"
    },
    "Port": 8200,
    "Address": "10.4.3.5",
    "TaggedAddresses": {
      "lan_ipv4": {
        "Address": "10.4.3.5",
        "Port": 8200
      },
      "wan_ipv4": {
        "Address": "10.4.3.5",
        "Port": 8200
      }
    },
    "Weights": {
      "Passing": 1,
      "Warning": 1
    },
    "EnableTagOverride": false,
    "Datacenter": "dc1"
  }
}

Thank you for taking a look at this!

make ctrl-generate removes config from dev/config/helm/consul.yaml

Reproduction Steps

Running make ctrl-generate creates the diff below, which appears to remove a significant amount of custom Consul config. Either the generation source is out of date, or this Makefile command should be removed if the content of this file should no longer be auto-generated.

Expected behavior

Generated code should always be kept in sync with the source, and running make ctrl-generate or other code generation commands should be a no-op unless corresponding changes are made to the source (refs #93).

Additional Context

diff --git a/dev/config/helm/consul.yaml b/dev/config/helm/consul.yaml
index d6bce35..e6e5580 100644
--- a/dev/config/helm/consul.yaml
+++ b/dev/config/helm/consul.yaml
@@ -1,41 +1,11 @@
-global:
-  name: consul
-  image: "hashicorpdev/consul:581357c32"
-  tls:
-    enabled: true
-    serverAdditionalDNSSANs:
-    - host.docker.internal
-    - localhost
-    - consul-server.default.svc.cluster.local
+client:
+  enabled: true
+  exposeGossipPorts: true
+  join:
+    - "127.0.0.1"
+  hosts:
+    - "127.0.0.1"
 connectInject:
   enabled: true
 controller:
   enabled: true
-server:
-  replicas: 1
-  extraConfig: |
-    {
-      "log_level": "trace",
-      "acl": {
-        "enabled": true,
-        "default_policy": "allow",
-        "enable_token_persistence": true
-      },
-      "connect": {
-        "enabled": true
-      }
-    }
-ui:
-  enabled: true
-  ingress:
-    enabled: true
-    hosts:
-    - host: "host.docker.internal"
-      paths:
-      - "/"
-    - host: "localhost"
-      paths:
-      - "/"
-    annotations: |
-      "kubernetes.io/ingress.class": "nginx"
-      "nginx.ingress.kubernetes.io/ssl-passthrough": "true"

API Gateway Controller Reconciler throws errors: "Operation cannot be fulfilled"

Hello,

I am trying to deploy Consul on a GKE cluster, and my Consul deployment is up and running. The only issues I am experiencing are related to the Consul API Gateway controller and Consul API Gateway instances. The steps to deploy Consul are as follows:

  1. Deploy Consul on a GKE cluster with the Consul API Gateway controller enabled. At this point, there are error logs in the Consul API Gateway controller, and the Consul API Gateway instance (using a Gateway CRD) has not been deployed yet.
  2. I deploy the Consul API Gateway instance using a Gateway CRD. This causes a Kubernetes Deployment/ServiceAccount/Service to be created automatically, and at this point the Consul API Gateway instance is up and running. Moreover, I can see the following error logs in the Consul API Gateway controller:

2023-04-19T06:01:36.525Z [ERROR] k8s/logger.go:38: consul-api-gateway-server.controller-runtime: Reconciler error: Gateway=test/consul-api-gateway controller=gateway controllerGroup=gateway.networking.k8s.io controllerKind=Gateway name=consul-api-gateway namespace=test reconcileID=eda55f28-7149-4da7-aaf7-bb2a9af4e287

| \t* Operation cannot be fulfilled on gateways.gateway.networking.k8s.io "consul-api-gateway": the object has been modified; please apply your changes to the latest version and try again

  3. Then, I apply a really simple HTTPRoute CRD. After this is applied, I can see some more error logs in the Consul API Gateway controller (see the sketch after this list):

2023-04-19T06:38:27.847Z [ERROR] store/store.go:474: consul-api-gateway-server: Failed to syncGatewaysAndRoutes route statuses:
| \t* error updating route status: Operation cannot be fulfilled on httproutes.gateway.networking.k8s.io "frontend-test-service-dev-02": the object has been modified; please apply your changes to the latest version and try again
2023-04-19T06:38:27.915Z [ERROR] k8s/logger.go:38: consul-api-gateway-server.controller-runtime: Reconciler error: Gateway=test/consul-api-gateway controller=gateway controllerGroup=gateway.networking.k8s.io controllerKind=Gateway name=consul-api-gateway namespace=test reconcileID=f09b6c95-af16-467e-867d-4e6fddf19104
| \t* error updating route status: Operation cannot be fulfilled on httproutes.gateway.networking.k8s.io "frontend-test-service-dev-02": the object has been modified; please apply your changes to the latest version and try again

  4. I checked the Gateway CRD and HTTPRoute CRD on the GKE cluster (after they have been deployed), and their states look correct - no error messages there.
  5. The result is that the API Gateway instance does not properly expose a service running in the Consul service mesh. When I try to reach the Consul API Gateway instance using its Kubernetes Service IP address, I get connection refused (the IP address is reachable and there are no firewall issues).
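
For context on the "Operation cannot be fulfilled ... the object has been modified" errors above, here is a minimal sketch of the standard client-go pattern for resolving such optimistic-concurrency conflicts: re-fetch the latest object and retry the status update. The names (c, gw) are illustrative; this is not the controller's actual code.

import (
    "context"

    "k8s.io/client-go/util/retry"
    "sigs.k8s.io/controller-runtime/pkg/client"
    gwv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)

// updateGatewayStatusWithRetry retries a Gateway status update when the object
// was modified concurrently, instead of failing the reconcile.
func updateGatewayStatusWithRetry(ctx context.Context, c client.Client, gw *gwv1beta1.Gateway) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        latest := &gwv1beta1.Gateway{}
        if err := c.Get(ctx, client.ObjectKeyFromObject(gw), latest); err != nil {
            return err
        }
        latest.Status = gw.Status
        return c.Status().Update(ctx, latest)
    })
}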

What exactly we are running:

  • Consul Helm Chart 1.0.6 (the consul/consul-k8s-control-plane/consul-dataplane/envoy images as suggested by the Helm Chart values.yaml file): consul:1.14.5, consul-k8s-control-plane:1.0.6, consul-dataplane:1.1.0, envoy:v1.23.1.
  • there is no apiGateway.image specified by default in the Consul Helm Chart, and we use version consul-api-gateway:0.5.3

Our custom values.yaml file is as follows:

global:
  image: hashicorp/consul:1.14.5
  imageK8S: hashicorp/consul-k8s-control-plane:1.0.6
  imageConsulDataplane: hashicorp/consul-dataplane:1.1.0

  imagePullSecrets:
  - name: secret

  peering:
    enabled: true

  datacenter: dev-02
  enablePodSecurityPolicies: false
  gossipEncryption:
    autoGenerate: false

  acls:
    manageSystemACLs: true
    createReplicationToken: false

  federation:
    enabled: false
    createFederationSecret: false

  tls:
    enabled: true
    httpsOnly: true
    verify: true

  consulAPITimeout: 30s

  metrics:
    enabled: true
    enableAgentMetrics: true
    agentMetricsRetentionTime: 24h

server:
  replicas: 3
  storage: 10Gi
  storageClass: csi-gce-pd-cmek
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "1024Mi"
      cpu: "250m"
  extraConfig: |
    {
      "disable_update_check": true,
      "telemetry": {
        "disable_hostname": true
      }
    }

ui:
  enabled: true
  service:
    enabled: true
    type: LoadBalancer
    annotations: |
      networking.gke.io/load-balancer-type: "Internal"
      networking.gke.io/internal-load-balancer-allow-global-access: "true"

connectInject:
  enabled: true
  default: false

syncCatalog:
  enabled: true
  default: true
  toConsul: true
  toK8S: true

dns:
  enabled: true

meshGateway:
  enabled: true
  wanAddress:
    source: "Service"
  service:
    enabled: true
    type: LoadBalancer
    annotations: |
      networking.gke.io/load-balancer-type: "Internal"
      networking.gke.io/internal-load-balancer-allow-global-access: "true"

terminatingGateways:
  enabled: true
  defaults:
    replicas: 1

client:
  enabled: false
  
apiGateway:
  enabled: true
  image: hashicorp/consul-api-gateway:0.5.3
  imageEnvoy: envoyproxy/envoy:v1.23.1

  managedGatewayClass:
    enabled: true
    serviceType: LoadBalancer
    useHostPorts: true
    copyAnnotations:
      service:
        annotations: |
          - 'networking.gke.io/internal-load-balancer-allow-global-access'
          - 'networking.gke.io/load-balancer-type'
    deployment:
      defaultInstances: 1
      maxInstances: 1
      minInstances: 1
  controller:
    replicas: 1

tests:
  enabled: false

Gateway CRD (after it has been applied):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  annotations:
    api-gateway.consul.hashicorp.com/config: '{"serviceType":"LoadBalancer","useHostPorts":true,"consul":{"authentication":{"managed":true,"method":"consul-consul-k8s-auth-method"},"scheme":"https","address":"consul-consul-server.consul.svc","ports":{"http":8501,"grpc":8502}},"image":{"consulAPIGateway":"hashicorp/consul-api-gateway:0.5.3","envoy":"envoyproxy/envoy:v1.23.1"},"copyAnnotations":{"service":["networking.gke.io/internal-load-balancer-allow-global-access","networking.gke.io/load-balancer-type"]},"logLevel":"info","deployment":{"defaultInstances":1,"maxInstances":1,"minInstances":1},"connectionManagement":{}}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"gateway.networking.k8s.io/v1beta1","kind":"Gateway","metadata":{"annotations":{"networking.gke.io/internal-load-balancer-allow-global-access":"true","networking.gke.io/load-balancer-type":"Internal"},"name":"consul-api-gateway","namespace":"test"},"spec":{"gatewayClassName":"consul-api-gateway","listeners":[{"allowedRoutes":{"namespaces":{"from":"Same"}},"name":"https","port":443,"protocol":"HTTPS","tls":{"certificateRefs":[{"name":"consul-api-gateway-certificate"}]}}]}}
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
    networking.gke.io/load-balancer-type: Internal
  creationTimestamp: "2023-04-19T08:13:19Z"
  generation: 1
  managedFields:
  - apiVersion: gateway.networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:api-gateway.consul.hashicorp.com/config: {}
    manager: consul-api-gateway
    operation: Update
    time: "2023-04-19T08:13:19Z"
  - apiVersion: gateway.networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:networking.gke.io/internal-load-balancer-allow-global-access: {}
          f:networking.gke.io/load-balancer-type: {}
      f:spec:
        .: {}
        f:gatewayClassName: {}
        f:listeners:
          .: {}
          k:{"name":"https"}:
            .: {}
            f:allowedRoutes:
              .: {}
              f:namespaces:
                .: {}
                f:from: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:tls:
              .: {}
              f:certificateRefs: {}
              f:mode: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-04-19T08:13:19Z"
  - apiVersion: gateway.networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:addresses: {}
        f:conditions:
          k:{"type":"InSync"}:
            .: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:observedGeneration: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:observedGeneration: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Scheduled"}:
            f:lastTransitionTime: {}
            f:message: {}
            f:observedGeneration: {}
            f:reason: {}
            f:status: {}
        f:listeners:
          .: {}
          k:{"name":"https"}:
            .: {}
            f:attachedRoutes: {}
            f:conditions:
              .: {}
              k:{"type":"Conflicted"}:
                .: {}
                f:lastTransitionTime: {}
                f:message: {}
                f:observedGeneration: {}
                f:reason: {}
                f:status: {}
                f:type: {}
              k:{"type":"Detached"}:
                .: {}
                f:lastTransitionTime: {}
                f:message: {}
                f:observedGeneration: {}
                f:reason: {}
                f:status: {}
                f:type: {}
              k:{"type":"Ready"}:
                .: {}
                f:lastTransitionTime: {}
                f:message: {}
                f:observedGeneration: {}
                f:reason: {}
                f:status: {}
                f:type: {}
              k:{"type":"ResolvedRefs"}:
                .: {}
                f:lastTransitionTime: {}
                f:message: {}
                f:observedGeneration: {}
                f:reason: {}
                f:status: {}
                f:type: {}
            f:name: {}
            f:supportedKinds: {}
    manager: consul-api-gateway
    operation: Update
    subresource: status
    time: "2023-04-19T09:31:22Z"
  name: consul-api-gateway
  namespace: test
  resourceVersion: "1209856"
  uid: 42652bf5-4bdd-4a6b-bece-90b9868cdacb
spec:
  gatewayClassName: consul-api-gateway
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: https
    port: 443
    protocol: HTTPS
    tls:
      certificateRefs:
      - group: ""
        kind: Secret
        name: consul-api-gateway-certificate
      mode: Terminate
status:
  addresses:
  - type: IPAddress
    value: <<SANITIZED>>
  conditions:
  - lastTransitionTime: "2023-04-19T08:13:59Z"
    message: Ready
    observedGeneration: 1
    reason: Ready
    status: "True"
    type: Ready
  - lastTransitionTime: "2023-04-19T08:13:59Z"
    message: Scheduled
    observedGeneration: 1
    reason: Scheduled
    status: "True"
    type: Scheduled
  - lastTransitionTime: "2023-04-19T08:13:59Z"
    message: InSync
    observedGeneration: 1
    reason: InSync
    status: "True"
    type: InSync
  listeners:
  - attachedRoutes: 1
    conditions:
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: NoConflicts
      observedGeneration: 1
      reason: NoConflicts
      status: "False"
      type: Conflicted
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: Attached
      observedGeneration: 1
      reason: Attached
      status: "False"
      type: Detached
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: Ready
      observedGeneration: 1
      reason: Ready
      status: "True"
      type: Ready
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: ResolvedRefs
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    name: https
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute

Our HTTPRoute looks like this (after it has been applied):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  annotations:
    meta.helm.sh/release-name: sanity-test
    meta.helm.sh/release-namespace: test
  creationTimestamp: "2023-04-19T09:31:19Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: gateway.networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        .: {}
        f:parentRefs: {}
        f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2023-04-19T09:31:19Z"
  - apiVersion: gateway.networking.k8s.io/v1alpha2
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:parents: {}
    manager: consul-api-gateway
    operation: Update
    subresource: status
    time: "2023-04-19T09:31:22Z"
  name: frontend-test-service-dev-02
  namespace: test
  resourceVersion: "1209855"
  uid: 73fb4913-55ce-4c4a-adcc-b537ecd1c7ee
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: consul-api-gateway
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: frontend-test-service-dev-02
      namespace: test
      port: 80
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /frontend-test-service-dev-02
status:
  parents:
  - conditions:
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: Route accepted.
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2023-04-19T09:31:22Z"
      message: ResolvedRefs
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    controllerName: hashicorp.com/consul-api-gateway-controller
    parentRef:
      group: gateway.networking.k8s.io
      kind: Gateway
      name: consul-api-gateway

The service frontend-test-service-dev-02 running in Consul Service Mesh is up/running/healthy.

Any suggestions would be greatly appreciated.

Thanks for help in advance.

Dominik

http-route does not support Consul cluster peering

The http-route configuration entry does not support Consul cluster peering.

Feature Description

I would like to use the API gateway for services imported via cluster peering. The documentation does not say anything about cluster peering.

Adding "Peer" directive would be nice, something like this:

Rules = [
      {
        Matches = [
          {
            Path = {
              Match = "prefix"
              Value = "/"
            }
          }
        ]
        Services = [
          {
            Name = "hashicups-frontend"
            Peer = "cluster-02"
          },
        ]
      },
]

Parsing Resource Version Is Not Allowed

Overview of the Issue

The Kubernetes API explicitly disallows parsing the metadata.resourceVersion field on an object as an integer. This project, however, does so as of this commit: cb0486e

During a review of code that would be affected by non-numerical resourceVersions, I came upon this code. I'm hoping to better understand what this codebase is using the parsing logic for and if there are other means to that end without using non-supported API behavior. I can see that uses of the parsing seem to all be in precondition functions passed to Upsert* methods on a store. This looks analogous to the resource version precondition provided by the Kubernetes API server, but the only implementation of the store I found was an in-memory one here - so I'm not entirely sure how the Kubernetes client and server get involved here.

Feature Request: Support for GRPCRoute

Is your feature request related to a problem? Please describe.

Please add support for GRPCRoute ASAP.

There are no workarounds, except to use an alternative solution that is non-Consul. I have been able to implement something similar with non-Consul solutions such as NGINX service mesh with NGINX Kubernetes ingress controller.

Feature Description

For ingress traffic into the service mesh, I would like to support gRPC traffic, as this protocol is highly performant, especially for solutions like Dgraph, a distributed graph database.

Use Case(s)

Dgraph, a distributed graph database, can communicate through a mixture of HTTP and gRPC using its own language, DQL, a superset of GraphQL. Over HTTP, it supports administrative operations (REST or GraphQL) and queries and mutations (GraphQL or DQL). The optimal method for queries and mutations is gRPC, especially for clients with large databases running into billions of predicates. When offering a managed solution with Dgraph Cloud, both interfaces (gRPC and HTTP) need to be supported, so lack of gRPC is a show stopper (a non-solution) for us and our customers.

For an ingress/gateway solution to be considered, it would need to support both gRPC and HTTP, terminate public CA TLS at the edge, and encrypt/auth traffic between the ingress and service-mesh.
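
For illustration, a minimal sketch of the kind of route this would enable, modeled on the upstream Gateway API GRPCRoute resource (the gateway, hostname, service name, and port below are hypothetical, not taken from an existing deployment):

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: dgraph-grpc
  namespace: dgraph
spec:
  parentRefs:
  - name: api-gateway
    namespace: consul
  hostnames:
  - "grpc.example.com"
  rules:
  - matches:
    - method:
        service: api.Dgraph   # gRPC service to match (hypothetical)
    backendRefs:
    - kind: Service
      name: dgraph-alpha
      port: 9080

Public TLS would still terminate on the Gateway listener, with mesh mTLS between the gateway and the backend, matching the requirement above.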

Contributions

I work in the role of platform engineer, so I can help with testing (QA) the solution, as well as any documentation and tutorials.

Lack of Documentation/Tutorials for Helm Charts

Is your feature request related to a problem? Please describe.

There are no examples or instructions using purely Helm charts and Kubernetes manifests. Currently, the one tutorial, https://learn.hashicorp.com/tutorials/consul/kubernetes-api-gateway, uses a mixture of cloud resources (for EKS), Terraform, a Helm chart, and Kustomize. Terraform, though popular for cloud resources, is not as popular for Kubernetes resources, where manifests and Helm charts are far more common. It would be nice to have a tutorial and instructions that use only manifests and Helm charts (no Kustomize), as the current instructions add further complexity and friction on top of the already considerable complexity of the underlying Consul Connect Service Mesh.

Right now I am looking for a solution that can integrate with Consul Connect service mesh with explicit upstream support (i.e. no transparent proxy), as I have a multi-port solution that is further restricted to localhost. I would like an API gateway that can connect to the meshed service, with the traffic between the gateway and the meshed service using strict mutual TLS and ACLs (i.e. a token).

Feature Description

Ultimately I would want a tutorial that uses Helm charts and Kubernetes manifests, without Terraform and without Kustomize. Customers should be able to stand up whatever cluster they like (AKS, GKE, EKS) and use this; the tutorial should not be locked to a particular cloud's Kubernetes, and users shouldn't have to wade through another layer of complexity (Terraform + Kustomize) on top of the complexity of working with the API Gateway and Consul Connect Service Mesh themselves.
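
As a rough illustration of what a Helm-only flow could look like, a minimal values sketch (the chart keys mirror the Helm values shown in other issues here; the image tag is an assumption):

# values.yaml -- minimal sketch; install with the hashicorp/consul chart, then
# apply the Gateway API CRDs plus Gateway/HTTPRoute manifests separately
global:
  name: consul
connectInject:
  enabled: true
apiGateway:
  enabled: true
  image: hashicorp/consul-api-gateway:0.5.1
  managedGatewayClass:
    enabled: true

Everything else (GatewayClass, Gateway, HTTPRoute, and the backing Services) would then be plain Kubernetes manifests, with no Terraform or Kustomize involved.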

Use Case(s)

I developed a solution on GKE using the Consul Connect Service Mesh documentation, which uses Helm charts and Kubernetes manifests. This guide instead uses Terraform + Kustomize, which is inconsistent with that documentation. I am looking for clear documentation that uses just Helm charts and Kubernetes manifests, and something that will integrate with my existing GKE cluster.

Also, the current example, within the scope of Terraform, should be two separate modules: one for the EKS/VPC alone, and another for just the Kubernetes resources. The latter should be able to run against any Kubernetes cluster. The current implementation adds layers of unnecessary complexity, which will send potential customers/users to other solutions.

Contributions

I can help test the solution and further documentation for accuracy. I can test it against GKE, AKS, and EKS.

Does consul api-gateway support redirect http to https

Hello,
I have the following api-gateway configuration:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: consul-experimental-api-gateway
  namespace: consul
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.162.75.172 
spec:
  gatewayClassName: consul
  listeners:
  - name: http-all
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: "All"
  
  - name: https-balticit
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: "All"
    hostname: "*.balticit.ifint.biz"
    tls:
      certificateRefs:
      - kind: Secret
        group: ""
        name: balticit-ifint-biz
        namespace: ingress-nginx
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-consul-gateways-to-balticit-tls-secret
  namespace: ingress-nginx
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: consul
  to:
  - group: ""
    kind: Secret
    name: balticit-ifint-biz

According to the k8s Gateway API docs, in order to configure an HTTP to HTTPS redirect we need to have:
1. An HTTPRoute attached to the HTTP listener with a redirect stanza:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect-for-blue-service
  namespace: blue-green
spec:
  parentRefs:
  - name: consul-experimental-api-gateway
    sectionName: http-all # select a http-all listener defined in api-gateway
    namespace: consul
  hostnames:
  - "demo.balticit.ifint.biz"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /blue
    filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301

2. An HTTPRoute attached to the HTTPS listener referencing our backend service:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: blue-route
  namespace: blue-green
spec:
  parentRefs:
  - name: consul-experimental-api-gateway
    namespace: consul
    sectionName: https-balticit
  hostnames:
  - "demo.balticit.ifint.biz"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /blue
    filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /
    backendRefs:
    - name: blue 
      kind: Service
      port: 8080
      namespace: blue-green

In my case only https://demo.balticit.ifint.biz/blue is working and plain HTTP is not.
Any suggestions or thoughts?
Thanks in advance.

URLRewrite support

Is your feature request related to a problem? Please describe.

Lack of HTTPRoute URLRewrite filter support is a deal breaker for the company I work for.
I have tried to use it following the Gateway API spec, but had no success.

The supported features doc does not confirm support either.

Is it actually supported or planned?

Feature Description

Use Case(s)

Given that "/orders" redirect to order service
When I reach "/orders/10"
Then orders service request path must be "/10"
And request path should not have "/orders" prefix

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: orders
  namespace: consul
spec:
  parentRefs:
  - name: api-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - kind: Service
          name: orders
          namespace: orders
          port: 80
          weight: 100
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              # Gateway API spec fields: type is ReplacePrefixMatch and the
              # replacement prefix goes in replacePrefixMatch ("/" strips "/orders")
              type: ReplacePrefixMatch
              replacePrefixMatch: /

Contributions

There doesn't appear to be a way to create an API Gateway, or Gateway per cluster in a federated WAN

Overview of the Issue

I don't seem to be able to set up the API gateway in such a way that I can either access all mesh services from a single API Gateway, or use an API Gateway per cluster.

Reproduction Steps

  1. Set up an initial cluster using Helm charts and create an API Gateway (this all works as expected)
  2. Set up a second federated cluster following the instructions here: https://www.consul.io/docs/k8s/installation/multi-cluster/kubernetes
  3. Services in the second datacenter are not accessible to the API Gateway created in the first datacenter cluster.
  4. Using the federated setup, creating a new API Gateway to access services in the second datacenter fails with SSL connection issues.

Logs

Error when trying to add mesh service from second cluster to API Gateway in first cluster

k get httproute/test-service-route -n test -o jsonpath='{.status}' | jq
{
  "parents": [
    {
      "conditions": [
        {
          "lastTransitionTime": "2022-08-08T07:38:16Z",
          "message": "1 error occurred:\n\t* route is in an invalid state and cannot bind\n\n",
          "observedGeneration": 2,
          "reason": "BindError",
          "status": "False",
          "type": "Accepted"
        },
        {
          "lastTransitionTime": "2022-08-08T07:38:16Z",
          "message": "k8s: service test/test-service not found",
          "observedGeneration": 2,
          "reason": "ServiceNotFound",
          "status": "False",
          "type": "ResolvedRefs"
        }
      ],
      "controllerName": "hashicorp.com/consul-api-gateway-controller",
      "parentRef": {
        "group": "gateway.networking.k8s.io",
        "kind": "Gateway",
        "name": "api-gateway",
        "namespace": "consul"
      }
    }
  ]
}

Error when trying to connect to a second API Gateway in the second datacenter cluster.

curl -vvi -k --header "Host: test-service.api.gateway" "https://${API}:8443/"
* TCP_NODELAY set
* Connected to X.X.X.X (X.X.X.X) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to X.X.X.X:8445
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to X.X.X.X:8445

Expected behavior

There is a documented solution for setting up API Gateways across federated clusters.

Environment details

Additional Context

I suspect this is a simple case of me not seeing the specific documentation required to set this up correctly, but I'm having a lot of problems getting the API Gateway up and running across multiple clusters.
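
One possibility I have not been able to verify: the project ships a MeshService custom resource for backends that exist in Consul but not as Kubernetes Services in the local cluster. Assuming it can reference the service registered in the other datacenter (an assumption, not confirmed), a sketch would look like:

apiVersion: api-gateway.consul.hashicorp.com/v1alpha1
kind: MeshService
metadata:
  name: test-service
  namespace: test
spec:
  name: test-service   # name of the service as registered in Consul
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: test-service-route
  namespace: test
spec:
  parentRefs:
  - name: api-gateway
    namespace: consul
  rules:
  - backendRefs:
    - group: api-gateway.consul.hashicorp.com
      kind: MeshService
      name: test-service
      port: 80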

HTTPRoute automatic service intention config entry

Overview of the Issue

Applying an HTTPRoute automatically creates an accompanying service intention config entry for the destination service directly through the Consul API.

Attempting to manage the service intentions for the same destination service afterwards, using a ServiceIntentions custom resource, will silently fail as the Consul K8s controller responsible for reconciliation of ServiceIntentions does not merge an existing config entry.

Reproduction Steps

  1. Create API-Gateway CRDs
  2. helm install consul ...
  3. Deploy API Gateway resource agw
  4. Deploy public-api service
  5. Create HTTPRoute for public-api
  6. Deploy frontend service
  7. Attempt to create ServiceIntentions for public-api to add frontend to sources array
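
For reference, a sketch of the ServiceIntentions resource attempted in step 7, using the namespaces shown in the config entry output below:

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: public-api
  namespace: hashicups
spec:
  destination:
    name: public-api
  sources:
    - name: frontend
      action: allow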

Logs

config entry output after HTTPRoute creation:

#consul config read -kind=service-intentions -namespace=hashicups -name=public-api
{
    "Kind": "service-intentions",
    "Name": "public-api",
    "Partition": "default",
    "Namespace": "hashicups",
    "Sources": [
        {
            "Name": "agw",
            "Partition": "default",
            "Namespace": "api-gateway",
            "Action": "allow",
            "Precedence": 9,
            "Type": "consul",
            "Description": "Allow traffic from Consul API Gateway. Reconciled by controller at 2023-03-10T23:29:36Z."
        }
    ],
    "CreateIndex": 7252,
    "ModifyIndex": 7252
}

output from 'kubectl logs':

consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector 2023-03-10T23:38:34.028Z	ERROR	controller.serviceintentions	Reconciler error	{"reconciler group": "consul.hashicorp.com", "reconciler kind": "ServiceIntentions", "name": "public-api", "namespace": "hashicups", "error": "config entry already exists in Consul"}
consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
consul-connect-injector-7c8985d9fb-cpq6s sidecar-injector 2023-03-10T23:38:34.055Z	ERROR	controller.serviceintentions	sync failed	{"request": "hashicups/public-api", "error": "config entry already exists in Consul"}

Expected behavior

Capability to create an HTTPRoute while maintaining the ability to define [additional] service intentions for the same destination service through the ServiceIntentions CRD.

Environment details

  • consul-api-gateway version: 0.5.1
  • configuration used to deploy the gateway controller:
    # consul 1.0.4 helm chart
    apiGateway:
      enabled: true
      image: hashicorp/consul-api-gateway:0.5.1
      imageEnvoy: envoyproxy/envoy:v1.24.2
  • Kubernetes version: v1.23.12
  • Consul Server version: hashicorp/consul-enterprise:1.14.4-ent-ubi
  • Consul-K8s version: hashicorp/consul-k8s-control-plane:1.0.4-ubi
  • Cloud Provider: OpenShift 4.10.40
  • Networking CNI plugin in use: OpenShift SDN with Multus + Consul CNI

GitHub Actions - deprecated warnings found - action required!

Workflow Name: ci
Branch: main
Run URL: https://github.com/hashicorp/consul-api-gateway/actions/runs/4960983980

save-state deprecation warnings: 0
set-output deprecation warnings: 1
node12 deprecation warnings: 2

Please review these deprecation warnings as soon as possible and merge in the necessary updates.

GitHub will be removing support for these commands and plan to fully disable them on 31st May 2023. At this time, any workflow that still utilizes these commands will fail. See https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/.

GitHub have not finalized a date for deprecating node12 yet but have indicated that this will be summer 2023. So it is advised to switch to node16 asap. See https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.

If you need any help, please reach out to us in #team-rel-eng.

GitHub Actions - deprecated warnings found - action required!

Workflow Name: build
Branch: main
Run URL: https://github.com/hashicorp/consul-api-gateway/actions/runs/4960983979

save-state deprecation warnings: 0
set-output deprecation warnings: 1
node12 deprecation warnings: 3

Please review these deprecation warnings as soon as possible and merge in the necessary updates.

GitHub will be removing support for these commands and plan to fully disable them on 31st May 2023. At this time, any workflow that still utilizes these commands will fail. See https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/.

GitHub have not finalized a date for deprecating node12 yet but have indicated that this will be summer 2023. So it is advised to switch to node16 asap. See https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.

If you need any help, please reach out to us in #team-rel-eng.

“rules.matches.method” only accepts single entry (string)

Overview of the Issue

Unable to add more than one entry in "rules.matches.method".
The documentation states this should be provided as a list, but it only accepts a string.

See related Discuss

Reproduction Steps

  1. Deploy API Gateway. The following example HTTPRoute configuration only works for 1 x "method" (replace ns/svc as needed):
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: whoami
  namespace: default
spec:
  hostnames: []
  parentRefs:
  - name: api-gateway
  rules:
  - matches:
    - method: HEAD
      path:
        type: PathPrefix
        value: /whoareyou
      headers:
      - name: Host
        type: RegularExpression
        value: "^localhost:.+$"
    backendRefs:
    - kind: Service
      name: whoami
      namespace: default
      port: 80
      weight: 100
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /whoami

Logs

kubectl output

The HTTPRoute "whoami" is invalid: 
* spec.rules[1].matches[0].method: Invalid value: "array": spec.rules[1].matches[0].method in body must be of type string: "array"
* spec.rules[1].matches[0].method: Unsupported value: []interface {}{"HEAD", "GET"}: supported values: "GET", "HEAD", "POST", "PUT", "DELETE", "CONNECT", "OPTIONS", "TRACE", "PATCH"

Expected behavior

Able to add more than one method, as stated in the documentation.
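
As a possible workaround, a sketch of the rules section only, assuming each entry under matches is treated as an independent (OR-ed) match per the Gateway API spec, so one match per method can stand in for a list of methods:

  rules:
  - matches:
    - method: HEAD
      path:
        type: PathPrefix
        value: /whoareyou
    - method: GET
      path:
        type: PathPrefix
        value: /whoareyou
    backendRefs:
    - kind: Service
      name: whoami
      namespace: default
      port: 80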

Environment details

  • consul-api-gateway version: 0.5.4
  • Kubernetes version: v1.25.9 (docker desktop, Windows)
  • Consul Server version: v1.15.2
  • Deployment: Helm Chart v1.1.1 (Kustomized)
  • Cloud Provider: Docker desktop (Windows 11/wsl2)

Additional Context

N/A

Consul api gateway on vm based installation

Consul API Gateway installation and configuration should also be supported on VM-based installations (which are themselves binary-based).
Otherwise it will only work with Kubernetes, not with VM-based workloads.

Better Service Resolution Error Messages

Is your feature request related to a problem? Please describe.

Our error messaging is overly terse for Consul service resolution errors at the moment. For example, when a service fails to be resolved, the controller starts spitting out "consul service not found", devoid of any context about which Consul service could not be resolved. We should add more specificity to the resolution errors where we can.

API Gateway pods fail to start if namespace mirroring enabled and destination namespace doesn't exist

Overview of the Issue

When the Connect Injector job has been deployed in namespace-mirroring mode, new API Gateway pods get stuck in a crash loop during initial auth/token exchange if the matching namespace of the underlying kube Gateway resource has not already been created in Consul.

Reproduction Steps

  1. Deploy API Gateway CRDs
  2. Deploy Consul Enterprise via Helm with API gateway, ACLs, and namespace mirroring enabled
  3. Deploy a new gateway.networking.k8s.io/v1alpha2 Gateway resource in a non-default kube namespace

Logs

{"@caller":"/home/runner/work/consul-api-gateway/consul-api-gateway/internal/consul/auth.go:55","@level":"error","@message":"error authenticating","@module":"consul-api-gateway-exec.authenticator","@timestamp":"2022-06-27T22:19:07.025849Z","error":"Unexpected response code: 500 (rpc error making call: rpc error making call: Namespace \"consul\" does not exist in Partition \"default\")"}

Expected behavior

Following the same pattern established by the connect-inject process, I would expect consul-api-gateway-controller or an init container to check for the existence of the destination Consul namespace, and create one if it doesn't exist.

Environment details

  • consul-api-gateway version: 0.3.0
  • Kubernetes version: v1.22.8-gke.202
  • Consul Server version: v1.12.2-ent
  • Consul-K8s version: v0.45.0
  • Cloud Provider: GCP GKE

Helm values:

global:
  enabled: true
  name: consul
  domain: consul
  peering:
    enabled: true
  adminPartitions:
    enabled: true
    name: "default"
  image: "hashicorp/consul-enterprise:1.12.2-ent"
  imageK8S: "hashicorp/consul-k8s-control-plane:0.45.0"
  datacenter: gke
  gossipEncryption:
    autoGenerate: true
  tls:
    enabled: true
    enableAutoEncrypt: false
  enableConsulNamespaces: true
  acls:
    manageSystemACLs: true
    createReplicationToken: true
  enterpriseLicense:
    secretName: consul-license
    secretKey: key
    enableLicenseAutoload: true
  imageEnvoy: "envoyproxy/envoy:v1.22.2"
server:
  enabled: true
  replicas: 3
  storage: 10Gi
  storageClass: premium-rwo
  connect: true
  updatePartition: 0
client:
  enabled: true
dns:
  enabled: true
ui:
  enabled: true
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: false
  consulNamespaces:
    mirroringK8S: true
controller:
  enabled: true
meshGateway:
  enabled: true
  replicas: 2
  service:
    enabled: true
    type: LoadBalancer
    port: 443
apiGateway:
  enabled: true
  image: "hashicorp/consul-api-gateway:0.3.0"
  managedGatewayClass:
    enabled: true
  controller:
    replicas: 1

GatewayClassConfig can become stale

Overview of the Issue

It appears as though our dirty-checking for GatewayClassConfigs has a bug and our GatewayClassConfig values can become stale.

Reproduction Steps

  • Deploy the gateway controller via helm with a managed gateway class config.
  • Update the helm deployment with a slightly different configuration for the managed class config (i.e. change the serviceType value)
  • Deploy a gateway
  • You should see the old class config being used and serialized onto the Gateway annotations, meaning we're holding a stale reference to the configuration

Note: restarting the controller and re-creating the gateway uses the latest class config.
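
For concreteness, a sketch of the kind of Helm values change used to reproduce this (the serviceType key under managedGatewayClass is an assumption about the chart's options):

# values.yaml at initial install
apiGateway:
  enabled: true
  managedGatewayClass:
    enabled: true
    serviceType: LoadBalancer   # change to NodePort and `helm upgrade` to reproduce

After the upgrade, a newly created Gateway is still annotated with the old LoadBalancer configuration until the controller pod is restarted.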
