aws-app-mesh-controller-for-k8s's Introduction

AWS App Mesh Controller For K8s

AWS App Mesh Controller For K8s is a controller that helps manage App Mesh resources for a Kubernetes cluster and injects sidecars into Kubernetes pods. The controller watches custom resources for changes and reflects those changes into the App Mesh API. It maintains the custom resources (CRDs) meshes, virtualnodes, virtualrouters, virtualservices, virtualgateways and gatewayroutes, which map to App Mesh API objects.
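For example (mirroring the v1beta1 examples further down this page; the mesh name is illustrative), a mesh is declared as a custom resource that the controller then creates in App Mesh:

apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
  name: my-mesh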

Note: For v0.5.0 or older versions of the controller, please refer to the legacy-controller branch.

Security disclosures

If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here or email AWS security directly.

Documentation

Check out our Live Docs!

aws-app-mesh-controller-for-k8s's Issues

v0.1.2: unable to create Virtual nodes or Virtual routers

The latest release does not create Virtual Routers or Virtual Services.
Errors in the container logs:

E0908 19:05:20.307544       1 controller.go:443] error syncing 'appmesh-demo/colorteller-black': error updating virtual node status: Operation cannot be fulfilled on virtualnodes.appmesh.k8s.aws "colorteller-black": the object has been modified; please apply your changes to the latest version and try again

I0908 19:05:20.442223       1 virtualnode.go:96] Created virtual node colorteller-blue-appmesh-demo

E0908 19:05:20.462389       1 controller.go:443] error syncing 'appmesh-demo/colorteller-blue': error updating virtual node status: Operation cannot be fulfilled on virtualnodes.appmesh.k8s.aws "colorteller-blue": the object has been modified; please apply your changes to the latest version and try again
(...)
E0908 19:07:49.287511       1 controller.go:443] error syncing 'appmesh-demo/colorteller.appmesh-demo': error creating virtual router: InvalidParameter: 1 validation error(s) found.

Cascade deletion of mesh objects

When a mesh object is deleted, the controller should clean up all objects belonging to that mesh, such as routes, routers, virtual nodes and virtual services. Currently a mesh deletion fails if any objects are still attached to it.

Some resources get abandoned

Currently, in specific circumstances (for example, if they are renamed), certain resources get abandoned; I know of at least virtual routers and Cloud Map services. If the controller created a resource, it should also clean it up, so we need to remember which resources we've created. I don't think either resource supports tags, but we might be able to take advantage of the custom resource status fields.

Better handling of missing permissions

If you define a mesh in YAML and the EC2 worker node running the controller pod does not have the appmesh:? permissions needed to create the mesh object on the AWS side, the controller should log an error indicating that the operation failed and which permission it needs.

Currently, it appears to fail silently.

Virtual service name not used for virtual router

If an explicit name is not provided as part of a virtual router configuration in the resource definition, the virtual service name should be used. However, this is not happening, so if multiple virtual services and routers are created, the virtual routers will share whichever route was created first by the controller.

See also #67 for a temporary fix in the existing example.
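One hedged workaround, assuming the v1beta1 schema used elsewhere on this page, is to give each virtual service an explicit, unique router name so that services do not fall back to a shared default (all names below are illustrative):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: colorteller.appmesh-demo
  namespace: appmesh-demo
spec:
  meshName: color-mesh
  virtualRouter:
    name: colorteller-router   # explicit, per-service router name
  routes:
    - name: color-route
      http:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeName: colorteller
              weight: 1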

Clean up resources

Currently, of all the App Mesh resources created by the controller, only routes are cleaned up, and only in two cases: if the routes: field is removed from the virtual service (without deleting the virtual service), or if the routes field is modified, in which case the removed routes are deleted.

We need to clean up virtual nodes, virtual services, virtual routers, and meshes when the corresponding custom resource is deleted. When a virtual node, virtual service or mesh custom resource is deleted, a finalizer should be used to clean up the related object(s) via the App Mesh API.
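For reference, under this approach a resource pending cleanup would carry the deletion finalizer in its metadata; a sketch of what that looks like on a virtual node (the finalizer name matches the one seen in the controller logs elsewhere on this page, other names are illustrative):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: colorgateway
  namespace: appmesh-demo
  finalizers:
    - virtualNodeDeletion.finalizers.appmesh.k8s.aws   # blocks deletion of the CR until the App Mesh object is removed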

We also need to remove a virtual router if the virtualRouterName field is added to or modified on the virtual service custom resource. This could be done by adding the virtualRouterARN to the status of the virtual service custom resource (which I planned to do) and comparing the new value of virtualRouterName to the current ARN on virtual service updates and adds. This is better than comparing the new and old value of virtualRouterName only on updates, because updates can sometimes be dropped and we would only see an add, which would leak the virtual router.

Additionally, we might want to clean up all the custom resources present in a mesh when a user deletes a mesh custom resource. The most straightforward way to do that is with garbage collection and owner references, but we currently allow meshes to reside in different namespaces than other resources, and cross-namespace owner references are not allowed. Instead, we could search for everything with a corresponding meshName and delete it explicitly.

Alternatively, we could leave all the custom resources around when the user deletes the mesh object, but delete all the App Mesh objects, and add a condition to all the dependent custom resources to mark them as inactive/deleted.

ARN of resource created in status

In order to facilitate finding the corresponding resources and debugging, the ARN of every resource created by the controller should be stored in the custom resource status field.
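A sketch of what that could look like on a virtual node, assuming the virtualNodeArn status field discussed later on this page (the ARN value is illustrative):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: colorteller
  namespace: appmesh-demo
status:
  virtualNodeArn: arn:aws:appmesh:us-west-2:1234567890:mesh/color-mesh/virtualNode/colorteller-appmesh-demo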

Deleting a virtual service fails if another virtual service uses the same router

Currently, if two virtual services share a router and the user attempts to delete one of the virtual services, the deletion hangs while trying to delete the virtual router.

E0324 19:09:33.876104       1 controller.go:422] error syncing 'appmesh-demo/colorteller.appmesh-demo.svc.cluster.local': failed to clean up virtual router color-router-appmesh-demo for virtual service colorteller.appmesh-demo.svc.cluster.local during deletion: ResourceInUseException: VirtualRouter with name color-router-appmesh-demo cannot be deleted because it is a VirtualService provider

We should not try to delete a virtual router that is a provider for another virtual service; we should only delete a router that is used solely by the virtual service being deleted.

Support App Mesh Preview Channel

Allow the controller to be used for the App Mesh Preview Channel, so new features can be tested more easily with Kubernetes.

Currently this package relies solely on the published SDK Go client, which will not contain all of the features in preview.

ColorApp Question: What's the purpose of the colorgateway VirtualService

As a new App Mesh user, I spend a lot of time poring over the ColorApp example. I'm struggling to understand the purpose of the colorgateway VirtualService.

It seems that VirtualService routes are only relevant for communication between VirtualNodes within a mesh. Correct so far?

In this case, why define a VirtualService for the colorgateway, since it only receives traffic from outside of the mesh? At least that's how it's described in the EKS walkthrough in the aws-app-mesh-examples repo. The curler pod makes requests to the colorgateway from outside of the mesh (i.e. it's deployed without an exposed port, so no sidecar is injected; it also doesn't define a VirtualNode).

When you run kubectl exec curler-dc65c8c79-zbtkb -- curl colorgateway:9080/color, the VirtualService config has no effect. For instance, I changed it to route traffic to some-other-virtual-node, but nothing changed:

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: colorgateway.appmesh-demo
  namespace: appmesh-demo
spec:
  meshName: color-mesh
  virtualRouter:
    listeners:
      - portMapping:
          port: 9080
          protocol: http
  routes:
    - name: gateway-route
      http:
        match:
          prefix: /color
        action:
          weightedTargets:
            - virtualNodeName: some-other-virtual-node
              weight: 1

kubectl exec curler-dc65c8c79-zbtkb -- curl colorgateway:9080/color still routed to the colorgateway service as before.

Please let me know what I'm missing. Thanks in advance.

Injection for egress-only pods

The current documentation is really confusing for pods that have egress-only traffic.

Injection only occurs when a pod has ports set (by default these are taken from the pod's container definition); in an egress-only scenario, however, these won't be set.

The documentation is not clear as to whether I should annotate the pod with appmesh.k8s.aws/ports, or whether that annotation is just for ingress.
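For context, the annotation in question is set on the pod template; whether a value like the one sketched below is also the right way to drive injection for egress-only pods is exactly what is unclear (values are illustrative):

  template:
    metadata:
      annotations:
        appmesh.k8s.aws/mesh: color-mesh
        appmesh.k8s.aws/ports: "8080"   # ports for the injected sidecar; unclear whether this applies only to ingress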

[request]: Security feature request to be able to run the "proxy/init" container outside the "App Mesh Injector"

Tell us about your request
What do you want us to build?
"proxy/init" container requires elevated permission using NET_ADMIN capability and it also requires the "allowPrivilegeEscalation=true" property as well which is the blocker for us to apply PSP restricted policy to API containers in eks cluster.

Expected behaviour : we should be able to run "proxy/init" container functionality out side the "appMesh Sidecar" or pod which allow us to apply restricted PSP on pods without effecting "proxy/init" container functionality.

Which integration(s) is this request for?
It is mainly required for EKS and Kubernetes.

Are you currently working around this issue?
How are you currently solving this problem?
No workaround, except manually making sure that API containers don't run with unwanted security contexts inside the cluster.

Additional context
Because of appmesh "proxyinit" privilege mode requirement, we need to run whole pod with "allowPrivilegeEscalation=true" . which means we cant apply restricted PSP (pod security policy) on api containers.
It is a big security risk for the whole EKS cluster because as a part of our security measure we want to run all api containers in a restricted mode.

is it possible to run appmesh proxy/init container outside the pod as Daemonset ?or any other option to mitigate this issue. istio already have a fix for https://github.com/sabre1041/istio-pod-network-controller.

Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

Feature Request: Helm charts official docs

Tell us about your request
A lot of features (sidecar injection, CRDs, Jaeger/Prometheus) are available via the chart but are not documented in the official docs.

Which integration(s) is this request for?
EKS/Kubernetes

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
See aws/aws-app-mesh-roadmap#127, aws/aws-app-mesh-roadmap#126

Are you currently working around this issue?
Filing issues :)

Additional context
Anything else we should know?

Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

Proposal: Add support for AWS Cloud Map as service-discovery

Summary

Add support for AWS Cloud Map service-discovery with App Mesh virtual-nodes when using aws-app-mesh-controller-for-k8s.

Motivation

AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources. This increases your application availability because your web service always discovers the most up-to-date locations of its resources. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure.

AWS App Mesh recently announced support for AWS Cloud Map as a service-discovery option for virtual nodes. In App Mesh, client virtual nodes add backends to define their service dependencies. These backends point to a virtual service that is backed by a virtual router with routes to each of the corresponding service virtual nodes. For a client to communicate with a service, it uses the DNS name specified on the virtual nodes, so prior to Cloud Map users needed to add DNS records for each of the virtual nodes. With Cloud Map, users can create one DNS service for the virtual service and use attributes to subset the endpoints behind that DNS name to target specific virtual nodes. Additionally, App Mesh provides Envoy EDS, which Envoy can use to discover endpoints for an upstream cluster with minimal propagation latency.

Currently the way to integrate a Kubernetes cluster with Cloud Map is to use external-dns. However, external-dns lacks the functionality to propagate attributes for endpoints (i.e. pod labels) to Cloud Map. To support App Mesh with Cloud Map, aws-app-mesh-controller needs to provide native support for Cloud Map.

User Stories

  1. Users should be able to use AWS Cloud Map as service discovery for pods associated with a virtual node.
  2. Users should be able to connect applications in a Kubernetes cluster in AWS with services running outside of the Kubernetes cluster.

Design

UX

Below we discuss how customers will use AWS Cloud Map in the context of aws-app-mesh-controller-for-k8s.

  • Let's assume the customer has the following K8s namespace
        apiVersion: v1
        kind: Namespace
        metadata:
          labels:
            appmesh.k8s.aws/sidecarInjectorWebhook: enabled
          name: color
  • Customer creates mesh using the following spec
        apiVersion: appmesh.k8s.aws/v1beta1
        kind: Mesh
        metadata:
          name: eks-mesh
  • Customer creates a virtual-node using the following spec
        apiVersion: appmesh.k8s.aws/v1beta1
        kind: VirtualNode
        metadata:
          name: colorteller-red
          namespace: color
        spec:
          meshName: eks-mesh
          listeners:
            - portMapping:
                port: 9080
                protocol: http
          serviceDiscovery:
            cloudMap:
              serviceName: colorteller
              namespaceName: prod.svc.aws.local
  • Customer creates a virtual-service using the following spec (No Change)
        apiVersion: appmesh.k8s.aws/v1beta1
        kind: VirtualService
        metadata:
          name: colorteller.prod.svc.aws.local
          namespace: color
        spec:
          meshName: eks-mesh
          virtualRouter:
            name: colorteller-router
            listeners:
              - portMapping:
                  port: 9080
                  protocol: http
          routes:
            - name: color-route
              http:
                match:
                  prefix: /
                action:
                  weightedTargets:
                    - virtualNodeName: colorteller-red
                      weight: 1       
  • Customer then can create K8s deployment (No change)
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: colorteller-red
          namespace: color
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: colorteller
              version: red
          template:
            metadata:
              annotations:
                appmesh.k8s.aws/mesh: eks-mesh
              labels:
                app: colorteller
                version: red
            spec:
              containers:
                - name: colorteller
                  image: ${COLOR_TELLER_IMAGE}
                  ports:
                    - containerPort: 9080
                  env:
                    - name: "SERVER_PORT"
                      value: "9080"
                    - name: "COLOR"
                      value: "red"

Controller

Behind the scenes aws-app-mesh-controller will perform the following actions

  • Watches for Mesh CRD
    • Creates mesh using aws.appmesh.CreateMesh
  • Watches for VirtualNode CRD
    • Creates virtual-node using aws.appmesh.CreateVirtualNode
    • Creates a Cloud Map service using aws.servicediscovery.CreateService, using the service name and Cloud Map namespace name.
  • Watches for pods (corresponding to K8s deployments)
    • Determines the virtual node and mesh corresponding to the pod.
    • (Pod Added) If the virtual node uses Cloud Map, registers the pod endpoint with the corresponding Cloud Map service using the aws.servicediscovery.RegisterInstances API, with the attributes "appmesh.k8s.aws/virtualNode" and "appmesh.k8s.aws/mesh" set appropriately.
    • (Pod Deleted) If the virtual node uses Cloud Map, deregisters the pod endpoint by calling aws.servicediscovery.DeregisterInstances.
  • Reconciles K8s pods and Cloud Map instances periodically as updates to the virtual node take place.

The following App Mesh resources are created for the above flow

  • MESH
$ aws appmesh describe-mesh \
    --mesh-name eks-mesh
{
    "mesh": {
        "meshName": "eks-mesh",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:1234567890:mesh/eks-mesh",
            "createdAt": 1560962735.095,
            "lastUpdatedAt": 1560962735.095,
            "uid": "7a31dd56-ef1e-44d5-b7f3-db98d109315c",
            "version": 1
        },
        "spec": {},
        "status": {
            "status": "ACTIVE"
        }
    }
}
  • VIRTUAL NODE
$ aws appmesh describe-virtual-node \
    --mesh eks-mesh \
    --virtual-node colorteller-red-color
{
    "virtualNode": {
        "meshName": "eks-mesh",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:1234567890:mesh/eks-mesh/virtualNode/colorteller-red-color",
            "createdAt": 1560962757.57,
            "lastUpdatedAt": 1564612052.233,
            "uid": "4e1e5288-3585-4abe-809b-5c9d9018cdf9",
            "version": 284
        },
        "spec": {
            "backends": [],
            "listeners": [
                {
                    "portMapping": {
                        "port": 9080,
                        "protocol": "http"
                    }
                }
            ],
            "serviceDiscovery": {
                "awsCloudMap": {
                    "attributes": [
                        {
                            "key": "appmesh.k8s.aws/mesh",
                            "value": "eks-mesh"
                        },
                        {
                            "key": "appmesh.k8s.aws/virtualNode",
                            "value": "colorteller-red-color"
                        }
                    ],
                    "namespaceName": "prod.svc.aws.local",
                    "roleArn": "arn:aws:iam::1234567890:role/aws-service-role/appmesh.amazonaws.com/AWSServiceRoleForAppMesh",
                    "serviceName": "colorteller"
                }
            }
        },
        "status": {
            "status": "ACTIVE"
        }
    }
}
  • VIRTUAL_ROUTER
$ aws appmesh describe-virtual-router \
    --mesh eks-mesh \
    --virtual-router colorteller-router-color
{
    "virtualRouter": {
        "meshName": "eks-mesh",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:1234567890:mesh/eks-mesh/virtualRouter/colorteller-router-color",
            "createdAt": 1561148137.763,
            "lastUpdatedAt": 1561148137.763,
            "uid": "5c34c86a-2dac-4b45-a99c-18389f2ea994",
            "version": 1
        },
        "spec": {
            "listeners": [
                {
                    "portMapping": {
                        "port": 9080,
                        "protocol": "http"
                    }
                }
            ]
        },
        "status": {
            "status": "ACTIVE"
        },
        "virtualRouterName": "colorteller-router-color"
    }
}
  • ROUTE
$ aws appmesh describe-route \
    --mesh $MESH_NAME \
    --virtual-router colorteller-router-color \
    --route color-route
{
    "route": {
        "meshName": "eks-mesh",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:1234567890.:mesh/eks-mesh/virtualRouter/colorteller-router-color/route/color-route",
            "createdAt": 1561148137.814,
            "lastUpdatedAt": 1564613583.153,
            "uid": "a6d43813-793f-4096-900b-e5b2fb717788",
            "version": 14
        },
        "routeName": "color-route",
        "spec": {
            "httpRoute": {
                "action": {
                    "weightedTargets": [
                        {
                            "virtualNode": "colorteller-red-color",
                            "weight": 1
                        }
                    ]
                },
                "match": {
                    "prefix": "/"
                }
            }
        },
        "status": {
            "status": "ACTIVE"
        },
        "virtualRouterName": "colorteller-router-color"
    }
}
  • VIRTUAL_SERVICE
$ aws appmesh describe-virtual-service \
    --mesh eks-mesh \
    --virtual-service colorteller.prod.svc.aws.local
{
    "virtualService": {
        "meshName": "eks-mesh",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:1234567890:mesh/eks-mesh/virtualService/colorteller.prod.svc.aws.local",
            "createdAt": 1561148137.624,
            "lastUpdatedAt": 1561148137.911,
            "uid": "f63e9694-6b0d-4e96-80bf-996064aca29f",
            "version": 2
        },
        "spec": {
            "provider": {
                "virtualRouter": {
                    "virtualRouterName": "colorteller-router-color"
                }
            }
        },
        "status": {
            "status": "ACTIVE"
        },
        "virtualServiceName": "colorteller.prod.svc.aws.local"
    }
}

The following Cloud Map resources will be created by the controller

  • SERVICE
$ aws servicediscovery get-service \
    --id srv-sxvjmn5we2fz6nqm
{
    "Service": {
        "Id": "srv-sxvjmn5we2fz6nqm",
        "Arn": "arn:aws:servicediscovery:us-west-2:1234567890:service/srv-sxvjmn5we2fz6nqm",
        "Name": "colorteller",
        "NamespaceId": "ns-omcos67xvs7tat4z",
        "DnsConfig": {
            "NamespaceId": "ns-omcos67xvs7tat4z",
            "RoutingPolicy": "MULTIVALUE",
            "DnsRecords": [
                {
                    "Type": "A",
                    "TTL": 300
                }
            ]
        },
        "CreateDate": 1564671683.11,
        "CreatorRequestId": "app-mesh-controller"
    }
}
  • INSTANCE(S)
$ aws servicediscovery discover-instances \
    --namespace prod.svc.aws.local \
    --service colorteller
{
    "Instances": [        
        {
            "InstanceId": "192.168.122.110",
            "NamespaceName": "prod.svc.aws.local",
            "ServiceName": "colorteller",
            "HealthStatus": "UNKNOWN",
            "Attributes": {
                "AWS_INSTANCE_IPV4": "192.168.122.110",
                "app": "colorteller",
                "appmesh.k8s.aws/mesh": "eks-mesh",
                "appmesh.k8s.aws/virtualNode": "colorteller-red-color",
                "k8s.io/namespace": "color",
                "k8s.io/pod": "colorteller-red-8c745484-sphc6",
                "pod-template-hash": "8c745484",
                "version": "red"
            }
        }
    ]
}

CRD Changes

  • CloudMapServiceDiscovery
        type CloudMapServiceDiscovery struct {
        -       CloudMapServiceName string `json:"cloudMapServiceName"`
        +       ServiceName   string            `json:"serviceName"`
        +       NamespaceName string            `json:"namespaceName"`
        +       Attributes    map[string]string `json:"attributes,omitempty"`
         }
  • VirtualNodeStatus
        type VirtualNodeStatus struct {
                // VirtualNodeArn is the AppMesh VirtualNode object's Amazon Resource Name
                // +optional
                VirtualNodeArn *string                `json:"virtualNodeArn,omitempty"`
        -       // CloudMapServiceArn is a CloudMap Service object's Amazon Resource Name
        +       Conditions     []VirtualNodeCondition `json:"conditions"`
        +       // CloudMapService is AWS CloudMap Service object's info
                // +optional
        -       CloudMapServiceArn *string `json:"cloudMapServiceArn,omitempty"`
        +       CloudMapService *CloudMapServiceStatus `json:"cloudmapService,omitempty"`
        +}
        +
        +// CloudMapServiceStatus is AWS CloudMap Service object's info
        +type CloudMapServiceStatus struct {
        +       // ServiceID is AWS CloudMap Service object's Id
                // +optional
        -       QueryParameters map[string]string      `json:"queryParameters,omitempty"`
        -       Conditions      []VirtualNodeCondition `json:"conditions"`
        +       ServiceID *string `json:"serviceId,omitempty"`
        +       // NamespaceID is AWS CloudMap Service object's namespace Id
        +       // +optional
        +       NamespaceID *string `json:"namespaceId,omitempty"`
         }

Future Work

  1. Automatic cleanup of Cloud Map services
  2. Automatic creation and management of Cloud Map namespaces
  3. Automatic registration of pod endpoints outside the scope of App Mesh

TooManyRequestsException being triggered

I've been seeing TooManyRequestsExceptions being returned from App Mesh frequently in the controller logs.

What sort of rate-limiting or back-off does the controller do? Is it configurable?

example.md: color example return 404

While trying to install the controller I received this error:
https://raw.githubusercontent.com/aws/aws-app-mesh-controller-for-k8s/v0.1.2/examples/color.yaml returns a 404

Improve experience when CloudMap Namespace doesn't exist

Currently, the Cloud Map namespace has to be created out of band from the rest of the resources. There should be an automated way to create it, whether it's a separate custom resource like the Mesh (which can be auto-created by the injector) or something else.
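A purely hypothetical sketch of what such a custom resource could look like; neither the kind nor its fields exist today, and the namespace name simply follows the Cloud Map proposal above:

apiVersion: appmesh.k8s.aws/v1beta1
kind: CloudMapNamespace
metadata:
  name: prod-namespace
spec:
  namespaceName: prod.svc.aws.local   # private DNS namespace the controller would create if missing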

The controller doesn't work for k8s 1.10 version

Logs from the controller when running on a k8s 1.10 cluster. Verified that things work after the controller stops reading from or writing to the status of the Mesh, VirtualService, and VirtualNode custom resources.

E0326 15:08:12.934550   41033 controller.go:422] error syncing 'appmesh-demo/colorgateway': error adding finalizer virtualNodeDeletion.finalizers.appmesh.k8s.aws to virtual node colorgateway-appmesh-demo: VirtualNode.appmesh.k8s.aws "colorgateway" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"4872", "generation":1, "clusterName":"", "creationTimestamp":"2019-03-26T22:08:12Z", "selfLink":"/apis/appmesh.k8s.aws/v1beta1/namespaces/appmesh-demo/virtualnodes/colorgateway", "uid":"a58952ce-5013-11e9-a932-0e2006f6ecd2", "finalizers":[]interface {}{"virtualNodeDeletion.finalizers.appmesh.k8s.aws"}, "name":"colorgateway", "annotations":map[string]interface {}{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"appmesh.k8s.aws/v1beta1\",\"kind\":\"VirtualNode\",\"metadata\":{\"annotations\":{},\"name\":\"colorgateway\",\"namespace\":\"appmesh-demo\"},\"spec\":{\"backends\":[{\"virtualService\":{\"virtualServiceName\":\"colorteller.appmesh-demo\"}}],\"listeners\":[{\"portMapping\":{\"port\":9080,\"protocol\":\"http\"}}],\"meshName\":\"color-mesh\",\"serviceDiscovery\":{\"dns\":{\"hostName\":\"colorgateway.appmesh-demo.svc.cluster.local\"}}}}\n"}, "namespace":"appmesh-demo"}, "spec":map[string]interface {}{"listeners":[]interface {}{map[string]interface {}{"portMapping":map[string]interface {}{"port":9080, "protocol":"http"}}}, "serviceDiscovery":map[string]interface {}{"dns":map[string]interface {}{"hostName":"colorgateway.appmesh-demo.svc.cluster.local"}}, "backends":[]interface {}{map[string]interface {}{"virtualService":map[string]interface {}{"virtualServiceName":"colorteller.appmesh-demo"}}}, "meshName":"color-mesh"}, "status":map[string]interface {}{"conditions":interface {}(nil)}, "kind":"VirtualNode", "apiVersion":"appmesh.k8s.aws/v1beta1"}: validation failure list:
status.conditions in body must be of type array: "null"

Error updating virtual node status

I saw this error when I had the wrong mesh name on a virtual node that had Cloud Map service discovery enabled. Once I updated the mesh name, the controller processed the virtual node and was able to update the status successfully.

I1127 20:08:07.746641       1 virtualnode.go:338] mesh doesn't exist, skipping processing virtual node ingress-demo
I1127 20:08:07.746649       1 virtualnode.go:73] skipping processing virtual node ingress-demo
I1127 20:08:17.394267       1 virtualnode.go:396] Created CloudMap service ingress (id:srv-lf6hlbmg5ejn3gbi)
I1127 20:08:17.398180       1 virtualnode.go:396] Created CloudMap service ingress (id:srv-lf6hlbmg5ejn3gbi)
E1127 20:03:27.212049       1 virtualnode.go:405] Error updating CloudMapServiceStatus on virtual node ingress: VirtualNode.appmesh.k8s.aws "ingress" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"appmesh.k8s.aws/v1beta1", "kind":"VirtualNode", "metadata":map[string]interface {}{"annotations":map[string]interface {}{"fluxcd.io/sync-checksum":"963dae00cbb6af6d3375291ababa9c38deec1774", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"appmesh.k8s.aws/v1beta1\",\"kind\":\"VirtualNode\",\"metadata\":{\"annotations\":{\"fluxcd.io/sync-checksum\":\"963dae00cbb6af6d3375291ababa9c38deec1774\"},\"labels\":{\"app\":\"ingress\",\"fluxcd.io/sync-gc-mark\":\"sha256.otpXUEaBsAfLJzV4bJ8zidULKrdkLz5IyeNbYc_Qtbo\"},\"name\":\"ingress\",\"namespace\":\"demo\"},\"spec\":{\"backends\":[{\"virtualService\":{\"virtualServiceName\":\"podinfo.demo\"}}],\"listeners\":[{\"portMapping\":{\"port\":444,\"protocol\":\"http\"}}],\"logging\":{\"accessLog\":{\"file\":{\"path\":\"/dev/stdout\"}}},\"meshName\":\"appmesh\",\"serviceDiscovery\":{\"cloudMap\":{\"namespaceName\":\"aws-eks-appmesh-cloudmap-demo-dns\",\"serviceName\":\"ingress\"}}}}\n"}, "creationTimestamp":"2019-11-27T18:49:01Z", "finalizers":[]interface {}{"virtualNodeDeletion.finalizers.appmesh.k8s.aws"}, "generation":1, "labels":map[string]interface {}{"app":"ingress", "fluxcd.io/sync-gc-mark":"sha256.otpXUEaBsAfLJzV4bJ8zidULKrdkLz5IyeNbYc_Qtbo"}, "name":"ingress", "namespace":"demo", "resourceVersion":"64963", "uid":"93d9f1fb-1146-11ea-817a-0e7d332047b0"}, "spec":map[string]interface {}{"backends":[]interface {}{map[string]interface {}{"virtualService":map[string]interface {}{"virtualServiceName":"podinfo.demo"}}}, "listeners":[]interface {}{map[string]interface {}{"portMapping":map[string]interface {}{"port":444, "protocol":"http"}}}, "logging":map[string]interface {}{"accessLog":map[string]interface {}{"file":map[string]interface {}{"path":"/dev/stdout"}}}, "meshName":"appmesh", "serviceDiscovery":map[string]interface {}{"cloudMap":map[string]interface {}{"namespaceName":"aws-eks-appmesh-cloudmap-demo-dns", "serviceName":"ingress"}}}, "status":map[string]interface {}{"cloudmapService":map[string]interface {}{"namespaceId":"ns-3rfrhbcoee5ezeo6", "serviceId":"srv-lf6hlbmg5ejn3gbi"}, "conditions":interface {}(nil)}}: validation failure list:
status.conditions in body must be of type array: "null"

I don't think the controller should process Cloud Map service discovery if it's not processing the virtual node because the mesh custom resource doesn't exist (in this case, the mesh existed in App Mesh, but not in the cluster, because I hadn't updated the virtual node custom resource from a different deployment). It seems that this happens because the Cloud Map reconciliation is done in a different place:

func (c *Controller) reconcileServices(ctx context.Context) error {

Reflect App Mesh object state in status

In order to clean up objects in App Mesh when custom resources are deleted (#10), we need to reflect those objects' state in the status field of the custom resources. App Mesh resources currently have their own Status field, which can be ACTIVE, INACTIVE, or DELETED. My current thought is that we can have one of:

  • a condition for each of these possible states (each condition value can be True, False, or Unknown),
  • or a top level appMeshStatus which just reflects the App Mesh status.

I am leaning towards using a condition for each state, because although it's a bit of a strange translation, it reflects the direction the Kubernetes API is moving (conditions instead of things like Pod phase). It's also easier to add conditions without breaking client assumptions if the App Mesh API changes in the future or we want to add other similar states.

Distinguish Router port from Node port

Today, in the custom resource definition for a virtual service we define routes and a virtual router name, but do not support pass-through of router listeners. This makes it difficult for users to create a 1-1 correspondence between their k8s services/pods and their App Mesh VirtualService+VirtualRouter/VirtualNodes when the service port differs from the container port.

It would make sense to extend the spec of a control Virtual Service to something like

...
      properties:
        spec:
          properties:
            meshName:
              type: string
            virtualRouter:
              type: object
              properties:
                name:
                  type: string
            listeners:
              type: array
              items:
                type: object
                properties:
                  portMapping:
                    properties:
                      port:
                        type: integer
                      protocol:
                        type: string
                        enum:
                          - tcp
                          - http
                          - grpc
                          - http2
                          - https
            routes:
...

which could then be passed through to VirtualRouter create/update calls. It would also make sense, in a new revision of the controller resource spec, to make this field required, as users may find the existing behavior surprising.
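A concrete manifest under the proposed schema might look like the following sketch (all names and ports are illustrative; field placement mirrors the property sketch above):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: myapp.prod.svc.cluster.local
  namespace: prod
spec:
  meshName: my-mesh
  virtualRouter:
    name: myapp-router
  listeners:                   # proposed pass-through to the VirtualRouter listener
    - portMapping:
        port: 80               # port clients use, decoupled from the node's container port
        protocol: http
  routes:
    - name: default-route
      http:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeName: myapp-v1
              weight: 1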

Some context on why/how things work as currently implemented.

To be backwards compatible with resources from the preview API that did not define router listeners, App Mesh allows listener-less virtual routers to operate in a "legacy" mode for HTTP routes. Specifically, for each route we merge the listener ports of the targeted virtual nodes and use that as the router listener port for that route. This has some notable drawbacks:

  1. A user is unable to decouple changes between their node ports and their router port
  2. If the target virtual nodes behind a route disagree on their listener port, App Mesh ends up dropping the route altogether

With a router listener port defined, disagreement between node ports is abstracted from customers, and an update to the node port doesn't impact the port clients use for making requests.

Additionally, TCP routes enforce the presence of the listener because they did not exist in the preview API.

Multi-cluster multi-VPC mesh support

To accommodate a mesh that spans across clusters I see two options:

  1. Allow virtual nodes/services to refer to a mesh that's not defined in the Kubernetes cluster but exists in App Mesh
  2. Discover existing meshes and create "readonly" Kubernetes objects

For both approaches handling the mesh deletion is problematic.

Finalizers deadlock

If I try to delete a virtual node that uses a mesh from another namespace, the operator errors out with error syncing 'test/frontend-test': failed to clean up virtual node frontend-test during deletion finalizer: BadRequestException: MeshName must match ^[a-zA-Z0-9\-_]+$.. After that, the App Mesh objects deadlock, preventing any operations. Deleting the CRDs will not work either, since the CR finalizer blocks it; you basically end up with a broken cluster.

Panic when creating mesh and virtual nodes at the same time

When creating a mesh and virtual nodes at the same time, it takes a while for the mesh to become active and the operator panics. I would label this as a bug, since the mesh controller should wait for the mesh to become active rather than crash the whole process.

I0313 12:52:26.429780   34169 controller.go:208] Mesh Added
I0313 12:52:26.633567   34169 controller.go:244] Mesh Updated
I0313 12:52:30.465397   34169 controller.go:260] Virtual Node Added
E0313 12:52:30.668255   34169 controller.go:370] error syncing 'test/backend': mesh global must be active for virtual node backend
I0313 12:52:30.668744   34169 controller.go:268] Virtual Node Updated
E0313 12:52:30.670646   34169 controller.go:370] error syncing 'test/backend': mesh global must be active for virtual node backend
I0313 12:52:31.200551   34169 controller.go:260] Virtual Node Added
E0313 12:52:31.401782   34169 controller.go:370] error syncing 'test/backend-primary': mesh global must be active for virtual node backend-primary
I0313 12:52:31.401984   34169 controller.go:268] Virtual Node Updated
E0313 12:52:31.402028   34169 controller.go:370] error syncing 'test/backend-primary': mesh global must be active for virtual node backend-primary
E0313 12:52:31.636186   34169 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:76
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/asm_amd64.s:522
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:82
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/signal_unix.go:390
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/aws/appmesh.go:69
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/mesh.go:44
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:308
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:359
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:367
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:325
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:308
/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:164
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.11.4/libexec/src/runtime/asm_amd64.s:1333
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1788c1e]

goroutine 59 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x108
panic(0x1f6b340, 0x2e1a3d0)
	/usr/local/Cellar/go/1.11.4/libexec/src/runtime/panic.go:513 +0x1b9
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/aws.(*Cloud).GetMesh(0xc000322630, 0x2316180, 0xc0001fe360, 0xc0004a2ff0, 0x6, 0x0, 0x0, 0x0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/aws/appmesh.go:69 +0x18e
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).handleMesh(0xc0003409a0, 0xc0004a3070, 0xb, 0x18, 0xc0004c6d18)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/mesh.go:44 +0x226
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).handleMesh-fm(0xc0004a3070, 0xb, 0xc00041ad80, 0x1ef49e0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:308 +0x3e
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.processNextWorkItem.func1(0x23263e0, 0xc00041ad80, 0xc0004c6e78, 0x1ef49e0, 0xc00066ade0, 0x0, 0x0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:359 +0xd5
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.processNextWorkItem(0x23263e0, 0xc00041ad80, 0xc0004c6e78, 0x0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:367 +0x75
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).processNext(0xc0003409a0, 0x23263e0, 0xc00041ad80, 0xc0004c6e78, 0xc0004c6ea0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:325 +0x3f
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).meshWorker(0xc0003409a0)
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:308 +0x68
github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).meshWorker-fm()
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:164 +0x2a
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0005f8df0)
	/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005f8df0, 0x3b9aca00, 0x0, 0x1, 0x0)
	/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xbe
k8s.io/apimachinery/pkg/util/wait.Until(0xc0005f8df0, 0x3b9aca00, 0x0)
	/Users/aleph/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller.(*Controller).Run
	/Users/aleph/go/src/github.com/aws/aws-app-mesh-controller-for-k8s/pkg/controller/controller.go:164 +0x2ff

Helm chart

For convenience, it would be nice to have a Helm chart that takes care of deploying the controller and manages other lifecycle-related tasks.

Cluster wide mesh object

In the current implementation a mesh object is namespaced. A Kubernetes user would assume that a mesh can be created in different namespaces with the same name, but App Mesh has no namespace concept. To avoid confusion, I would change the mesh CRD to be cluster-wide or use the namespace as a suffix to avoid name collisions.
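A sketch of the cluster-wide option, assuming the existing group and the apiextensions v1beta1 CRD API in use at the time:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: meshes.appmesh.k8s.aws
spec:
  group: appmesh.k8s.aws
  version: v1beta1
  scope: Cluster        # cluster-scoped instead of Namespaced, so mesh names cannot collide across namespaces
  names:
    kind: Mesh
    plural: meshes
    singular: mesh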

Instances are not removed from Cloud Map when deleting the virtualNode

If you create a VirtualNode in Kubernetes with spec.serviceDiscovery.cloudMap, the controller registers the pods' IP addresses within the Cloud Map namespace, but when you delete the VirtualNode the pods remain registered in the Cloud Map namespace.

I was using this VirtualNode spec:

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: nodejs-app
  namespace: appmesh-workshop-ns
spec:
  meshName: appmesh-workshop
  listeners:
    - portMapping:
        port: 3000
        protocol: http
  serviceDiscovery:
    cloudMap:
      namespaceName: appmeshworkshop.pvt.local
      serviceName: nodejs

The way to get the instances deregistered from Cloud Map was to use the Cloud Map API:

SERVICE_ID=$(aws servicediscovery list-services --filters Name="NAMESPACE_ID",Values=$NAMESPACE,Condition="EQ" | jq -r ' .Services[] | [ .Id ] | @tsv ' )
aws servicediscovery list-instances --service-id $SERVICE_ID | jq -r ' .Instances[] | [ .Id ] | @tsv ' |\
  while IFS=$'\t' read -r instanceId; do 
    aws servicediscovery deregister-instance --service-id $SERVICE_ID --instance-id $instanceId
  done

AppMesh naming conventions

Virtual nodes and virtual services could be referenced using the Kubernetes naming convention name.namespace.

Assuming the webhook injects the virtual node name as controllerName-namespace, a virtual service for A/B testing across namespaces would look like:

apiVersion: appmesh.k8s.aws/v1alpha1
kind: VirtualService
metadata:
  name: frontend # appmesh name: frontend.prod.svc.cluster.local
  namespace: prod
spec:
  meshName: global.appmesh-system
  virtualRouter:
    name: frontend-router
  routes:
    - name: frontend-route
      http:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeName: frontend.prod # appmesh name: frontend-prod
              weight: 50
            - virtualNodeName: frontend.test # appmesh name: frontend-test
              weight: 50
---
apiVersion: appmesh.k8s.aws/v1alpha1
kind: VirtualNode
metadata:
  name: frontend # appmesh name: frontend-test
  namespace: test
spec:
  meshName: global.appmesh-system
  listeners:
    - portMapping:
        port: 9898
        protocol: http
  serviceDiscovery:
    dns:
      hostName: frontend.test.svc.cluster.local

The virtual service will resolve frontend.prod to frontend.prod.svc.cluster.local:

apiVersion: appmesh.k8s.aws/v1alpha1
kind: VirtualNode
metadata:
  name: ingress
  namespace: appmesh-system
spec:
  meshName: global.appmesh-system
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  serviceDiscovery:
    dns:
      hostName: ingress.appmesh-system.svc.cluster.local
  backends:
    - virtualService:
        virtualServiceName: frontend.prod # appmesh name: frontend.prod.svc.cluster.local

Fails to create a VirtualService with TCP weightedTargets

TCP routes are ignored.

Logs

...
I0614 09:11:37.120839       1 controller.go:320] Virtual Node Updated
I0614 09:11:37.121324       1 controller.go:320] Virtual Node Updated
E0614 09:11:37.127253       1 controller.go:422] error syncing 'default/myapp': error updating resource while adding finalizer virtualServiceDeletion.finalizers.appmesh.k8s.aws to virtual service myapp: VirtualService.appmesh.k8s.aws "myapp" is invalid: ... "kind":"VirtualService"}: validation failure list:
spec.routes.http.action.weightedTargets in body must be of type array: "null"
I0614 09:11:37.130980       1 controller.go:212] Pod Updated
I0614 09:11:37.130996       1 controller.go:212] Pod Updated
...

Config

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  meshName: mymesh
  routes:
    - name: default-route
      tcp:
        action:
          weightedTargets:
            - virtualNodeName: myapp-v1
              weight: 1
            - virtualNodeName: myapp-v2
              weight: 1

Note: This works as expected with an http route.

Update: The validation error seems to be because Route's HttpRoute in apis/appmesh/v1beta1/types.go does not have omitempty.

Proposal: Add Support to IAM Roles For Service Account - EKS

Right now I am adding the policy below to the EKS worker nodes so they can work with App Mesh:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "servicediscovery:DeregisterInstance",
                "route53:CreateHealthCheck",
                "route53:ChangeResourceRecordSets",
                "route53:GetHealthCheck",
                "route53:UpdateHealthCheck",
                "route53:DeleteHealthCheck",
                "servicediscovery:RegisterInstance",
                "servicediscovery:GetService",
                "appmesh:*",
                "servicediscovery:CreateService",
                "servicediscovery:ListInstances"
            ],
            "Resource": "*"
        }
    ]
}

AWS EKS, the AWS SDK, and Cluster Autoscaler already support IAM Roles for Service Accounts.

Is it possible to add support to the App Mesh controller?
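With IAM Roles for Service Accounts, a policy like the one above could be attached to a dedicated IAM role and referenced from the controller's service account instead of the worker node role; a sketch (the service account name, namespace and role name are assumptions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-mesh-controller
  namespace: appmesh-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/app-mesh-controller   # IRSA role carrying the App Mesh / Cloud Map permissions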

Support Logging File Path in VirtualNode configuration

This currently works by setting it manually from the AWS console after a virtual node is created by the controller, e.g.:

Logging
HTTP access logs path: /dev/stdout

Ideally, I would like to be able to specify this from the K8s CRDs.
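The error output from another issue on this page shows a virtual node already carrying a logging block, so a spec along these lines is roughly what is being asked for (names are illustrative):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: myapp
  namespace: prod
spec:
  meshName: my-mesh
  listeners:
    - portMapping:
        port: 9080
        protocol: http
  logging:
    accessLog:
      file:
        path: /dev/stdout   # HTTP access logs path
  serviceDiscovery:
    dns:
      hostName: myapp.prod.svc.cluster.local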

Proposal: Add injector CRD

The injector configuration could be done dynamically via a Kubernetes custom resource. This would allow different configurations without installing the webhook in each namespace.

Example:

apiVersion: appmesh.k8s.aws/v1beta1
kind: Injector
metadata:
  name: envoy-x-ray
  namespace: prod
spec:
  meshName: my-mesh
  routing:
    version: "v1.0.0" # iptables init container tag
    ignoredIPs: "169.254.169.254"
  proxying:
    version: "v1.9.0.0-prod" # Envoy sidecar container tag
    logLevel: info
    egress:
      - "*.amazonaws.com" # external 
      - "*.my-namespace.svc.cluster.local" # internal
  tracing:
    enabled: true
    version: "v1.0.0" # X-Ray sidecar container tag
    sampling: 0.20 # percentage of trace sampling range (0.00 - 100.00)

Add optional limits to injected Containers

We deploy ResourceQuotas in each of our application namespaces in order to force teams to put memory/CPU requests on all of their pods. As a result, our injected pods cannot start, because the proxyinit initContainer does not specify any resource requests or limits. We request that the init container have resource requests and limits specified.
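Roughly, the request is for the injected proxyinit initContainer to come out with a standard resources stanza like the sketch below (the values are illustrative, not controller defaults):

  initContainers:
    - name: proxyinit           # injected init container referenced in this issue
      resources:
        requests:
          cpu: 10m
          memory: 32Mi
        limits:
          cpu: 100m
          memory: 64Mi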

Support Kubernetes DNS resolver

App Mesh virtual services should behave similarly to the Kubernetes DNS resolver. If I create a virtual service called backend in the test namespace, the backend app should be reachable inside the mesh at the following addresses (a sketch of such a virtual service follows the list):

  • http://backend (if the caller is in the same namespace as the virtual service)
  • http://backend.test
  • http://backend.test.svc.cluster.local
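A sketch of the virtual service in question, assuming the v1beta1 schema used elsewhere on this page; the expectation is that all three addresses above resolve to it:

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: test
spec:
  meshName: my-mesh
  virtualRouter:
    name: backend-router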

Reusing a VirtualNode name results in stale Envoy configuration

Summary

When deleting and re-creating a VirtualNode with the same name (e.g. my-virtual-node under a Mesh named my-mesh), an Envoy identified as that VirtualNode name (e.g. mesh/my-mesh/virtualNode/my-virtual-node) which connects shortly after re-creating the VirtualNode may receive the previous VirtualNode's configuration.

This typically happens when you repeatedly start and tear down your application with the same names.

The App Mesh team is working on a solution for this bug that will periodically check for this state and ask the Envoy to reconnect to receive updated configuration.

Related App Mesh bugs.
aws/aws-app-mesh-roadmap#49
aws/aws-app-mesh-roadmap#50

Add Envoy readiness probe

Envoy cluster warming takes at least one second, and all requests received during bootstrap will fail. We should prevent Kubernetes from registering the pod as ready until Envoy is, using a readiness probe that checks the Envoy state, e.g. curl -s http://localhost:9901/server_info | grep state | grep -q LIVE
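A sketch of such a probe on the injected Envoy container, using the check suggested above (the timing values are illustrative):

      readinessProbe:
        exec:
          command:
            - sh
            - -c
            - curl -s http://localhost:9901/server_info | grep state | grep -q LIVE
        initialDelaySeconds: 1   # allow for cluster warming
        periodSeconds: 5
        failureThreshold: 3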

Mesh custom resource can't be deleted if not owned

When using a mesh that isn't owned by the controller (we only create the mesh if it doesn't already exist), the controller still assumes it owns the mesh, and we cannot delete the mesh custom resource unless we can delete everything inside the mesh.

We should allow the mesh to be used by the controller, but not owned. This means the controller should not try to delete the mesh when the mesh custom resource is deleted.

I propose adding:

apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
  name: my-mesh
spec:
  ownership: shared

Update Active to False in more situations

The Active condition for each resource is created and set to true when the resource is created. Additionally, it is updated to false when a describe call returns "Deleted" or "Inactive". However, when the Mesh custom resource is deleted, which in turn deletes all resources in the mesh from the App Mesh API, we do not mark the statuses as false, because resource processing quits early due to this check in the virtual node code (a similar one exists for virtual services):

	if processVNode := c.handleVNodeMeshDeleting(ctx, vnode); !processVNode {
		klog.Infof("skipping processing virtual node %s", vnode.Name)
		return nil
	}

There are probably more cases where we should update the condition.
