
admiral's People

Contributors

aattuluri, adilfulara, asushanthk, frankmariette, gaopan233, josephpeacock, jwebb49, kpharasi, levaitamas, mengying-li, nickborysov, nirvanagit, pnovotnak, rdkr, saradhis, shriramsharma, sivathiru1, vinay-g, vrushalijoshi, wejson, yaron-idan


admiral's Issues

[BUG] Always route to local instance of a service from istio-ingressgateway

Describe the bug
When a service name is accessed in a cluster, it's possible for it to bounce between clusters.

Steps To Reproduce
Deploy Service 1 to Cluster 1 and Cluster 2
Deploy Service 2 to Cluster 2
Make a call from Service 2 to Service 1 using global name created by admiral
Notice that some of the calls from Service 2 to Service 1 take a very long time. This happens because, at the istio-ingressgateway, a call can be routed back to the cluster the request originated from, since that cluster is also a possible destination for the service.

Expected behavior
Service calls using global names should not bounce between clusters.

Proposed solution:
Create a virtual service that always routes calls to the local instance of a service and attach it to istio-ingressgateway gateway
Example:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: default.greeting.global-default-vs
  namespace: admiral-sync
spec:
  exportTo:
  - '*'
  gateways:
  - istio-multicluster-ingressgateway
  hosts:
  - default.greeting.global
  http:
  - route:
    - destination:
        host: greeting.sample.svc.cluster.local
        port:
          number: 8080

[FEATURE] Istio version compatibility

After reading the Admiral docs and code, I see that it mainly auto-generates ServiceEntry and DestinationRule resources for routing, and I noticed that the created DestinationRule contains the newer distribute field, which decides the locality weighting. So how can Admiral stay compatible with older Istio versions (like v1.2.3) that do not have the distribute field at all?

[FEATURE] Support new CRD group admiralproj.io

Is your feature request related to a problem? Please describe.
Migrate the CRD group from admiral.io to admiralproj.io

Describe the solution you'd like
The Dependency and GlobalTrafficPolicy CRDs should use the new group admiralproj.io

Also make sure the fake client generated for these new CRDs works as expected.

[BUG] SE created for a service in its own cluster has no remote endpoints

Describe the bug
When an SE is created for a deployment/service in its own cluster while another instance runs remotely, the SE has only one endpoint

Steps To Reproduce
Create a deployment 1 in two clusters (cluster 1 and cluster 2)
Create a dependent deployment 2 in cluster 1
Create a dependency for deployment 2 on deployment 1
See that SE for deployment 1 has only one endpoint pointing to cluster 1

Expected behavior
SE for deployment 1 has i) one local endpoint pointing to cluster 1 and ii) one remote endpoint pointing to cluster 2

[FEATURE] GlobalTrafficPolicy differentiation by URI (Request Path)

Is your feature request related to a problem? Please describe.
App A calls app B with locality-aware routing (A in us-east to B in us-east, A in us-west to B in us-west). I want to shift 50% of app A's traffic in us-east over to B in us-west, but only for requests whose URL prefix is /api/v1.

Describe the solution you'd like
A way to express this with a GlobalTrafficPolicy, i.e. traffic shifting differentiated by request path (URI prefix). How can this be achieved? Thanks!
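For illustration, the split described above could be expressed directly in Istio roughly as follows, assuming app B's global host is default.b.global and that a DestinationRule defines us-east and us-west subsets for it (all of these names are hypothetical). The feature request is for GlobalTrafficPolicy to generate equivalent configuration based on the request path:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: default.b.global-api-v1-shift
spec:
  hosts:
  - default.b.global
  http:
  - match:
    - uri:
        prefix: /api/v1
    route:
    - destination:
        host: default.b.global
        subset: us-east
      weight: 50
    - destination:
        host: default.b.global
        subset: us-west
      weight: 50
  - route:
    - destination:
        host: default.b.global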

[FEATURE] Creation of ServiceEntry to all watched clusters by default

Is your feature request related to a problem? Please describe.
While the creation of the Dependency CR is optional, we need to provide a default behavior in order to make inter-cluster service communication feasible. This will enable a service that needs to call other services in another cluster to do so without adding any configuration.

Describe the solution you'd like
ServiceEntry of service to be created in all watched clusters by default

[BUG] Route to the in-cluster service/pod by default

Describe the bug
If there are multiple deployments running, in multiple clusters, with the same Identity, in the same locality, Admiral currently creates a service entry with endpoints pointing to every single one. This weights the cross-cluster calls equally to the in-cluster calls.

Steps To Reproduce
Spin up two clusters in the same AWS region and deploy the same app into both. Examine the service entry, and you'll see one endpoint pointing to *.svc.cluster.local, and one to the ingress-gateway load balancer DNS name.
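For reference, the service entry described above ends up with roughly the following endpoints; hostnames, port numbers, and locality below are illustrative (not actual admiral output), and because both endpoints carry the same locality they are weighted equally:

spec:
  hosts:
  - default.greeting.global
  location: MESH_INTERNAL
  resolution: DNS
  endpoints:
  - address: greeting.sample.svc.cluster.local
    locality: us-west-2
    ports:
      http: 8080
  - address: abc123.elb.us-west-2.amazonaws.com
    locality: us-west-2
    ports:
      http: 15443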

Expected behavior
If a destination is present in the same cluster as the source, Admiral's default behavior should be to always route the request to the local destination, regardless of other possible destinations with the same identity.

Possibly related to #62

[BUG] Service entry needs to be cleaned up after annotation update

Describe the bug
When admiral ignores a service that was previously synced, the service entry does not get updated in the admiral-sync namespace.

Steps To Reproduce
Pick a service that is currently not ignored by admiral (the admiral.io/ignore annotation is false). Update the annotation to true; admiral now stops syncing this service, but the corresponding service entry does not get updated in the admiral-sync namespace.
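For context, this is the kind of annotation flip involved (the deployment name is hypothetical; the annotation is the admiral.io/ignore one referenced above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
  namespace: sample
  annotations:
    admiral.io/ignore: "true"   # flipped from "false"; admiral stops syncing, but the stale SE remains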

Expected behavior
An update to the deployment annotation should update the service entry accordingly

[BUG] curl: (6) Could not resolve for sample application

Describe the bug
After deploying the sample service in the cluster and running the command
"kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global" throws following error:
curl: (6) Could not resolve host: default.greeting.global
Steps To Reproduce
A k8s cluster running in AWS with Istio version 1.5.2. Installed admiral, followed the steps up to installing the sample app, and ran the above command.
Expected behavior
The deployed sample app should be resolved by admiral.

Unable to proceed further with multi-cluster deployment as the above steps are not resolving the sample app. Any help much appreciated!

[BUG] Does not work well with Kubernetes 1.16 and Istio 1.4.3 deployed via Rancher

Describe the bug
After setting up the sync step by step from the README, the connection between clusters is not working.
curl only gets 503 (upstream connect error or disconnect/reset before headers).
If I try to restart the ingressgateway pod, the gateway gets stuck at
Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?)
and won't start up successfully until I remove the ServiceEntry configs created by Admiral. It looks like some kind of config conflict or corruption, but I can't find any useful logs from the other Istio components. This problem occurred on both of my clusters at the same time.
An Istio mesh spanning multiple clusters does work if I set it up manually following the official Istio documentation.
In the end I could only remove Admiral and its auto-created configs and use the manual configuration approach to set up those services. Still looking forward to further development of Admiral; this project looks great.

Other notes:
Admiral crashed several times during startup; sorry, I forgot to save the logs. Might this be related to the Kubernetes version? I am wondering which Kubernetes version Admiral is currently developed against. It looks like Admiral does not really work well with Kubernetes 1.16.

Steps To Reproduce
Not sure if this can be reproduced in a similar environment
Cluster 1 rancher deployed kubernetes 1.16.6
Cluster 2 azure aks 1.16.4
All with rancher istio 1.4.3
Deployed sample from README

Expected behavior
The deployed sample should work as expected.

[FEATURE] AWS EKS support

Hi!
As you know, for Amazon EKS a kubeconfig alone is not enough to get access to the cluster; AWS credentials and the aws-iam-authenticator binary are needed as well.

Describe the solution you'd like
Maybe it's a misunderstanding, but it looks like there is currently no option to connect a bare EKS cluster to admiral using the standard EKS auth scheme.
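For reference, this is a minimal sketch of the standard EKS kubeconfig user entry, which relies on an exec credential plugin rather than a static token (cluster and user names are hypothetical):

users:
- name: eks-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-eks-cluster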

[FEATURE] DOCS: Documentation site

Is your feature request related to a problem? Please describe.
Docs are mostly in the README. It would be good to separate the documentation for easy access and organization.

Describe the solution you'd like
Documentation similar to https://argoproj.github.io/argo-cd/ would be good. This could also host the Admiral API documentation.

[FEATURE] Add ability to customize name suffix and identity label

Is your feature request related to a problem? Please describe.
Currently, admiral generates names that end in .global (e.g. stage.greeting.global), which works for the default Istio installation but not if we want to customize it to something like .mymesh (e.g. stage.greeting.mymesh).

Also, greeting comes from the identity label on the service and deployment. Short term, we can allow this to be parameterized.

Describe the solution you'd like
The name suffix should be configurable with a --name-suffix startup command option to admiral, e.g. --name-suffix mymesh
The identity label should be configurable with --identity-label
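A sketch of how the proposed flags could be passed to the admiral deployment (flag names as proposed above; values are examples only):

containers:
- name: admiral
  image: docker.io/admiralproj/admiral:latest
  args:
  - --name-suffix
  - mymesh
  - --identity-label
  - app-identity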

Document admiral features support for different Istio versions

Currently there isn't any documentation on which Admiral features are supported for a given Istio version. While we will always stay on the latest Istio API, the cluster admiral operates on might be running an older Istio version and a feature might not work as expected.

[FEATURE] Add support for dependency update

Is your feature request related to a problem? Please describe.
Currently, when a dependency is updated, it doesn't take effect

Describe the solution you'd like
A dependency update should be handled, similar to a dependency add.

Automate admiral releases for new tags

Currently, creating tags isn't triggering a release build in circle ci.

We need Makefile to do the following when a new_tag is created:

i) Publish an image whenever a new tag (say new_tag) is created
ii) Package an artifact with the output of make gen-yaml into admiral-install-{new_tag}.tar.gz
iii) Create a release with the name new_tag and attach artifact from ii) to this release
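A minimal sketch of the kind of CircleCI workflow filter needed so that pushing a new tag triggers the release jobs (the job name and tag pattern are assumptions):

workflows:
  version: 2
  release:
    jobs:
      - publish-release:
          filters:
            tags:
              only: /^v.*/
            branches:
              ignore: /.*/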

[BUG] demo deploy RBAC role is incomplete

Describe the bug
Following the instructions from the README (via the admiral-install-v0.1-beta.tar.gz) to deploy the single-cluster demo on Kubernetes 1.16.6 and Istio 1.4.3 (deployed via Rancher), admiral keeps reporting the error below.

2020-02-07T08:35:50.643231Z info Waiting for informer caches to sync
2020-02-07T08:35:50.647657Z warn Failed to refresh configmap state Error: configmaps "se-address-configmap" is forbidden: User "system:serviceaccount:admiral:admiral" cannot get resource "configmaps" in API group "" in the namespace "admiral-sync"
2020-02-07T08:35:50.647701Z info getting kubeconfig from: ""
ERROR: logging before flag.Parse: W0207 08:35:50.647710 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-02-07T08:35:50.648535Z info Initializing default secret resolver
2020-02-07T08:35:50.648547Z info Setting up event handlers
...
...
...
2020-02-07T08:35:50.852004Z info op=Event type=service name=expose-operator-metrics cluster= message=Received, doing nothing
2020-02-07T08:35:50.852777Z error Could not get unique address after 3 retries. Failing to create serviceentry name=default.webapp.global
2020-02-07T08:35:50.852832Z info op=GetMeshPorts type=service name=webapp cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:50.852848Z info op=Event type=deployment name=greeting cluster= message=Received
2020-02-07T08:35:50.852854Z info op=GetMeshPorts type=service name=greeting cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:50.853501Z error Could not get unique address after 3 retries. Failing to create serviceentry name=default.greeting.global
2020-02-07T08:35:50.853516Z info op=GetMeshPorts type=service name=greeting cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:51.776399Z info op=Event type=add name=konga cluster=enmd message=No dependent clusters found
2020-02-07T08:35:51.776579Z info op=Event type=add name=test cluster=enmd message=No dependent clusters found

Steps To Reproduce
Simply follow the instructions in the README

Expected behavior
Admiral should successfully generate the ServiceEntry.

Note
I fixed it by adding an RBAC role myself. I'm not sure what the best-practice scope is, so I gave admiral full permission on configmaps in admiral-sync.
This problem might come from this PR, which lost the RBAC part:
#42

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admiral-configmap-role
  namespace: admiral-sync
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - create
  - update
  - delete
  - patch
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admiral-configmap-role-binding
  namespace: admiral-sync
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admiral-configmap-role
subjects:
- kind: ServiceAccount
  name: admiral
  namespace: admiral

And here is another problem:

if err != nil {
    log.Errorf("Could not get unique address after %v retries. Failing to create serviceentry name=%v", maxRetries, globalFqdn)
    return nil
}

It ignores the source err and only reports that it could not get a unique address, which makes it harder to find out what actually happened. It should log the source error to point out the real problem.

Example Service Entry load balancing issue and mTLS connection

Describe the bug
This is not a bug for usage of admiral per se. I am following the docs: https://istio.io/latest/blog/2020/multi-cluster-mesh-automation/ to understand the idea behind Admiral but encountered the following issue:

  1. After creating the service entry:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-test
  namespace: caas-sentinel
spec:
  endpoints:
  - address: sample-app.caas-sentinel.svc.cluster.local
    locality: jpe1/jpe1b
    ports:
      http: 80
  hosts:
  - productpage.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  addresses:
  - 240.0.0.10
  resolution: DNS

the Envoy configuration does not look correct to me:

         "lb_endpoints": [
          {
           "endpoint": {
            "address": {
             "socket_address": {
              "address": "sample-app.caas-sentinel.svc.cluster.local",
              "port_value": 80
             }
            }
           },
           "load_balancing_weight": 1
          }
         ],
         "load_balancing_weight": 1
        }

After I changed the resolution to STATIC with actual pod IPs, the configuration looks correct and the sidecar proxy does direct pod load balancing. I am not sure whether this is a bug (probably in Istio) or by design, but it would be great if someone could help confirm.

The second issue is that, with the same service entry above, the mTLS connection to sample-app.caas-sentinel does not work. I get the error: upstream connect error or disconnect/reset before headers. reset reason: connection termination.

Steps To Reproduce
Istio 1.6
Create above service entry
Turn on target remote service mTLS as shown here: https://istio.io/latest/docs/tasks/security/authentication/authn-policy/

Expected behavior
sidecar proxy should do the pod load balancing instead of calling the service FQDN directly.
mTLS should work with local service.

Thanks a lot for your help!

Missing name in service port

If the name field is missing in the service port definition then the service entry is generated without a port name and is invalid.

Example

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    identity: nginx
  name: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
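For reference, naming the port (with a protocol prefix per Istio's port naming convention) avoids the invalid service entry:

spec:
  ports:
    - name: http
      port: 80
      targetPort: 80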

[BUG] Infer ServiceEntry ports from destination service

The serviceEntry should add ports based on those present in the service it will be pointing to. Currently it is hardcoded to default http ports (and named http).

Thanks to Martin Baillie for the report!

for those at Intuit more familiar with the code, I was thinking these lines should not be hardcoded and should instead be inferred from the respective Service ports and port names?
https://github.com/istio-ecosystem/admiral/blob/master/admiral/pkg/clusters/serviceentry.go#L84-L85
since my Service ports are named grpc I am being bitten by: https://istio.io/docs/reference/config/networking/service-entry/#ServiceEntry-Endpoint

[BUG] Question: How does cross-cluster load balancing work?

Describe the bug
I followed the multi-cluster setup sample to set up my AKS clusters.

Steps To Reproduce
Simply followed the steps in the example. I use Istio 1.15.1, k8s 1.15.10, and admiral 0.9.

Expected behavior
After completing the steps, I expect to get "Hello World! - Admiral!!" as well as the greeting from the remote cluster.

Actual behavior
It always comes back from the local cluster, so I only see "Hello World! - Admiral!!". I also added a GTP, but it is still the same.

ServiceEntry is created as expected with correct values.
How do I troubleshoot?

[BUG] VirtualService configuration doesn't work for admiral CNAMEs when using Sidecar resource

Describe the bug
Currently, admiral updates the Istio Sidecar custom resource if configured. This only imports the cluster-local endpoint and not the admiral-generated CNAME, so a VirtualService created for a service by the namespace owner is not imported into the client's namespace.

Steps To Reproduce

  • Start admiral with Sidecar updates enabled
  • Create two workloads (workload A and workload B) with sidecar injection enabled in two different namespaces
  • Create VirtualService with admiral generated host in workload B's namespace
  • See that the VirtualService configuration doesn't work when workload A calls workload B.

Expected behavior
The VirtualService configuration is imported when using the Istio Sidecar custom resource for admiral-generated CNAMEs.
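For illustration, this is the kind of Sidecar egress configuration that would import the admiral-generated host configuration into workload A's namespace; the namespace names are hypothetical, and admiral-sync is the namespace where admiral writes its generated resources, as seen in other issues here:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: workload-a-ns
spec:
  egress:
  - hosts:
    - "./*"              # services local to workload A's namespace
    - "workload-b-ns/*"  # workload B's namespace, where its VirtualService lives
    - "admiral-sync/*"   # namespace holding admiral-generated configuration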

[BUG] Low Grade Memory Growth in Cache Controller Refresh

Describe the bug
The updateCacheController hook is designed to delete and refresh the cache controllers for all tracked remote clusters. Right now, the goroutines allocated for the cache controllers aren't being released when this happens. Instead, they're being parked (https://golang.org/src/runtime/proc.go). Work is needed to figure out why these goroutines are being parked and resolve that issue. Made #123 to address the symptoms, but needs work to identify and fix the root cause.

Steps To Reproduce

Run admiral with at least one remote cluster (the more there are, the faster the leak) and pprof enabled, and take goroutine dumps periodically. You will see large (and increasing) numbers of parked goroutines and a corresponding memory increase.

Expected behavior
Admiral is able to refresh its controllers without seeing a permanent increase in parked goroutines or memory usage.

[FEATURE] Support for a generic identifier label for creating service names

Is your feature request related to a problem? Please describe.
Admiral uses the identity label on k8s services and deployments as a hardcoded value (instead of using the identityLabel field in the dependency record) when creating service names. If an organization is using a different label, they would have to add a new label to all of their services and deployments.

Describe the solution you'd like
Users should be able to specify any label as unique/global identifier in a dependency record/k8s service & deployment.
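For reference, a sketch of a Dependency record using the identityLabel field that this request would have admiral honor (names and the label value are hypothetical):

apiVersion: admiral.io/v1alpha1
kind: Dependency
metadata:
  name: dependency
  namespace: admiral
spec:
  source: webapp
  identityLabel: app-identity
  destinations:
  - greeting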

[BUG] Unable to use Admiral with Istio 1.6 installed through IstioOperator

Describe the bug
The multi-cluster setup for the IstioOperator automatically creates an EnvoyFilter that rewrites .global traffic. This conflicts with Admiral, whose instructions say to remove that EnvoyFilter. However, with the declarative IstioOperator setup, it is assumed that all configuration changes to Istio are handled in its declarative spec.

For example, here is our IstioOperator config:

# https://istio.io/latest/docs/setup/additional-setup/cni/#basic-installation
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-controlplane
spec:
  # https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioOperatorSpec
  addonComponents:
    grafana:
      enabled: true
    istiocoredns:
      enabled: true

  components:
    cni:
      enabled: true
    egressGateways:
      - name: istio-egressgateway
        enabled: true

  values:
    cni:
      excludeNamespaces:
       - istio-system
       - kube-system
      logLevel: info

    gateways:
      istio-egressgateway:
        env:
          # Needed to route traffic via egress gateway if desired.
          ISTIO_META_REQUESTED_NETWORK_VIEW: "external"

    global:
      controlPlaneSecurityEnabled: true
      multiCluster:
        enabled: true

Steps To Reproduce
Install Istio 1.6+ using the IstioOperator (I used 1.6.5).
Follow the pre-req steps here: https://github.com/istio-ecosystem/admiral/blob/master/docs/Examples.md

Expected behavior
I think there should be a documented way to do at least one of the following:

  • Disable the EnvoyFilter through the IstioOperator spec.
  • Reconfigure the EnvoyFilter through the IstioOperator spec (to not conflict with Admiral).
  • Configure Admiral to not conflict with the EnvoyFilter installed with the IstioOperator.

Maybe there is a better solution.

For what it's worth, when I paused the IstioOperator controller, removed the EnvoyFilter, and tested Admiral on 1.6, it appeared to work as intended.

[BUG] Image is not pushed to docker hub for tags/releases

Describe the bug
The docker image for admiral is not pushed for release/tag builds

Steps To Reproduce
Create a tag/release and see that the docker hub doesn't have the image with this new tag

Expected behavior
Should see the new tag (release) in docker hub.

[FEATURE] Add support for argo rollouts as dependent workloads

Is your feature request related to a problem? Please describe.
Currently, admiral finds k8s deployments based on the dependency CR to create global CNAMEs in the dependent clusters. Argo rollouts is another way to deploy workloads and admiral doesn't support watching them as a dependent workload.

Describe the solution you'd like
An Argo Rollout CR should be treated similarly to a k8s Deployment when identifying dependent workloads (based on the identifier mechanism used for deployments) and when creating global CNAMEs for workloads.

[FEATURE] Do not sync Istio resources across clusters unless exported to all namespaces

Is your feature request related to a problem? Please describe.
Admiral syncs the Istio resources for a given hostname it generates across clusters. However, certain resources are intended only for the namespace where the workload runs, for example a client-side override for failover testing, and don't need to be replicated/synced to other clusters.

Describe the solution you'd like
Do not sync an Istio resource if:
exportTo: "." is set on that Istio resource.
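For example, a namespace-local override like the following (host and service names are borrowed from the sample app used elsewhere in these issues) should not be synced to other clusters:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting-failover-test
  namespace: sample
spec:
  exportTo:
  - "."
  hosts:
  - default.greeting.global
  http:
  - route:
    - destination:
        host: greeting.sample.svc.cluster.local
        port:
          number: 8080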

[BUG] Admiral crashes on GTP add followed by delete operation

Describe the bug
Admiral crashes on a GTP add followed by a delete operation.

Steps To Reproduce

  1. Start with admiral + create secret for cluster so that Admiral starts monitoring the cluster.
  2. Add a GTP with no matching deployment. I used the GTP yaml as is from https://github.com/istio-ecosystem/admiral/blob/master/docs/Architecture.md
  3. Wait for log msg mentioning the GTP was skipped as there was no matching deployment.
  4. Delete the GTP.
  5. Admiral panics and crashes

Expected behavior
Admiral should not crash.

[BUG] Wrong image tag in releases artifact published

Describe the bug
Currently, the image for admiral is hardcoded in the published install artifact (see Steps To Reproduce below).

Steps To Reproduce
Generate a release (by creating a tag) and then see that the installed artifact downloaded has image: docker.io/admiralproj/admiral:v0.1-alpha in the file yaml/demosinglecluster.yaml.

Expected behavior
When a release is published, the image should point to the new tag being released.
Ex: if a tag v0.1 is published, the image field should be image: docker.io/admiralproj/admiral:v0.1

This requires making the docker image tag parameterized under install/admiral/base/deployments.yaml with a default set to latest
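Assuming the install manifests keep using kustomize bases/overlays, a sketch of the kind of image override a release build could apply (the tag value is an example):

# in a kustomization.yaml overlay
images:
- name: docker.io/admiralproj/admiral
  newTag: v0.1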

[BUG] The GlobalTrafficPolicy doesn't failover when weights declared

Describe the bug
If a weight is declared, 10 consecutive 5xx errors (consecutive5xxErrors) do not trigger failover to the other region.

Steps To Reproduce

apiVersion: admiral.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  name: gtp-admiral-sample
  namespace: sample-admiral
  labels:
    env: default
    identity: webapp-sample-admiral
spec:
  policy:
  - dns: default.webapp-sample-admiral.global
    lbType: 1 #0 represents TOPOLOGY, 1 represents FAILOVER
    target:
    - region: us-west-2
      weight: 10
    - region: us-east-1
      weight: 90

Expected behavior
Even when GTP weights (90/10) are applied, a service that returns 500 ten consecutive times should still be ejected so traffic fails over.

Without a GTP, failover does work after 10 consecutive 500 errors.

[FEATURE] Add documentation for production admiral deployment

Is your feature request related to a problem? Please describe.
Right now the example documentation is based on admiral coexisting in one of the participating clusters, and the CRDs in the installation are not well separated to support admiral running in a dedicated cluster.

Describe the solution you'd like
The example installation should have a clear separation between what is needed for admiral itself and what is needed in a cluster that admiral monitors. Once we have that, add a section documenting the process to run admiral in a production setting.

[FEATURE] skip verifying tls on k8s api endpoints

Hi!
In our installation all cluster API endpoints are located behind a domain with a self-signed certificate.
I tried to add our CA to the default system cert store inside the admiral image, but it still produces errors about an untrusted certificate.

Describe the solution you'd like
The ability to add a CA to admiral's cert store, plus a flag like insecure-skip-tls-verify for admiral as well.

P.S.
If I specify the skip-TLS option in the kubeconfig, admiral returns the error:
"error during create of clusterID: some-cluster Error with GlobalTrafficController controller init: failed to create global traffic controller crd client: specifying a root certificates file with the insecure flag is not allowed"

[BUG] typo "custerID"

Typo

log.Infof("starting global traffic policy controller custerID: %v", clusterID)
rc.GlobalTraffic, err = admiral.NewGlobalTrafficController(stop, &GlobalTrafficHandler{RemoteRegistry: r}, clientConfig, resyncPeriod)
if err != nil {
return fmt.Errorf(" Error with GlobalTrafficController controller init: %v", err)
}
log.Infof("starting deployment controller custerID: %v", clusterID)
rc.DeploymentController, err = admiral.NewDeploymentController(stop, &DeploymentHandler{RemoteRegistry: r}, clientConfig, resyncPeriod)
if err != nil {
return fmt.Errorf(" Error with DeploymentController controller init: %v", err)
}
log.Infof("starting pod controller custerID: %v", clusterID)
rc.PodController, err = admiral.NewPodController(stop, &PodHandler{RemoteRegistry: r}, clientConfig, resyncPeriod)
if err != nil {
return fmt.Errorf(" Error with PodController controller init: %v", err)
}
log.Infof("starting node controller custerID: %v", clusterID)
rc.NodeController, err = admiral.NewNodeController(stop, &NodeHandler{RemoteRegistry: r}, clientConfig)
if err != nil {
return fmt.Errorf(" Error with NodeController controller init: %v", err)
}
log.Infof("starting service controller custerID: %v", clusterID)

"custerID" should be "clusterID"

2020-01-08T21:19:42.921401Z	info	starting service controller custerID: tmp.kubeconfig

[BUG] Admiral generates ServiceEntry only with cluster.local suffix

Describe the bug
We have a number of self-hosted clusters that have unique names like test-env, dev-env, etc.
When admiral creates a ServiceEntry it does not respect the cluster name, and in the endpoints section I always get some.service.svc.cluster.local instead of some.service.svc.your-cluster-name

Steps To Reproduce
Deploy admiral in a cluster with a unique name in terms of DNS resolution

Expected behavior
Admiral should understand which cluster domain to use, or provide a flag to specify it at admiral runtime

If the deployment name has a dash (-), the env will be the last word of the deployment name, not default

Describe the bug
If there is "-" in the deployment name, then the env will not be default but lastword

Steps To Reproduce

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-lastword
  namespace: sample
  labels:
    identity: webapp-identity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-lastword
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: webapp-lastword
        identity: webapp-identity
    spec:
      containers:
      - command:
        - /bin/sleep
        - 3650d
        image: pstauffer/curl
        imagePullPolicy: IfNotPresent
        name: webapp-lastword

Expected behavior
The ServiceEntry created should be default.webapp-identity.global, but instead it is created as lastword.webapp-identity.global

[FEATURE] Admiral RFC: Client API

Create a way for clients to set timeouts, retries, circuit breakers, faults, and time delays that only apply to that specific client of the service and no other clients. This API design and the introduction of a new type will prevent the client from having access to the more sensitive routing, security, and load balancing configuration used by the service, without relying on the service team to make changes on the client's behalf.

Details

[BUG] Admiral shouldn't log errors when installed into a cluster without Argo Rollouts

Describe the bug
It doesn't seem to affect functionality, but there are repeated logs of Failed to list *v1alpha1.Rollout: the server could not find the requested resource (get rollouts.argoproj.io) when Admiral is installed into a cluster that doesn't have Argo Rollouts.

Steps To Reproduce

Follow the demoSingleCluster example in a cluster that lacks the rollouts CRD.

Expected behavior
Admiral shouldn't log errors when argo rollouts aren't present in a cluster.

[FEATURE] Add APIs for admiral

Is your feature request related to a problem? Please describe.
Currently, admiral doesn't expose any APIs for simple use cases like:
i) Clusters currently being monitored
ii) CRs created and last updated

Describe the solution you'd like
GET (READ) api for clusters being watched
GET (READ) for admiral generated endpoints (CNAMEs) created

The GTP doesn't update the destinationrule

I have the Admiral server installed in us-east-1, with both us-east-1 and us-west-2 added as remote clusters. I tried to apply a GTP; however, the DestinationRule does not get updated.

However, the logs show:
time="2020-07-31T07:03:13Z" level=info msg="op=Update type=VirtualService name=default.greeting.global-default-vs cluster=ps2-nonprod-dev-us-east-1, e=Success"

But no locality added in:

Name:         default.greeting.global-default-dr
Namespace:    admiral-sync
Labels:
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-28T19:51:13Z
  Generation:          1
  Resource Version:    165744955
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.greeting.global-default-dr
  UID:                 491194c2-8fe8-4e88-afee-2fb627c6c2c2
Spec:
  Host:  default.greeting.global
  Traffic Policy:
    Outlier Detection:
      Base Ejection Time:    120s
      consecutive5xxErrors:  10
      Interval:              5s
    Tls:
      Mode:  ISTIO_MUTUAL
Events:
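For comparison, this is a sketch (an assumption about the expected output, not actual admiral output) of the locality distribution one would expect in this DestinationRule's traffic policy once the GTP takes effect, using the two regions mentioned above and illustrative weights:

spec:
  host: default.greeting.global
  trafficPolicy:
    outlierDetection:
      baseEjectionTime: 120s
      consecutive5xxErrors: 10
      interval: 5s
    loadBalancer:
      localityLbSetting:
        distribute:
        - from: "us-east-1/*"
          to:
            "us-east-1/*": 90
            "us-west-2/*": 10
    tls:
      mode: ISTIO_MUTUAL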
