
skupper's Introduction

Skupper

Skupper enables cloud communication by letting you create a Virtual Application Network.

This application-layer network decouples addressing from the underlying network infrastructure, enabling secure communication without a VPN.

You can use Skupper to create a network from namespaces in one or more Kubernetes clusters, as described in the Getting Started guide. That guide describes a simple network; however, there are no restrictions on the topology you create, which can include redundant paths.

Connecting one Skupper site to another enables communication in both directions. Communication can occur over any path available on the network; that is, direct connections are not required.

Skupper supports anycast and multicast communication using the application layer network (VAN), allowing you to configure your topology to match business requirements.

Skupper does not require any special privileges; that is, you do not need the cluster-admin role to create networks.

Useful Links

Using Skupper

Developing Skupper

Licensing

Skupper uses the Skupper Router project and is released under the same Apache License 2.0.

skupper's People

Contributors

aii-nozomu-oki, ajssmith, akasurde, bartoval, c-kruse, crozzy, dendod96, dependabot[bot], ernieallen, fgiorgetti, gabordozsa, granzoto, grs, hash-d, iagodvsantos, jiridanek, karen-schoener, lynnemorrison, markmc, mathianasj, mgoulish, michaelscrawford, murphd40, nicob87, nicolacdnll, nluaces, pwright, thomask33, trumbaut, waldner


skupper's Issues

A --debug option for the status subcommand

Or this could have its own entrypoint such as debug or debug-dump. The idea is to capture as much information as we can (within reason) about a live Skupper installation. This could be used for ad hoc debugging and also for automated periodic snapshots.

It should include a timestamp and all the info from the version subcommand. And of course, many more details about the network, proxies, and annotated services.

It would be helpful to have a -o yaml|json option to facilitate programmatic use.
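A hypothetical invocation, assuming the flags proposed above (neither --debug nor the -o option exists on the status subcommand today):

# Hypothetical: capture a debug snapshot of the live installation as YAML
skupper status --debug -o yaml > skupper-snapshot.yaml

# Hypothetical: a dedicated entrypoint for the same dump, in JSON
skupper debug-dump -o json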

Feature: -o yaml support

As an admin using federation (whether push like KubeFed or pull like GitOps)
I really would like to have access to the yaml created by skupper
so I can manage clusters using the federation tools.
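A sketch of the desired workflow, assuming a hypothetical -o yaml flag that prints the generated resources instead of applying them:

# Hypothetical: render the resources skupper init would create, without applying them,
# and commit the manifest to the repository managed by the federation/GitOps tooling
skupper init -o yaml > skupper-site.yaml
git add skupper-site.yaml
git commit -m "Add skupper site manifest"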

cli-testing

Evaluate (and implement) the best way of doing this. Since we are already testing the API, we could test just the translation from the CLI call to the vanApi call, or, if that is not possible, do something else.

skupper status reports incorrect connect info with edge mode

Observed while deploying skupper-example-mongodb-replica-set

Three router setup, 2 public (interior mode) and 1 private (edge mode)

Public2 connects to Public1
Private1 connects to Public1 and Public2

From Public interiors 'skupper status' generates:
VanRouter is enabled for namespace '"skupper" in interior mode'. It is connected to 3 other sites (1 indirectly). It has no exposed services.

From Private edge 'skupper status' generates:
VanRouter is enabled for namespace '"skupper" in edge mode'. It is connected to 3 other sites (2 indirectly). It has no exposed services.

Expected result was that status returns: 'It is connected to 2 other sites'.

Changing the Private deployment from edge to interior produced the expected result.

QDR version 1.12.0

--cluster-local not working

With a minikube cluster running on localhost, run the following.

With --cluster-local:

kubectl create namespace public && \
kubectl create namespace private && \
./skupper init --cluster-local -n public && \
./skupper init --cluster-local -n private && \
./skupper connection-token my-token.yaml -n public && \
./skupper connect my-token.yaml -n private && \
kubectl apply -f ~/skupper-example-tcp-echo/public-deployment.yaml -n public && \
./skupper expose --port 9090 deployment tcp-go-echo -n public

you get:

circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ ./skupper status -n private
VanRouter is enabled for namespace '"private" in interior mode'. It is not connected to any other sites. It has no exposed services.
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ ./skupper status -n public
VanRouter is enabled for namespace '"public" in interior mode'. It is not connected to any other sites. It has 1 exposed service.

and also:

circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ kubectl get svc -n public
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
skupper-controller   ClusterIP   10.105.93.222    <none>        8080/TCP              5m29s
skupper-internal     ClusterIP   10.102.151.148   <none>        55671/TCP,45671/TCP   5m30s
skupper-messaging    ClusterIP   10.107.182.223   <none>        5671/TCP              5m30s
tcp-go-echo          ClusterIP   10.106.232.73    <none>        9090/TCP              4m55s
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ kubectl get svc -n private
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)               AGE
skupper-controller   ClusterIP   10.103.60.59    <none>        8080/TCP              5m32s
skupper-internal     ClusterIP   10.110.39.131   <none>        55671/TCP,45671/TCP   5m32s
skupper-messaging    ClusterIP   10.102.37.15    <none>        5671/TCP              5m33s

It is expected that the two instances would be connected (per the status output) and that a tcp-go-echo service would appear in the private namespace.

Doing the same but without --cluster-local:

circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ kubectl create namespace public && \
> kubectl create namespace private && \
> ./skupper init -n public && \
> ./skupper init -n private && \
> ./skupper connection-token my-token.yaml -n public && \
> ./skupper connect my-token.yaml -n private && \
> kubectl apply -f ~/skupper-example-tcp-echo/public-deployment.yaml -n public && \
> ./skupper expose --port 9090 deployment tcp-go-echo -n public
namespace/public created
namespace/private created
Waiting for LoadBalancer IP or hostname...
Skupper is now installed in namespace 'public'.  Use 'skupper status' to get more information.
Waiting for LoadBalancer IP or hostname...
Skupper is now installed in namespace 'private'.  Use 'skupper status' to get more information.
Connection token written to my-token.yaml 
Skupper configured to connect to 10.99.6.128:55671 (name=conn1)
deployment.apps/tcp-go-echo created
VAN Service Interface Target tcp-go-echo exposed
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ ./skupper status -n public
VanRouter is enabled for namespace '"public" in interior mode'. It is connected to 1 other site. It has 1 exposed service.
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ ./skupper status -n private
VanRouter is enabled for namespace '"private" in interior mode'. It is connected to 1 other site. It has 1 exposed service.
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ kubectl get svc -n public
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                           AGE
skupper-controller   LoadBalancer   10.105.249.136   10.105.249.136   8080:30964/TCP                    35s
skupper-internal     LoadBalancer   10.99.6.128      10.99.6.128      55671:30437/TCP,45671:32374/TCP   37s
skupper-messaging    ClusterIP      10.97.201.188    <none>           5671/TCP                          37s
tcp-go-echo          ClusterIP      10.106.78.65     <none>           9090/TCP                          28s
circleci@default-5d5596d5-405b-4d87-bb6a-a1c801100be5:~/project$ kubectl get svc -n private
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                           AGE
skupper-controller   LoadBalancer   10.105.190.204   10.105.190.204   8080:30245/TCP                    33s
skupper-internal     LoadBalancer   10.99.128.62     10.99.128.62     55671:32580/TCP,45671:31468/TCP   36s
skupper-messaging    ClusterIP      10.110.127.186   <none>           5671/TCP                          36s
tcp-go-echo          ClusterIP      10.106.248.236   <none>           9090/TCP                          23s

Support optional teardown in integration tests.

For this, the testing mini-framework must create namespaces with a hash or suffix, making them unique, so that if a previous run has not deleted its namespace, the new run does not clash.
Possibly at the end we can have a "cleanup" function that removes all namespaces matching our test_namespace prefix.
In CI this is not so important, since the VM is destroyed anyway, but if we run against a persistent cluster, having some clean-up is desirable.
This is especially useful for reducing testing time: for simple, low-resource-consumption tests it is easy to just not tear down and continue with the next test.
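A minimal shell sketch of the idea; the skupper-test- prefix and the suffix scheme are assumptions for illustration:

# Create a uniquely named test namespace so leftovers from earlier runs do not clash
suffix="$(date +%s)-$RANDOM"
kubectl create namespace "skupper-test-${suffix}"

# Optional cleanup step: delete every namespace matching the test prefix
kubectl get namespaces -o name | grep '^namespace/skupper-test-' | xargs -r kubectl delete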

Age out service definitions

Update the service controller to age out entries learned from service-sync peers. Currently, if they are not "unexposed" before the connection goes away, the entries remain.

provide some way to control the kubernetes service type for skupper created services

e.g. to allow a skupper service to be directly exposed as a loadbalancer type service.

(On openshift, this is less important as we can just do oc expose on the skupper service to create a route).

A key question is how this would affect service propagation. I don't think it should propagate the type to every site by default. One option might be that it only works locally. Another might be to explicitly list the sites to which the type applies.
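One possible shape for this, shown as a hypothetical annotation on the target service (the annotation name and its local-only behaviour are illustrative, not an existing interface):

# Hypothetical: ask skupper to create this site's service as type LoadBalancer,
# without propagating the type to other sites
kubectl annotate service tcp-go-echo skupper.io/service-type=LoadBalancer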

Design and implement general error and status reporting.

* A general approach that will work with the Skupper CLI, the Skupper Library, and the declarative implementation.

* Consistent across the code base.

* Be able to control what information it gathers and what it reports.

* Be able to control verbosity at any time during execution (i.e., set verbose mode only during init and delete).

  * This replaces issues 19 and 34.

Feature request: Cost argument for skupper connect

Consider adding an optional --cost switch to the skupper connect command. The default value is 1. If supplied (valid entries are natural numbers), the cost would be applied to the inter-router connection being created. This is not valid for edge-to-interior connections.

The benefit of this feature is that users could create redundant topologies with a preference for one path over another. For example, inter-continental connections could be given a higher cost than regions that are closer together. Similarly, inter-provider connections might be given higher costs than inter-zone connections within a single provider.
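A hypothetical invocation of the proposed switch (--cost is the subject of this request and does not exist yet):

# Hypothetical: make the inter-continental link less preferred than the default-cost link
skupper connect eu-token.yaml --cost 10 -n us-site
skupper connect local-token.yaml --cost 1 -n us-site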

A logs subcommand

I'm not confident this is a good idea, but I'd like to consider a simplified entrypoint for getting logs from the Skupper infrastructure (router, proxies, proxy controller). Where the status command gives you point-in-time information, the logs command would give you the sequence of events. This would be helpful for next-level-of-detail debugging.

To keep the sizes down when needed, it should support the --limit option.
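A hypothetical invocation, assuming the proposed subcommand and its --limit flag:

# Hypothetical: fetch the most recent log lines from the router, proxies, and proxy controller
skupper logs --limit 500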

Add skupper/cmd/site-controller README

This is the site-controller application, so the README should provide an overview of its operation and details/content for:

  • site config maps
  • token requests
  • tokens for connect, etc.

feature request: skupper as an operator install

Is it possible to allow for installation of Skupper as an operator? For my use case, it's a pain to include the binary in edge bootstrapping; I'm already managing a handful of operator resources on init, so it would be ideal if I could just pull down a manifest for the operator and apply the secret.yaml to bring Skupper up.
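A sketch of the desired workflow; the manifest URL is a placeholder and no such operator manifest is published today:

# Hypothetical: install the operator from a published manifest instead of shipping the CLI binary
kubectl apply -f https://example.com/skupper-operator.yaml

# Then bring the site up by applying the connection-token secret produced by another site
kubectl apply -f secret.yaml -n my-edge-namespace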

Help on setting up Skupper with TCP proxy for JGroups

Hi Skupper community!

Not sure this is the correct place to open issues, but let me know if there's a better place...

I'm trying to set up Skupper for a multi-cluster application that uses Infinispan with cross-site replication. I basically have two AWS OpenShift clusters (US and EU). I have successfully deployed my app using LoadBalancer service types and configuring the JGroups discovery protocol (TCPPING) accordingly. The services have been annotated with service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp.

Now I'm trying to replace this part with services that are handled by the Skupper VAN and annotated with skupper.io/proxy: tcp. To make sure I understand everything, I've deployed the tcp-echo sample and everything is OK. However, I cannot get JGroups TCPPING to work with Skupper.
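For reference, the annotation in question is applied with kubectl; the service name below is illustrative of my JGroups discovery service, not the exact name in my deployment:

# Mark the service so the Skupper VAN handles it as a TCP proxy
kubectl annotate service infinispan-ping skupper.io/proxy=tcp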

Looking at my configuration, everything seems to work fine:

  • Proxy services are created on both clusters
  • Proxy pods are created on both clusters
  • Routers are reconfigured with new routes
  • No error messages appear in the logs

On the Infinispan pods, I can see a lot of TCP warnings like the one below, but I used to get the same warnings when using AWS LBs.

10:07:00,384 WARN  [org.jgroups.protocols.TCP] (TcpServer.Acceptor[7900]-2,relay-global,_infinispan-0-31537:SITE2) JGRP000006: failed accepting connection from peer: java.io.EOFException
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:202)
	at org.jgroups.blocks.cs.TcpConnection.readPeerAddress(TcpConnection.java:245)
	at org.jgroups.blocks.cs.TcpConnection.<init>(TcpConnection.java:53)
	at org.jgroups.blocks.cs.TcpServer$Acceptor.handleAccept(TcpServer.java:126)
	at org.jgroups.blocks.cs.TcpServer$Acceptor.run(TcpServer.java:111)
	at java.base/java.lang.Thread.run(Thread.java:834) 	

The only thing that I can see as a warning is in the skupper-router pods, where there's this "Link blocked with zero credit" message:

2020-01-10 10:04:35.183580 +0000 SERVER (info) [C28] Accepted connection to 0.0.0.0:5671 from 10.128.2.108:41272
2020-01-10 10:04:35.191751 +0000 ROUTER (info) [C28] Connection Opened: dir=in host=10.128.2.108:41272 vhost= encrypted=TLSv1.2 auth=EXTERNAL user=CN=skupper-messaging container_id=infinispan-site2-skupper-proxy-547b9bd84f-pdfxm_tcp:7900=>amqp:infinispan-site2-skupper props=
2020-01-10 10:04:35.191835 +0000 ROUTER (info) [C28][L121] Link attached: dir=in source={<none> expire:sess} target={infinispan-site2-skupper expire:sess}
2020-01-10 10:04:35.191885 +0000 ROUTER (info) [C28][L122] Link attached: dir=out source={<dynamic> expire:sess} target={<none> expire:sess}
2020-01-10 10:04:41.122874 +0000 ROUTER (info) [C26][L113] Link blocked with zero credit for 16 seconds

Communication between the 2 clusters using this bridge seems to be OK, as I see a "socket resumed" message in the logs of the proxy on cluster 1:

2020-01-10T10:04:35.175Z icproxy info [infinispan-site2-skupper-proxy-547b9bd84f-pdfxm_tcp:7900=>amqp:infinispan-site2-skupper] socket accepted ::ffff:10.131.0.175:60455
2020-01-10T10:04:35.177Z icproxy info client tunnel ::ffff:10.131.0.175:60455 created, sender=18d81bd6-0d27-ea49-b605-3e001dcb1407 receiver=c2652f20-5a5d-c343-b8e3-7216fd00f947
2020-01-10T10:04:35.284Z icproxy info [infinispan-site2-skupper-proxy-547b9bd84f-pdfxm_tcp:7900=>amqp:infinispan-site2-skupper] socket resumed ::ffff:10.131.0.175:60455

And some kind of corresponding message in the logs of the service proxy pod on cluster 2:

2020-01-10T10:04:35.238Z icproxy info [infinispan-site2-skupper-proxy-86f6466d48-bgzjf_amqp:infinispan-site2-skupper=>tcp://10.131.0.189:7900] receiver attached
2020-01-10T10:04:35.241Z icproxy info [infinispan-site2-skupper-proxy-86f6466d48-bgzjf_amqp:infinispan-site2-skupper=>tcp://10.131.0.189:7900] socket connected to 10.131.0.189:7900
2020-01-10T10:04:35.241Z icproxy info server tunnel created for socket 10.128.2.147:50802, receiver 18d81bd6-0d27-ea49-b605-3e001dcb1407@infinispan-site2-skupper-proxy-547b9bd84f-pdfxm_tcp:7900=>amqp:infinispan-site2-skupper
2020-01-10T10:04:35.331Z icproxy info server tunnel for socket 10.128.2.147:50802, sender 57bc0197-db84-d44e-bf30-2b467347724f

I've tried to set up an HTTP proxy for another service and everything works fine. It seems my problem is only related to TCP communication.

As I'm new to Skupper and Qpid, I don't know where to look next. Could you please provide some hints or help on how to investigate this setup further? I'd really like to be able to run a full demo for both communication styles, TCP and HTTP.

Many thanks!
