enhancements's Issues

Developer oriented scale testing

We would like to create tooling that allows running multiple kind clusters on one or more machines, and to run scale testing against them to identify any issues.

Tasks:

Stretch goals:

  • Ability to generate load on the control plane

This tracker epic will also be updated with any issues identified during the scale & longevity testing.

Get Submariner/OVN-Kubernetes Integration to a Production Ready State

Epic Description

OVN-Kubernetes is one of the two CNIs supported by Red Hat OpenShift, and will soon become the sole CNI for that Kubernetes distribution. Submariner supports this CNI via the OVN-Kubernetes-specific NetworkPluginSyncer; however, the design is more of a POC and is currently non-functional with recent OVN-K versions. Many more steps need to be taken to ensure the integration is bug-free and ready for production environments. This epic tracks the work needed to get the integration to that state.

Acceptance Criteria

Submariner's ability to connect clusters utilizing the OVN-Kubernetes CNI is 100% functional with current OVN-K builds, and the correct CI infrastructure has been put in place to ensure the continued functionality of the integration.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: Support OCP with submariner on VMWare

What would you like to be added:
The proposal is to fully support connecting multiple OCP clusters on VMware using Submariner.

Why is this needed:
The aim of this proposal is to list and track the steps needed to fill any gaps and allow full support for such deployments.

UPDATE
Investigations have confirmed that there is nothing that needs to be done to prepare the cloud for VMware, in subctl or OCM. The expectation is that VMware should work without any changes, except when using NSX, which is for now outside the scope of this epic.

Work Items

Based on the results of the investigation, the only effort required is testing.

  • Add VMware to Submariner Testday
  • Run QE on VMware

Handle certificate rotation

Epic Description

We want to enable certificate rotation, especially for accessing the broker. Rotated certificates will need to be propagated somehow, or we’ll need to find some other (secure) way of authenticating and authorising access to the broker. This will have to include broker-info.subm files — we want subctl join to be possible throughout the lifetime of a broker.

Acceptance Criteria

  • Broker certificates are automatically rotated
  • Connected clusters suffer no disruption
  • subctl join continues working with broker-info.subm files produced during initial broker deployment (but not necessarily with an older version of subctl); any additional requirements are documented

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Support Globalnet deployment in ACM/OCM

Epic Description

Currently, ACM/OCM only supports deploying vanilla Submariner; it's not possible to deploy Globalnet.
This epic tracks the individual tasks needed to implement Globalnet in ACM.

Along with this, we also want to address the remaining bits of the Globalnet 2.0 feature, primarily metrics support and removing the dependency on the kube-proxy driver.

Acceptance Criteria

One should be able to deploy Globalnet with ACM with all the e2e tests passing.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

The following are the individual work items for this feature.

Support connecting multiple OCP clusters on ROSA using Submariner.

Epic Description

Support connecting multiple OCP clusters on ROSA using Submariner.
Why is this needed:
The aim of this proposal is to list and track steps needed to fill any gaps and allow full support for such deployments.

Acceptance Criteria

It should be possible to prepare and deploy Submariner on ROSA using subctl and cloud-prepare.

Definition of Done (Checklist)

  • Code complete
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

  • Deploy using cloud prepare and subctl on existing OCP clusters on ROSA
  • Enhance cloud-prepare to support deploying dedicated Gateway nodes on ROSA
  • Enhance cloud-prepare to support configuring existing worker nodes as GW nodes on ROSA
  • Deploy using OCM and submariner-addon
  • Add UT
  • Support and run subctl verify
  • Support and run subctl diagnose
  • Support and run subctl gather
  • Remove cluster from multicluster using subctl
  • Remove cluster from multicluster using OCM and submariner-addon
  • Uninstall submariner using subctl
  • Uninstall submariner using OCM and submariner-addon
  • UI support for troubleshooting etc. on OCM
  • Documentation for deployments and removal using cloud-prepare and subctl
  • Documentation for deployments and removal using OCM

Extract broker syncer to its own service

Epic Description

We need a service dedicated to syncing the broker information from the local cluster to the broker, and the broker info to the local cluster.

We have seen in scale testing that having the broker syncer inside the gateway causes connectivity issues at a high enough scale.

Additionally, multiple projects (Submariner, Lighthouse) are using a specific broker syncer to sync over their resources.
From an architectural POV it would be simpler if there was a "broker syncer" service which can sync resources to/from the remote broker, and the local services would only consume local resources.
This decentralizes the information further and decouples the services, allowing the local services to keep working even if the broker is temporarily down.
In addition, this opens up possibilities of easy broker HA since the data can be easily replicated.

This can also benefit Axon as it decouples the control plane information dissemination from Submariner, allowing alternative methods to be used.
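
To make the proposed decoupling concrete, here is a rough Go sketch of what a standalone broker syncer's surface could look like; the names (BrokerSyncer, ResourceSpec) and fields are invented for illustration and are not an existing Submariner or admiral API.

package brokersyncer

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// ResourceSpec describes one resource type to mirror and in which direction.
type ResourceSpec struct {
	GVR           schema.GroupVersionResource
	LocalToBroker bool // e.g. Endpoints, Clusters, ServiceExports
	BrokerToLocal bool // e.g. remote Endpoints, aggregated ServiceImports
	// Transform lets callers rewrite objects (labels, namespaces) in transit.
	Transform func(obj metav1.Object) metav1.Object
}

// BrokerSyncer mirrors the registered resources between the local cluster and the
// broker, retrying while the broker is unreachable so that local consumers keep
// working from their local copies.
type BrokerSyncer interface {
	Register(spec ResourceSpec) error
	Start(ctx context.Context) error
}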

Acceptance Criteria

  • Submariner deployments work without any need for manual intervention
  • Upgrade from older Submariner deployments works seamlessly without any need for manual intervention
  • Lighthouse is supported
  • Globalnet is supported

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

  • Add new submariner-brokersyncer repository in Quay.io
  • Add new Dockerfile to build submariner-brokersyncer in submariner repository (Would admiral be a better place?)
  • Implement generic broker syncer code (might need to break down to smaller tasks)
  • Implement E2E tests for the new broker syncer service
  • Add new broker syncer service to operator deployment logic
  • Add new broker syncer service to subctl diagnose
  • Add new broker syncer service to subctl gather
  • Add new broker syncer service to subctl uninstall
  • Remove broker syncing logic from submariner
  • Remove broker syncing logic from lighthouse
  • Add submariner-brokersyncer image to releases process
  • Document the new service on the website

Provide ARM64 support

Epic Description

We want to provide out-of-the-box support for ARM64. Submariner should just work on ARM64 nodes, with no special configuration by the administrator or end-user.

Acceptance Criteria

  • All the Submariner components can be deployed on ARM64 nodes
  • All the Submariner components work with each other regardless of individual nodes' architectures (e.g. an x86 route agent connected to an ARM64 gateway, or failover from an x86 gateway to an ARM64 one)
  • Submariner gateways can connect to other-arch gateways

Definition of Done (Checklist)

  • Code complete
  • The acceptance criteria met
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

  • ensure all our dependencies are available on ARM64;
  • ensure we can build all our artifacts for ARM64;
  • ensure the images we push provide multi-arch manifests;
  • run tests on all the platforms we care about (assuming we have GHA support for them).

Depends on submariner-io/submariner#1772
Depends on submariner-io/lighthouse#694
Depends on submariner-io/submariner-operator#2021
Depends on submariner-io/subctl#66

Epic: descope the Submariner operator

Epic Description

The Submariner operator currently has wide-ranging privileges. It doesn’t need to be able to access anything outside the namespaces it manages, so this should be reduced. See https://hackmd.io/wVfLKpxtSN-P0n07Kx4J8Q for background.

Depends on submariner-io/submariner-operator#1105

RBAC generation will affect this; we should wait until we have a better idea of that before starting to design this.

Acceptance Criteria

The operator is de-scoped, ideally with no ClusterRole, at minimum with justifications for every permission in its ClusterRole.

See also submariner-io/submariner-operator#1105 which overlaps with this; auto-generation should be used if the SDK supports it for namespace-scoped Roles.
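
As an illustration only, namespace-scoped permissions could be declared with kubebuilder RBAC markers if the SDK's auto-generation is adopted; recent controller-gen versions accept a namespace parameter, which emits a Role rather than a ClusterRole. The namespace and resource lists below are placeholders, not the audited final set.

package controllers

// Placeholder markers showing the intended shape; the final resource list would
// come from auditing the operator's actual API usage.
//
//+kubebuilder:rbac:groups=apps,resources=daemonsets;deployments,verbs=get;list;watch;create;update;patch,namespace=submariner-operator
//+kubebuilder:rbac:groups="",resources=configmaps;services;serviceaccounts,verbs=get;list;watch;create;update,namespace=submariner-operator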

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

[EPIC] OCM - Add option to run diagnose in UI

Epic Description

subctl diagnose is a CLI tool used to troubleshoot Submariner deployments, but no such tooling is available from the OCM UI. Add an option in the OCM UI to run diagnostics.

Refer submariner-io/submariner-operator#88

Acceptance Criteria

TBD

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using ACM/OCM addon
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Need tools for synchronized actions across multiple git repos

What would you like to be added:
We need to either find or develop tools to perform actions across multiple repos in our org:

  • Create/Delete labels
  • Create/Delete milestones
  • Move tickets between project boards (e.g. when a release finishes, move remaining tickets to the backlog)
  • Update GHA versions (since we're using SHAs)

Tooling could live under the releases repo.

Why is this needed:
We often need to do these manually, which is error-prone and time-consuming; it would be better to have some tooling for this.
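
As a sketch of the kind of tooling this asks for, the go-github client can create the same label across a list of repos; the module version path, repo list, and label values below are placeholder assumptions.

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/google/go-github/v50/github"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	client := github.NewClient(oauth2.NewClient(ctx, ts))

	// Placeholder repo list; a real tool would list the org's repositories instead.
	repos := []string{"submariner", "lighthouse", "submariner-operator", "releases"}
	label := &github.Label{Name: github.String("release-blocker"), Color: github.String("d73a4a")}

	for _, repo := range repos {
		if _, _, err := client.Issues.CreateLabel(ctx, "submariner-io", repo, label); err != nil {
			fmt.Fprintf(os.Stderr, "creating label in %s: %v\n", repo, err)
		}
	}
}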

Submariner Globalnet Support with OVN-Kubernetes CNI

Epic Description

The current implementation of the Globalnet controller works with the iptables-based kube-proxy. We have to add support for clusters that use OVN as the network plugin.

Acceptance Criteria

The Globalnet feature functions correctly when using Submariner with the OVN-Kubernetes CNI.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: Hub controller for lighthouse

Epic Description

The goal of this epic is to implement ServiceImport merging.

Once an mcs-controller has been introduced, the reconciliation logic will be extended to address
the following open issues:

  • Mark conflicting ServiceExport objects with the ServiceExportConflict condition
  • Merge matching ServiceExport objects into a single ServiceImport object

Related enhancement proposal: #111
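
A minimal, self-contained sketch of the merge/conflict rule (simplified stand-in types, not the real mcs-api structs); it assumes the MCS convention that the oldest export wins and later conflicting exports get the ServiceExportConflict condition.

package main

import "fmt"

type Port struct {
	Name     string
	Protocol string
	Port     int32
}

// ClusterPorts is the port list exported by one cluster; the slice passed to
// mergePorts is assumed to be ordered by ServiceExport creation time, oldest first.
type ClusterPorts struct {
	Cluster string
	Ports   []Port
}

// mergePorts returns the merged port list (oldest export wins) plus the clusters
// whose exported ports conflict with it and should get the ServiceExportConflict
// condition.
func mergePorts(exports []ClusterPorts) (merged []Port, conflicting []string) {
	for i, e := range exports {
		if i == 0 {
			merged = e.Ports
			continue
		}
		if !equalPorts(merged, e.Ports) {
			conflicting = append(conflicting, e.Cluster)
		}
	}
	return merged, conflicting
}

func equalPorts(a, b []Port) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	merged, conflicts := mergePorts([]ClusterPorts{
		{Cluster: "cluster-a", Ports: []Port{{Name: "http", Protocol: "TCP", Port: 8080}}},
		{Cluster: "cluster-b", Ports: []Port{{Name: "http", Protocol: "TCP", Port: 9090}}},
	})
	fmt.Println(merged, conflicts) // cluster-b would be marked with ServiceExportConflict
}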

Acceptance Criteria

Lighthouse behaviour aligns with MCS specification:

  • Mark conflicting ServiceExport objects with the ServiceExportConflict condition
  • Merge matching ServiceExport objects into a single ServiceImport object

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: ImportPolicy for selective service import

Epic Description

This is a proposal to introduce policy CRDs to filter the ServiceImports that get added to
each cluster. The policy would primarily make it possible to limit what gets imported to a
cluster:

  • import all (the current behaviour)
  • import namespace foo
  • import named: [ foo:svc, ... ]
  • no import

The policy would additionally support namespace mapping so that ServiceImports can be added
into a different namespace from the originating namespace, in the event that there is a namespace
conflict between exporting and importing clusters.
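
Hypothetical Go types for such an ImportPolicy CRD, to make the options above concrete; every field name is invented for illustration and would be settled in the enhancement proposal itself.

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type ImportPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ImportPolicySpec `json:"spec"`
}

type ImportPolicySpec struct {
	// Mode selects the overall behaviour: All (the current default), Namespaces, Named or None.
	Mode string `json:"mode"`
	// Namespaces lists the namespaces to import when Mode is Namespaces.
	Namespaces []string `json:"namespaces,omitempty"`
	// Services lists namespace/name pairs to import when Mode is Named.
	Services []ServiceRef `json:"services,omitempty"`
	// NamespaceMappings optionally renames the target namespace of imported services,
	// e.g. to avoid a namespace conflict with the importing cluster.
	NamespaceMappings map[string]string `json:"namespaceMappings,omitempty"`
}

type ServiceRef struct {
	Namespace string `json:"namespace"`
	Name      string `json:"name"`
}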

Acceptance Criteria

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: Move subctl to its own repo

Epic Description

subctl, soon to be a library itself, should be moved out of the submariner-operator repo into its own repo. Everything specific to subctl moves to the subctl repo; the operator repo depends on the subctl repo, and should only contain things that the operator framework generates/needs. This is an alternative to the previous proposal of moving bundles out of the operator repo.

Pros:

  1. subctl and the operator require different sets of dependencies, which might conflict while improving subctl/operator. For example, subctl can't currently be go install-ed because the operator dependencies don't allow it.
  2. Operator framework upgrades would be simpler.
  3. It's logically correct to separate the operator and a library, as they are two different deliverables and provide different ways to install Submariner.
  4. Testing criteria are also different: we don't need subctl-related GHAs for the operator and vice versa.
  5. The development approach is also different: one is a CLI and the other is a framework.
  6. submariner-addon integration might be simplified.

Cons:

  1. A new repo to maintain
  2. Might add a new phase to the automated release process

Acceptance Criteria

All of the subctl-related code/GHAs are out of the operator repo and in the new subctl repo.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Depends on #114

  • Create a new subctl repository and copy all the contents from submariner-operator repository.

On subctl repository

On submariner-operator repository

Depends on submariner-io/submariner-operator#2030

On get.submariner.io repository

On submariner-website repository

Release process

External dependencies

  • Add subctl dependency in submariner-addon.

E2E checks should abort on the first failure

Currently, E2E tests run the full test suite, even when tests are failing. The failures often show up as time-outs, and a failing test suite thus takes a very long time; but in most cases, we’re only interested in whether the tests pass or fail, not the details of the failure (after the first one). This is particularly visible with flaky tests and causes significant delays in the release process.

We should add an option to abort on the first failure, and enable it when run from GHAs.

This would involve:

  • Providing some way of controlling whether make e2e runs all tests or aborts on the first failure (at the Ginkgo level); this should default to running all tests, to preserve existing behaviour for local invocations.
  • Setting up all the E2E GHAs to run make e2e with the setting to abort on first failure; this might involve changes in all projects.

This shouldn’t affect the GHA test matrices at all — we still want to run E2E in all desired configurations, the only change is that within each run, the first failure should abort that run.
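
One possible shape for the opt-in, as a sketch assuming the Ginkgo v1 API used by the e2e suites at the time: default to running everything, and let the GHA workflows set an environment variable (E2E_FAIL_FAST is an invented name) to abort on the first failure.

package e2e

import (
	"os"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/config"
	"github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	// Equivalent to passing -ginkgo.failFast; only enabled when CI asks for it,
	// so a local `make e2e` keeps running the whole suite.
	if os.Getenv("E2E_FAIL_FAST") == "true" {
		config.GinkgoConfig.FailFast = true
	}

	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "Submariner E2E suite")
}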

Automated unjoin/uninstall

Epic Description

Add support to automatically "unjoin" a previously connected cluster by deleting/uninstalling all the Submariner-related components and removing dataplane rules programmed on the route agent and gateway nodes.

This is a common request from users. A manual process is documented here.

The uninstall process should be able to be initiated via a subctl command and also via the Submariner ACM add-on.

For dataplane cleanup, this blog describes how to run a one-time task on each node using a DaemonSet with an init container to run the task. We can create cleanup DaemonSets corresponding to each component (route agent, gateway engine etc). We can reuse the same images and pass an UNINSTALL environment variable to initiate the cleanup.

The tricky part is how to invoke the cleanup DaemonSets. The logical place would be from the Operator controller. We could put a finalizer on the Submariner resource and, when it's being deleted, delete the normal component DaemonSets and create the cleanup DaemonSets. After the cleanup DaemonSets have completed (ie are ready), remove the finalizer and let deletion of the Submariner resource proceed.

If cleanup fails for a component, the DaemonSet init container pod would return a non-zero exit code causing K8s to restart the pod. The Operator controller can wait a certain amount of time, like 2 min, for a cleanup DaemonSet to be ready. After that time it’s reasonable to assume the error isn’t transient so the Operator controller can remove the finalizer and let deletion of the Submariner resource proceed. It could leave the cleanup DaemonSet so the user can inspect the DaemonSet status to see what failed. Or it could delete the DaemonSet and we could create a K8s Event to report the failure, either from the Operator or the failing pod. In subctl, after Submariner is deleted, we can report the failure to the user. Any cleanup DaemonSet that failed will eventually get deleted along with the namespace regardless.
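
A rough Go sketch of the cleanup DaemonSet idea, using client-go types: reuse the component's image as an init container with an UNINSTALL environment variable, and keep a pause container as the main container so the pod reports Ready once cleanup has finished. The image names and the UNINSTALL convention are taken from the description above as assumptions, not the final implementation.

package uninstall

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCleanupDaemonSet builds a one-shot cleanup DaemonSet for a component such as
// the route agent: the init container runs the component image with UNINSTALL set,
// and the pause container keeps the pod Ready so the operator can tell when every
// node has finished.
func newCleanupDaemonSet(namespace, component, image string) *appsv1.DaemonSet {
	labels := map[string]string{"app": component + "-cleanup"}

	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: component + "-cleanup", Namespace: namespace},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					HostNetwork: true,
					InitContainers: []corev1.Container{{
						Name:  "cleanup",
						Image: image, // same image as the normal component
						Env:   []corev1.EnvVar{{Name: "UNINSTALL", Value: "true"}},
					}},
					// Keeps the pod Ready after the init container finishes, so the
					// operator can tell that cleanup completed on every node.
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "registry.k8s.io/pause:3.9",
					}},
				},
			},
		},
	}
}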

The one potential downside with this approach is if the Operator pod is deleted first and thus the controller is never triggered. This would also leave the finalizer and prevent the Submariner resource from being deleted. But we can document the proper sequence.

Acceptance Criteria

Invoking the uninstall process should remove all Submariner-related components and artifacts in the cluster.

Definition of Done (Checklist)

  • Code complete
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using subctl
  • Deployed using ACM/OCM addon
  • Documentation added
  • Release notes added

Work Items

Related Issues

Add support for finer-grained cluster connectivity options

Submariner currently assumes full mesh by default, and provisions VPN connections between all participating clusters. For some use cases, it makes sense to offer finer-grained connectivity options. This could also improve overall scalability and reduce the state maintained in those cases where full mesh is not required.

Add support for Kube-OVN.

Epic Description

OVN-Kubernetes is not so convenient to use. Please consider supporting Kube-OVN.

Acceptance Criteria

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: Simultaneous multiple cable support

Epic Description

Submariner currently offers three cable drivers: VXLAN, IPsec, and WireGuard. However, only one of these cable drivers
is enabled across all the clusters in a ClusterSet; that is, all clusters that join a broker must use the same cable
driver. The goal of this proposal is to extend the current model to support the selection of different cable types
between connected clusters, for example an IPsec cable between public and private clouds but a VXLAN cable between
the clusters in the same cloud.

More details can be found here: #110

Acceptance Criteria

Multiple cables can be deployed with Submariner Gateways

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Rework kubeconfig/kubecontext handling in subctl

The various subctl commands support a mixture of KUBECONFIG environment variables, --kubeconfig flags, --kubecontext flags, kubeconfig files etc.

We should make this consistent, ideally using clientcmd's tools to handle kubeconfig. skitt/submariner-operator#23 has an old attempt at this, using https://pkg.go.dev/k8s.io/client-go/tools/clientcmd#BindOverrideFlags to provide access to all the features supported by clientcmd (see kubectl options).

For commands using a single context, all the following should work, with kubeconfigs containing one or more clusters and/or contexts (this uses a kube prefix when setting up clientcmd):

# Use the default context in the kubeconfig
KUBECONFIG=/path/to/kubeconfig subctl foo
subctl foo --kubeconfig /path/to/kubeconfig
subctl --kubeconfig /path/to/kubeconfig foo

# Use the specified context from the kubeconfig
KUBECONFIG=/path/to/kubeconfig subctl foo --kubecontext bar
subctl foo --kubeconfig /path/to/kubeconfig --kubecontext bar
subctl --kubeconfig /path/to/kubeconfig --kubecontext bar foo

For commands using multiple contexts, the prefixes should make the purpose of each context clear, using appropriate prefixes when setting up clientcmd:

KUBECONFIG=/path/to/kubeconfig subctl benchmark latency --fromcontext foo --tocontext bar
subctl benchmark latency --kubeconfig /path/to/kubeconfig --fromcontext foo --tocontext bar

--kubeconfig is preserved as-is, since users can use a single file combining all their contexts. However such commands should also support separate kubeconfigs:

subctl benchmark latency --fromconfig /path/to/fromconfig --toconfig /path/to/toconfig

clientcmd doesn’t handle --kubeconfig itself, that needs to be handled by subctl.
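
A sketch of the clientcmd wiring described above, assuming cobra/pflag (which subctl already uses): BindOverrideFlags with the "kube" prefix registers --kubecontext and friends, while --kubeconfig is added by hand since clientcmd doesn't own that flag.

package cmd

import (
	"github.com/spf13/pflag"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// addKubeFlags registers --kubeconfig plus the clientcmd override flags
// (--kubecontext and friends) on a command's flag set and returns a function that
// resolves the REST config once the flags have been parsed.
func addKubeFlags(flags *pflag.FlagSet) func() (*rest.Config, error) {
	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules() // honours KUBECONFIG
	overrides := &clientcmd.ConfigOverrides{}

	flags.StringVar(&loadingRules.ExplicitPath, "kubeconfig", "", "path to the kubeconfig file")
	clientcmd.BindOverrideFlags(overrides, flags, clientcmd.RecommendedConfigOverrideFlags("kube"))

	return func() (*rest.Config, error) {
		return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides).ClientConfig()
	}
}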

This enhancement should also include a test suite covering all the different possibilities. As far as possible, the 0.12 options should keep their existing behaviour, and be marked for deprecation; they will be removed after 0.13.

This would subsume submariner-io/subctl#49

[EPIC] Debuggability and Troubleshooting support in OCM


name: Debuggability and Troubleshooting
about: Improving debuggability and troubleshooting for Submariner deployments in general, OCM in particular
labels: epic


Epic Description

The current documentation on Submariner.io (Troubleshooting docs) covers the basic manual steps and how to run subctl diagnose and gather. What more needs to be done? The aim is to make it easy for support teams to troubleshoot and resolve problems, or to capture relevant information if they cannot resolve them.

Acceptance Criteria

TBD

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Add support for full node mesh

Full node mesh can be supported with the WireGuard VPN implementation.
Full node mesh adds the following:

  1. Maximum theoretical bandwidth exploitation.
  2. Best possible resilience to node failure: with n active gateways (#34), a node failure causes a bandwidth drop of 1/n, whereas with full node mesh a node failure causes no bandwidth drop, since the only workload impacted is on the failed node itself.

Support connecting multiple OCP clusters on RHOS using Submariner

Epic Description

Support connecting multiple OCP clusters on RHOS using Submariner.
Why is this needed:
The aim of this proposal is to list and track steps needed to fill any gaps and allow full support for such deployments.

Acceptance Criteria

It should be possible to prepare and deploy Submariner on RHOS using subctl and cloud-prepare.

Definition of Done (Checklist)

  • Code complete
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Make broker objects expirable (auto-cleanup)

See [1] for reference.

Broker Endpoint and Cluster objects could expire if their timestamp is not updated within T, while Submariner
would update the objects every T/2.

Alternatively, instead of auto-cleanup, this could be an admin-triggered action on the broker, to avoid,
for example, downtime for a cluster that temporarily can't contact the broker.

Cleanup of local copies should not happen if the broker is unreachable, to avoid dataplane downtime
while the broker is down.

[1] https://github.com/submariner-io/submariner/blob/devel/pkg/cableengine/syncer/syncer.go#L142
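
A minimal sketch of the T / T/2 idea: the owning cluster refreshes a timestamp on its broker objects every T/2, and a broker-side sweep (or an admin-triggered action) removes objects not refreshed within T. The annotation name and mechanism below are placeholders, not an existing Submariner convention.

package expiry

import (
	"strconv"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical annotation used to carry the last refresh time.
const lastSeenAnnotation = "submariner.io/last-seen"

// Refresh is called every T/2 by the owning cluster for each of its broker objects.
func Refresh(obj metav1.Object, now time.Time) {
	ann := obj.GetAnnotations()
	if ann == nil {
		ann = map[string]string{}
	}
	ann[lastSeenAnnotation] = strconv.FormatInt(now.Unix(), 10)
	obj.SetAnnotations(ann)
}

// Expired reports whether an object hasn't been refreshed within T; the broker-side
// sweep would delete such objects, and local copies are only cleaned up while the
// broker is reachable.
func Expired(obj metav1.Object, t time.Duration, now time.Time) bool {
	raw, ok := obj.GetAnnotations()[lastSeenAnnotation]
	if !ok {
		return false
	}
	seen, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		return false
	}
	return now.Sub(time.Unix(seen, 0)) > t
}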

Improve integration with submariner-addon

Epic Description

The Submariner addon should use the Submariner operator to manage Submariner deployments on managed clusters. This allows reuse of the operator, automated management using the operator, and a single point of development effort.

Acceptance Criteria

The addon uses the operator to deploy the broker and Submariner on the hub and spokes.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Epic: Support OCP with submariner on Azure

What would you like to be added:
The proposal is to fully support connecting multiple OCP clusters on Azure using Submariner.

Why is this needed:
The aim of this proposal is to list and track the steps needed to fill any gaps and allow full support for such deployments.

Work Items

[EPIC] Add option to gather submariner logs from OCM UI

Epic Description

subctl gather is a CLI tool used to gather logs and other data from Submariner deployments, but no such tooling is available from the OCM UI. Add an option in the OCM UI to gather logs.

Refer #88

Acceptance Criteria

TBD

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using ACM/OCM addon
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

  • Addon - Run subctl gather Job on the cluster
  • UI - Add option to run gather from
  • UI - Add option to collect logs from a gather run
  • UI - Provide option to run gather after a failed diagnostics run

Epic: New Cable Driver protocols

Epic Description

The goal of this Epic is to enable a number of new features for Submariner Cable Drivers:

  1. Additional cable driver encapsulation protocols, including IP-in-IP and GRE. Enabling these encapsulation protocols could allow for performance improvements over the current configuration.

  2. Enabling IPsec transport mode, coupled with VXLAN, GRE or IP-in-IP.

  3. Finer-grained configuration should be possible for any of these cable drivers, for example selecting the encryption algorithm for IPsec, or selecting the UDP port for VXLAN (separately from the NAT port); see the sketch below.
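
As a sketch of the kind of knob item 3 refers to, using the vishvananda/netlink library for illustration: the VXLAN tunnel's UDP destination port becomes a caller-supplied parameter rather than a hard-coded value. Names and defaults are illustrative.

package cable

import (
	"net"

	"github.com/vishvananda/netlink"
)

// createVxlanLink creates a VXLAN interface whose UDP destination port is a
// caller-supplied parameter, separate from any NAT-traversal port.
func createVxlanLink(name string, vni, udpPort int, localIP, remoteGroup net.IP) error {
	link := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: name},
		VxlanId:   vni,
		SrcAddr:   localIP,
		Group:     remoteGroup,
		Port:      udpPort, // e.g. 4789 by default, overridable per deployment
	}

	return netlink.LinkAdd(link)
}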

Acceptance Criteria

New Cable Drivers are functional and deployable with the Submariner Gateways.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

Submariner support for load-balancer mode

Epic Description

Submariner load-balancer support

Currently, load-balancer mode is an experimental feature in Submariner and is not well tested across various public and on-prem clusters. This epic is to understand the load-balancer implementations across the AWS and GCP clouds to start with, find any gaps, and understand the implications for HA failover times.

Acceptance Criteria

Identify and, where applicable, fix the gaps in order to make load-balancer mode a supported feature.

  • Code complete
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

The following are the individual work items for this feature.

Epic: Restructure/refactor submariner-operator repo

Epic Description

This will ensure that the submariner-operator repo structure follows Go standards and is optimised.

Acceptance Criteria

The submariner-operator code is as optimised and modular as possible.

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

[EPIC] Support for VPC Peering

What would you like to be added:
The proposal is to add support for VPC Peering between clusters deployed with the same cloud provider.

Why is this needed:
When all clusters are deployed within the same cloud provider, VPC Peering provides secure networking between them. In such a scenario, one can use the VXLAN cable driver for better performance without the IPsec overhead.
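
A hypothetical shape for the generic cloud-prepare API that the first work item below asks for; the interface and method names are invented for illustration.

package api

// VPCPeering abstracts creating and removing a peering between the VPCs (or VNets)
// hosting two clusters in the same cloud provider.
type VPCPeering interface {
	// CreatePeering peers the local cluster's VPC with the target cluster's VPC and
	// installs the routes/security-group rules needed for the VXLAN cable driver.
	CreatePeering(targetClusterID string) error

	// RemovePeering undoes CreatePeering.
	RemovePeering(targetClusterID string) error
}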

Work Items

  • Add generic API for VPC Peering (cloud-prepare)
  • Add code for VPC Peering in AWS (cloud-prepare)
  • Add code for VPC Peering in GCP (cloud-prepare)
  • Add code for VPC Peering in Azure? (cloud-prepare)
  • Add subctl command to add VPC peering (submariner-operator)
  • Add VPC Peering support to submariner-addon (submariner-addon)
  • Add support for VPC peering in ACM UI (console)
  • Add a user prompt that VPC Peering is not available if all clusters are not in the same cloud provider. (console)
  • Enable VPC Peering by default when using the VXLAN driver and all clusters are on the same provider.
  • Add a user prompt when adding a new cluster from a different provider to a clusterset already using VPC Peering, explaining that a different provider can't be added because VPC Peering is already enabled.

Epic: Support OCP with submariner on IBM Cloud

What would you like to be added:
The proposal is to fully support connecting multiple OCP clusters on IBM Cloud using Submariner.

Why is this needed:
The aim of this proposal is to list and track the steps needed to fill any gaps and allow full support for such deployments.

Work Items

  • Deploy using cloud prepare and subctl on existing OCP clusters on IBM Cloud
  • Deploy using OCM and submariner-addon on OCP clusters on AWS
  • Support and run subctl verify
  • Support and run subctl diagnose
  • Support and run subctl gather
  • Remove cluster from multicluster using subctl
  • Remove cluster from multicluster using OCM and submariner-addon
  • Uninstall submariner using subctl
  • Uninstall submariner using OCM and submariner-addon
  • UI support for troubleshooting etc. on OCM
  • Documentation for deployments and removal using cloud-prepare and subctl
  • Documentation for deployments and removal using OCM

Epic: Add support for multiple active gateways

Multiple active gateways can increase the bandwidth. Some cloud providers have limited instance-to-instance bandwidth between regions, so this feature is an enabler for throughput-sensitive workloads.

Enhancement document: #69

Epic: Debuggability improvements

Epic Description

Improve the current troubleshooting documentation on Submariner.io (Troubleshooting docs).
The aim is to make it easy for support teams to troubleshoot and resolve problems, or to capture relevant information if they cannot resolve them.

Acceptance Criteria

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

  • Documentation - Add missing options in subctl diagnose
  • Documentation - Add information on how to interpret diagnose output
  • Documentation - Add more information on different subctl gather options
  • Documentation - Add information on specific gather data to look at for different issues discovered by diagnose
  • Documentation - Add manual troubleshooting steps for anything not covered by diagnose
  • Documentation - More verbose output from gather and diagnose
  • subctl - Handle multiple kubeconfigs/contexts better, avoid gathering information from the same cluster repeatedly
  • Run subctl diagnose from a pod
  • Output from subctl diagnose should be captured in a CR
  • Add metrics to reflect subctl diagnose results
  • diagnose - Detect errors in cloud deployment
  • diagnose - Detect service discovery issues
  • diagnose - Detect cabledriver issues
  • gather - Gather logs from submariner-addon if OCM deployment
  • gather - Gather information from cloud e.g. ports opened etc.
  • CI/CD - Add subctl diagnose to CI/CD in case of any test failure
  • CI/CD - Use subctl gather in CI/CD post-mortem in case of any test failure
  • CI/CD - Identify nature of failure in CI/CD to run subset of diagnose/gather and collect minimal information required.

Epic: rebuild the Submariner operator using current best practices

Epic Description

The operator was built using an old version of the Operator SDK. This causes a number of problems:

  • our code (in particular, metrics setup) relies on an old version of the Operator framework, which is incompatible with current versions of the Kubernetes libraries
  • we’re missing out on features supported by newer versions of the SDK (such as healthchecks)

We need to upgrade our operator to use the current Operator SDK, following the migration guide.

Depends on #89

Related changes

stolostron/submariner-addon#318
submariner-io/submariner-operator#1105
submariner-io/submariner-operator#1415
submariner-io/submariner-operator#1824

Acceptance Criteria

The operator’s Operator framework dependencies are up-to-date, and all tests pass.

Definition of Done (Checklist)

  • Code complete
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Troubleshooting (gather/diagnose) added
  • Documentation added
  • Release notes added

Work Items

An operator was written from scratch following the official operator tutorial. The code was pushed to a fork as a PR. This PR was then used as a reference to make the required changes.

The operator-sdk version was upgraded to 1.23.0. The migration guide was followed to pin versions for dependencies.

  1. Depends on submariner-io/submariner-operator#2158
  2. Update versions of various dependencies used
    1. Depends on submariner-io/submariner-operator#2210
    2. Depends on submariner-io/submariner-operator#2179
  3. Depends on submariner-io/submariner-operator#2178
  4. Depends on submariner-io/submariner-operator#2211
  5. Depends on submariner-io/submariner-operator#2204
  6. Depends on submariner-io/submariner-operator#2199
  7. Depends on submariner-io/submariner-operator#2180
  8. Depends on submariner-io/submariner-operator#2230
  9. Update bundle in OperatorHub (https://github.com/k8s-operatorhub/community-operators)
  10. and here? https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators

[EPIC]: Connection status in OCM UI

Epic Description

Currently, the OCM UI doesn't show connection status in its own UI; it only shows up as part of the ManagedClusterAddOn status. It is not very intuitive for the user to know which cluster connections are in connecting or error states. With the topology view available in OCM 2.6, it should be used to display multicluster connection information to the user.

Refer #88

Acceptance Criteria

Definition of Done (Checklist)

  • Code complete
  • Relevant metrics added
  • The acceptance criteria met
  • Unit/e2e test added & pass
  • CI jobs pass
  • Deployed using cloud-prepare+subctl
  • Deployed using ACM/OCM addon
  • Deploy using Helm
  • Deployed on supported platforms (e.g. kind, OCP on AWS, OCP on GCP)
  • Run subctl verify, diagnose and gather
  • Uninstall
  • Documentation added
  • Release notes added

Work Items

Dependencies

This requires the topology view to be available in the OCM UI. It also requires some design discussions with the UI team on how to model the ConnectionStatus CRD so it can be used by the UI in a scalable way.
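
As input to that design discussion, a hypothetical sketch of what a UI-consumable ConnectionStatus CRD could look like; every type and field name here is invented, not an existing API.

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type ConnectionStatus struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Status ConnectionStatusStatus `json:"status,omitempty"`
}

type ConnectionStatusStatus struct {
	// Connections summarises one managed cluster's gateway connections so the OCM
	// topology view can render them without parsing every ManagedClusterAddOn.
	Connections []ClusterConnection `json:"connections,omitempty"`
}

type ClusterConnection struct {
	RemoteCluster string `json:"remoteCluster"`
	// Status is one of "connected", "connecting" or "error".
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}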

Epic: CNCF Incubation/Graduation, CII Silver/Gold

Epic Description

Prepare to apply for CNCF Incubation. Use CNCF Graduation and CII Silver/Gold as guidance for exceeding Incubation minimums and preparing for Graduating in less than two years from Incubating (per requirements).

Acceptance Criteria

CNCF Sandbox

CNCF Incubation:

CNCF Graduation:

CII Silver:

  • The project MUST achieve a passing level badge.
  • The information on how to contribute MUST include the requirements for acceptable contributions (e.g., a reference to any required coding standard).
  • The project SHOULD have a legal mechanism where all developers of non-trivial amounts of project software assert that they are legally authorized to make these contributions. The most common and easily-implemented approach for doing this is by using a Developer Certificate of Origin (DCO), where users add "signed-off-by" in their commits and the project links to the DCO website. However, this MAY be implemented as a Contributor License Agreement (CLA), or other legal mechanism.
  • The project MUST clearly define and document its project governance model (the way it makes decisions, including key roles).
  • The project MUST adopt a code of conduct and post it in a standard location.
  • The project MUST clearly define and publicly document the key roles in the project and their responsibilities, including any tasks those roles must perform. It MUST be clear who has which role(s), though this might not be documented in the same way.
  • The project MUST be able to continue with minimal interruption if any one person dies, is incapacitated, or is otherwise unable or unwilling to continue support of the project. In particular, the project MUST be able to create and close issues, accept proposed changes, and release versions of software, within a week of confirmation of the loss of support from any one individual. This MAY be done by ensuring someone else has any necessary keys, passwords, and legal rights to continue the project. Individuals who run a FLOSS project MAY do this by providing keys in a lockbox and a will providing any needed legal rights (e.g., for DNS names).
  • The project SHOULD have a "bus factor" of 2 or more.
    • submariner-io/submariner#1623
    • Document using Submariner Twitter via PRs.
    • Update Submariner's Twitter password, shared the new one with all/only with Owners and docs committers, and document this practice.
  • The project MUST have a documented roadmap that describes what the project intends to do and not do for at least the next year.
  • The project MUST include documentation of the architecture (aka high-level design) of the software produced by the project.
  • The project MUST document what the user can and cannot expect in terms of security from the software produced by the project (its "security requirements").
  • The project MUST provide a "quick start" guide for new users to help them quickly do something with the software.
  • The project MUST make an effort to keep the documentation consistent with the current version of the project results (including software produced by the project). Any known documentation defects making it inconsistent MUST be fixed. If the documentation is generally current, but erroneously includes some older information that is no longer true, just treat that as a defect, then track and fix as usual.
    • Audit website and per-repo docs for anything out-of-date
  • The project repository front page and/or website MUST identify and hyperlink to any achievements, including this best practices badge, within 48 hours of public recognition that the achievement has been attained.
  • The project (both project sites and project results) SHOULD follow accessibility best practices so that persons with disabilities can still participate in the project and use the project results where it is reasonable to do so.
  • The software produced by the project SHOULD be internationalized to enable easy localization for the target audience's culture, region, or language. If internationalization (i18n) does not apply (e.g., the software doesn't generate text intended for end-users and doesn't sort human-readable text), select "not applicable" (N/A).
  • If the project sites (website, repository, and download URLs) store passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). If the project sites do not store passwords for this purpose, select "not applicable"
    • NA
  • The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade).
  • The project MUST use an issue tracker for tracking individual issues.
  • The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable".
    • NA
  • The project MUST have a documented process for responding to vulnerability reports.
  • The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it.
  • The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s).
  • Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own.
  • The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used).
  • The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories.
  • The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result.
  • The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention.
  • The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A).
  • The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention.
  • The project MUST list external dependencies in a computer-processable way.
  • Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable.
  • The project MUST either: make it easy to identify and update reused externally-maintained components; or use the standard components provided by the system or programming language. Then, if a vulnerability is found in a reused component, it will be easy to update that component.
  • The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative).
  • An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure.
  • The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months.
  • The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language.
  • The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite
  • The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality.
  • Projects MUST be maximally strict with warnings in the software produced by the project, where practical.
  • The project MUST implement secure design principles (from "know_secure_design"), where applicable.
  • The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH).
    • The defaults are specified by the tools we use: Libreswan for IPsec, WireGuard, Kubernetes; all of these follow industry best practices.
  • The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384 AND SHA-512) and SHA-3.
  • The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A).
  • The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A).
  • The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A).
  • The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A).
  • The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A).
  • The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public.
  • It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases.
  • The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an allowlist), and reject invalid inputs, if there are any restrictions on the data at all.
  • Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities.
  • The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered.
  • The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language.
  • If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A).
    • NA

CII Gold

  • The project MUST achieve a silver level badge.
  • The project MUST have a "bus factor" of 2 or more.
  • The project MUST have at least two unassociated significant contributors.
    • Add non-Red Hat significant contributors
  • The project MUST include a copyright statement in each source file, identifying the copyright holder (e.g., the [project name] contributors).
  • The project MUST include a license statement in each source file. This MAY be done by including the following inside a comment near the beginning of each file: SPDX-License-Identifier: [SPDX license expression for project].
  • The project's source repository MUST use a common distributed version control software (e.g., git or mercurial).
    • git
  • The project MUST clearly identify small tasks that can be performed by new or casual contributors.
  • The project MUST require two-factor authentication (2FA) for developers for changing a central repository or accessing sensitive data (such as private vulnerability reports). This 2FA mechanism MAY use mechanisms without cryptographic mechanisms such as SMS, though that is not recommended.
  • The project's two-factor authentication (2FA) SHOULD use cryptographic mechanisms to prevent impersonation. Short Message Service (SMS) based 2FA, by itself, does NOT meet this criterion, since it is not encrypted.
  • The project MUST document its code review requirements, including how code review is conducted, what must be checked, and what is required to be acceptable.
  • The project MUST have at least 50% of all proposed modifications reviewed before release by a person other than the author, to determine if it is a worthwhile modification and free of known issues which would argue against its inclusion
  • The project MUST have a reproducible build. https://reproducible-builds.org/
  • A test suite MUST be invocable in a standard way for that language.
  • The project MUST implement continuous integration, where new or changed code is frequently integrated into a central code repository and automated tests are run on the result.
  • The project MUST have FLOSS automated test suite(s) that provide at least 90% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language.
  • The project MUST have FLOSS automated test suite(s) that provide at least 80% branch coverage if there is at least one FLOSS tool that can measure this criterion in the selected language.
  • The software produced by the project MUST support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 MUST be disabled by default, and only enabled if the user specifically configures it.
  • The software produced by the project MUST, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A).
  • The project website, repository (if accessible via the web), and download site (if separate) MUST include key hardening headers with nonpermissive values.
  • The project MUST have performed a security review within the last 5 years. This review MUST consider the security requirements and security boundary.
  • Hardening mechanisms MUST be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities.
  • The project MUST apply at least one dynamic analysis tool to any proposed major production release of the software produced by the project before its release.
  • The project SHOULD include many run-time assertions in the software it produces and check those assertions during dynamic analysis.

Support different cable drivers/settings inside the same supercluster

What would you like to be added:

Support for different cable drivers inside the same supercluster.

Why is this needed:

Once we start supporting cable drivers with meaningfully different features (sorry if I’m insulting any existing cable driver), e.g. submariner-io/submariner#674, various scenarios involve mixing cable drivers. For example, a supercluster could contain three clusters, one of which should only be connected to using encrypted connections, but two of which may be connected to using unencrypted connections (because they’re connected by a private link). So we’d have three connections, one unencrypted and two encrypted, presumably using different cable drivers.
