
starship's Introduction

Starship

Universal interchain development environment in k8s. The vision of this project is to have a single, easy-to-use developer environment with full testing support for multichain use cases.

Installation

In order to get started with Starship, one needs to install the following.

Install

Install the test utilities starshipjs and the CLI @starship-ci/cli:

yarn add --dev starshipjs @starship-ci/cli

Recommended Usage 📘

Stay tuned for a create-cosmos-app boilerplate! For now, this is the most recommended setup. Consider everything else after this section "advanced setup".

This will allow you to run yarn starship to start, set up, deploy, stop, and run other Starship commands:

Deploying Starship 🚀

yarn starship start

Running End-to-End Tests 🧪

# test
yarn starship:test

# watch 
yarn starship:watch

Teardown 🛠️

# stop ports and delete deployment
yarn starship stop

Related

Checkout these related projects:

  • @cosmology/telescope Your Frontend Companion for Building with TypeScript with Cosmos SDK Modules.
  • @cosmwasm/ts-codegen Convert your CosmWasm smart contracts into dev-friendly TypeScript classes.
  • chain-registry Everything from token symbols, logos, and IBC denominations for all assets you want to support in your application.
  • cosmos-kit Experience the convenience of connecting with a variety of web3 wallets through a single, streamlined interface.
  • create-cosmos-app Set up a modern Cosmos app by running one command.
  • interchain-ui The Interchain Design System, empowering developers with a flexible, easy-to-use UI kit.
  • starship Unified Testing and Development for the Interchain.

Credits

🛠 Built by Cosmology - if you like our tools, please consider delegating to our validator ⚛️

Disclaimer

AS DESCRIBED IN THE LICENSES, THE SOFTWARE IS PROVIDED "AS IS", AT YOUR OWN RISK, AND WITHOUT WARRANTIES OF ANY KIND.

No developer or entity involved in creating this software will be liable for any claims or damages whatsoever associated with your use, inability to use, or your interaction with other users of the code, including any direct, indirect, incidental, special, exemplary, punitive or consequential damages, or loss of profits, cryptocurrencies, tokens, or anything else of value.

starship's People

Contributors

0xpatrickdev, anmol1696, dependabot[bot], inkvi, kayanski, liujun93, omahs, pyramation, whalelephant, zetazzz

starship's Issues

feature: collector service to store various files

Overview

Currently there is no way to stop the container and take an export or a snapshot. Ideally, before the pods die or on demand, there should be hooks on the containers that back up various information about the validator into a central file service called collector.

Local debugging and testing can then query the collector to fetch a validator's:

  • Exported state at a height
  • Snapshot of the data/ directory before the system stops (ideally we could be taking a backup of the whole ~/.<chain-bin> directory)

Usecase

  • When a pod is dying, the preStop lifecycle hook of the pod should take a snapshot and store it (see the sketch below)
  • When a user triggers an export command (somehow), the chain is stopped, a snapshot is taken, and the chain is restarted (this will be tricky)
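
A minimal sketch of what such a preStop hook or on-demand export could run, assuming a hypothetical collector HTTP endpoint at http://collector:8070/upload and a Cosmos SDK style chain binary (names, ports, and paths are placeholders):

# snapshot.sh - hypothetical preStop / on-demand snapshot script
set -euo pipefail
CHAIN_BIN=${CHAIN_BIN:-osmosisd}
CHAIN_HOME=${CHAIN_HOME:-$HOME/.osmosisd}
COLLECTOR_URL=${COLLECTOR_URL:-http://collector:8070}

# export the app state at the current height (chain must be stopped first)
$CHAIN_BIN export --home "$CHAIN_HOME" > /tmp/exported_state.json

# snapshot the data/ directory (ideally the whole home dir)
tar -czf /tmp/data_snapshot.tar.gz -C "$CHAIN_HOME" data

# push both artifacts to the collector file service (endpoint is an assumption)
curl -s -X POST -F "file=@/tmp/exported_state.json" "$COLLECTOR_URL/upload"
curl -s -X POST -F "file=@/tmp/data_snapshot.tar.gz" "$COLLECTOR_URL/upload"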

Alternatives

Ideally this information would be stored in blob storage instead of a file service. But since shuttle is not tied to a cloud provider, it might make sense to have an inbuilt service. For testing purposes this should not matter too much, although for production use cases the collector needs to interact with blob storage.

feature: Allow chain code to be built in place

Overview

Currently we are building images when we specify the upgrade: key for a chain.

Proposal

Allow in the config file to build the code from source in place.

chains:
- name: core-1
  type: persistencecore
  numValidators: 2
  build:
    enabled: true
    source: <tag or commit>

Although we have some alternatives mentioned below, I think the feature is important enough to be a standalone feature for Starship.

Current alternatives

Custom Docker Image

We already support custom images for the chain (as long as the docker image has a bunch of prerequisites).

One can clone the Dockerfile from the docker/chains/ dir, push it to Docker Hub, and mention that in the config:

chains:
- name: core-1
  type: persistencecore
  image: <custom image>
  numValidators: 2

Upgrade with only genesis

If one does not perform the chain upgrade via a software upgrade proposal, then they can run the system with:

- name: core-1
  type: persistencecore
  numValidators: 3
  upgrade:
    enabled: true
    type: build
    genesis: v7.0.0

This will build the tag from source and prepare cosmovisor.

improvement: better resource usage for chains

Overview

To be able to run longer-running devnets, we need to use less storage (the main factor contributing to resource overflow).

Currently

The chains are run with the default configuration, saving everything to the Pods' storage; this storage is limited by the disk space available per node.

Proposal

For longer-running chains, we need the capability to use fewer resources, maybe pruning every so many blocks to keep the storage in check.
We need to look into the configs for each chain to use less storage.
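
For Cosmos SDK based chains this could translate to pruning flags on the start command; a sketch (flag support varies per chain and SDK version):

# run nodes with aggressive pruning to keep storage in check
osmosisd start \
  --pruning custom \
  --pruning-keep-recent 100 \
  --pruning-interval 10 \
  --min-retain-blocks 100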

Wacky idea

Maybe we perform a pruning action per chain node based on the available storage. Not sure how this can be done, if at all.

tests: Add e2e tests for starship itself

Overview

Now with more and more moving parts, it makes sense to start having more e2e tests for various parts of Starship itself.

Proposal

Given a config.yaml file and a topology

Registry Service Tests

  • Tests to make sure the registry service is working as expected. Simple assertions:
    • /chains, /chain_ids, /ibc endpoints
  • Tests need to be in Golang

Infra setup tests

  • Have simple tests to make sure blocks are being produced for the various networks in the config file (see the sketch below)
  • Make sure the various chains and relayers are all up and running
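
A sketch of such a block-production assertion, assuming the chain RPC is reachable locally (e.g. via port-forward) on port 26657:

# assert that the latest block height keeps increasing
RPC=${RPC:-http://localhost:26657}
h1=$(curl -s "$RPC/status" | jq -r '.result.sync_info.latest_block_height')
sleep 10
h2=$(curl -s "$RPC/status" | jq -r '.result.sync_info.latest_block_height')
[ "$h2" -gt "$h1" ] && echo "blocks are being produced" || { echo "chain appears stuck"; exit 1; }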

Exposer service Tests

  • Since we run the exposer as a sidecar for each of the chains, we can have tests to make sure the exposer is running as expected

All tests can be e2e, and based on the config.yaml file.

Bonus

Large scale testing of the system running in a k8s cluster somewhere.

Note

Have a look at helm testing, maybe we could use something like that.

Support for mac

Overview

Currently all the docker images, scripts, and helm charts only work for Linux users out of the box.

We need support for multiple developer environments, especially macOS.

Proposal

  • Add CI to create all docker images for darwin as well
  • Run the scripts and make targets on macOS as well
  • CI to test examples on various hardware

Note: Initially it would be best to support developers using Starship on macOS (development of Starship itself can be ported later).

examples: cw-orchestrator using starship

Overview

Currently AbstractSDK/cw-orchestrator is using a fork of interchain tests, with a mix of Golang and Rust coupled with Docker.
We want to create an example in which we are able to perform the various e2e tests with Starship.

Proposal

In order to achieve this, we have to do the following:

On Starship: Rust Client

Create a Rust client similar to the Go client. This client will be responsible for the initial connection to the Starship infra, driven by a config file.

The client will do the following:

  • Have information about all the chains running, relayers, etc.
  • Initial keys/mnemonics with the balances
  • Handy util functions to: send tokens, IBC tokens, contract deployment (?), contract store (?), etc.

On cw-orchestrator: Starship directory

First we create a starship dir in the root, with the following:

  • config file (specifying the infra we want to setup)
  • Makefile with handy commands to spin up starship infra, and to connect to it
  • Test cases, which will import the rust client created above, and write idempotent test cases

Bonus:

  • Add CI/CD using starship-action to run the whole testing setup and test cases (maybe in a k8s cluster)

Workflow

Once we have the components the tests would look something like:

# Spin up the infra
make start

# Run the tests, should be able to run multiple times
cargo test

# Stop everything
make stop

The setup is such that there is some initial setup time, so it would make sense to write the test cases themselves such that they can be run multiple times against the same infra setup.

Note: In CI/CD we spin up the system from scratch.

feature: local chain registry

Overview

In order for external clients to connect to a devnet chain, we need some information about the chain that is part of the chain-registry. The current idea is to create a single entrypoint into shuttle devnets, exposed universally at a given port.

With this, interaction would be something like: all clients (js, go, bash) query this API to fetch essential information, and then configure their own clients to be able to talk to any of the chains on shuttle.

Currently

Information about the chain is presumed. It can either be fetched

  1. From values.yaml from github
  2. Hardcode it as config: https://github.com/Anmol1696/shuttle/blob/main/examples/chain/src/network.ts

Both these approaches are non-ideal and messy.

Proposal

Create a simple local chain-registry API which can take in the chain schema for the various running chains as config files, and expose an API like cosmos-directory.

As a user of shuttle, one just needs to specify:

# cat values.yaml
chainRegistry:
  enabled: true
  image: <image>

API

The API endpoints should be able to read the config files and provide endpoints to retrieve chain-registry data.
We create the config files as configmaps (so chain-registry data is generated based on the values.yaml file and default values).

Note:
Additionally, we can use this API to also expose the mnemonics used for testing at the /mnemonics endpoint.

Endpoints:

  • /chains, /chains/{chain}
  • /assets
  • /mnemonics
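
Example queries against these endpoints; the local port is an assumption based on a typical port-forward setup:

# registry endpoints, assuming the registry is port-forwarded to localhost:8081
curl -s http://localhost:8081/chains | jq .
curl -s http://localhost:8081/chains/osmosis-1 | jq .
curl -s http://localhost:8081/mnemonics | jq .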

Future work

Ability to register IBC channels and connections. When the relayers are spinning up and new channels and connections are created, then we need to be able to either

  1. Relayers actively register IBC channels and connections with the chain registry API
  2. Or the API scrapes all the chains to fill in the IBC information by itself

Either way, down the line this API would need to maintain state in order to handle POST requests.

epic: `starship` cli

Overview

Currently we use make commands and some handy scripts like scripts/port-forward.sh to interact with the system.

Proposal

Create a go cli starship with the following commands:

  • start: Read a config yaml file, and spin up the k8s infra on the cluster; optionally create the kind cluster for local deployments
  • stop: Delete all the helm deployments
  • connect: Analogous to port forwarding
  • setup: Initial setup and install of all the binaries required

The cli should be able to work on mac or linux initially (need tests to make sure this works)

Working

For the first iteration of the cli, we will just use exec commands from inside the binary. This means we just need to port the current commands from the various make and bash commands into the go cli.

APIs

Once the cli is working, these should be the commands for a developer to get started:

go get github.com/cosmology-tech/starship/cli

starship start --config config.yaml

Future work

  • Automated resource allocation based on available resources
  • Native support of multiple helm and others

Sub Tasks

  • version in config: #202
  • port-forward check: #145
  • list: list all helm charts installed
  • start: add ability to use --set, --set-file into the config file
  • start: be able to parse the config file, run schema for verification before install
  • status: be able to run kubectl get pods and helm status, to figure out the actual status of the helm charts

Docs: create on-boarding docs and scripts for new users

Overview

Anyone who is new to Starship faces a lot of issues around onboarding and is currently told manually what to do. We need better documentation on user onboarding to get new users running with Starship quickly.

Implementation details

  • List out all the dependencies clearly
  • Clean setup scripts to create a new environment quickly and easily
  • Working examples to get something running

Feature: Upgrade testing

Overview

One of the main benefits of shuttle is that we are emulating real-world network systems. One of the applications of this project is to use it for upgrade testing.

Usecase

The main users of this feature will be the chains themselves, and this feature will be used for various scenarios that involve upgrades and upgrade testing.

  • When a new version of the chain is in RC, this setup can be used to test that the upgrade process is smooth.
  • External upgrade triggers will allow users to write complex test cases: create some IBC clients and upload some contracts, upgrade the chain, and see the behaviour of the upgrade on existing or new features.
  • Down the line, chains will be able to run tests and simulations against the actual state from mainnet with emulated IBC clients

Design

In order to achieve this, we will make upgrades a first-class citizen of the setup. There are a couple of approaches we can take:

Init container setup upgrades

Use various init containers to set up and build statefulsets with the following setup:

# cat values.yaml
---
chains:
- name: osmosis-1
  type: osmosis
  image: "<go alpine builder image>"
  upgrade:
    enabled: true
    genesis: <commit or tag of pre-upgrade>
    upgrades:
    - name: v3
      version: <commit or tag of the upgrade version>
---

For this interface to work, we would need a mapping of build steps for each of the chains so we can build the binaries in init containers. The init containers in this case will:

  • Build the binary for the genesis version of the chain
  • Build the binaries for all upgrade versions of the chain
  • Get the cosmovisor binary and set up the ~/.<chain>/cosmovisor directories properly, with genesis and all upgrades (see the sketch below)
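
A sketch of the standard cosmovisor layout the init containers would have to produce (chain, binary, and upgrade names are placeholders):

# layout prepared by the init containers, consumed by cosmovisor in the validator container
~/.<chain>/cosmovisor/
  genesis/
    bin/<chain-binary>        # built from the genesis (pre-upgrade) version
  upgrades/
    v3/
      bin/<chain-binary>      # built from the upgrade version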

The init containers will just build the binaries and data directories and pass them to the main validator containers to start.

Note: The actual upgrade will be triggered externally; this setup will just create the chain in a state from which it can be upgraded.

Custom docker images

We can additionally create docker images with the various binaries as well as the upgrades.

Benefits

  • Already possible: since we already allow custom images for chains, one can pass an image with the various upgrade binaries and cosmovisor preset.
  • Startup time for the cluster will be small, since the docker images are preloaded

Drawbacks

  • Users have to interact with docker, upload images and then be able to use the image, and also integrate with helm values
  • The upgrade name is specific to the chain, and the values.yaml will be hard-coded to the docker image

spike: Docker-compose for local setup

Overview

As a user, one does not care where and how the system is being run as long as it is consistent, locally, on the CI and in the cluster.

Running k8s locally is still a bit of a hassle; with docker-desktop it becomes a little easier.

Proposal

What if we allow users to define a simple config file, but then still be able to run it with docker-compose? That would solve the local issues.
We can restrict the number of nodes to be spun up as well as not allow multi-validator setups, but basically give users a development environment of their choice.

Design

We would have to redo a lot of the stuff that k8s provides for free, mostly around init-containers, templating with helm, and more, to get this to work. But this would be a good feature to have for local testing.

User inputs

Users still just need to provide a simple config file as input (the same configuration), and we can create a docker-compose template for running it, first via the Makefile, then via the cli.

Support for multiple backends would be really big for Starship, especially for the initial hurdle of onboarding users to Starship.

NOTE: We would need to restrict the features for docker-compose based on the limitations of the framework itself.

improvements: Connect GH action for e2e tests to a remote k8s cluster

Overview

Currently the resources available on the GH action runner are very limited: 2 CPUs and 2 GB RAM. This is not enough for any kind of testing.

Proposal

Create a kubernetes cluster in AWS, and connect it to the GH action repo via KUBECONFIG and GH secrets. Then we will be able to run the e2e tests remotely without any worry about the resources on the runners.

Feature: Add faucet capabilities to devnet

Overview

values.yaml has a faucet key, but we need to add the templates/faucet.yaml template using the faucet from Confio.

# cat values.yaml
faucet:
  enabled: false
  image: <docker image for faucet>

Proposal

A faucet needs to be created for each of the chains defined in the chains key in values.yaml, e.g. as a deployment with a single pod for each chain, with minimal resources.

Mnemonics are stored in charts/devnet/configs/keys.json. We can use the first one: jq -r ".genesis[0].mnemonic" keys.json.

We can use the initContainers for the setup.
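
A sketch of the faucet startup, assuming the CosmJS/Confio faucet and its mnemonic environment variable (the variable name, start command, and RPC endpoint are assumptions):

# extract the first genesis mnemonic and hand it to the faucet
MNEMONIC=$(jq -r ".genesis[0].mnemonic" charts/devnet/configs/keys.json)
export FAUCET_MNEMONIC="$MNEMONIC"
# point the faucet at the chain's RPC endpoint
cosmos-faucet start "http://localhost:26657"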

tests: Build test docker images before running e2e tests

Overview

Currently tests are run against fixed latest docker images of various chains and services.

Proposal

In the e2e-test workflow, before the tests, build docker images and push them with prefix e2e-{image-name} and then use that in the tests.

This will slow down the e2e tests a little as well (they already take 10 mins), but it would be the best way to test e2e.

improvements: simplify `resources` in config.yaml file

Overview

Currently when defining resources for any of the directives in the config file, one has to specify

resources:
  limits:
    cpu: 1
    memory: 2Gi
  requests:
    cpu: 1
    memory: 2Gi

Proposal

Most users don't need to specify limits and requests separately, since for testing purposes they can be the same. In the config file a user should only specify cpu and memory directly, and we have a helper function to convert them into actual k8s resources:

resources:
  cpu: 1
  memory: 2Gi

onboarding: Script to perform the helm chart start

Overview

Currently we are using make commands with helm directly to start the cluster. With this we are not able to provide accurate information to the user, with hints and suggestions.

Proposal

Create a scripts/start.sh to which users provide a config.yaml file to spin up the helm charts. It is supposed to do the following:

  1. Check access to kubernetes with kubectl get pods
  2. Make sure all dependencies are installed (only check)
  3. Warn users about the resources that are going to be consumed (nice to have feature)
  4. Install helm chart
  5. If resources are in pending state then we should warn the users about the resource utilization (nice to have feature)
  6. Perform port-forward

This should make the onboarding much smoother.
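
A minimal sketch of such a scripts/start.sh covering the required steps above (the chart reference and port-forward script invocation are placeholders):

#!/bin/bash
set -euo pipefail
CONFIG=${1:-config.yaml}

# 1. check access to kubernetes
kubectl get pods > /dev/null || { echo "cannot reach a kubernetes cluster"; exit 1; }

# 2. make sure all dependencies are installed (check only)
for dep in kubectl helm jq; do
  command -v "$dep" > /dev/null || { echo "missing dependency: $dep"; exit 1; }
done

# 4. install the helm chart with the user-provided config
helm install starship <chart> -f "$CONFIG" --wait --timeout 20m

# 6. perform port-forward
bash scripts/port-forward.sh --config "$CONFIG"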

docs: Add contributor.md file

The Readme is outdated for development, especially after the Bazel integration.

A contributor getting started with the repo should be able to get a good understanding of the codebase and the practices we use.

feature: Add `/keys` endpoints for chains

Overview

For the clients connecting to the infra, we need keys and addresses that have the initial supply.
The exposer service already exposes this via an http endpoint. It might make sense to expose it via the registry service as well, so there is a single point of entry.

Will Touch

  • Registry service
  • Protos

Proposal

The registry service creates an endpoint, /chains/{chain}/keys, to fetch the keys from the exposer. It will be a simple proxy to the exposer keys endpoint, so that we don't have to expose all the exposers as well.

improvements: Ability to patch genesis based on config input

Overview

Currently the genesis.json for all the chains is created via a script in init-containers. There is a need to override certain genesis params based on user requirements or the initialization of the cluster.

Example: The default fee for creating a pool in Osmosis is 1000 OSMO, set in genesis.json as:

"app_state": {
...
    "gamm": {
      "pools": [],
      "next_pool_number": "1",
      "params": {
        "pool_creation_fee": [
          {
            "denom": "uosmo",
            "amount": "1000000000"
          }
        ]
      }
    },
...

Proposal

We have a couple of solutions or approaches for this:

  • Use jq to perform patchMerge, similar to how kustomize works in k8s yaml. The user interface via config file would look something like
chains:
  - name: osmosis-1
    type: osmosis
    numValidators: 2
    overrides:
      genesis: |
        {
            "app_state": {
              "gamm": {
                "params": {
                  "pool_creation_fee": [
                    {
                      "denom": "uosmo",
                      "amount": "100000"
                    }
                  ]
                }
              }
            }
        }

or provide a json patch file as input

chains:
  - name: osmosis-1
    type: osmosis
    numValidators: 2
    overrides:
      genesis: custom-patchGenesis.json
  • Provide users a way to supply a custom genesis script to run before or after the default scripts. We can also have the option to skip the default genesis creation script completely:
chains:
  - name: osmosis-1
    type: osmosis
    numValidators: 2
    overrides:
      genesis: 
        post: custom-genesis-script.sh

Note: custom scripts need to be based on the variables provided by the original script.
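
A sketch of the jq patchMerge from the first approach (jq's * operator does a recursive object merge); the genesis path and CHAIN_HOME variable are placeholders:

# deep-merge the user-provided patch into the generated genesis.json
jq -s '.[0] * .[1]' "$CHAIN_HOME/config/genesis.json" custom-patchGenesis.json > /tmp/genesis.json
mv /tmp/genesis.json "$CHAIN_HOME/config/genesis.json"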

improvements: Spinup infra within TestSetup

Overview

Currently we first spin up the kubernetes environment with the make start command, which installs the helm chart on the kubernetes cluster and then runs port-forwarding.

This is kind of annoying when running tests; the user has to do the following:

make start
# Wait time is long, anywhere from 5 mins to 20 mins...

# Run if helm times out, else the `start` command will also perform port-forward
make port-forward

# Only after the above two are done can the tests be run; tests could be golang or npm
go test .
# or
npm e2e:test

Proposal

Not sure how to fix this. Since the infra setup takes time, it should be set up once per testing batch. If set up per test, it would take too long.

Might want to come up with better ways to handle this.

improvements: Remote Bazel builds

Overview

Currently the CI is slow due to the bazel cache being built every time.

Proposal

Remote bazel caching in a prod k8s cluster could be a good answer for this, and speed up the whole CI.

improvements: Relayers to specify paths to open between chains

Overview

Currently, on relayer start, we create the transfer port between the specified chains. We should have the ability to create more custom connections between the chains.

Proposal

Have a paths directive in the relayer directive to make the connections explicit.

relayers:
  - name: hermes-osmo-juno
    type: hermes
    replicas: 1
    chains:
      - osmosis-1
      - juno-1
    paths:
      - name: transfer-port 
        connection: connection-0
        channel: channel-0
        port: transfer
      - name: channel-connection
        connection: connection-0
        channel: channel-1

Questions

  • How will the registry service handle multiple connections? The current /ibc endpoints follow the IBC schema for chain-registry and only have the transfer type of IBC ports.

chore: convert all services to proto

Overview

A gRPC service is able to host both REST and gRPC endpoints.
If we convert all the services to gRPC, we will have clean interfaces defined by their protos, which we can store in the root dir.

Cons

Currently all the API services have been built using a custom go-chi template and structure. A gRPC service would need modifications to the template.

Pros

What a service does will become explicit with protos. A gRPC + REST service will be a better paradigm than just a REST service.

improvements: `port-forward` should check status of deployments before running

Overview

Currently, when a user runs make port-forward, no checks are performed. So even if the infra is still being spun up, it will just perform the port-forward.

Proposal

Check whether the deployment is healthy before performing the port-forwarding, and warn the users to wait.
For the check we should make sure:

  • Pods that we are performing port-forwarding for are up
  • Only fail if the pods belong to a must-be-up list; otherwise perform the port-forward, or warn accordingly (see the sketch below)
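
A sketch of such a check before port-forwarding; the label selector and service name are placeholders:

# wait for the pods we are about to port-forward to become Ready
kubectl wait --for=condition=Ready pods -l app.kubernetes.io/name=<chain> --timeout=300s \
  || { echo "pods are not ready yet, please wait and retry"; exit 1; }
kubectl port-forward svc/<chain>-genesis 26657:26657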

bug: slow chain initialization

Overview

During chain initialization, adding keys to the keyring seems to take the most time.
When constrained on resources, this slowdown is even more noticeable.

Proposal

  • Try out other key backends instead of test, maybe memory or keyring itself
  • Somehow preload the keyring test dir beforehand; fetch a zip file with the keyring dir for the chain (we can preload 100s of addresses)
  • Try to run multiple commands in parallel (does not work with keyring-backend), maybe re-engineer the key addition step manually

feature: Test framework clients

Overview

Starship requires simple clients that interact with the Starship infra running in k8s and wrap a couple of util functionalities, making it easier to write tests.

Proposal

The supported languages are

  • go
  • rust
  • js

I was thinking of structuring packages like:

clients/
  go/
    chain.go
    config.go
    ...
  js/
    chain.js
    config.js
    ...
  rust/
    ...

All packages need to be importable in other projects, so we need to figure out the release cycle as well.

improvement: default overrides part of the chain definition itself

Overview

For anyone using a custom image for running the chain, we need the ability to define defaultChain values in the chain block itself.

Proposal

Add support for type: custom in chain, where one can define all the remaining default values directly into the chain block

chains:
- name: test-chain-1
  type: custom
  image: <image for the chain>
  home: <path to home dir>
  binary: <name of binary>
  ...

Without changing the default values, one can directly inject a custom chain into the config file and have helm take care of the rest.

bug: chain grpc port not exposed properly

Chain grpc ports are not enabled and also not exposed properly to other resources in the cluster.

How to reproduce?

Run with the newer hermes relayer version 1.5.1, which uses the grpc endpoints.

features: `relayers` Expose running arbitrary scripts or commands on nodes

Overview

It is currently not possible to create custom channels between the chains on the fly.

Proposal

/command endpoint on the exposer to run custom scripts or commands directly on the relayers.

Alternatives

Currently we can still do this with kubectl exec. We can run this directly from a test case, although this will make it a bit messy.
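
For example, with the hermes relayer this looks roughly like the following today (the pod name is a placeholder, and exact hermes flags depend on the relayer version):

# open an extra channel on an existing connection from inside the relayer pod
kubectl exec -it <hermes-relayer-pod> -- \
  hermes create channel --a-chain osmosis-1 --a-connection connection-0 \
  --a-port transfer --b-port transfer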

improvements: Registry service fully compatible with chain registry

This feature will complete Starship, allowing it to be integrated with a testing framework.

Overview

Currently the registry service is being built as a local Chain Registry.

Currently

In order for an app to connect with the Starship infra, one needs to hardcode the ports and ids to connect with, which is neither scalable nor ideal. All the information needed to connect with production RPC endpoints is available in the chain registry repo; this is currently missing in Starship.

Proposal

Add ability in Starship Registry service to fetch the information needed for the client to connect to the chains.
This will be done via:

  • Initial config file loaded into the Registry service as a configmap which will have the static information
  • Registry service will fetch and store the following information in an in-memory cache:
    • IBC connection and channel information
    • Node id and peers field in the

For the above we need a way to fetch the data from running nodes via the lens client. We need to create an internal model in the Registry service to hold all the data in cache (no need for a database).

Note: Cache for the service can be filled on Registry service startup, even by k8s hooks.

feature: Add support for ICQ relayer

Overview

With more and more ICQ- and ICA-based applications in Cosmos, we need the ability to run all kinds of ICQ relayers as well.

Proposal

relayers/
  icq/

Maybe we need to look into the types of ICQ relayers we support as well.

feature: ability to tombstone validators

Overview

We need the ability to tombstone a validator on shuttle

Design

The exposer will expose the node's priv_validator_key.json. It will support both GET and POST to set a priv_validator_key.json.
With this we will be able to drive the process from a script outside the system (see the sketch below).
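
A sketch of such an external script, assuming the exposer serves the key on a /priv_validator_key-style endpoint (the endpoint path, service names, and port are assumptions). Copying validator A's key onto validator B makes both sign with the same key, which double-signs and tombstones the validator:

# fetch the priv_validator_key.json from validator A's exposer
curl -s http://<validator-a>-exposer:8081/priv_validator_key > /tmp/priv_validator_key.json

# push it to validator B's exposer so two nodes sign with the same key (double-sign => tombstone)
curl -s -X POST -H "Content-Type: application/json" \
  --data @/tmp/priv_validator_key.json \
  http://<validator-b>-exposer:8081/priv_validator_key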

improvements: custom scripts

Overview

Currently all the scripts run inside the chains and relayers are the defaults, available in the scripts/ dir in the chart.

Proposal

Make this customizable by pushing the scripts into values.yaml:

chains:
  - name: gaia-1
    type: cosmos
    numValidators: 2
    build:
      enabled: true
      source: v1.0.1
      script: scripts/build_chain.sh
    scripts:
      genesis: scripts/setup_genesis.sh
      config: scripts/setup_config.sh
      validator: scripts/create_validator

The bash scripts should be built on top of the current scripts, since those have the correct path information as well as the environment variables exposed to them.

feature: add support for `cometMock` directive for chains

Overview

Since we are running full chains, it is sometimes hard to exercise unhappy-path tests that force the CometBFT layer to behave a certain way. It would be a good feature to run CometMock, which mocks CometBFT.

Proposal

Users should be able to easily specify a directive for the chain to use CometMock by enabling it. Config should look something like:

chains:
- name: chain-1
  type: cosmos
  numValidators: 4
  cometMock:
    enabled: true
    image: <custom image of cometMock>

Design

  1. Run an additional deployment/statefulset with the chain
  2. Run all the validators with an external tendermint flag
  3. Spin up CometMock with address of all the validators (might have to wait for all validators to spin up first)

CC: @p-offtermatt

bug: slow explorer start

Overview

The explorer is currently run with yarn serve at runtime. This takes some time to spin up and consumes a lot of memory and CPU. The explorer pod can not be smaller than 1 CPU and 2 GB RAM, which is a lot.

Proposal

  • Figure out a way to pass configs to the explorer at runtime, instead of building it every time
  • Fork ping-pub and make the configs overridable with environment variables

bug: `0.1.34` does not seem to work with Mac M1

Overview

Currently release 0.1.34 is not working on Mac M1, as it tries to pull images with --platform linux/arm64.

Proposal

Build all images for linux/arm64 as well. All images would need to support arm docker images in that case (see the sketch below).
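
Multi-arch images can be produced with docker buildx; a sketch for one chain image (registry, image name, and tag are placeholders):

# build and push a multi-platform image so Mac M1 (arm64) can pull it
docker buildx build --platform linux/amd64,linux/arm64 \
  -t <registry>/<chain-image>:<tag> --push .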

improvements: Faster upgrade setup

Overview

Currently it is taking a lot of time to spin up, around 5 mins.

For the upgrade testing case, most of the time is taken by:

  • Building each of the chain binaries in docker containers
  • Adding keys, which takes up a lot of time for each spinup
  • Too many init containers

Proposal

  • Reduce the number of keys, or figure out a way to pre-add them
  • Use prebuilt binaries or docker images to fetch the various binaries

Goal

The goal is to bring down the initial startup time from 5 mins to under a minute.

examples: Add cosmjs setup scripts to run external setup

Overview

Given that the cluster is up and running, the current expectation is that the state setup of the whole ecosystem is done externally. We need to create an examples/ dir with the following example scripts:

  • Create osmo/IBC token pools
  • Run upgrade, pre-upgrade, and post-upgrade

These are similar to test setups

Design

A cosmjs script will read the values.yaml file to figure out the clients it needs to create based on the chains key. In the testing setup, the script can be separated out in a way where we can run the js scripts individually as well, in a composable manner.

Functional requirements

  • These will be setup scripts; they may or may not have asserts
  • Scripts should currently be run against the cluster, which is already running
  • Maybe we have values.yaml itself also generated inside the cosmjs scripts

Note: We need a good structure for the scripts, and cosmjs examples themselves that we can use for standardization.

improvements: script to perform port-forward

Overview

Currently we use make port-forward-all as the command to route traffic locally. This can be done better.
From a user perspective, we need an easy way to get users working with the infra setup.

Proposal

There are a couple of ways we can solve the problem. Here is what the user flow will look like:

  1. User creates a config.yaml file for spinning up helm
  2. User runs make install or helm install against a k8s cluster
  3. User port-forwards all the desired ports locally (this can be done in multiple ways):
    a. Create a scripts/port-forward.sh which will take the config.yaml file as input. The user then just runs the script.
    b. Have some python script to replace spinup and port forwarding
    c. Create a starshipd in golang.

Short term

Currently to improve the workflow out of the box, it might make sense to go with a bash script.
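
A sketch of what such a bash script could do, assuming one genesis service per chain and the RPC/REST ports used elsewhere in this document (the registry service name and port are assumptions):

# port-forward the genesis node of each chain plus the registry service
kubectl port-forward svc/osmosis-1-genesis 26657:26657 &
kubectl port-forward svc/osmosis-1-genesis 1317:1317 &
kubectl port-forward svc/registry 8081:8081 &
wait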

Extra note

The registry service needs to be able to adjust based on whether port forwarding is used or not. Down the line, when we are running scripts and tests inside the cluster, we would need to figure out more.

Ingress for all chain nodes

Overview

Currently we are using kubectl port-forward to get access to the RPC ports and expose the chains. We rely on this both locally as well as for a remote cluster (k8s on DigitalOcean).

Proposal

In order to unify everyone's experience down the line, we don't expect people to use kubectl at all. The proposal here is to use an ingress to expose the RPC ports for the various chains. We can use the following ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - host: "rpc.osmosis-1.starship.one"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: osmosis-1-genesis
            port:
              number: 26657
  - host: "rest.osmosis-1.starship.one"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: osmosis-1-genesis
            port:
              number: 1317

There are a couple of considerations to be made here:

  • Do we want to expose all the validator nodes independently with ingress, rpc.<node>.<chain>.shuttle.one?
  • New service per validator node?
  • TLS needs to terminate on the ingress for the remote setup

For the local setup, we need to map the hosts to the IPs; for remote, we need to create Cloudflare rules with TLS termination secrets.

Feature: Ability to run scripts against the system as a k8s job

Overview

Currently the only way to run scripts that change or update the state of the system is via a script run locally, with ports forwarded locally.

NOTE: Nice to have feature

We need a way to run a script against the system from inside the k8s cluster itself. This will allow us to get the whole cluster to a desired state. Examples of scripts that we would run via this process:

  • Create initial pools on osmosis with IBC tokens from another chain
  • Upload predefined contracts to the wasm-based chain
  • Transfer tokens to some wallets, or IBC tokens from one chain to another

Design

We run all the jobs defined in values.yaml:

# cat values.yaml
---
jobs:
- name: "Osmosis pool creation script"
  type: "cosmjs" # one of cosmjs or bash type of job
  image: "<custom image>" # default image will be based on the type, can be specificly specified
  scripts: # This will be mounted as a volume to the k8s cluster via configmaps 
  - examples/osmosis-setup/
  run: |  # run will be set as command for the job
    npm install
    npm build
    npm run test
- name: "Move atoms from gaia to persistenceCore"
  type: "bash"
  image: "<custom image>"
  scripts:
  - examples/persistence-gaia-setup.sh
  run: |
    bash ./examples/persistence-gaia-setup.sh
---

The benefit of this is that we can make state setup part of cluster startup. This is a nice feature to have, since we are already able to have external setup scripts that can run against the cluster and move the state to a new stage.

Down the line

These are scripts that set the state of the system; over time this can become a workflow type of system as well, creating DAGs.

chore: cleanup

Overview

It has not been long since we started developing this codebase, and we already have a very messy codebase and yaml files. We need to use templating properly and figure out a way to have cleaner templates for the genesis.yaml and validator.yaml files.

improvements: Add ability to specify multiple denoms in values.yaml

Overview

Currently we are using coins: 100000000000000uosmo,100000000000000uion and denom in the default values.yaml file in defaultChains.

The side effect of this is that the registry service is currently only able to show a single denom in the assetlist.

Proposal

Ability to specify multiple denoms as per the chain in the default values (based on the actual chain-registry)

defaultChains:
  osmosis:
    image: anmol1696/osmosis:latest
    home: /root/.osmosisd
    binary: osmosisd
    prefix: osmo
    denom: uosmo
    denoms:
      - name: osmo
        # "<exp 0 denom>, <exp 6 denom>"
        units: "uosmo, osmo"
      - name: ion
        units: "uion, ion"
    coins: 100000000000000uosmo,100000000000000uion
    hdPath: m/44'/118'/0'/0/0
    coinType: 118
    repo: https://github.com/osmosis-labs/osmosis

Add tests in the e2e package to make sure this always runs.

GH actions

Overview

Ability to create a shuttle cluster in GH actions. It requires the following functionality:

  • Chains and relayer topology exposed within the action step

Interface

Ideally one would want the following GH actions interface

jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v2
      with:
        fetch-depth: 0
    - name: Setup shuttle
      uses: Anmol1696/shuttle-devnet-actions@v1
      with:
        # Either we provide the values file or the values itself
        values:
          chains:
          - name: osmosis-1
            type: osmosis
          - name: wasmd
            type: wasmd
          relayers:
            ...
        values-files: "<path to custom values.yaml file>"
        port-forward: true
        # Specify the following flags and options if you want to deploy the chain to remote k8s cluster
        remote: "true"
        kubeconfig: "${{ secrets.KUBECONFIG }}"
        # Optional
        namespace: "<namespace name>"

The GH actions should perform the following:

  • Create kind cluster
  • Install helm
  • Install kubectl
  • If remote is specified, then set up kubeconfig and connect to the remote cluster
  • Setup helm repos with shuttle devnet
  • Perform helm install and wait for the pods to be deployed
  • Perform kubectl port forward to local

feature: ability to connect to the node cluster from debugger

Overview

While debugging, it is really hard to debug on the k8s cluster. It would be really helpful if we are able to connect to a running cluster, and start the node in debug mode.

Design

Might just require a script or a list of steps to perform. It would be really good if we could have custom plugins for various debuggers in GoLand and VS Code.
