
layotto's Introduction

Layotto (L8): To be the next layer of OSI layer 7


View the Chinese version

Layotto (/leɪˈɒtəʊ/) is an application runtime written in Go that provides distributed capabilities for applications, such as state management, configuration management, and event pub/sub, to simplify application development.

Layotto is built on the open-source data plane MOSN. In addition to providing distributed building blocks, Layotto can also serve as the data plane of a Service Mesh and control traffic.

Motivation

Layotto aims to combine Multi-Runtime with Service Mesh into one sidecar. No matter which product you use as the Service Mesh data plane (e.g. MOSN, Envoy, or any other), you can always attach Layotto to it and add Multi-Runtime capabilities without deploying new sidecars.

For example, by adding Runtime capabilities to MOSN, a Layotto process can both serve as the data plane of Istio and provide various Runtime APIs (such as the Configuration API, Pub/Sub API, etc.)

In addition, we were surprised to find that a sidecar can do much more than that. We are also trying to make Layotto the runtime container of FaaS (Function as a Service) with the magic power of WebAssembly.

Features

Project Architecture

As shown in the architecture diagram below, Layotto uses the open-source MOSN as its base, providing network-layer management capabilities alongside distributed capabilities. Business logic can interact with Layotto directly through a lightweight SDK without worrying about the specific back-end infrastructure.

Layotto provides SDKs in various languages. The SDK interacts with Layotto through grpc. Application developers only need to specify their infrastructure type through the configuration file provided by Layotto. No code changes are required, which greatly improves the portability of applications.
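To illustrate the configuration-driven portability described above, here is a hypothetical config fragment; the field names only mirror the general shape of the quickstart configs shipped with Layotto, and the real schema is defined by the files under `configs/`:

```json
{
  "state": {
    "state_demo": {
      "type": "redis",
      "metadata": {
        "redisHost": "127.0.0.1:6380",
        "redisPassword": ""
      }
    }
  }
}
```

Switching the back-end store would then be a matter of changing `"type"` (e.g. to another supported component) and its `metadata`, with no application code changes.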

Architecture

Quickstarts

Get started with Layotto

You can try the quickstart demos below to get started with Layotto. In addition, you can experience the online laboratory.

API

| API | Quick start | Description |
| --- | --- | --- |
| State | demo | Write/query data using the Key/Value model |
| Pub/Sub | demo | Publish/subscribe messages through various message queues |
| Service Invoke | demo | Call services through MOSN (another Istio data plane) |
| Config | demo | Write/query/subscribe configuration through various config centers |
| Lock | demo | Distributed lock API |
| Sequencer | demo | Generate distributed, unique, incremental IDs |
| File | TODO | File API implementation |
| Binding | TODO | Transparent data transmission API |

Service Mesh

| Feature | Quick start | Description |
| --- | --- | --- |
| Istio | demo | Serve as the data plane of Istio |

Extendability

| Feature | Quick start | Description |
| --- | --- | --- |
| API plugin | demo | You can add your own API! |

Actuator

| Feature | Quick start | Description |
| --- | --- | --- |
| Health Check | demo | Query the health state of the app and of components in Layotto |
| Metadata Query | demo | Query metadata in Layotto/app |

Traffic Control

| Feature | Quick start | Description |
| --- | --- | --- |
| TCP Copy | demo | Dump the TCP traffic received by Layotto into the local file system |
| Flow Control | demo | Limit access to the APIs provided by Layotto |

Write your business logic using WASM

| Feature | Quick start | Description |
| --- | --- | --- |
| Go (TinyGo) | demo | Compile code written in TinyGo to *.wasm and run it in Layotto |
| Rust | demo | Compile code written in Rust to *.wasm and run it in Layotto |
| AssemblyScript | demo | Compile code written in AssemblyScript to *.wasm and run it in Layotto |

As a FaaS(Serverless) runtime (Layotto + WebAssembly + k8s)

| Feature | Quick start | Description |
| --- | --- | --- |
| Go (TinyGo) | demo | Compile code written in TinyGo to *.wasm, run it in Layotto, scheduled by k8s |
| Rust | demo | Compile code written in Rust to *.wasm, run it in Layotto, scheduled by k8s |
| AssemblyScript | demo | Compile code written in AssemblyScript to *.wasm, run it in Layotto, scheduled by k8s |

Presentations

Landscapes

  

Layotto enriches the CNCF CLOUD NATIVE Landscape.

Community

Contact Us

| Platform | Link |
| --- | --- |
| 💬 DingTalk (preferred) | Search the group number 31912621 or scan the QR code below |

How to contribute

Where to start? Check "Community tasks" list!

As a programming enthusiast, have you ever felt that you want to participate in the development of an open source project but don't know where to start? To help everyone get involved in open source projects, our community regularly publishes community tasks to help everyone learn by doing!

Document Contribution Guide

Component Development Guide

Layotto Github Workflows

Layotto Commands Guide

Layotto Contributor Guide

Contributors

Thank y'all!

Design Documents

Actuator Design Doc

Configuration API with Apollo

Pubsub API and Compatibility with Dapr Component

RPC Design Doc

Distributed Lock API Design

FaaS Design

FAQ

What's the difference from Dapr?

Dapr is an excellent Runtime product, but it lacks Service Mesh capabilities, which are necessary for a Runtime product used in production environments. So we hope to combine Runtime with Service Mesh into one sidecar to meet more complex production requirements.

layotto's People

Contributors

15669072513, akkw, alilestera, bxiiiiii, canaan-wang, cyb0225, duan-0916, dzdx, gimmecyy, kevinten10, leemos-xx, lxpwing, michaeldesteven, moonshining, nanjingboy, nejisama, nobodyiam, rayowang, seeflood, stulzq, tianjipeng, wenxuwan, wlwilliamx, xiaoxiang10086, xunzhuo, zach030, zhaoxuanqing, zhenjunma, zlber, zu1k


layotto's Issues

Can Layotto's invoker channel call xframe directly at the start of the OnReceive function in MOSN's downstream.go?

The current mechanism for Layotto to call MOSN pushes Layotto's data into MOSN's layer-4 network layer through a net.Pipe.
The benefit is tight integration with MOSN, but it also brings some problems, the main one being extra overhead, including encoding/decoding and networking.
I think it could be designed like this:
1. Develop a direct channel inside Layotto's invoker channel.
2. When this direct channel is initialized, instead of establishing a MOSN layer-4 connection, directly initialize a corresponding proxy and a downstream stream.
3. During Do invoke, build Layotto's request data into an xframe and write it via onReceiveStream.OnReceive(OnReceiveCtx, xreq.GetHeader(), xreq.GetData(), nil).
4. In the wait-for-response channel, obtain the callback response through the Write function of the fake connection used by the proxy built in step 2. Going further, when the upstream calls endStream, if the downstream is not a connection object, the xframe could even be written across directly instead of being encoded and written into an iobuffer.

Community tasks (newcomer task plan)

Community tasks

As a programming enthusiast, have you ever felt that you want to participate in the development of an open source project but don't know where to start?
To help everyone get involved in open source projects, the MOSN community regularly publishes community tasks to help everyone learn by doing!

Task List

Tasks of different difficulty levels have been published:

Easy

Layotto runtime

  • Reduce the risk of panic. Add recover to all code that creates new goroutines. See #197
  • Fail fast: make Layotto kill itself when an error occurs during startup. See #275

Bug fix

  • Fix a bug in the actuator: apollo health status is "INIT" even if no apollo component is configured. see #462
  • Fix the unit test errors in rpc-related code.
    #550

Write a Feature

  • Use in-memory component to implement the Actuator API. See #463

Comment related

  • Add comments to exported functions/methods/interfaces/variables (such as the RPC/distributed lock modules), see #112
    Considering that the workload of adding comments to all the modules is relatively large, we split it into multiple tasks:
  • add comments to RPC related code
  • add comments to pubsub related code
  • add comments to lock related code. assigned
  • add comments to state API related code
  • add comments to actuator related code
  • add comments to tcpdump related code
  • add comments to WASM related code
  • add comments to flow-control related code
  • Add comments for file API related code in the proto file.

SDK related

  • add metadata field for go, js and dotnet sdk. See #320
  • add distributed lock API for java sdk
  • add distributed sequencer API for java sdk
  • add nodejs sdk. assigned, see #258
  • add python sdk
  • add c/c++ sdk
  • add .net sdk
    assigned. see #130
  • add more SDK APIs into the java sdk (currently there are only protobuf files, but some users asked us to add more features)
    assigned. see #153
  • compile proto files into java code as the java sdk. See #79. Already done by @MentosL

Document related

Translation

Tests related

  • Add unit tests for the actuator module. The code is under pkg/filter/stream/actuator/http

  • Understand the implementation of wasm module & add unit tests: see #105

  • add more unit tests wherever you like to make Layotto's unit test coverage higher (currently it is only 46%)

  • Add unit tests for runtime/runtime.go and grpc/api.go. These two files are the core engine code of layotto, responsible for component lifecycle management and grpc request processing respectively
    Already assigned to @tianjipeng See #138

  • Add integration test cases for a certain type of API. See #107 assigned to @seeflood

  • Understand the implementation of the rpc module & add unit tests: see #106 Already done by @tianjipeng

  • Add integration tests. see #415

Components

in-memory components

See #67 (comment)
We want to let users and SDK developers run our demos without go, docker, or back-end storage (e.g. redis) pre-installed.
To achieve this goal, we need to add in-memory components, including:

  • pubsub
  • lock
  • sequencer
  • configuration API

You can take the in-memory state store component as an example,see #327

Medium

Add support for Dapr API

We want Layotto to support both Layotto API and Dapr API. In this way, if users are worried about vendor lock-in, they can use Dapr SDK to switch between Layotto and Dapr freely.
You can refer to:
#361
#362

  • InvokeService
  • InvokeBinding
  • State
  • pubsub. See #406
  • Configuration
  • Secret
  • GetMetadata/SetMetadata
  • Shutdown

SDK

  • Improve the .net sdk. Make it have the same capabilities and APIs as the go sdk
    assigned. see #130

  • Develop python sdk
    Because we want to reduce the cost of maintaining multi-language SDKs, we want to reuse the Dapr SDKs as much as possible. Therefore, this task suggests forking the Dapr python SDK for modification (the package path of the proto interface is different, so the code compiled from proto differs and needs some changes).

  • Develop the spring-boot-layotto package. See layotto/java-sdk#8
    Let Layotto integrate with Spring Boot so that users can use annotations to register pubsub subscription callbacks

Actuator

  • Add an actuator metrics API. See #201

WASM

  • Let Layotto monitor whether a .wasm file has changed and reload it on change,
    achieving dynamic replacement of .wasm modules at runtime. see #165
  • Let WASM functions access a cache through the state API. see #192
  • Upgrade the wasm demo developed in Rust. see #255
  • Upgrade the wasm demo developed in AssemblyScript. see #256

File api implementation

  • Add components for the file API. The file API is already supported in Layotto; we need to add more components for it. #236 , #98
    • awsOss
    • local file system (files on local disk)
    • HDFS
    • MinIO
    • Implement file API components with ceph

Sequencer API related

  • Sequencer API component: choose an open source component or cloud service you like (such as zookeeper, leaf, etc.) to implement a distributed auto-increment ID generation service.

    • Etcd

    • Stand-alone redis
      assigned

    • Zookeeper
      assigned

    • Leaf

    • Mongo

    • Consul

    • snowflake algorithm (needs to avoid clock rollback problems). assigned. see #193

    • Mysql

    • PostgreSQL

    • Any other storage

  • Implement the segment caching feature of the Sequencer API. assigned to @ZLBer, see #158
    You can refer to Leaf's double-buffer optimization.
    The function to be implemented is at https://github.com/mosn/layotto/blob/main/pkg/runtime/sequencer/cache.go

Distributed Lock API

  • Choose an open source component or cloud service (such as zookeeper) you like to implement the distributed lock API. See #104
    • redis standalone
      done
    • redis cluster (with redlock or some other algorithm to make it safer)
    • zookeeper
      assigned. see #111
    • etcd
      assigned. see #128
    • Consul
      assigned. see #129
    • Mongo
      assigned. see #348
    • Cassandra
      refer to https://github.com/dekses/cassandra-lock
    • anything else, e.g. some building block provided by AWS or Aliyun

Engineering

  • Make Layotto CI more powerful. See #532 (comment)
  • Add more linters. See #599
  • Provide a Layotto Dockerfile so that users can deploy Layotto with Docker. See #178
  • Automatically generate API documents: use tooling to generate API docs from the proto files. Refer to the documents automatically generated by Etcd
  • Automatically check the pull request title, ensuring it meets the format `type(scope): subject` in order to make the commit history more readable.
    See #243
  • Automatically check that every new PR references a related issue
  • Automatically check code style with go lint in the ci/cd pipeline. For example:
    • no Chinese in the code;
    • every piece of code that starts a new goroutine must have a recover
      It doesn't matter if these particular examples can't be realized, as long as we can do valuable automated inspections

Currently we use github actions as ci/cd pipeline, and the configuration file is here
You can refer to some github actions high-quality tutorials:

https://www.ruanyifeng.com/blog/2019/09/getting-started-with-github-actions.html

http://www.ruanyifeng.com/blog/2019/12/github_actions.html

  • Automated build: automatically compile multi-platform binaries (for linux/mac), compress the binaries, and build docker images.
    You need to investigate whether to write a build script yourself using Go's cross-compilation feature, or to use an existing platform to do the build for you

Hard

Observability

  • Integrate with Skywalking, Jaeger and other systems
    Layotto currently supports tracing, and we hope to integrate with observability platforms such as Skywalking and Jaeger.
    • Skywalking. assigned #310
    • Jaeger. assigned #547
    • Zipkin
    • Some other tracing platform...

Runtime API Lab

Hard

#530

  • Redis API.

  • Kafka API.
    Alicloud SLS component can implement Kafka-like API too.

  • Design transaction message API (like RocketMQ's) for pubsub

  • Delay message API for pubsub

  • Let Layotto support Dapr's Config API (alpha version)
    We have been discussing and working with the Alibaba and Dapr communities to contribute a Config API to Dapr since March of this year, and recently it was finally merged into Dapr.
    Now it's time to make Layotto support Dapr's Config API.
    Dapr's Config API is still in alpha; it is similar to Layotto's existing Config API but lacks some fields. You can refer to Layotto's existing implementation during development.
    See Dapr's API definition

  • Let Layotto support secret API.
    Layotto's goal is to build a Runtime API standard with Dapr and other communities (promote Dapr API as an industry standard, and Layotto as an implementation of this API), so it needs to support Dapr's secret API. Therefore, this task needs to port Dapr's secret API into Layotto. For Dapr's secret management related documents, see https://docs.dapr.io/developing-applications/building-blocks/secrets/
    assigned. see #212

  • Let Layotto support the Binding API. Same as above, porting Dapr's binding API into Layotto

WASM Lab

  • Support loading multiple wasm files so that Layotto can serve as a FaaS container. See #176
  • Support dynamically loading wasm files. See #191

If you are interested, you can reply and we will assign the task to you

Kubernetes Lab

  • Deploy Layotto on Kubernetes. see #189

Istio

  • Integrate with istio 1.10, allowing Layotto's InvokeService API to reuse istio's traffic management capabilities. See #311

[BUG] set raw requestID into response

https://github.com/mosn/layotto/blob/main/components/rpc/invoker/mosn/channel/xchannel.go#L79

Here we replace the requestID inside the original rpc protocol, but when returning the response we do not restore the original requestID, so I think this is a bug.

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

To reproduce this bug: after Layotto starts, use dubbo or another rpc client to send a request (requestID = 2). You will find that the requestID in the response received by the dubbo client is 1, because the counter here starts from 1: https://github.com/mosn/layotto/blob/main/components/rpc/invoker/mosn/channel/xchannel.go#L78

Anything else we need to know?:

Add more components for distributed lock API

What would you like to be added:

Add more components for the distributed lock API.
Choose an open source component or cloud service (such as zookeeper) you like to implement the distributed lock API.
References: pull request #100

  • redis standalone
    done
  • redis cluster (with redlock). see #284
  • zookeeper
    assigned. see #111
  • etcd
    assigned. see #128
  • Consul. see #140
  • Mongo. assigned to @LXPWing
  • Cassandra
  • anything else, e.g. some building block provided by AWS


Why is this needed:

Currently we only support standalone Redis as a distributed lock store. We need more components to make it useful.

grpc stream invoke

Your question

// SubscribeConfiguration gets configuration from configuration store and subscribe the updates.
rpc SubscribeConfiguration(stream SubscribeConfigurationRequest) returns (stream SubscribeConfigurationResponse) {}

This kind of streaming call is not friendly to hot upgrades, is it?

Environment

  • Layotto Version

Logs

  • Paste the logs you see.

Add integration test cases

What would you like to be added:
Add integration test cases for a certain type of API. Choose one type of API (such as the state API or pubsub API) and add integration test cases for it.
Currently only the wasm module has integration test cases; you can refer to the wasm configuration in the Makefile.
In theory, no additional code needs to be developed; we can just run the demo programs from the Makefile.


Why is this needed:
We need integration tests!

[Proposal] Development specification for adding new APIs

I drafted a development specification for new APIs; posting it here for everyone's feedback:

Development specification for adding a new API

Thank you for supporting Layotto!

This document describes how to design and implement a new Layotto API. Layotto is written in Go; if you are not familiar with Go, check out a Go tutorial.

When developing a new API, you can refer to the code and documentation of existing APIs, which makes development much easier.

Q: Why formulate this specification?

A: We currently lack user documentation, which makes the project hard to use, for example:
img_1
img_2

The code lacks comments, so interested contributors can't understand it, for example #112

Backfilling old documentation and comments is slow; we hope features developed from now on come with both.

Q: Isn't following the specification too troublesome? Will it scare off contributors?

A: This specification only constrains "what a PR adding a new Layotto API must include" (for example, designing a new distributed auto-increment ID API). Other PRs, such as adding a new component or a new SDK, don't need to follow it. It's not that complicated; there is plenty of freedom.

TL;DR

Publish a proposal before development, and make the proposal detailed.

During development, write 4 user-facing documents:

  • Quick start
  • Usage documentation
  • Common API configuration
  • Component configuration

No design document is required, but the proto API and component API must carry detailed comments; comments serve as docs.

A PR adding a new API needs code review by two people; once a bot does automatic checks, one reviewer will be enough. Other PRs are unconstrained.

1. Publish the API proposal to the community and discuss it thoroughly

1.1. Publish a detailed proposal

1.1.1. Why should the proposal be detailed?

If the proposal is too coarse-grained, reviewers may have nothing substantial to comment on and won't discover problems.

The purpose of the review is to pool everyone's wisdom: together we analyze the shortcomings of the current design and expose problems early, avoiding rework later.

1.1.2. Contents of the proposal

The proposal should include:

  • Requirement analysis
    • Why build this API
    • Define the scope: which features are supported, which are not
  • Survey of comparable products on the market
  • grpc/http API design
  • Component API design
  • Explanation of your design

An excellent proposal example: dapr/dapr#2988

1.2. Proposal review

For simple APIs, text discussion after posting is enough;

For important or complex API designs, a community meeting can be organized for the review.

2. Development

2.1. Code specification

2.2. Test specification

  • Unit tests are required
  • A client demo is required; it can be used for demonstrations and as an integration test

2.3. Documentation specification

Principle: write documentation for users. Design documents for developers may go stale and diverge from the code over time, so they are optional; instead, explain the design by linking to the proposal issue and writing comments in the code.

2.3.1. Quick start

It should contain:

  • What: what this API does
  • What: what this quickstart does and the effect it achieves, ideally with a diagram
  • Step-by-step instructions

Positive example: the Dapr pub-sub quickstart posts a diagram before the steps to explain what is about to happen.

img

Negative example: a document that only lists steps 1-2-3-4, leaving users unable to tell what the steps are for.

2.3.2. Usage documentation

The document lives under "User Manual - API reference"; for example, see the State API at https://mosn.io/layotto/#/zh/api_reference/state/reference

Research shows Dapr has extensive usage documentation; for the State API alone there are:

https://docs.dapr.io/developing-applications/building-blocks/state-management/
https://docs.dapr.io/reference/api/state_api/

https://docs.dapr.io/operations/components/setup-state-store/

https://docs.dapr.io/reference/components-reference/supported-state-stores/

We are in the early stage of the project, so ours can be lighter.

It should contain:

What: what this API is and what problem it solves
When: which scenarios suit this API
How: how to use this API
  • API list. For example:

img_4

Listing the available interfaces saves users from digging through the proto files to figure out which APIs are relevant, and avoids provoking the reaction "this project doesn't even have API docs?!"

  • For request/response parameters: use the proto comments as the API reference.
    API docs would have to be written in both Chinese and English and could drift from the code over time, so we suggest not writing a separate API reference; instead, write the proto comments in enough detail to serve as one. For example:
// GetStateRequest is the message to get key-value states from specific state store.
message GetStateRequest {
  // Required. The name of state store.
  string store_name = 1;

  // Required. The key of the desired state
  string key = 2;

  // (optional) read consistency mode
  StateOptions.StateConsistency consistency = 3;

  // (optional) The metadata which will be sent to state store components.
  map<string, string> metadata = 4;
}

// StateOptions configures concurrency and consistency for state operations
message StateOptions {
  // Enum describing the supported concurrency for state.
  // The API server uses Optimized Concurrency Control (OCC) with ETags.
  // When an ETag is associated with a save or delete request, the store shall allow the update only if the attached ETag matches the latest ETag in the database.
  // But when the ETag is missing in the write requests, the state store shall handle the requests in the specified strategy (e.g. a last-write-wins fashion).
  enum StateConcurrency {
    CONCURRENCY_UNSPECIFIED = 0;
    // First write wins
    CONCURRENCY_FIRST_WRITE = 1;
    // Last write wins
    CONCURRENCY_LAST_WRITE = 2;
  }

  // Enum describing the supported consistency for state.
  enum StateConsistency {
    CONSISTENCY_UNSPECIFIED = 0;
    // The API server assumes data stores are eventually consistent by default. A state store should:
    //
    // - For read requests, the state store can return data from any of the replicas
    // - For write request, the state store should asynchronously replicate updates to configured quorum after acknowledging the update request.
    CONSISTENCY_EVENTUAL = 1;

    // When a strong consistency hint is attached, a state store should:
    //
    // - For read requests, the state store should return the most up-to-date data consistently across replicas.
    // - For write/delete requests, the state store should synchronously replicate updated data to the configured quorum before completing the write request.
    CONSISTENCY_STRONG = 2;
  }

  StateConcurrency concurrency = 1;
  StateConsistency consistency = 2;
}

This requires the proto comments to state clearly:

  • whether a parameter is required or optional;
  • what the field means. Explaining the literal meaning alone is not enough; explain the mechanism behind it. For consistency and concurrency above, for example, explain what guarantee the server provides when the user passes each option.

(The comments above consistency and concurrency were condensed from Dapr's documentation and pasted in, which saved writing bilingual docs.)

  • Whatever the comments can't explain clearly, explain in the documentation.
Why: why it is designed this way

If there is a design document, link to it; otherwise link to the proposal issue.

2.3.3. Document introducing the common API configuration

For example https://mosn.io/layotto/#/zh/component_specs/state/common

  • Structure of the configuration file
  • Explanation of this API's common configuration, such as keyPrefix

2.3.4. Document introducing the component configuration

For example https://mosn.io/layotto/#/zh/component_specs/state/redis

  • Description of this component's configuration items
  • How to start this component if you want to run the demo

2.4. Comment specification

Proto comments as docs

See above.

Component API comments as docs

If no bilingual design document is written, the component API comments must play the role of the design document (explaining the design to other developers). You can link to the proposal issue.

The standard for judging whether they are good: "after publishing, can a community enthusiast who wants to contribute a component figure out how to develop one just by reading the project, without asking questions in person?"

If the comments can't explain things clearly, write a design document, or flesh out the proposal issue in more detail.

Other notes

Make sure there are no Chinese comments;

Don't write meaningless comments that merely restate the method name, e.g.:

	//StopSubscribe stop subs
	StopSubscribe()

3. Submitting a pull request

3.1. PRs that do not conform to the development specification must not be merged into the trunk

3.2. Number of code reviewers

A PR adding a new API needs review by two people; once a bot performs automatic checks, this drops to one reviewer.

The number of reviewers for other pull requests is unconstrained.

[bug] Layotto can be started successfully even when ports conflict.

What happened:

Start multiple layotto with same config:

./layotto start -c ../../configs/config_apollo_health_mq.json
./layotto start -c ../../configs/config_apollo_health_mq.json
./layotto start -c ../../configs/config_apollo_health_mq.json

all of them succeed.

What you expected to happen:
Only one server should start successfully.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Add more features for java sdk

What would you like to be added:
Add more SDK APIs into the java sdk
(currently there are only protobuf files, but some users asked us to add more features to the sdk)

Why is this needed:

Failed to build layotto

What happened:

I tried to build this project to understand how it works, so I downloaded the source code and followed the quick start to run the apollo demo.

Failed to run the command below:

cd ${projectpath}/cmd/layotto
go build

Error

****:layotto ****$ go build
go: downloading github.com/dapr/components-contrib v1.2.0
go: downloading github.com/dapr/kit v0.0.1
go: downloading github.com/urfave/cli v1.22.1
go: downloading google.golang.org/grpc v1.37.0
go: downloading mosn.io/mosn v0.22.1-0.20210425073346-b6880db4669c
go: downloading mosn.io/pkg v0.0.0-20210401090620-f0e0d1a3efce
go: downloading github.com/aws/aws-sdk-go v1.27.0
go: downloading github.com/Azure/azure-event-hubs-go v1.3.1
go: downloading github.com/Azure/azure-storage-blob-go v0.8.0
go: downloading github.com/Azure/go-autorest v14.2.0+incompatible
go: downloading cloud.google.com/go v0.65.0
go: downloading google.golang.org/api v0.32.0
go: downloading github.com/cenkalti/backoff v2.2.1+incompatible
go: downloading github.com/apache/pulsar-client-go v0.1.0
go: downloading github.com/cenkalti/backoff/v4 v4.1.0
go: downloading github.com/Azure/go-autorest/autorest v0.11.12
go: downloading github.com/Azure/azure-service-bus-go v0.10.10
go: downloading github.com/nats-io/nats.go v1.10.1-0.20210330225420-a0b1f60162f8
go: downloading github.com/hazelcast/hazelcast-go-client v0.0.0-20190530123621-6cf767c2f31a
go: downloading github.com/hashicorp/golang-lru v0.5.4
go: downloading github.com/Azure/go-amqp v0.13.1
go: downloading github.com/Shopify/sarama v1.23.1
go: downloading github.com/nats-io/stan.go v0.8.3
go: downloading github.com/go-redis/redis v6.15.9+incompatible
go: downloading github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271
go: downloading cloud.google.com/go/pubsub v1.3.1
go: downloading github.com/eclipse/paho.mqtt.golang v1.3.2
go: downloading github.com/go-redis/redis/v8 v8.8.0
go: downloading github.com/aerospike/aerospike-client-go v4.5.0+incompatible
go: downloading github.com/agrea/ptr v0.0.0-20180711073057-77a518d99b7b
go: downloading github.com/Azure/azure-sdk-for-go v48.2.0+incompatible
go: downloading github.com/google/uuid v1.2.0
go: downloading github.com/a8m/documentdb v1.2.1-0.20190920062420-efdd52fe0905
go: downloading github.com/gocql/gocql v0.0.0-20191018090344-07ace3bab0f8
go: downloading cloud.google.com/go/datastore v1.1.0
go: downloading gopkg.in/couchbase/gocb.v1 v1.6.4
go: downloading github.com/hashicorp/consul/api v1.3.0
go: downloading github.com/golang/protobuf v1.5.0
go: downloading github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b
go: downloading go.mongodb.org/mongo-driver v1.1.2
go: downloading github.com/go-sql-driver/mysql v1.5.0
go: downloading github.com/jackc/pgx v3.6.2+incompatible
go: downloading github.com/dancannon/gorethink v4.0.0+incompatible
go: downloading github.com/golang/mock v1.4.4
go: downloading github.com/jackc/pgx/v4 v4.6.0
go: downloading github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73
go: downloading github.com/hashicorp/go-multierror v1.0.0
go: downloading github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da
go: downloading mosn.io/api v0.0.0-20210414070543-8a0686b03540
go: downloading github.com/valyala/fasthttp v1.26.0
go: downloading github.com/zouyx/agollo/v4 v4.0.7
go: downloading google.golang.org/protobuf v1.26.0
go: downloading github.com/cpuguy83/go-md2man v1.0.10
go: downloading github.com/cpuguy83/go-md2man/v2 v2.0.0
go: downloading github.com/sirupsen/logrus v1.8.1
go: downloading golang.org/x/net v0.0.0-20210510120150-4163338589ed
go: downloading github.com/alibaba/sentinel-golang v1.0.2
go: downloading github.com/prometheus/client_golang v1.8.0
go: downloading github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
go: downloading go.uber.org/atomic v1.7.0
go: downloading github.com/miekg/dns v1.0.14
go: downloading github.com/mosn/easygo v0.0.0-20201210062404-62796fdb3827
go: downloading golang.org/x/sys v0.0.0-20210514084401-e8d321eab015
go: downloading github.com/Azure/azure-amqp-common-go v1.1.4
go: downloading github.com/Azure/azure-pipeline-go v0.2.1
go: downloading github.com/Azure/go-autorest/autorest/date v0.3.0
go: downloading github.com/Azure/go-autorest/autorest/adal v0.9.5
go: downloading go.opencensus.io v0.22.5
go: downloading github.com/Azure/go-autorest/autorest/to v0.4.0
go: downloading github.com/jpillora/backoff v1.0.0
go: downloading github.com/mitchellh/mapstructure v1.3.3
go: downloading pack.ag/amqp v0.11.2
go: downloading google.golang.org/genproto v0.0.0-20201204160425-06b3db808446
go: downloading github.com/Azure/go-autorest/tracing v0.6.0
go: downloading github.com/Azure/azure-amqp-common-go/v3 v3.1.0
go: downloading github.com/devigned/tab v0.1.1
go: downloading nhooyr.io/websocket v1.8.6
go: downloading github.com/nats-io/nkeys v0.3.0
go: downloading github.com/nats-io/nuid v1.0.1
go: downloading github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798
go: downloading github.com/eapache/go-resiliency v1.2.0
go: downloading github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21
go: downloading github.com/eapache/queue v1.1.0
go: downloading github.com/jcmturner/gofork v1.0.0
go: downloading github.com/pierrec/lz4 v2.0.5+incompatible
go: downloading gopkg.in/jcmturner/gokrb5.v7 v7.3.0
go: downloading github.com/googleapis/gax-go/v2 v2.0.5
go: downloading golang.org/x/sync v0.0.0-20201207232520-09787c993a3a
go: downloading github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f
go: downloading go.opentelemetry.io/otel v0.19.0
go: downloading github.com/yuin/gopher-lua v0.0.0-20200603152657-dc2b0ca8b37e
go: downloading github.com/opentracing/opentracing-go v1.2.0
go: downloading gopkg.in/couchbase/gocbcore.v7 v7.1.18
go: downloading gopkg.in/couchbaselabs/gocbconnstr.v1 v1.0.4
go: downloading gopkg.in/couchbaselabs/jsonx.v1 v1.0.0
go: downloading github.com/hashicorp/go-rootcerts v1.0.0
go: downloading github.com/hashicorp/serf v0.8.2
go: downloading github.com/golang/snappy v0.0.3
go: downloading github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed
go: downloading golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a
go: downloading gopkg.in/fatih/pool.v2 v2.0.0
go: downloading gopkg.in/gorethink/gorethink.v4 v4.1.0
go: downloading github.com/jackc/pgconn v1.5.0
go: downloading github.com/jackc/pgtype v1.3.0
go: downloading github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe
go: downloading github.com/hashicorp/errwrap v1.0.0
go: downloading github.com/hashicorp/go-syslog v1.0.0
go: downloading golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5
go: downloading github.com/c2h5oh/datasize v0.0.0-20171227191756-4eba002a5eae
go: downloading github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d
go: downloading istio.io/api v0.0.0-20200227213531-891bf31f3c32
go: downloading mosn.io/proxy-wasm-go-host v0.0.0-20210312032409-2334f9cf62ec
go: downloading github.com/andybalholm/brotli v1.0.2
go: downloading github.com/klauspost/compress v1.12.2
go: downloading github.com/valyala/bytebufferpool v1.0.0
go: downloading github.com/gammazero/workerpool v1.1.2
go: downloading github.com/russross/blackfriday v2.0.0+incompatible
go: downloading github.com/russross/blackfriday/v2 v2.0.1
go: downloading github.com/ghodss/yaml v1.0.0
go: downloading go.uber.org/automaxprocs v1.3.0
go: downloading github.com/dchest/siphash v1.2.1
go: downloading github.com/prometheus/common v0.14.0
go: downloading github.com/prometheus/procfs v0.6.0
go: downloading github.com/hashicorp/go-plugin v1.0.1
go: downloading github.com/trainyao/go-maglev v0.0.0-20200611125015-4c1ae64d96a8
go: downloading github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149
go: downloading github.com/Azure/go-autorest/logger v0.2.0
go: downloading github.com/form3tech-oss/jwt-go v3.2.2+incompatible
go: downloading github.com/spaolacci/murmur3 v1.1.0
go: downloading github.com/Azure/go-autorest/autorest/validation v0.3.0
go: downloading github.com/satori/go.uuid v1.2.0
go: downloading gopkg.in/jcmturner/dnsutils.v1 v1.0.1
go: downloading github.com/hashicorp/go-uuid v1.0.1
go: downloading go.opentelemetry.io/otel/metric v0.19.0
go: downloading go.opentelemetry.io/otel/trace v0.19.0
go: downloading github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af
go: downloading github.com/jackc/pgio v1.0.0
go: downloading github.com/jackc/pgproto3 v1.1.0
go: downloading github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878
go: downloading github.com/jackc/chunkreader v1.0.0
go: downloading github.com/jackc/pgproto3/v2 v2.0.1
go: downloading github.com/jackc/pgpassfile v1.0.0
go: downloading github.com/jackc/chunkreader/v2 v2.0.1
go: downloading github.com/jackc/pgservicefile v0.0.0-20200307190119-3430c5407db8
go: downloading github.com/shirou/gopsutil v3.21.3+incompatible
go: downloading github.com/spf13/viper v1.7.1
go: downloading github.com/gammazero/deque v0.1.0
go: downloading istio.io/gogo-genproto v0.0.0-20190930162913-45029607206a
go: downloading github.com/go-stack/stack v1.8.0
go: downloading github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c
go: downloading github.com/xdg/stringprep v1.0.0
go: downloading github.com/shurcooL/sanitized_anchor_name v1.0.0
go: downloading github.com/google/cel-go v0.5.1
go: downloading github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403
go: downloading github.com/envoyproxy/protoc-gen-validate v0.1.0
go: downloading github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9
go: downloading github.com/hashicorp/go-hclog v0.14.1
go: downloading github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb
go: downloading github.com/mitchellh/go-testing-interface v1.0.0
go: downloading github.com/oklog/run v1.0.0
go: downloading gopkg.in/jcmturner/rpc.v1 v1.1.0
go: downloading github.com/google/go-cmp v0.5.5
go: downloading github.com/hashicorp/go-immutable-radix v1.0.0
go: downloading github.com/apache/dubbo-go-hessian2 v1.7.0
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading github.com/magiconair/properties v1.8.1
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading github.com/spf13/cast v1.3.0
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/subosito/gotenv v1.2.0
go: downloading gopkg.in/ini.v1 v1.51.0
go: downloading github.com/tklauser/go-sysconf v0.3.5
go: downloading golang.org/x/tools v0.0.0-20210106214847-113979e3529a
go: downloading github.com/census-instrumentation/opencensus-proto v0.2.1
go: downloading github.com/fatih/color v1.7.0
go: downloading github.com/mattn/go-isatty v0.0.12
go: downloading gopkg.in/jcmturner/aescts.v1 v1.0.1
go: downloading github.com/dubbogo/gost v1.9.0
go: downloading github.com/antlr/antlr4 v0.0.0-20200503195918-621b933c7a7f
go: downloading github.com/mattn/go-colorable v0.1.4
# mosn.io/pkg/utils
../../../../../go/pkg/mod/mosn.io/[email protected]/utils/dup_arm64.go:8:9: undefined: syscall.Dup3

What you expected to happen:

Hope I can build this project successfully

How to reproduce it (as minimally and precisely as possible):

see above

Anything else we need to know?:

Layotto Version: master
OS: Mac OS Big Sur 11.4

I guess that mosn doesn't support the Apple M1 chip well

transport protocol dubbo FromFrame stack overflow

What happened:
When writing a unit test for dubbo FromFrame, a stack overflow happened:

=== RUN   Test_dubboProtocol_FromFrame
=== RUN   Test_dubboProtocol_FromFrame/success
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc0207c8398 stack=[0xc0207c8000, 0xc0407c8000]
fatal error: stack overflow

runtime stack:
runtime.throw(0x18dec08, 0xe)
	/usr/local/go/src/runtime/panic.go:1117 +0x72
runtime.newstack()
	/usr/local/go/src/runtime/stack.go:1069 +0x7ed
runtime.morestack()
	/usr/local/go/src/runtime/asm_amd64.s:458 +0x8f

goroutine 10 [running]:
mosn.io/layotto/components/rpc/invoker/mosn/transport_protocol.(*dubboProtocol).FromFrame(0xc0407c7f60, 0x19c79b8, 0xc0003d8aa0, 0x0, 0x0, 0x0)
	/Users/tianjipeng/workspace/my/layotto/components/rpc/invoker/mosn/transport_protocol/dubbo.go:55 +0x135 fp=0xc0207c83a8 sp=0xc0207c83a0 pc=0x170d515
mosn.io/layotto/components/rpc/invoker/mosn/transport_protocol.(*dubboProtocol).FromFrame(0xc0407c7f60, 0x19c79b8, 0xc0003d8aa0, 0x0, 0x0, 0x0)
	/Users/tianjipeng/workspace/my/layotto/components/rpc/invoker/mosn/transport_protocol/dubbo.go:60 +0x105 fp=0xc0207c8400 sp=0xc0207c83a8 pc=0x170d4e5

The unit test code looks like this:

func buildDubboRequestData(requestId uint64) []byte {
	service := hessian.Service{
		Path:      "io.mosn.layotto",
		Interface: "test",
		Group:     "test",
		Version:   "v1",
		Method:    "Call",
	}
	codec := hessian.NewHessianCodec(nil)
	header := hessian.DubboHeader{
		SerialID: 2,
		Type:     hessian.PackageRequest,
		ID:       int64(requestId),
	}
	body := hessian.NewRequest([]interface{}{}, nil)
	reqData, err := codec.Write(service, header, body)
	if err != nil {
		return nil
	}
	return reqData
}

t.Run("success", func(t *testing.T) {
		data := buffer.NewIoBuffer(1000)
		data.Write(buildDubboRequestData(1))
		resp := dubbo.NewRpcResponse(nil, data)
		resp.Header.Set("key1", "value1")
		resp.Header.Status = dubbo.RespStatusOK
		d := newDubboProtocol()

		rsp, err := d.FromFrame(resp)
		assert.Nil(t, err)
		assert.Equal(t, "value1", rsp.Header.Get("key1"))
	})

What you expected to happen:
The stack overflow should not happen. It looks like d.FromFrame should be changed to d.fromFrame.FromFrame:

func (d *dubboProtocol) FromFrame(resp api.XRespFrame) (*rpc.RPCResponse, error) {
	if resp.GetStatusCode() != dubbo.RespStatusOK {
		return nil, fmt.Errorf("dubbo error code %d", resp.GetStatusCode())
	}

	return d.FromFrame(resp) // changing this to d.fromFrame.FromFrame(resp) fixes the stack overflow
}

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

[Improve README] Explain why we chose to develop Layotto instead of using Dapr, and give demos

What would you like to be added:
Please add more explanation to README.md and README-ZH.md:

  1. explain why we chose to develop Layotto instead of using Dapr
  2. give demos and introductions of the RPC and Service Mesh features, which in my opinion are the most important differences between Layotto and Dapr
  3. highlight the concept 'Service Mesh + Runtime', maybe with '+ Serverless'
  4. give demos and an introduction of FaaS on WASM

Why is this needed:
This explains why Layotto is useful and why users shouldn't close the browser window.

Add design doc

What would you like to be added:
Add design doc
Why is this needed:

return error when mosnInvoker.Invoke panic

What happened:
mosnInvoker.Invoke returns nil if a panic happens

What you expected to happen:
It should return an error indicating the panic

func (m *mosnInvoker) Invoke(ctx context.Context, req *rpc.RPCRequest) (*rpc.RPCResponse, error) {
	defer func() {
		if r := recover(); r != nil {
			log.DefaultLogger.Errorf("[runtime][rpc]mosn invoker panic: %v", r)
                         // return nil,  errors.New("[runtime][rpc]mosn invoker panic")
		}
	}()
}

Should Layotto support SQL queries and how?

Should Layotto support SQL queries and how?

The current state API is essentially a key/value model API and does not support SQL queries.

So should L8 support sql queries? We open this issue to collect your opinions

User scenario:

  1. DB sharding: If users have sharded mysql, the application needs routing when querying mysql. Apps have to import a data middleware SDK (such as shardingsphere-JDBC), query through a db proxy (such as ShardingSphere-Proxy), or go through a db mesh sidecar (such as ShardingSphere-Sidecar)
  2. Portability: If users want to deploy their apps across different clouds, or do a migration such as moving from a mysql cluster to a distributed relational database (TiDB, OceanBase, etc.), then as long as the new db supports the mysql protocol, users don't have to modify their code. Portability like this can be achieved with a db proxy or a db mesh.

Question:

  1. Should L8 function as a db mesh sidecar? In other words, do you need to run sql queries through L8?
  2. If we want L8 to support sql queries, what kind of API should be provided?
    a. A binding API like Dapr's
    b. Support for the MySQL protocol

Welcome to express your opinion!

You will define L8, v0.1.0 requirement gathering

You will define L8, v0.1.0 requirement gathering

Layotto is currently being adopted inside Alipay, and its future roadmap is still under discussion. We sincerely invite everyone interested in service mesh and multi-runtime architecture: which features do you think should be developed in the v0.1.0 version of Layotto?
You are welcome to put forward your needs and ideas, and also to vote on other people's requests (just post your vote in the comments).
We will implement the features that you (and Alipay's internal users) need most first, and if necessary we can also hold a community meeting to discuss them.

Requirements that have been collected so far:

| feature | scenario | issue | pr |
| --- | --- | --- | --- |
| NoSQL API (such as HBase) | Users report multi-language scenarios using hbase with special routing logic that the open source SDK cannot satisfy, and ask whether Layotto can support it | | |
| Object storage API (such as OSS in aliyun) | Users report multi-language scenarios using object storage and ask whether Layotto can support it | #98 | |
| SQL API (DB Mesh) | Users pass sql to Layotto; Layotto handles the db sharding logic and shields the implementation layer | | |
| Service registration and discovery API | The app performs service registration and discovery through the API | | |
| Distributed lock API | | #96 | #100 |
| Leader election API | | | |
| Demo of Layotto + istio | Provide a demo showing how Layotto can be integrated with istio, so that a single sidecar can serve as both the data plane of the service mesh and the application runtime | | |
| Mini project based on Layotto | Use Layotto to implement a mini project, similar to istio's bookshop demo project, to demonstrate the usage of Layotto and to run integration tests | | |
| java sdk | | #79 | |

add more features for go sdk

What would you like to be added:
Add more features for the go sdk: add the sdk APIs available in Dapr's go-sdk

Why is this needed:

We want users to be able to migrate easily from Layotto to Dapr, or from Dapr to Layotto, so the sdks should be compatible

Add unit tests for RPC related code

What would you like to be added:
Understand the implementation of the rpc module and add unit tests for it, so that the test coverage of the module reaches 80%.
You can ask @MoonShining for help if you encounter any problems in the process.
The code path is components/rpc.
In addition, there is rpc-related code in the pkg/grpc/api.go and pkg/runtime/runtime.go files.

Reference materials: rpc design document

After completing this improvement, you will understand how Layotto supports rpc and how Layotto and mosn are integrated to reuse service mesh capabilities

Why is this needed:
Currently the test coverage of rpc component is not high:

Add unit tests for runtime/runtime.go and grpc/api.go

What would you like to be added:
Add unit tests for runtime/runtime.go and grpc/api.go.
These two files are the core engine code of Layotto, responsible for component lifecycle management (startup, registration, initialization) and grpc request processing respectively. Completing this task is a good way to understand the Layotto architecture.

Why is this needed:
Currently our test coverage is not high :(

add comment to exported function/method/interface/variable

What would you like to be added:
Add comments to exported functions/methods/interfaces/variables.

Why is this needed:
There are many exported functions/methods/interfaces/variables without comments; it would be helpful to add them.

[bug]the configuration item `keyPrefix` conflicts in etcd lock component

What happened:
There is a configuration item named keyPrefix in the etcd lock component, and it conflicts with the common configuration item of distributed lock components, which is also called keyPrefix.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

[Proposal] distributed lock api design

0. tl;dr

Add TryLock and Unlock API. The Lock Renewal API is controversial and will not be added into the first version

1. Evaluation of products on the market

| System | try lock | Blocking lock (based on watch) | Availability | Write operations are linearizable | sequencer (chubby's feature) | Lock renewal |
| --- | --- | --- | --- | --- | --- | --- |
| Stand-alone redis | yes | x | unavailable on single-node failure | yes | yes (needs poc) | yes |
| redis cluster | yes | x | yes | no. Locks become unsafe when failover happens | yes (needs poc) | yes |
| redis Redlock | yes | | | | | |
| nacos | x | | | | | |
| consul | yes | | | | | |
| eureka | x | | | | | |
| zookeeper | yes | yes | yes. The election completes within 200 ms | yes | yes, use zxid as sequencer | yes |
| etcd | yes | yes | yes | yes | yes, use revision | yes (lease.KeepAlive) |

Feature support differs between these systems.

2. High-level design

2.1. API

2.1.0. Design principles

We are faced with many temptations. In fact, there are many lock related features that can be supported (blocking locks, reentrant locks, read-write locks, sequencer, etc.)

But after all, our goal is to design a general API specification, so we should be as conservative as possible in the API definition. Start simple: abstract the simplest and most commonly used functions into the API specification, and wait for user feedback before adding more abstractions to it.

2.1.1. TryLock/Unlock API

The most basic locking and unlocking API.

TryLock is non-blocking: it returns immediately if the lock is not acquired.

proto:

// Distributed Lock API
rpc TryLock(TryLockRequest)returns (TryLockResponse) {}

rpc Unlock(UnlockRequest)returns (UnlockResponse) {}

message TryLockRequest {
  string store_name = 1;
  // resource_id is the lock key.
  string resource_id = 2;
  // client_id will be automatically generated if not set
  string client_id = 3;
  // expire is the lock TTL in seconds
  int64 expire = 4;
}

message TryLockResponse {

  bool success = 1;

  string client_id = 2;
}

message UnlockRequest {
  string store_name = 1;
  // resource_id is the lock key.
  string resource_id = 2;

  string client_id = 3;
}

message UnlockResponse {
  enum Status {
    SUCCESS = 0;
    LOCK_UNEXIST = 1;
    LOCK_BELONG_TO_OTHERS = 2;
  }

  Status status = 1;
}

Q: What is the time unit of the expire field?

A: Seconds.

Q: Can we force the user to set the number of seconds to be large enough(instead of too small)?

A: There is no way to enforce this at compile time or startup, so let's not.

Q: Why not add metadata field

A: Try to be conservative at the beginning; wait until someone gives feedback that it is needed, or until we find it necessary while implementing components

Q: How to add features such as sequencer and reentrant locks in the future?

A: Add feature options to the API parameters, and the component must also implement a Support() function

2.1.2. Lock Renewal

Solution A: add an API "LockKeepAlive"

rpc LockKeepAlive(stream LockKeepAliveRequest) returns (stream LockKeepAliveResponse){}
  
message LockKeepAliveRequest {
  string store_name = 1;
  // resource_id is the lock key.
  string resource_id = 2;

  string client_id = 3;
  // expire is the lock TTL in seconds
  int64 expire = 4;
}

message LockKeepAliveResponse {
  enum Status {
    SUCCESS = 0;
    LOCK_UNEXIST = 1;
    LOCK_BELONG_TO_OTHERS = 2;
  }
  string store_name = 1;
  // resource_id is the lock key.
  string resource_id = 2;

  Status status = 3;
}

The input parameters and return results of this API are all streams. App and sidecar only need to maintain one connection. Each time the lock needs to be renewed, the connection is reused to transfer the renewal request.

Q: Why not put the lock renewal as a stream parameter into tryLock?

A: Many businesses do not need lease renewal, so we want trylock to stay as simple as possible;

Also, the single responsibility principle: when we later add a blocking lock, the renewal API can be reused;

Q: The renewal logic is too complicated, can we make it transparent to users?

A: The sdk hides this layer of logic: it starts a thread/coroutine/nodejs timer and renews the lease automatically

Solution B: users do not see the renewal logic; the lease is renewed automatically, and the app and the sidecar maintain a heartbeat for failure detection

Disadvantages/difficulties:

  1. If you reuse a shared public heartbeat, it is hard to customize the heartbeat interval

The workaround is to make the heartbeat interval low enough, e.g. once per second

  2. How do we ensure reliable failure detection?

For example, the following java code, unlock method may fail:

try{

}finally{
  lock.unlock()
}

If it is an in-JVM lock, unlock is guaranteed to succeed (unless the whole JVM fails), but unlock may fail when it is called over the network. How do we ensure that the heartbeat is interrupted after the call fails?

This requires the app to report some fine-grained status to the heartbeat detection.

We can define an http callback SPI that is polled by the Layotto actuator; the data structure returned by the callback is as follows:

{
  "status": "UP",
  "details": {
    "lock": [
      {
        "resource_id": "res1",
        "client_id": "dasfdasfasdfa",
        "type": "unlock_fail"
      }
    ],
    "xxx": []
  }
}

The application has to handle status collection, reporting, cleanup after a successful report, and limiting the map capacity (for example, what happens if the map grows too large because reporting keeps failing?). This requires the app to implement some complex logic, which would have to live in the SDK.

  3. This implementation is effectively the same as lease renewal: it opens a separate connection for status management, and the user reports status through this shared connection when necessary.

  4. The API spec would depend on the heartbeat logic: on the heartbeat interval and on the data structure the heartbeat returns. That effectively couples the API spec to Layotto's implementation, unless we also standardize the heartbeat API (interval, returned data structure, etc.)

In conclusion

At present, opinions on the lease renewal solution differ, so the renewal function will not be added in the first version.

Personally I prefer solution A: let the SDK hide the renewal logic. Although users have to deal with lease renewal directly when using grpc, lease renewal is a common solution for distributed locks and is not hard for developers to understand.

I put it here to hear everyone's opinions

3. Future work

  • Reentrant Lock

There will be some counting logic. We need to consider whether all locks support reentrancy by default, or whether to add a feature option in the parameters to indicate that the user needs the lock to be reentrant

  • Blocking lock

  • sequencer

4. Reference

How to do distributed locking

The Chubby lock service for loosely-coupled distributed systems

.wasm module hot reload

What would you like to be added:
Let Layotto monitor whether the .wasm file has changed and hot-reload it when it does, achieving dynamic replacement of the .wasm module at runtime.

Why is this needed:
In order to implement subsequent functions:

  1. FaaS based on wasm
  2. Let developers change from developing an sdk to developing a wasm module, so that they can operate and maintain their wasm modules independently (see #166)

How to implement it

  1. After startup, watch whether the specified .wasm file has changed. You can use an open source library such as https://github.com/fsnotify/fsnotify
  2. Once the file changes, call the wasm-related API to reload the file

Reference
Multilingual programming based on WASM
The WASM engine used by Layotto is WASMER. You can refer to the official documents and examples to use its API.

Add unit tests for WASM related code

What would you like to be added:
Add unit tests for WASM related code:
Understand the implementation of the wasm module and add unit tests for it. The unit test coverage of the module should reach 60%.
If you encounter any problems in the process, you can ask @zhenjunMa for help.
The code path is pkg/wasm/

By doing this improvement, you will learn how Layotto supports multilingual programming based on WASM

Why is this needed:
Currently the test coverage of wasm is low:

[Proposal]Sequencer API design

Add a distributed sequencer API to generate a global unique id,like Leaf

Sequencer API design document

This document discusses the API for generating distributed unique (and auto-incrementing) ids.

1. Requirements

1.1. Generating a globally unique id

Q: When do you need to generate a globally unique id?

A: When the db doesn't generate one for you automatically. For example:

  • the db is sharded and doesn't generate ids automatically, but you still need a globally unique business id
  • no db is involved, e.g. a request reaches the backend and a traceId has to be generated

1.2. Requirements on how the id increases. Specifically, there are several levels:

  • No increase required. UUID covers this case, although its downside is length. This API does not consider this case for now.
  • "Trend increasing": ids don't have to be strictly increasing; increasing most of the time is enough.

Q: Which scenarios need trend-increasing ids?

A: Being cache friendly for b+ tree based dbs. But this scenario doesn't actually need a global trend; per-table increase is enough, so globally trend-increasing ids are not required here;

Sorting to query the latest data. For example, when querying the latest messages you may not want to add a timestamp column and an index, but simply order by id to fetch the latest 100 rows:

select * from message order by message-id limit 100

Another example: for nosql stores it is hard to index a timestamp field, so when paging through the latest data you may want to query by id only.

  • Monotonically increasing within a shard. For example, TiDB's auto-increment id guarantees that ids generated on a single server increase, but cannot guarantee a global (across servers) monotonic increase
  • Globally monotonically increasing

1.3. There may be a need for custom id schemas

For example, a format requirement like "the first 8 characters are the uid and the last 8 characters are an auto-incrementing id"

1.4. There may be information-security-related requirements

If ids are sequential, it is very easy for malicious users to crawl your data: they simply download the URLs in order;

It is even more dangerous for order numbers: competitors can directly learn your daily order volume. So in some scenarios ids need to be irregular and unpredictable.

2. Product research

| System | Unique ids guaranteed | Trend increasing | Strictly increasing | Availability | Information security |
| --- | --- | --- | --- | --- | --- |
| Stand-alone redis | yes. Requires special server configuration: enable both persistence strategies and flush every write to disk to avoid losing data | yes | yes, provided no data is lost on crash/restart | single point of failure | |
| redis master/slave + sentinel | no. Replication is asynchronous; even waiting for replication with the WAIT command, data may still be lost after failover, see the docs | yes | depends on whether data is lost | | |
| redis cluster | same as above | yes | same as above | | |
| snowflake | no (clock rollback may produce duplicate ids; depends on external storage) | yes | no | good | |
| Leaf snowflake | yes | yes | no | good | |
| Leaf segment | yes | yes | no | | |
| Leaf segment with a single Leaf server | yes | yes | yes | single point of failure | |
| zookeeper | yes | yes | yes | | |
| etcd | yes | yes | yes | | |
| mysql, single db and single table | yes | yes | yes | single point of failure | |

3. grpc API design

3.1. proto definition

// Sequencer API
rpc GetNextId(GetNextIdRequest )returns (GetNextIdResponse) {}


message GetNextIdRequest {
  string store_name = 1;
  // key is the identifier of a sequencer.
  string key = 2;
  
  SequencerOptions options = 3;
  // The metadata which will be sent to the component.
  map<string, string> metadata = 4;
}

// SequencerOptions configures requirements for incremental and uniqueness guarantee
message SequencerOptions {  
  enum AutoIncrement {
    // WEAK means a "best effort" incrementing service, but there is no strict guarantee
    WEAK = 0;
    // STRONG means a strict guarantee of global monotonically increasing
    STRONG = 1;
  }
  
//  enum Uniqueness{
//    // WEAK means a "best effort" unqueness guarantee.
//    // But it might duplicate in some corner cases.
//    WEAK = 0;
//    // STRONG means a strict guarantee of global uniqueness
//    STRONG = 1;
//  }

  AutoIncrement increment = 1;
//  Uniqueness uniqueness=2;
}

message GetNextIdResponse{
  int64 next_id=1;
}

Q: Should Layotto concatenate custom id formats for users on demand?

A: The API and the Layotto runtime don't handle this; it is left to the sdk or to the user, or a particular component can implement this feature if it wants to.

Q: Should the return type be string or int64?

If it returns string: suppose a user relies on an implementation that actually returns int64 and converts the returned string to int64 in user code; when they migrate to another component, that conversion may fail.

If it returns int64, components cannot build customized id formats for users.

For portability, int64 is chosen. Id concatenation is done in the sdk.

Q: How do we handle int64 overflow?

Not considered for now.

3.2. Controversy about the uniqueness guarantee

The API originally defined a SequencerOptions.Uniqueness enum passed by the user. WEAK meant "try best to be globally unique, but duplicates may occur with a very small probability", requiring business code to be prepared for duplicate ids and retries when writing the id to the db; STRONG meant strictly globally unique, so user code need not consider duplicates or retries.

  • Reason for (benefit of) defining this enum

Guaranteeing strict uniqueness makes component implementations heavy. For example, the common persistence configuration of stand-alone redis is not enough, since a crash and restart may lose data and generate a duplicate id; likewise, implementing a snowflake algorithm directly in the sidecar is not enough, because clock rollback may produce duplicate ids (NTP clock synchronization, leap seconds, etc. can all cause clock rollback). Leaf's snowflake implementation relies on zookeeper to detect clock rollback;

  • Downside of defining this enum

More conceptual overhead for users

  • Conclusion

Since this is controversial, the enum will not be added in this version. By default the returned result is guaranteed to be globally unique (STRONG).

4. Component API

package sequencer

type Store interface {
	// Init this component.
	//
	// The number 'BiggerThan' means that the id generated by this component must be bigger than this number.
	//
	// If the component find that currently the storage can't guarantee this,
	// it can do some initialization like inserting a new id equal to or bigger than this 'BiggerThan' into the storage,
	// or just return an error
	Init(metadata Configuration) error

	GetNextId(*GetNextIdRequest) (*GetNextIdResponse, error)

	// GetSegment returns a range of id.
	// 'support' indicates whether this method is supported by the component.
	// Layotto runtime will cache the result if this method is supported.
	GetSegment(*GetSegmentRequest) (support bool, result *GetSegmentResponse, err error)
}

type GetNextIdRequest struct {
	Key      string
	Options  SequencerOptions
	Metadata map[string]string
}

type SequencerOptions struct {
	AutoIncrement AutoIncrementOption
}

type AutoIncrementOption string

const (
	WEAK   AutoIncrementOption = "weak"
	STRONG AutoIncrementOption = "strong"
)

type GetNextIdResponse struct {
	NextId int64
}

type GetSegmentRequest struct {
	Size     int
	Key      string
	Options  SequencerOptions
	Metadata map[string]string
}

type GetSegmentResponse struct {
	Segment []int64
}

type Configuration struct {
	BiggerThan int64             `json:"bigger_than"`
	Properties map[string]string `json:"properties"`
}

Q: Should caching be implemented at the runtime layer?

If the runtime does caching, components need to implement this method:

GetSegment(*GetSegmentRequest) (support bool, result *GetSegmentResponse, err error)

We can define the interface first without requiring components to implement it, and implement it later when there is a performance need.

References

设计分布式唯一id生成 (Designing distributed unique id generation)

细聊分布式ID生成方法 (A detailed discussion of distributed ID generation methods)

Leaf——美团点评分布式ID生成系统 (Leaf: Meituan-Dianping's distributed ID generation system)

[feature]Support State API

What would you like to be added:
Support State API and try to be compatible with Dapr's

Why is this needed:

[Proposal] OSS api design

Current state of Dapr

Dapr currently supports an aliyun OSS implementation in its bindings; see oss for the concrete implementation.

Dapr's bindings are divided into Input and Output. The former is a sub-like capability: it receives an event and invokes a callback. The latter performs CRUD operations on backend components, as follows:

const (
	GetOperation    OperationKind = "get"
	CreateOperation OperationKind = "create"
	DeleteOperation OperationKind = "delete"
	ListOperation   OperationKind = "list"
)

  // Invokes binding data to specific output bindings
  rpc InvokeBinding(InvokeBindingRequest) returns (InvokeBindingResponse) {}


// InvokeBindingRequest is the message to send data to output bindings
message InvokeBindingRequest {
  // The name of the output binding to invoke.
  string name = 1;

  // The data which will be sent to output binding.
  bytes data = 2;  // the object data

  // The metadata passing to output binding components
  // 
  // Common metadata property:
  // - ttlInSeconds : the time to live in seconds for the message. 
  // If set in the binding definition will cause all messages to 
  // have a default time to live. The message ttl overrides any value
  // in the binding definition.
  map<string,string> metadata = 3;  // the OSS object key is stored here

  // The name of the operation type for the binding to invoke
  string operation = 4;  // the CRUD operation
}

// InvokeBindingResponse is the message returned from an output binding invocation
message InvokeBindingResponse {
  // The data which will be sent to output binding.
  bytes data = 1;

  // The metadata returned from an external system
  map<string,string> metadata = 2;
}

InvokeBinding is a unary rpc call, which clearly cannot support OSS-style large objects that require streaming.

Solutions

Option 1:

Change InvokeService into a streaming rpc call:

  // InvokeService do rpc calls
  rpc InvokeService(InvokeServiceRequest) returns (InvokeResponse) {}

	becomes:

	// InvokeService do rpc calls
  rpc InvokeService(stream InvokeServiceRequest) returns (stream InvokeResponse) {}

An issue has been filed with Dapr to track this:
dapr/dapr#3338

Option 2:

Dapr's binding concept is not a great abstraction in itself: it feels like it can do anything, yet it is unclear what it should be used for. For streaming scenarios like OSS, we can instead define a dedicated set of interfaces in the pb:

  // Get file with stream.
  rpc GetFile(GetFileRequest) returns (stream GetFileResponse) {}

  // Put file with stream.
  rpc PutFile(stream PutFileRequest) returns (google.protobuf.Empty) {}

message GetFileRequest {
  // The name of the oss store.
  string store_name = 1;
  // The name of the file or object want to get.
  string name = 2;
  // The metadata for user set.
  map<string,string> metadata = 3;
}

message GetFileResponse {
  bytes data = 1;
}

message PutFileRequest {
  string store_name = 1;
  // The name of the file or object want to put.
  string name = 2;
  // The data which will be store.
  bytes data = 3;
  // The metadata for user set.
  map<string,string> metadata = 4;
}

License needed

As an open source project, we need to attach a proper license to this project.

Typically this is done by adding a LICENSE file in the root directory and attaching the license declaration to every file.

I suggest we apply Apache 2.0 to the project.

What do you think?

Change package path to mosn.io/layotto

What would you like to be added:
Change package path github.com/layotto/layotto to mosn.io/layotto.

Why is this needed:
Currently our package path is github.com/layotto/layotto. After open sourcing, our repository address will be github.com/mosn/layotto; to avoid code changes being visible to users, we need to point the package path at mosn.io/layotto. After this change, no matter how our repository address changes, the package path referenced by users will not be affected.

[BUG] demo/rpc/http: http request path error

go run demo/rpc/http/echoclient/echoclient.go -d 'hello layotto'
2021/06/22 17:15:16 rpc error: code = Unknown desc = http response code 400, body: 400 Bad Request
exit status 1

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Let users run our demo without go environment

What would you like to be added:

  • Start Layotto with docker
  • develop SDK in different languages.
  • run quickstart with java sdk
  • run quickstart with js sdk

Why is this needed:
Currently users have to run our demo with a Go environment preinstalled.

Generate pb files for other languages

What would you like to be added:
Currently, calling Layotto from another language requires users to generate the pb client themselves.

Why is this needed:
Makes it easier to call Layotto from other languages.

Running make build-image fails with errors

What happened:
Errors occurred when running make build-image

$ make build-image

docker build --rm -t godep-builder build/contrib/builder/binary
[+] Building 15.7s (5/5) FINISHED                                                                                                                                                                                                                                                                                       
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                                                               0.0s
 => => transferring dockerfile: 166B                                                                                                                                                                                                                                                                               0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                                                                                                                                                    0.0s
 => [internal] load metadata for docker.io/library/golang:1.14.13                                                                                                                                                                                                                                                 15.6s
 => CACHED [1/1] FROM docker.io/library/golang:1.14.13@sha256:0ae302aea084fbfe4d0f0d1d6a7d424218df7ddba0d66a2f6bbdb15b95e6b6ac                                                                                                                                                                                     0.0s
 => exporting to image                                                                                                                                                                                                                                                                                             0.0s
 => => exporting layers                                                                                                                                                                                                                                                                                            0.0s
 => => writing image sha256:bfafe1a90b09cf7d6155c073d8af1d38e6c6af3c77ab7f151a5f8ec0a21ffbd6                                                                                                                                                                                                                       0.0s
 => => naming to docker.io/library/godep-builder                                                                                                                                                                                                                                                                   0.0s
docker run --rm -v /Users/jason/go/src/github.com/layotto/layotto:/go/src/github.com/layotto/layotto -w /go/src/github.com/layotto/layotto godep-builder make build-local
GO111MODULE=off CGO_ENABLED=1 go build \
        -ldflags "-B 0x9a204ee2f8abac510c72f363f767d0baef3cefe9 -X main.Version=0.1.0(bee45b6) -X github.com/layotto/layotto/pkg/types.IstioVersion=" \
        -v -o runtime \
        github.com/layotto/layotto/cmd/layotto
proto/runtime/v1/mosn.pb.go:9:2: cannot find package "github.com/golang/protobuf/proto" in any of:
        /usr/local/go/src/github.com/golang/protobuf/proto (from $GOROOT)
        /go/src/github.com/golang/protobuf/proto (from $GOPATH)
pkg/grpc/api.go:7:2: cannot find package "github.com/golang/protobuf/ptypes/empty" in any of:
        /usr/local/go/src/github.com/golang/protobuf/ptypes/empty (from $GOROOT)
        /go/src/github.com/golang/protobuf/ptypes/empty (from $GOPATH)
pkg/common/performance.go:5:2: cannot find package "github.com/shirou/gopsutil/cpu" in any of:
        /usr/local/go/src/github.com/shirou/gopsutil/cpu (from $GOROOT)
        /go/src/github.com/shirou/gopsutil/cpu (from $GOPATH)
pkg/common/performance.go:6:2: cannot find package "github.com/shirou/gopsutil/mem" in any of:
        /usr/local/go/src/github.com/shirou/gopsutil/mem (from $GOROOT)
        /go/src/github.com/shirou/gopsutil/mem (from $GOPATH)
cmd/layotto/main.go:16:2: cannot find package "github.com/urfave/cli" in any of:
        /usr/local/go/src/github.com/urfave/cli (from $GOROOT)
        /go/src/github.com/urfave/cli (from $GOPATH)
pkg/services/configstores/apollo/repository.go:6:2: cannot find package "github.com/zouyx/agollo/v4" in any of:
        /usr/local/go/src/github.com/zouyx/agollo/v4 (from $GOROOT)
        /go/src/github.com/zouyx/agollo/v4 (from $GOPATH)
pkg/services/configstores/apollo/repository.go:7:2: cannot find package "github.com/zouyx/agollo/v4/env/config" in any of:
        /usr/local/go/src/github.com/zouyx/agollo/v4/env/config (from $GOROOT)
        /go/src/github.com/zouyx/agollo/v4/env/config (from $GOPATH)
pkg/services/configstores/apollo/change_listener.go:5:2: cannot find package "github.com/zouyx/agollo/v4/storage" in any of:
        /usr/local/go/src/github.com/zouyx/agollo/v4/storage (from $GOROOT)
        /go/src/github.com/zouyx/agollo/v4/storage (from $GOPATH)
pkg/services/configstores/etcdv3/etcdv3.go:6:2: cannot find package "go.etcd.io/etcd/clientv3" in any of:
        /usr/local/go/src/go.etcd.io/etcd/clientv3 (from $GOROOT)
        /go/src/go.etcd.io/etcd/clientv3 (from $GOPATH)
pkg/services/configstores/etcdv3/etcdv3.go:7:2: cannot find package "go.etcd.io/etcd/mvcc/mvccpb" in any of:
        /usr/local/go/src/go.etcd.io/etcd/mvcc/mvccpb (from $GOROOT)
        /go/src/go.etcd.io/etcd/mvcc/mvccpb (from $GOPATH)
proto/runtime/v1/mosn.pb.go:11:2: cannot find package "google.golang.org/grpc" in any of:
        /usr/local/go/src/google.golang.org/grpc (from $GOROOT)
        /go/src/google.golang.org/grpc (from $GOPATH)
proto/runtime/v1/mosn.pb.go:12:2: cannot find package "google.golang.org/grpc/codes" in any of:
        /usr/local/go/src/google.golang.org/grpc/codes (from $GOROOT)
        /go/src/google.golang.org/grpc/codes (from $GOPATH)
proto/runtime/v1/mosn.pb.go:13:2: cannot find package "google.golang.org/grpc/status" in any of:
        /usr/local/go/src/google.golang.org/grpc/status (from $GOROOT)
        /go/src/google.golang.org/grpc/status (from $GOPATH)
pkg/filter/network/tcpcopy/portrait_data.go:10:2: cannot find package "mosn.io/api" in any of:
        /usr/local/go/src/mosn.io/api (from $GOROOT)
        /go/src/mosn.io/api (from $GOPATH)
pkg/filter/network/tcpcopy/persistence/persistence.go:11:2: cannot find package "mosn.io/mosn/pkg/configmanager" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/configmanager (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/configmanager (from $GOPATH)
cmd/layotto/main.go:18:2: cannot find package "mosn.io/mosn/pkg/featuregate" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/featuregate (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/featuregate (from $GOPATH)
pkg/grpc/grpc.go:6:2: cannot find package "mosn.io/mosn/pkg/filter/network/grpc" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/filter/network/grpc (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/filter/network/grpc (from $GOPATH)
cmd/layotto/main.go:21:2: cannot find package "mosn.io/mosn/pkg/filter/stream/flowcontrol" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/filter/stream/flowcontrol (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/filter/stream/flowcontrol (from $GOPATH)
pkg/filter/network/tcpcopy/strategy/switch.go:6:2: cannot find package "mosn.io/mosn/pkg/log" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/log (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/log (from $GOPATH)
cmd/layotto/main.go:22:2: cannot find package "mosn.io/mosn/pkg/metrics/sink" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/metrics/sink (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/metrics/sink (from $GOPATH)
cmd/layotto/main.go:23:2: cannot find package "mosn.io/mosn/pkg/metrics/sink/prometheus" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/metrics/sink/prometheus (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/metrics/sink/prometheus (from $GOPATH)
cmd/layotto/main.go:24:2: cannot find package "mosn.io/mosn/pkg/mosn" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/mosn (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/mosn (from $GOPATH)
cmd/layotto/main.go:25:2: cannot find package "mosn.io/mosn/pkg/network" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/network (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/network (from $GOPATH)
pkg/filter/network/tcpcopy/portrait_data.go:12:2: cannot find package "mosn.io/mosn/pkg/types" in any of:
        /usr/local/go/src/mosn.io/mosn/pkg/types (from $GOROOT)
        /go/src/mosn.io/mosn/pkg/types (from $GOPATH)
cmd/layotto/main.go:26:2: cannot find package "mosn.io/pkg/buffer" in any of:
        /usr/local/go/src/mosn.io/pkg/buffer (from $GOROOT)
        /go/src/mosn.io/pkg/buffer (from $GOPATH)
pkg/filter/network/tcpcopy/strategy/fuse.go:6:2: cannot find package "mosn.io/pkg/log" in any of:
        /usr/local/go/src/mosn.io/pkg/log (from $GOROOT)
        /go/src/mosn.io/pkg/log (from $GOPATH)
pkg/filter/network/tcpcopy/strategy/switch.go:7:2: cannot find package "mosn.io/pkg/utils" in any of:
        /usr/local/go/src/mosn.io/pkg/utils (from $GOROOT)
        /go/src/mosn.io/pkg/utils (from $GOPATH)
make: *** [Makefile:26: build-local] Error 1
make: *** [build-image] Error 2

What you expected to happen:
No errors occur.

How to reproduce it (as minimally and precisely as possible):
Simply run make build-image

Anything else we need to know?:
GOPATH: /Users/jason/go
GOROOT: /usr/local/Cellar/go/1.15.6/libexec

[Proposal]Isolation and code reuse of components

Problems to solve:

  1. Key conflicts. For example, if the user's state, lock, and sequencer APIs all use redis, the keys may conflict.
  2. Code redundancy. For example, the state, lock, and sequencer APIs each have a redis component, and the same redis client code is copied three times.

How to solve them:

  1. Key isolation. Make changes at the runtime layer, such as automatically prefixing the key with lock||appid||key1 or sequencer||appid||key1.
  2. Extract template code for a given storage into a utility file, e.g. move all redis template code into redis_utils.go.
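The key-isolation step above can be sketched as a small helper at the runtime layer. The function name is hypothetical; the `||` separator and the `apiType||appid||key` layout come from the proposal:

```go
package main

import "fmt"

// buildKey applies the proposed runtime-layer key isolation: prefix the
// user's key with the API type and the app id, e.g. "lock||appid||key1".
// The function name is hypothetical; the separator follows the proposal.
func buildKey(apiType, appID, key string) string {
	return fmt.Sprintf("%s||%s||%s", apiType, appID, key)
}

func main() {
	// The same user key "key1" maps to distinct storage keys per API,
	// so the state, lock, and sequencer APIs can share one redis instance.
	fmt.Println(buildKey("lock", "order-app", "key1"))      // lock||order-app||key1
	fmt.Println(buildKey("sequencer", "order-app", "key1")) // sequencer||order-app||key1
}
```

Since the prefixing happens in the runtime, components and user code remain unchanged.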

[Proposal] Progressive Service Mesh: Layotto RPC integrates with service registries like zookeeper, without istio

What would you like to be added:
Layotto RPC can use an open source distributed coordination service (like zookeeper or nacos) for service discovery, without the pain of deploying istio.

Why is this needed:
Istio is too complex to deploy and integrate with existing microservice infrastructure (like a service registry). For companies and users who are interested in service mesh and looking for a trial (in their own dev environment or in a small part of their production cluster), the cost of learning, deploying, and integrating with istio is too high.
There would be several benefits if we added the ability for the sidecar to interact with the service registry directly:

  1. Our quick-start demos can start up many products of sofastack, such as some sofaboot applications publishing their services to sofa-registry and invoking each other via layotto.
  2. Companies and users who are interested in service mesh can introduce the tech stack into their cluster progressively. Imagine it: just deploy the sidecar, add some configuration items to integrate it with your current microservice infrastructure, and you are a service mesh pro!

This idea was first proposed by @JervyShi, and I think it's great.
