sogou / srpc
RPC framework based on C++ Workflow. Supports SRPC, Baidu bRPC, Tencent tRPC, and Thrift protocols.
License: Apache License 2.0
If I want to push data from the backend to clients in real time using the workflow RPC framework, how do I implement real-time push?
Do you guys not write tests at all?!
Protocol Buffers has built-in support for an empty parameter, but when I import it in my proto file, SRPC code generation fails with "google/protobuf/empty.proto not found".
So, how can SRPC be made to support empty parameters?
Modifying the generated RPC functions by hand should work, but too many places would need changing.
By the way, the generated code is C++.
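For reference, the stock protobuf way to express an empty parameter is the well-known Empty type; whether the generator can resolve the import usually depends on the include path that contains protobuf's bundled .proto files. A minimal sketch (service and message names here are illustrative, not from the tutorial):

```proto
syntax = "proto3";

// Shipped with protobuf under its install include directory.
import "google/protobuf/empty.proto";

message PingResult {
  string status = 1;
}

service Example {
  // Hypothetical method that takes no input.
  rpc Ping(google.protobuf.Empty) returns (PingResult);
}
```

If the generator reports that google/protobuf/empty.proto cannot be found, pointing it (or protoc) at the directory that actually contains google/protobuf/empty.proto, typically protobuf's install include path, is the usual fix.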
The srpc framework implements the thrift framed protocol and can therefore interoperate with native thrift.
Since native thrift provides no connection reuse, users have traditionally wrapped their own connection pool around it and used the semi-synchronous interface to improve thrift's performance.
srpc provides excellent connection reuse and thread reuse, offers interfaces compatible with the original thrift usage, and far outperforms native thrift on both the client and the server side.
When upgrading a system, however, users may upgrade the client or the server first and replace the rest gradually.
When using the semi-synchronous interface of a native thrift client to talk to an srpc thrift server, note the following:
the srpc server's network model is strictly one-send-one-receive, which means that when calling thrift's send_method() and recv_method() on a connection, the messages must also stay strictly paired. Do not call send_method() several times in a row on the same connection: a native thrift server ignores the extra sends, but the srpc thrift server's internal network model treats this as an error and closes the connection.
Take the Echo service defined in our tutorial as an example:
service Example {
EchoResult Echo(1:string message, 2:string name);
}
We would use the following two semi-synchronous interfaces:
void send_Echo(const std::string& message, const std::string& name);
void recv_Echo(EchoResult& _return);
If you wrap your own connection pool around native thrift, a common approach might be:
// Create several clients like this; each client corresponds to one connection
std::shared_ptr<TSocket> socket(new TSocket(IP, PORT));
std::shared_ptr<TTransport> transport(new TFramedTransport(socket));
std::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
ExampleClient client(protocol);
transport->open();
// Then hand it to your own connection pool to manage
conn_pool->add_conn(&client);
When using it, if you cannot guarantee one send and one receive on the same connection, the srpc thrift server will treat the traffic as malformed and close the connection, leaving the client in CLOSE_WAIT:
conn_pool->get_conn()->send_Echo("hello", "srpc"); // send
... // do something else
conn_pool->get_conn()->recv_Echo(ret); // receive; this is very likely not the connection used for sending
Therefore, if you must use native thrift with your own connection pool, do it like this:
auto *conn = conn_pool->get_conn(); // ensure the connection stays exclusively owned once taken out
conn->send_Echo("hello", "srpc"); // send
... // do something else
conn->recv_Echo(res); // receive
conn_pool->put_conn(conn); // return it to the pool
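To make the take-out/use-exclusively/put-back discipline above concrete, here is a minimal, self-contained sketch of such a pool. ClientT stands in for the generated thrift client; all names are illustrative and none of this is srpc API:

```cpp
#include <mutex>
#include <vector>

// Hypothetical minimal pool: get_conn() hands out exclusive ownership of
// one connection, put_conn() returns it. The pool does not own the clients.
template <typename ClientT>
class ConnPool
{
public:
    void add_conn(ClientT *c)
    {
        std::lock_guard<std::mutex> lk(mtx_);
        idle_.push_back(c);
    }

    ClientT *get_conn() // returns nullptr when no connection is idle
    {
        std::lock_guard<std::mutex> lk(mtx_);
        if (idle_.empty())
            return nullptr;
        ClientT *c = idle_.back();
        idle_.pop_back();
        return c; // the caller owns it exclusively until put_conn()
    }

    void put_conn(ClientT *c)
    {
        std::lock_guard<std::mutex> lk(mtx_);
        idle_.push_back(c);
    }

private:
    std::mutex mtx_;
    std::vector<ClientT *> idle_;
};
```

The point is that get_conn() transfers exclusive ownership to the caller, so the same connection necessarily carries both the send_Echo() and the matching recv_Echo() before put_conn() makes it visible to other threads again.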
Finally~~~
We still recommend using the srpc thrift client directly: it is simple and convenient, and you never need to wrap a connection pool yourself again.
The client itself is a connection pool, with a clean interface and excellent performance:
Example::ThriftClient client(IP, PORT); // one step creates a multi-connection asynchronous client
client.send_Echo("hello", "srpc"); // send
... // do something else
client.recv_Echo(res); // receive
Is a service registry supported?
/usr/local/include/srpc/rpc_task.inl:169:58: error: no type named 'Series' in 'WFServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>'
class RPCSeries : public WFServerTask<RPCREQ, RPCRESP>::Series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
/usr/local/include/srpc/rpc_task.inl:449:20: note: in instantiation of member class 'srpc::RPCServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCSeries' requested here
SERIES *series = dynamic_cast<SERIES *>(series_of(this));
^
/usr/local/include/srpc/rpc_task.inl:132:2: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::message_out' requested here
RPCClientTask(const std::string& service_name,
^
/usr/local/include/srpc/rpc_client.h:57:20: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCClientTask' requested here
auto *task = new TASK(this->service_name,
^
./example.srpc.h:222:21: note: in instantiation of function template specialization 'srpc::RPCClient<srpc::RPCTYPEBRPC>::create_rpc_client_task' requested here
auto *task = this->create_rpc_client_task("Echo", std::move(done));
^
1 warning and 3 errors generated.
/usr/local/include/srpc/rpc_context.inl:119:60: error: request for member ‘get_attachment’ is ambiguous
return task_->get_resp()->get_attachment(attachment, len);
Hi everyone, srpc automatically generates two pieces of code from the IDL file:
server.pb_skeleton.cc and client.pb_skeleton.cc; or server.thrift_skeleton.cc and client.thrift_skeleton.cc.
The thrift part of the previously generated code was wrong: it started an SRPCServer and SRPCClient by default. It has been changed to start a ThriftServer and ThriftClient speaking the ThriftFramed protocol by default; the main() function now looks like this:
int main()
{
unsigned short port = 1412;
ThriftServer server; // now starts a ThriftFramed-protocol server by default
ExampleServiceImpl example_impl;
server.add_service(&example_impl);
server.start(port);
wait_group.wait();
server.stop();
return 0;
}
SRPC's Thrift Framed server/client can interoperate with native thrift in other languages and is very easy to use. Give it a try~
Aren't the template parameters in these three places redundant? C++20 fails to compile with "error: expected unqualified-id before ')' token".
srpc/src/thrift/rpc_thrift_idl.h
Line 86 in 39678c2
srpc/src/thrift/rpc_thrift_idl.inl
Line 53 in 39678c2
srpc/src/thrift/rpc_thrift_idl.inl
Line 674 in 39678c2
Download the CMake binaries from the official CMake website and install them; CMake >= 3.6 is recommended.
We assume your current path is E:/GitHubProjects.
For various reasons, I recommend installing the dependencies with vcpkg.
Open Powershell/cmd/bash, pull vcpkg, and install the dependencies:
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
.\bootstrap-vcpkg.bat
# Install dependencies
# win32
.\vcpkg.exe install zlib:x86-windows protobuf:x86-windows openssl:x86-windows snappy:x86-windows lz4:x86-windows
# amd64
.\vcpkg.exe install zlib:x64-windows protobuf:x64-windows openssl:x64-windows snappy:x64-windows lz4:x64-windows
# The architectures are specified and both sets of libraries installed separately because a puzzling vcpkg bug can otherwise cause cmake to fail to find the packages
# Note! Globally integrating vcpkg is not recommended, as its packages will pollute your projects. If you want to integrate vcpkg packages into a single project, use a local nuget repository
Pull the code from the official repository and build it:
# Go back to the parent directory
cd ..
git clone --recursive https://github.com/sogou/srpc.git
cd srpc
cd workflow
# Switch workflow to the windows branch
git checkout windows
# Generate the VS solution with cmake; my environment is cmake 3.23.0 and Visual Studio 2022
# Build the 32-bit version
cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
cmake --build build32 --config Debug
cmake --build build32 --config Release
# Build the 64-bit version
cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
cmake --build build64 --config Debug
cmake --build build64 --config Release
Next, build srpc:
# Go back to the parent directory
cd ..
# Build the 32-bit version
cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
cmake --build build32 --config Debug
cmake --build build32 --config Release
# Build the 64-bit version
cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
cmake --build build64 --config Debug
cmake --build build64 --config Release
# Build the tutorials directly in the srpc directory
# Build the 32-bit version
cmake -B buildt32 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32
cmake --build buildt32 --config Debug
cmake --build buildt32 --config Release
# Build the 64-bit version
cmake -B buildt64 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64
cmake --build buildt64 --config Debug
cmake --build buildt64 --config Release
This completes the whole srpc build process on Windows; try running the examples to check the result.
When building srpc on ubuntu 18.04, cmake fails with the error: install TARGETS given no LIBRARY DESTINATION for shared library target "srpc-shared". How can this be resolved?
srpc_generator protobuf ./echo.proto ./
echo.proto must not set "option cc_generic_services = true" for srpc.
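In other words, srpc emits its own service code, so the proto must not enable protobuf's generic service stubs. A minimal echo.proto that avoids this error might look like the following (message and field names are illustrative):

```proto
syntax = "proto3";

// Do NOT add: option cc_generic_services = true;
// srpc_generator produces its own service code.

message EchoRequest {
  string message = 1;
}

message EchoResponse {
  string message = 1;
}

service Example {
  rpc Echo(EchoRequest) returns (EchoResponse);
}
```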
On MacOS 10.15.7, the make command fails:
openssl was installed via homebrew, version LibreSSL 2.8.3, and workflow is installed correctly. It looks like the architecture setting is wrong.
Environment variables already set:
OPENSSL_ROOT_DIR=/usr/local/opt/openssl
OPENSSL_LIBRARIES=/usr/local/opt/openssl/lib
The cmake variables are also set:
cmake -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl -DOPENSSL_LIBRARIES=/usr/local/opt/openssl/lib
Is thrift with multiple parameters supported now?
That is, native thrift interfaces declared with multiple parameters.
Background: GPU cloud servers are too expensive, so we deployed a GPU machine on the company LAN.
Question: can a program in the cloud (srpc_pb_client, which has a public IP) use srpc to call that GPU machine (srpc_pb_server, which has no public IP)?
The names come from srpc/tutorial:
srpc_pb_client
srpc_pb_server
What tool do you use for load testing the interfaces?
Is defining a union in thrift IDL supported now?
I tried defining one, but it was not parsed and no corresponding code was generated.
srpc uses the C++11 standard and requires protobuf 3.12 or above. Is that mandatory? Can protobuf 3.5 be used? workflow also uses C++11, but vs2013 supports only part of it; can vs2013 be supported?
Hi, workflow supports the Windows platform and has a dedicated windows branch. Does srpc support Windows as well?
Hello, we want to turn a pdf-to-image program into an RPC service: the user sends a pdf, and the service converts it to images and sends them back. The data volume is fairly large; can srpc support this scenario?
Another question: gRPC supports binary payloads and has many third-party language bindings. If I implement the service above with srpc over HTTP, encoding the data as text seems very expensive, and over raw TCP wouldn't other languages such as Java have to implement their own client?
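On the encoding concern: protobuf is itself a binary wire format, so over srpc's binary protocols a pdf and its rendered images can travel as raw bytes fields without any text encoding. A sketch of such an IDL (all names here are illustrative, not from the tutorial):

```proto
syntax = "proto3";

message ConvertRequest {
  bytes pdf = 1;            // raw pdf content, no base64 needed
}

message ConvertResponse {
  repeated bytes pages = 1; // one entry per rendered page image
}

service PdfRender {
  rpc Convert(ConvertRequest) returns (ConvertResponse);
}
```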
Does SRPC interoperate with gRPC?
gcc/g++ 版本是 11.2.1 20220127 (Red Hat 11.2.1-9) (GCC)
protoc 的版本是 libprotoc 3.11.4
Building the srpc source fails:
[ 61%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o
In file included from /usr/local/include/google/protobuf/message.h:120,
                 from /home/srpc/src/rpc_basic.h:22,
                 from /home/srpc/src/compress/rpc_compress_snappy.h:20,
                 from /home/srpc/src/compress/rpc_compress_snappy.cc:19:
/usr/local/include/google/protobuf/arena.h: In member function ‘void* google::protobuf::Arena::AllocateInternal(bool)’:
/usr/local/include/google/protobuf/arena.h:536:15: error: cannot use ‘typeid’ with ‘-fno-rtti’
  536 |   AllocHook(RTTI_TYPE_ID(T), n);
/usr/local/include/google/protobuf/arena.h: In member function ‘T* google::protobuf::Arena::CreateInternalRawArray(size_t)’:
/usr/local/include/google/protobuf/arena.h:599:15: error: cannot use ‘typeid’ with ‘-fno-rtti’
  599 |   AllocHook(RTTI_TYPE_ID(T), n);
With -fno-rtti added to CXXFLAGS, the errors are as follows:
/home/srpc/workflow/src/manager/UpstreamManager.cc: In static member function ‘static int UpstreamManager::upstream_add_server(const string&, const string&, const AddressParams*)’:
/home/srpc/workflow/src/manager/UpstreamManager.cc:169:34: error: ‘dynamic_cast’ not permitted with ‘-fno-rtti’
  169 |   UPSGroupPolicy *policy = dynamic_cast<UPSGroupPolicy *>(ns->get_policy(name.c_str()));
(The same ‘dynamic_cast’ not permitted with ‘-fno-rtti’ error repeats in upstream_remove_server at line 185, upstream_delete at 197, upstream_main_address_list at 211, upstream_disable_server at 223, upstream_enable_server at 239, and upstream_replace_server at 256.)
I want to use srpc to build a data IO server. When using workflow alone there is a ready-made example to follow (see http_file_server), but with srpc I do not know how to seamlessly connect the workflow code with srpc. The concrete problem is described below:
In the server's Echo function body I set up a file read task with a callback, roughly like this:
void Echo(WWIORequest *request, WWIOResponse *response, srpc::RPCContext *ctx) override {
WFFileIOTask *pread_task;
pread_task = WFTaskFactory::create_pread_task(fd, buf, size, 0,
pread_callback);
pread_task->user_data = response;
}
Following the http_file_server code, the following 4 lines (or similar) should be integrated into the Echo function:
pread_task->user_data = resp; /* pass resp pointer to pread task. */
server_task->user_data = buf; /* to free() in callback() */
server_task->set_callback([](WFHttpTask *t){ free(t->user_data); });
series_of(server_task)->push_back(pread_task);
However, server_task does not exist in the srpc environment, which raises a few questions:
1) Without a server_task-like object, how do the data in buf get passed to pread_task's callback?
2) Without a server_task-like object, how is buf freed? If ctx->get_series()->set_callback is the way, how should the lambda inside be written?
3) Is series_of(server_task)->push_back(pread_task); in the srpc context equivalent to workflow's ctx->get_series()->push_back(pread_task)?
Thanks in advance for answering.
I installed protobuf on windows and added it to the environment variables. When building srpc with cmake, it fails with the following error:
CMake Error at src/CMakeLists.txt:17 (find_package):
Could not find a package configuration file provided by "Protobuf" with any
of the following names:
ProtobufConfig.cmake
protobuf-config.cmake
Add the installation prefix of "Protobuf" to CMAKE_PREFIX_PATH or set
"Protobuf_DIR" to a directory containing one of the above files. If
"Protobuf" provides a separate development package or SDK, be sure it has
been installed.
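As the error text itself suggests, pointing CMake at the protobuf install prefix usually resolves this; for example (the install path below is a placeholder, substitute your own):

```shell
# Tell CMake where protobuf's package configuration files live
cmake -B build -S . -DCMAKE_PREFIX_PATH=C:/path/to/protobuf/install

# or, equivalently, point Protobuf_DIR at the directory containing protobuf-config.cmake
cmake -B build -S . -DProtobuf_DIR=C:/path/to/protobuf/install/cmake
```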
Hello author: I want to use workflow to implement client-side compressed file transfer, starting with batch file download.
The main idea is to use ParallelWork and SeriesWork: in a series, the first httptask fetches the file size from the server; if the file exceeds a threshold, further httptasks are appended to the series to download it in chunks, and the SeriesWork callback verifies file integrity and merges the chunks.
However, workflow does not support file compression, while srpc does. I do not know much about rpc, so I would like to ask whether there is a small example to resolve my confusion. Also, as a beginner in network services, I would like to ask whether there is any problem with my approach. Hope to get your help!
The thrift files used here are the ones provided by apache thrift (GitHub's editor does not allow uploading files with the .thrift suffix, so they were renamed to .txt):
tutorial.txt
shared.txt
After generating with srpc_generator, some parts of the generated code are wrong, as shown in the red boxes below.
Summary: I guess the error occurs because num1 in struct Work in tutorial.thrift is given an initial value of 0, which the code generation tool fails to recognize.
Mac environment, build command:
g++ -o server server.cc example.pb.cc -std=c++11 -lsrpc -I/usr/local/opt/[email protected]/include -lprotobuf
It fails with:
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
SRPC supports generating and reporting tracing and spans, which can be reported in multiple ways, including exporting data locally or to OpenTelemetry.
Since SRPC follows the data specification of OpenTelemetry and the specification of w3c trace context, now we can use RPCSpanOpenTelemetry as the reporting plugin.
The report conforms to the Workflow style, which is pure asynchronous task and therefore has no performance impact on the RPC requests and services.
After the plugin RPCSpanOpenTelemetry is constructed, we can use add_filter() to add it into a server or client.
For tutorial/tutorial-02-srpc_pb_client.cc, add two lines like the following:
int main()
{
Example::SRPCClient client("127.0.0.1", 1412);
RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358"); // jaeger http collector ip:port
client.add_filter(&span_otel);
...
For tutorial/tutorial-01-srpc_pb_server.cc, add the same two lines. We also add the local plugin to print the reported data on the screen:
int main()
{
SRPCServer server;
RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358");
server.add_filter(&span_otel);
RPCSpanDefault span_log; // this plugin will print the tracing info on the screen
server.add_filter(&span_log);
...
Make the tutorial and run both the server and the client; some tracing information appears on the screen.
We can see that the span_id: 04d070f537f17d00 in the client becomes parent_span_id: 04d070f537f17d00 in the server:
Opening the Jaeger UI, we can find our service name Example and method name Echo. There are two span nodes, reported by the server and the client respectively.
As we saw on the screen, the client reported span_id: 04d070f537f17d00 and the server reported span_id: 00202cf737f17d00; these spans and the correlated tracing information can be found on Jaeger, too.
How long a trace is collected, the number of reporting retries, and other parameters can be specified through the constructor parameters of RPCSpanOpenTelemetry. Code reference: src/module/rpc_span_policies.h
By default, up to 1000 traces are collected per second. Features such as transparently passing tracing information through the srpc framework have also been implemented, likewise conforming to the specifications.
We can also use add_attributes() to add other information as OTEL_RESOURCE_ATTRIBUTES.
Please note that our service name "Example" is also set through these attributes, under the key service.name. If service.name is also provided in OTEL_RESOURCE_ATTRIBUTES by the user, the srpc service name takes precedence. Refer to: OpenTelemetry#resource
SRPC provides log() and baggage() to carry user data through spans.
API :
void log(const RPCLogVector& fields);
void baggage(const std::string& key, const std::string& value);
As a server, we can use RPCContext to add a log annotation:
class ExampleServiceImpl : public Example::Service
{
public:
void Echo(EchoRequest *req, EchoResponse *resp, RPCContext *ctx) override
{
resp->set_message("Hi back");
ctx->log({{"event", "info"}, {"message", "rpc server echo() end."}});
}
};
As a client, we can use RPCClientTask to add a log on the span:
srpc::SRPCClientTask *task = client.create_Echo_task(...);
task->log({{"event", "info"}, {"message", "log by rpc client echo()."}});
workflow has a built-in Consul client; see the explanation in this issue:
sogou/workflow#1021
With Tencent's authorization, the SRPC project has open-sourced an implementation of the Tencent tRPC protocol. This is also the first open-source implementation of tRPC, and colleagues at Tencent are welcome to try it. The server side still supports only connection-pool-style access, without pipelining or out-of-order responses.
1. Is there distributed tracing? Are there instrumentation solutions and visualization tools?
2. For the performance tests, what tool was used, and how were the charts in the WIKI generated?
EulerOS,aarch64,protobuf-3.5.0,gcc-7.3.0
cd srpc
make -j128
[ 62%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress.cc.o
[ 65%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o
In file included from /usr/include/google/protobuf/message.h:118:0,
from /home/x/code/srpc/src/rpc_basic.h:22,
from /home/x/code/srpc/src/compress/rpc_compress_snappy.h:20,
from /home/x/code/srpc/src/compress/rpc_compress_snappy.cc:19:
/usr/include/google/protobuf/arena.h: In member function ‘void* google::protobuf::Arena::AllocateInternal(bool)’:
/usr/include/google/protobuf/arena.h:654:15: error: cannot use typeid with -fno-rtti
AllocHook(RTTI_TYPE_ID(T), n);
^
/usr/include/google/protobuf/arena.h: In member function ‘T* google::protobuf::Arena::CreateInternalRawArray(size_t)’:
/usr/include/google/protobuf/arena.h:693:15: error: cannot use typeid with -fno-rtti
AllocHook(RTTI_TYPE_ID(T), n);
^
make[3]: *** [src/compress/CMakeFiles/compress.dir/build.make:76: src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o] Error 1
make[3]: Leaving directory '/home/x/code/srpc/build.cmake'
make[2]: *** [CMakeFiles/Makefile2:405: src/compress/CMakeFiles/compress.dir/all] Error 2
make[2]: Leaving directory '/home/x/code/srpc/build.cmake'
make[1]: *** [Makefile:152: all] Error 2
make[1]: Leaving directory '/home/x/code/srpc/build.cmake'
make: *** [GNUmakefile:13: all] Error 2
SRPC is developed on top of Sogou's star open-source project C++ Workflow and integrates with it seamlessly. workflow is Sogou's asynchronous networking and computing engine, and it includes implementations of several common protocols. Reading up on how to use the workflow project first should help you understand SRPC better.
GitHub: https://github.com/sogou/workflow
Hello! Will there be an example of how to write a CMakeLists for using srpc on linux? CMakeLists seems quite commonly used these days, and as a newcomer to this area I find the QuickStart a bit insufficient; a few more examples would be great :)
With multiple services, calls to a thriftserver fail. A preliminary trace shows that the method name is used as the service name, so the lookup fails and always returns the same service.
c++filt demangles the symbol as:
vtable for protocol::HttpMessage
The system is CentOS 7
ProtoBuf version: 3.13.0
The project's CMakeLists.txt is as follows. After srpc is built, the includes and static libraries are placed into the project:
set(SRPC_LIB srpc)
list(APPEND SRPC_INCLUDE_DIR
${ClickHouse_SOURCE_DIR}/contrib/srpc/_include
${ClickHouse_SOURCE_DIR}/contrib/srpc/workflow/_include
)
dbms_target_link_libraries(PRIVATE ${SRPC_LIB})
dbms_target_include_directories(PRIVATE ${SRPC_INCLUDE_DIR})
What might be the problem?
workflow provides a very handy first_timeout() interface, which controls the first-reply timeout for each outgoing network task. It can be used to build many rich features; srpc currently uses it to implement watch: after a request is sent to the remote end, the connection is kept open, and as long as the timeout controlled by first_timeout() has not expired, the remote end may keep sending responses. This is very useful for watching node changes in service discovery.
The interface srpc provides on each task is called watch_timeout:
struct RPCTaskParams
{
int send_timeout;
int watch_timeout;
// ...
};
When this feature was first added, in the case where the global RPCClientParams had been set through RPC_CLIENT_PARAMS_DEFAULT, watch_timeout was by oversight left uninitialized and could end up a random value. Since workflow's first_timeout() interface is in milliseconds, a small random positive value could easily trigger a local timeout.
This has now been fixed; please upgrade to the latest version. You are also welcome to try this feature, and feel free to report any problems to us. Thanks~
Previously the rpc server defaulted to persistent connections while the rpc client defaulted to short connections, so users had to manually change keep_alive_timeout in the client parameters to get a persistent-connection client; without that change, performance suffered badly. The new code changes the client default to persistent connections.
For example, in scenarios where the server needs to return many messages, gRPC's streaming RPC works well. Is there a good way to implement this with srpc?