
grpc-java's Introduction

gRPC – An RPC library and framework

gRPC is a modern, open source, high-performance remote procedure call (RPC) framework that can run anywhere. gRPC enables client and server applications to communicate transparently, and simplifies the building of connected systems.

Homepage: grpc.io
Mailing List: [email protected]


To start using gRPC

To maximize usability, gRPC supports the standard method for adding dependencies to a user's chosen language (if there is one). In most languages, the gRPC runtime comes as a package available in a user's language package manager.

For instructions on how to use the language-specific gRPC runtime for a project, please refer to these documents

  • C++: follow the instructions under the src/cpp directory
  • C#/.NET: NuGet packages Grpc.Net.Client, Grpc.AspNetCore.Server
  • Dart: pub package grpc
  • Go: go get google.golang.org/grpc
  • Java: Use JARs from Maven Central Repository
  • Kotlin: Use JARs from Maven Central Repository
  • Node: npm install @grpc/grpc-js
  • Objective-C: Add gRPC-ProtoRPC dependency to podspec
  • PHP: pecl install grpc
  • Python: pip install grpcio
  • Ruby: gem install grpc
  • WebJS: follow the grpc-web instructions

Per-language quickstart guides and tutorials can be found in the documentation section on the grpc.io website. Code examples are available in the examples directory.

Precompiled bleeding-edge package builds of gRPC master branch's HEAD are uploaded daily to packages.grpc.io.

To start developing gRPC

Contributions are welcome!

Please read How to contribute, which will guide you through the entire workflow: how to build the source code, how to run the tests, and how to contribute changes to the gRPC codebase. It also contains info on how the contribution process works and best practices for creating contributions.

Troubleshooting

Sometimes things go wrong. Please check out the Troubleshooting guide if you are experiencing issues with gRPC.

Performance

See the Performance dashboard for performance numbers of master branch daily builds.

Concepts

See gRPC Concepts

About This Repository

This repository contains source code for gRPC libraries implemented in multiple languages, written on top of a shared C++ core library (src/core).

Libraries in different languages may be in various states of development. We are seeking contributions for all of these libraries:

Language                     Source
Shared C++ [core library]    src/core
C++                          src/cpp
Ruby                         src/ruby
Python                       src/python
PHP                          src/php
C# (core library based)      src/csharp
Objective-C                  src/objective-c

Language                     Source repo
Java                         grpc-java
Kotlin                       grpc-kotlin
Go                           grpc-go
NodeJS                       grpc-node
WebJS                        grpc-web
Dart                         grpc-dart
.NET (pure C# impl.)         grpc-dotnet
Swift                        grpc-swift


grpc-java's Issues

Daemon Threads in grpc

We have to decide how to deal with daemon threads. Grpc uses non-daemon threads by default and so the pattern

ServerImpl server = NettyServerBuilder.forPort(port). ... .build();
// start() is non-blocking and returns immediately
server.start();

works fine.

However, if one were to provide a custom executor that uses daemon threads, the above would terminate immediately. It's quite likely that people will unknowingly provide daemon threads to grpc by passing in a custom Netty EventLoopGroup.

e.g.

EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup worker = new NioEventLoopGroup(serverThreads);

ServerImpl server = NettyServerBuilder.forPort(port)
    .userBossEventLoopGroup(boss)
    .workerEventLoopGroup(worker)
    .build();

Netty's EventLoops use a ForkJoinPool executor by default, which uses daemon threads.

The same issue exists on the client.
OkHttp uses non-daemon threads by default as @ejona86 pointed out to me.

One solution would be to add an infinite awaitTermination() method that does what the name suggests. Another solution would be to create a dummy non-daemon thread on startup and thus ensure that the JVM doesn't exit prematurely.
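
A minimal sketch of the second idea, assuming nothing about the eventual API (the latch and thread name are purely illustrative): a single non-daemon thread keeps the JVM alive until shutdown is signalled.

import java.util.concurrent.CountDownLatch;

// Sketch only: keep the JVM alive with one non-daemon thread, even if all
// threads supplied to grpc are daemon threads.
final CountDownLatch terminated = new CountDownLatch(1);

Thread keepAlive = new Thread(new Runnable() {
  @Override
  public void run() {
    try {
      terminated.await();            // block until shutdown is signalled
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}, "grpc-keep-alive");
keepAlive.setDaemon(false);          // explicitly non-daemon
keepAlive.start();

// ... later, once the server has fully shut down:
terminated.countDown();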

@nmittler @louiscryan

Add timing to QPS Client.

Currently the Java and C++ versions of the QPS Client allow one to perform a fixed number of calls. When using them for benchmarking, I think it would make much more sense if we could tell the client to hit the server for some period of time (e.g. 30 minutes), since 1 million calls mean something completely different on GCE, in prod, and on your local workstation.

WDYT @vjpai? If you agree, I could also come up with a PR for the C++ version.
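
An illustrative sketch of the duration-based mode (doOneCall is a hypothetical helper performing a single RPC; this is not the actual QPS client code):

import java.util.concurrent.TimeUnit;

// Sketch: run calls until a wall-clock deadline instead of a fixed call count.
long deadline = System.nanoTime() + TimeUnit.MINUTES.toNanos(30);
long calls = 0;
while (System.nanoTime() < deadline) {
  doOneCall();   // hypothetical helper that performs one unary call
  calls++;
}
System.out.println("Completed " + calls + " calls in 30 minutes");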

Buffer Messages until TLS Handshake and HTTP2 Negotiation complete

When grpc uses Netty as the client transport all RPC calls (aka HTTP2 Streams) block until the TLS Handshake and the HTTP2 negotiation is complete.

This blocking implementation (in grpc) is currently required as Netty's SslHandler doesn't buffer messages until the Handshake is complete ("You must make sure not to write a message while the handshake is in progress unless you are renegotiating."), and there is nothing to stop the user from starting to make RPC calls immediately.

This behavior comes with two problems:

  1. With RPC calls blocking until the TLS Handshake is complete, every call launched before the TLS Handshake and HTTP2 Negotiation are done blocks its calling thread, even though one would expect asynchronous behavior.
  2. In cases where a DirectExecutor is being used, it might lead to the EventLoop blocking forever (effectively a deadlock). There are several scenarios in which a deadlock could happen. One such scenario is when you are writing a server in Netty and, within that server, you want to connect to a grpc service to fetch some data. If you now use a DirectExecutor and reuse the server's EventLoop for the grpc client, the TLS handshake would block the server's EventLoop, which is also the very EventLoop responsible for completing the TLS handshake. That way neither the server nor the client would ever make progress again.

@nmittler , @ejona86 and I talked about this problem earlier today and we agreed to get rid of the blocking behavior by adding an additional ChannelHandler to the end of the pipeline (tail) that will buffer any data until TLS & HTTP2 are working. After that it will send the buffered messages through the pipeline and remove itself from the pipeline.
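
A rough sketch of such a handler, assuming the TLS handshake completion is signalled via Netty's SslHandshakeCompletionEvent; the real handler would also need to wait for the HTTP2 negotiation and handle handshake failures.

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.ssl.SslHandshakeCompletionEvent;
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch only: buffer outbound writes until the TLS handshake completes, then
// replay them and remove this handler from the pipeline.
class BufferUntilNegotiatedHandler extends ChannelDuplexHandler {

  private static final class PendingWrite {
    final Object msg;
    final ChannelPromise promise;
    PendingWrite(Object msg, ChannelPromise promise) {
      this.msg = msg;
      this.promise = promise;
    }
  }

  private final Queue<PendingWrite> buffered = new ArrayDeque<>();

  @Override
  public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    buffered.add(new PendingWrite(msg, promise));   // hold writes until negotiation is done
  }

  @Override
  public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof SslHandshakeCompletionEvent
        && ((SslHandshakeCompletionEvent) evt).isSuccess()) {
      PendingWrite pending;
      while ((pending = buffered.poll()) != null) {
        ctx.write(pending.msg, pending.promise);    // replay buffered writes
      }
      ctx.flush();
      ctx.pipeline().remove(this);                  // handler no longer needed
    }
    super.userEventTriggered(ctx, evt);
  }
}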

@nmittler @ejona86 @louiscryan

Implement outbound flow control

Should be token-based a la reactive streams. Although it also seems that maybe we will only ever have at most 1 token passed to the application.

Currently we provide no method of pushback to the application and buffer infinitely as the application sends.
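
An illustrative sketch (not a proposed API) of what token-based pushback could look like: the transport grants tokens, and the application may only send while it can acquire one.

// Illustrative only: the application may send one message per granted token.
class OutboundTokenBucket {
  private int tokens;          // tokens granted by the transport
  private Runnable onReady;    // application callback run when a token arrives

  synchronized void grantToken() {        // called by the transport
    tokens++;
    if (onReady != null) {
      onReady.run();
    }
  }

  synchronized boolean tryAcquire() {     // called by the application before sending
    if (tokens > 0) {
      tokens--;
      return true;
    }
    return false;                         // no token: caller must buffer or back off
  }

  synchronized void setOnReady(Runnable callback) {
    this.onReady = callback;
  }
}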

Replace use of InputStream in transport APIs

The current use of InputStream has a few problems:

  • it is non-idiomatic to split the responsibility for opening and closing a stream; it also prevents features such as try-with-resources
  • it essentially forces extra byte copying in order to consume the data

I haven't looked at the implementation, but since the comments say that the InputStream is non-blocking, I assume that it is used instead of byte[] simply to prevent modifications of the data when it is passed to multiple listeners (a reasonable concern).

Since there already appears to be a Guava dependency, the obvious alternative would be to pass an (inherently read-only) ByteSource instead.

(ByteString from protobuf would be another option, but for various reasons ByteSource is probably a better choice)
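
For illustration, assuming the transport already holds the message as a byte[] and parseMessage is a hypothetical consumer, a ByteSource gives each listener a read-only view and lets it open and close its own stream:

import com.google.common.io.ByteSource;
import java.io.IOException;
import java.io.InputStream;

// Sketch: hand listeners a read-only ByteSource instead of an InputStream.
void deliver(byte[] frame) throws IOException {
  ByteSource source = ByteSource.wrap(frame);    // inherently read-only view
  try (InputStream in = source.openStream()) {   // each listener opens/closes its own stream
    parseMessage(in);                            // hypothetical consumer of the bytes
  }
}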

Remove blocking parts from NettyClientTransport

NettyClientTransport#newStream is currently a blocking operation. It blocks until the HEADERS frame has been written on the wire. This behavior is not what people who use our asynchronous API would come to expect.

The blocking is also the cause of severe performance issues in the QPS Client, as it results in roughly as many threads being created as there are concurrent calls in flight (we have seen ~850 threads for 1000 concurrent calls, resulting in an OOM).

The blocking may also lead to deadlocking the EventLoop in cases where a DirectExecutor is used. One scenario where a deadlock might happen is when the EventLoop is not able to completely flush the HEADERS frame on the wire: Netty then internally creates a task to flush the remaining bytes and puts it in its task queue. That task can never be completed, though, as the EventLoop thread is blocked by our very own newStream method waiting for the task to complete.

This issue depends on #116 and #118 to be resolved first.

Investigate excessive flushing in Netty

Determine ways to reduce the number of flushes we perform. For example, if the header frame is being sent, flushing afterward is generally a waste since a DATA frame typically follows. We want methods that allow smart semi-automatic flushing, or that use knowledge from the application layer.
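
In Netty terms the idea is simply to defer the flush to the last frame of a batch; a trivial sketch, where headersFrame and dataFrame are placeholders:

// Sketch: queue the HEADERS write, then flush once together with the DATA frame.
ctx.write(headersFrame);          // queued in the outbound buffer, no flush yet
ctx.writeAndFlush(dataFrame);     // a single flush covers both frames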

Add getPort() or similar to ServerImpl

I noticed that there doesn't seem to be a way to programmatically retrieve the port a ServerImpl is listening on. Could we add a method that provides this information?

I would argue it's useful when one wants to bind to port 0, e.g.

ServerImpl server = NettyServerBuilder.forPort(0).build();
server.start();

// getPort() is the proposed method; it does not exist yet
System.out.println("Server listening on port " + server.getPort());

Channel interface needs shutdown/close

With the elimination of Service the Channel interface is now insufficient for normal use.

While ChannelImpl has a shutdown() method, Channel does not, and intercepting a ChannelImpl immediately converts it into a Channel.

Closeable/AutoCloseable would be fine too
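
A hypothetical sketch of what AutoCloseable would buy us (Channel does not currently implement it; channelImpl and myInterceptor are placeholders):

// Hypothetical: if Channel implemented AutoCloseable, an intercepted channel
// could still be shut down, e.g. via try-with-resources.
try (Channel channel = ClientInterceptors.intercept(channelImpl, myInterceptor)) {
  // make calls on the intercepted channel ...
}  // close() would map to shutdown()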

Figure out what names we want for Client Foos vs Server Foos

We have Call on both Client and Server. When adding the server, we chose ServerCall as the name of the server-side Call, and discussed renaming the client-side to ClientCall. ClientInterceptor was named based on this idea. However, the rename hasn't happened yet and now some have suggested having the Client names simply lack "Client" (so it would remain "Call"). It seems it has simply been too long since the original discussion for us to remember what was decided.

We need to decide to either prefix Client with Client and Server with Server or only prefix Server with Server. After the decision, whatever needs fixing needs to be fixed.

Note that in the transport we must have ClientStream prefixed with Client, since Stream is a shared interface between Client and Server.

OkHttp transport has high latency

For some unknown reason, the OkHttp transport has regressed dramatically in performance, considering it used to beat Netty. Unfortunately we don't know when. We need to do some profiling and figure out where the latency is coming from.

Buffer RPC Calls for when the MAX_CONCURRENT_STREAMS limit is hit.

The number of concurrent RPC calls we can do is limited by HTTP2's MAX_CONCURRENT_STREAMS setting. Currently when using Netty as the client transport, each call made after this limit is reached blocks its calling thread until the number of active streams goes below the maximum again. The blocking is necessary as otherwise Netty would simply reject the stream with a PROTOCOL_ERROR, thus we want to buffer those calls and only pass them to Netty once there is room for new streams again.

Similar to #116, a user would again expect asynchronous behavior here.

The proposed solution to this problem is to remove the aforementioned buffering/blocking from grpc-java and let Netty handle it instead. To do this we will add a new Http2ConnectionEncoder implementation to Netty that acts as a decorator for the DefaultHttp2ConnectionEncoder. It will intercept calls to writeHeaders, writeData and writeRstStream, buffer all frames of streams that were created after the maximum-streams limit was reached, and pass the others through. The encoder will also add a listener to the connection so that when an active stream is closed, the next stream from the buffer can be created. A call to writeRstStream will cause the buffered stream to be deleted from the buffer. Frames other than HEADERS, DATA and RST_STREAM will be passed directly to the DefaultHttp2ConnectionEncoder.

We propose to contribute this change back to Netty as it will likely also be useful for other people using Netty's HTTP2 codec.
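
A deliberately simplified sketch of the buffering idea, not using the real Http2ConnectionEncoder signatures (frames and streams are reduced to plain objects and ints):

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Simplified illustration; the real change would decorate Netty's
// DefaultHttp2ConnectionEncoder and operate on actual HTTP2 frames.
class StreamBufferSketch {
  private final int maxConcurrentStreams;
  private final Set<Integer> active = new HashSet<>();
  // Pending frames per buffered stream, kept in stream-creation order.
  private final Map<Integer, Queue<Object>> buffered = new LinkedHashMap<>();

  StreamBufferSketch(int maxConcurrentStreams) {
    this.maxConcurrentStreams = maxConcurrentStreams;
  }

  void writeFrame(int streamId, Object frame) {
    if (active.contains(streamId)) {
      passThrough(streamId, frame);                 // stream already active
    } else if (buffered.containsKey(streamId)) {
      buffered.get(streamId).add(frame);            // stream already buffered
    } else if (active.size() < maxConcurrentStreams) {
      active.add(streamId);                         // room for a new stream
      passThrough(streamId, frame);
    } else {
      Queue<Object> queue = new ArrayDeque<>();     // over the limit: buffer it
      queue.add(frame);
      buffered.put(streamId, queue);
    }
  }

  void onStreamClosed(int streamId) {
    active.remove(streamId);
    // Promote the oldest buffered stream, if any.
    Iterator<Map.Entry<Integer, Queue<Object>>> it = buffered.entrySet().iterator();
    if (it.hasNext() && active.size() < maxConcurrentStreams) {
      Map.Entry<Integer, Queue<Object>> next = it.next();
      it.remove();
      active.add(next.getKey());
      for (Object frame : next.getValue()) {
        passThrough(next.getKey(), frame);
      }
    }
  }

  void onRstStream(int streamId) {
    buffered.remove(streamId);   // a reset deletes the buffered stream entirely
  }

  private void passThrough(int streamId, Object frame) {
    // In the real implementation this would delegate to DefaultHttp2ConnectionEncoder.
  }
}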

Tidy up Javadoc

Need to ensure that Javadoc in the major interfaces and exposed types is of high quality and properly makes use of referencing annotations.

Suggest we divide and conquer as follows...

Perform hostname checking on :authority before issuing call

We allow users to override the authority per-call, but we currently don't do any verification that that authority would be permitted for the current server. We should verify the provided authority against the TLS cert of the connection and fail in some way if the cert is not good for the requested authority. We would cache these verifications for the connection in a simple hash map.

It is the Java equivalent of grpc/grpc#471
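
A sketch of the check with a per-connection cache, assuming a standard javax.net.ssl.HostnameVerifier and the connection's SSLSession are at hand; the exception type and names are illustrative only.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

// Sketch: verify an overridden :authority against the connection's TLS cert,
// caching results per connection so repeated calls are cheap.
class AuthorityVerifier {
  private final HostnameVerifier verifier;          // e.g. an RFC 2818 hostname verifier
  private final SSLSession session;                 // the connection's TLS session
  private final Map<String, Boolean> cache = new ConcurrentHashMap<>();

  AuthorityVerifier(HostnameVerifier verifier, SSLSession session) {
    this.verifier = verifier;
    this.session = session;
  }

  void checkAuthority(String authority) {
    boolean ok = cache.computeIfAbsent(authority, a -> verifier.verify(a, session));
    if (!ok) {
      // Illustrative failure mode; the actual behavior is still to be decided.
      throw new IllegalArgumentException(
          "Connection's TLS certificate is not valid for authority: " + authority);
    }
  }
}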

Do we need both gradle and maven?

Keeping dependencies and other details in pom.xml and build.gradle will become a tedious overhead. Any reason to support two build systems?

Remove Guava's Service from our immediate API

Service doesn't gain us anything and is just painful for our users. We should remove it and just make our own API (start and stop, plus health-checking API).

This has already been done for Channel. It still needs to be done for Server, but I'm already working on that.

Optimize buffer usage in MessageFramer

We currently allocate a large non-direct buffer per stream due to MessageFramer, only to copy immediately out of it. We should instead write directly to the transport-native buffer, which will mean introducing a WritableBuffer or some such, analogous to our current Buffer class which is read-only.

"No cached instance found" exception in integration test

The exception does not cause any problem other than noise, but we should still figure out what is going wrong.

java.lang.IllegalArgumentException: No cached instance found for grpc-default-executor
    at io.grpc.SharedResourceHolder.releaseInternal(SharedResourceHolder.java:144)
    at io.grpc.SharedResourceHolder.release(SharedResourceHolder.java:115)
    at io.grpc.AbstractChannelBuilder$2.run(AbstractChannelBuilder.java:109)
    at io.grpc.ChannelImpl.shutdown(ChannelImpl.java:113)
    at io.grpc.testing.integration.AbstractTransportTest.teardown(AbstractTransportTest.java:128)
    at io.grpc.testing.integration.TestServiceClient.teardown(TestServiceClient.java:160)
    at io.grpc.testing.integration.TestServiceClient.access$000(TestServiceClient.java:51)
    at io.grpc.testing.integration.TestServiceClient$1.run(TestServiceClient.java:65)

Plugin is not building following current instructions

Final gradle install is giving me:

Could not resolve all dependencies for configuration ':grpc-netty:compile'.

Could not find io.netty:netty-codec-http2:5.0.0.Alpha2-SNAPSHOT.
Searched in the following locations:
https://repo1.maven.org/maven2/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/maven-metadata.xml
https://repo1.maven.org/maven2/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/netty-codec-http2-5.0.0.Alpha2-SNAPSHOT.pom
https://repo1.maven.org/maven2/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/netty-codec-http2-5.0.0.Alpha2-SNAPSHOT.jar
file:/usr/local/google/home/lcarey/.m2/repository/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/maven-metadata.xml
file:/usr/local/google/home/lcarey/.m2/repository/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/netty-codec-http2-5.0.0.Alpha2-SNAPSHOT.pom
file:/usr/local/google/home/lcarey/.m2/repository/io/netty/netty-codec-http2/5.0.0.Alpha2-SNAPSHOT/netty-codec-http2-5.0.0.Alpha2-SNAPSHOT.jar
Required by:
io.grpc:grpc-netty:0.1.0-SNAPSHOT

Update route guide gradle file with task that just generates gRPC code

Currently there are two tasks in the gradle file, one of which builds and runs the server, one of which builds and runs the client.

In the tutorial, I'd like users to just generate the gRPC code with protoc without running anything; it'd be useful if the gradle file offered an option to do this.

Race for Netty between cancel and stream creation

AbstractClientStream.cancel won't cancel the stream on the wire if it appears the stream has not yet been allocated, as is described by the comment:

// Only send a cancellation to remote side if we have actually been allocated
// a stream id and we are not already closed. i.e. the server side is aware of the stream.

However, what happens if this is the case, is that the transport is not notified of the stream destruction, and the stream will still eventually be created by the transport and not be cancelled. This issue does not seem a problem with the OkHttp transport, since it allocates the stream id before returning any newly created stream. However, Netty delays id allocation until just before the stream headers are sent, which 1) is always done asynchronously and 2) may be strongly delayed due to MAX_CONCURRENT_STREAMS.

It appears that the optimization in AbstractClientStream should be removed outright and sendCancel's doc updated to specify the expectation of handling such cases (as opposed to directly causing RST_STREAM). Both OkHttp and Netty seem to handle such cases already. More importantly, the optimization seems highly prone to races, given that id allocation occurs in the transport thread whereas AbstractClientStream.cancel happens on some application thread; using the normal synchronization between application and transport threads seems more than efficient enough, and simpler.

Channel-state API

At the moment, TCP connections are created lazily on the first call on a Channel, and if the TCP connection goes down it isn't reconnected until a subsequent call. However, some users will want the TCP connection to be created and maintained for the lifetime of the Channel.

This "constant connection" behavior does not make as much sense when accessing a third party service, as the service may purposefully be disconnecting idle clients, but is very reasonable in low-latency, intra-datacenter communication.

We need an API to choose between those behaviors and to export failure information about the Channel. All of this is bundled together for the moment under the name "health-checking API," but we can split it apart as it makes sense.

They are tied together for the moment because certain operations like "wait until Channel is healthy" assume that the channel will actively try to connect.

Some notes from @louiscryan:

Do we want to canonicalize transport failure modes into an enum, or are we happy with a boolean indicating transient vs. durable? What failure modes will we have?

  • wire incompatibility, which can occur at any time and, while in theory
    transient, you may not want your application to keep working through it
  • unreachable
  • internal implementation error
  • redirection: the addressed service has moved elsewhere

Naming inconsistency in NettyServerBuilder

I noticed that NettyServerBuilder has the methods userBossEventLoopGroup and workerEventLoopGroup.

Why the user prefix in userBossEventLoopGroup? Why not simply bossEventLoopGroup?

Decompression occurring in Transport thread

Apparently we are decompressing in the transport thread just so that we are able to provide the correct byte length to messageRead(). It seems we should remove the length argument to messageRead(), use Buffers.openStream(nextFrame, true), pass that stream to messageRead() (instead of calling toByteArray), and then set nextFrame = null.

Idea config breaks clean gradle build

$ gradle clean
$ gradle build
FAILURE: Build failed with an exception.

* Where:
Build file '/home/ejona/clients/grpc-java/integration-testing/build.gradle' line: 31

* What went wrong:
A problem occurred evaluating project ':stubby-integration-testing'.
> java.lang.NullPointerException (no error message)

Commenting out line 31 works around the problem, but obviously doesn't solve it:

excludeDirs = [file('.gradle')]
//excludeDirs += files(file("$buildDir/").listFiles())
excludeDirs -= file("$buildDir/generated-sources")

Consider making Channel/Server abstract classes

They were interfaces previously because of the bane of Service and the need to extend Abstract*Service in ChannelImpl/ServerImpl. Now that Service is gone from those APIs, we could swap to using abstract classes to give us greater ability to add to the APIs in the future.
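
A sketch of the practical benefit (signatures are illustrative, not the actual API): with an abstract class, a method added later can ship with a default body, so existing subclasses keep compiling, which a plain interface cannot offer on Java 7.

// Illustrative only: abstract classes can grow without breaking subclasses.
public abstract class Channel {
  public abstract <ReqT, RespT> Call<ReqT, RespT> newCall(MethodDescriptor<ReqT, RespT> method);

  // Added in a later release; existing subclasses keep compiling.
  public String authority() {
    throw new UnsupportedOperationException("not implemented");
  }
}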

Netty HTTP/2 negotiation fails silently if ALPN/NPN in classpath, but jetty_alpn not in bootclasspath

If you properly have ALPN/NPN in your classpath, but lack jetty_alpn in your bootclasspath, then we just hang after sending a SETTINGS frame. ALPN never happens, but "unsupported" isn't even called because that is normally called by jetty_alpn.

This makes it hard for users to determine what is wrong with their setup.

The only idea I have for a fix is to set a boolean when our ClientProvider is called and detect if it isn't set.
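
A sketch of that detection (class and method names are illustrative, not existing gRPC code): set a flag when our provider is invoked and warn if it never fires.

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: detect that jetty_alpn never called back into our ALPN/NPN provider.
final class AlpnInvocationCheck {
  static final AtomicBoolean providerCalled = new AtomicBoolean();

  // Called from our ClientProvider when ALPN/NPN negotiation actually runs.
  static void markCalled() {
    providerCalled.set(true);
  }

  // Called some time after the connection attempt has started.
  static void warnIfNeverCalled() {
    if (!providerCalled.get()) {
      System.err.println(
          "ALPN/NPN provider was never invoked; is jetty_alpn on the bootclasspath?");
    }
  }
}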

Remove Guava's Service from transport API

For similar reasons as #21, but more for ourselves instead of our users.

This will allow us to be much more precise and have nuances explicitly like how a connection can GOAWAY for new streams but keep the old streams processing.
