grpc-ecosystem / go-grpc-middleware
Golang gRPC Middlewares: interceptor chaining, auth, logging, retries and more.
License: Apache License 2.0
return nil, grpc.Errorf(codes.InvalidArgument, err.Error())
I do not want to use the hard-coded codes.InvalidArgument (3) here; please give me an option to choose which error code is returned.
What's the best way to suppress certain response output? In this case we have health checks that are called periodically, and we don't really want them to fill up the logs. Is the best solution to wrap the grpc_logrus.UnaryServerInterceptor handler with my own handler that filters out those calls?
Could you give a full example of logging with zap?
The readme has confused me for several days...
From the code, we can see:
ServerField = zap.String("span.kind", "server")
zap.String("grpc.code", code.String()),
I use ELK; when the parsed log is pushed to Elasticsearch, it fails.
Please let me customize the grpc.code field name, for example to use grpc_code instead.
Since about 2 weeks ago there is a tagged release for golang/protobuf. It would probably make sense to use this as the constraint in the Gopkg file instead of master.
Hi,
I am using most of go-grpc-middleware and I have a question regarding logs. I wish to include the payload in the standard output or have a way to correlate both of them.
In order to get, ideally, this kind of output:
{
"level": "info",
"msg": "finished unary call",
"grpc.code": "OK",
"grpc.method": "Ping",
"grpc.service": "mwitkow.testproto.TestService",
"grpc.start_time": "2006-01-02T15:04:05Z07:00",
"grpc.request.deadline": "2006-01-02T15:04:05Z07:00",
"grpc.request.value": "something",
"grpc.time_ms": 1.345,
"peer.address": {
"IP": "127.0.0.1",
"Port": 60216,
"Zone": ""
},
"span.kind": "server",
"system": "grpc",
"grpc.request.content": {
"msg" : {
"value": "something",
"sleepTimeMs": 9999
}
},
"custom_field": "custom_value",
"custom_tags.int": 1337,
"custom_tags.string": "something"
}
Below, an extract of my interceptor.
grpc_zap.UnaryServerInterceptor(logger.Zap, opts...),
grpc_zap.PayloadUnaryServerInterceptor(logger.Zap, alwaysLoggingDeciderServer),
One simple way could be to pass a GUID in the context and flag both log lines with it. Then we could aggregate both results in Grafana. It's not optimal, but it would work.
By the way, thanks for the amazing work!
func UnaryClientInterceptor(logger *zap.Logger, opts ...Option) grpc.UnaryClientInterceptor {
The logger parameter should also be able to accept a sugared logger (*zap.SugaredLogger).
If I only wanted to run the auth middleware on auth protected endpoints, can this be accomplished?
In the sample code it is encouraged to create a log entry, store it in the context, and use it for logging in the rest of the request. As far as I can tell this is not thread safe. What is the thought process behind this example? I would like to use this issue as a place to discuss logging in this manner.
How does the opentracing part of this repo relate to https://github.com/grpc-ecosystem/grpc-opentracing/tree/master/go/otgrpc (otgrpc)?
It looks like both packages build middleware using https://github.com/opentracing/opentracing-go.
Are they alternative implementations? Are you in touch with the authors of otgrpc?
I'm trying to log some per-request metadata. The code below gets it done, producing:
grpc.code=OK grpc.method=Ping user=98765 peer.address=127.0.0.1:53878 span.kind=server system=grpc
I was wondering if there's a better way?
On the client side doing:
import "google.golang.org/grpc/metadata"
md := metadata.Pairs("user", "98765")
ctx := metadata.NewOutgoingContext(context.Background(), md)
client.Ping(ctx, &pb_testproto.PingRequest{})
And on the server, something like:
func UnaryServerMetadataTagInterceptor(fields ...string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		if ctxMd, ok := metadata.FromIncomingContext(ctx); ok {
			tags := grpc_ctxtags.Extract(ctx)
			for _, field := range fields {
				if values, present := ctxMd[field]; present {
					tags.Set(field, strings.Join(values, ","))
				}
			}
		}
		return handler(ctx, req)
	}
}
myServer := grpc.NewServer(
	grpc_middleware.WithUnaryServerChain(
		grpc_ctxtags.UnaryServerInterceptor(),
		UnaryServerMetadataTagInterceptor("user"),
		grpc_logrus.UnaryServerInterceptor(logrusEntry, logrusOpts...),
	),
	...
)
Related, is there a reason grpc_ctxtags.RequestFieldExtractorFunc doesn't get access to the per-request context? Would the addition of a MetadataExtractorFunc in grpc_ctxtags.options be welcomed?
Hi,
I'm using the zap logging interceptor with the ctxtags interceptor, and I'm running into an issue where the peer.address key appears twice. Has anyone else run into this?
2017-12-24T22:46:05.822-0800 INFO zap/server_interceptors.go:40 finished unary call {"peer.address": "[::1]:63312", "grpc.start_time": "2017-12-24T22:46:05-08:00", "system": "grpc", "span.kind": "server", "grpc.service": "...", "grpc.method": "...", "peer.address": "[::1]:63312", "grpc.code": "OK", "grpc.time_ms": 398.6860046386719}
...
logger, err := zap.NewDevelopment()
if err != nil {
log.Fatalf("failed to initialize zap logger: %v", err)
}
grpc_zap.ReplaceGrpcLogger(logger)
kaParams := keepalive.ServerParameters{
MaxConnectionIdle: 60 * time.Minute,
Time: 60 * time.Minute,
}
s := grpc.NewServer(
grpc.KeepaliveParams(kaParams),
grpc_middleware.WithUnaryServerChain(
grpc_ctxtags.UnaryServerInterceptor(),
grpc_zap.UnaryServerInterceptor(logger),
grpc_recovery.UnaryServerInterceptor(),
),
)
...
I'm trying to figure out how to insert the stack trace of an error into my logrus-generated messages. The errors are constructed using the github.com/pkg/errors package, for example: errors.Wrapf(err, "Failed to execute query"). Ideally, I'd like this to show up in my JSON log as follows:
{
"timestamp": "2017-12-29T03:29:26Z",
"system": "grpc",
"message": "finished unary call",
"level": "info",
"error": "rpc error: code = Internal desc = invalid UpdatePartyName request. Expected a Person or Organization",
"action": "CreateParty",
"stacktrace": "github.com/myrepo/myproj/manager.init\ngithub.com/myrepo/myproj/manager/manager.go:84\ngithub.com/myrepo/myproj/server.init\n\u003cautogenerated\u003e:1\ngithub.com/myrepo/myproj/cmd.init\n\u003cautogenerated\u003e:1\nmain.init\n\u003cautogenerated\u003e:1\nruntime.main\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/proc.go:183\nruntime.goexit\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/asm_amd64.s:2337"
}
I've been able to use the WithCodes function to change the error codes, but this doesn't allow me to hook into the log message to insert any new details. Can anyone point me in the right direction?
I'm not sure if I use your library correctly, but I'm getting race with this code:
package main
import (
"context"
"github.com/mwitkow/go-grpc-middleware/util/metautils"
"google.golang.org/grpc/metadata"
)
func main() {
md := metadata.Pairs("key", "value")
parent := metadata.NewContext(context.Background(), md)
for i := 0; i < 1000; i++ {
go func(parent context.Context) {
ctx, cancel := context.WithCancel(parent)
defer cancel()
metautils.SetSingle(ctx, "key", "val")
}(parent)
}
}
The idea is that I receive a gRPC request to service A, which then concurrently calls multiple services (let's say B, C and D). I re-use the parent context but set a timeout for those requests. The connections between A and B-D use the retry logic from this repository (5 retries, 1 second timeout). So the race is in metautils.SetSingle(), where multiple writes are performed on the metadata map (storing the x-retry-attempty header). Is it intended not to work concurrently, or am I doing something wrong? The example above is narrowed down to calling metautils.SetSingle() because the real scenario is not easy to reproduce, but I can prepare a more realistic example if needed.
Hi,
I use grpc_auth and grpc_prometheus in a gRPC server like this:
server := grpc.NewServer(
grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
grpc.UnaryInterceptor(
grpc_middleware.ChainUnaryServer(
middleware.ServerLoggingInterceptor(true),
grpc_auth.UnaryServerInterceptor(authenticate),
grpc_prometheus.UnaryServerInterceptor,
otgrpc.OpenTracingServerInterceptor(tracer, otgrpc.LogPayloads()))),
)
And auth :
func authenticate(ctx context.Context) (context.Context, error) {
glog.V(2).Info("Check authentication")
token, err := grpc_auth.AuthFromMD(ctx, "basic")
if err != nil {
return nil, err
}
userID, err := auth.CheckBasicAuth(token)
if err != nil {
return nil, grpc.Errorf(codes.Unauthenticated, err.Error())
}
newCtx := context.WithValue(ctx, transport.UserID, userID)
return newCtx, nil
}
When I try to access services without credentials, I get this response:
rpc error: code = 16 desc = Unauthorized
So it works fine. But in the /metrics exported for Prometheus, I don't see any metrics with code = Unauthenticated:
grpc_server_handled_total{grpc_code="Unauthenticated", grpc_method="xxxxx",grpc_service="xxxxxxx",grpc_type="unary"} 0
Any idea?
Currently the value can only be a string. Is it possible to save a struct (object) as the value?
{"level":"info","ts":1498443458.2312229,"caller":"servant.git/main.go:95","msg":"{\"Item1\":\"aaa\",\"Item2\":222}","system":"grpc","span.kind":"server","grpc.service":"pb.Greeter","grpc.method":"SayHello"}
I'd like the msg to be a JSON object.
Hi,
When using the logging middleware, it's difficult to use custom errors in handlers/other interceptors because the logging middleware expects an rpcError type to determine the gRPC error code.
By adding another func (one that extracts a gRPC error code from an error) to the logging options, it would be easy to enable the use of custom error types. This can be done without changing the current default behavior.
I'll submit a PR.
Does it sound reasonable to support a logging implementation using glog? We currently use glog and would like to use it for the grpc logger.
Haven't checked the code yet, but I saw in the documentation that the logrus middleware expects a logrus.Entry. Wouldn't it be better to rely on the logrus.FieldLogger interface?
grpc.request.deadline and grpc.start_time use d.Format(time.RFC3339), which means the maximum precision is seconds. I believe it would be useful to use d.Format(time.RFC3339Nano) at least.
Best would be a configuration option for me to format all logged timestamps as desired.
I can work on a PR for this change if it makes sense.
If you use grpc_logrus with dependency vendoring (with dep or glide), importing logrus as github.com/sirupsen/logrus (recommended by the author, see sirupsen/logrus#543 and sirupsen/logrus#553 (comment)) will cause a build error:
case-insensitive import collision
Hi, I have a question: do you know of any existing Go library providing support for file and blob streaming over gRPC?
Example use cases include microservices providing an API for converting media formats (image, audio, video), or microservices providing file compression, text-to-speech, etc.
I found this thread: grpc/grpc-go#414. This seems to be a pretty common need, so I figure others must have dealt with it.
Pardon the lackluster issue instead of a proper pull request, but I'm under time pressure and would otherwise just forget this.
While I was rolling my own middleware for gRPC, I noticed https://github.com/grpc-ecosystem/go-grpc-middleware/blob/master/chain.go#L18
My suggestion is somewhat self-explanatory and looks as follows:
func ChainUnaryServer(interceptors ...grpc.UnaryServerInterceptor) grpc.UnaryServerInterceptor {
	n := len(interceptors)
	if n > 1 {
		lastI := n - 1
		return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
			// curI must live per invocation, not per chain: if it were shared
			// at the factory level, concurrent and subsequent calls would
			// corrupt each other's position in the chain.
			curI := 0
			var chainHandler grpc.UnaryHandler
			chainHandler = func(currentCtx context.Context, currentReq interface{}) (interface{}, error) {
				if curI == lastI {
					return handler(currentCtx, currentReq)
				}
				curI++
				return interceptors[curI](currentCtx, currentReq, info, chainHandler)
			}
			return interceptors[0](ctx, req, info, chainHandler)
		}
	}
	if n == 1 {
		return interceptors[0]
	}
	// n == 0
	return func(ctx context.Context, req interface{}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		return handler(ctx, req)
	}
}
Avoids a loop, n lambda constructions and n additional function calls. Adds n if conditions and n increments, but that should still be considerably cheaper. Branches are ordered by most likely occurrence - it's a chain after all, so I assume n is > 1. The built lambdas end up in the hot path, so I think a little bit of micro-optimization won't hurt. Coincidentally, I think it's easier to reason about.
Admittedly I have not actually benchmarked it against the code in grpc_middleware (apologies - really low on time) but it should (cough, cough) be quite a bit faster going by common sense. I have however been using this approach in a deployment - with no issues.
If someone wants to pick it up and work it into the interceptor factories, please go ahead. Otherwise I'll work on a proper PR, but that won't be sooner than in 2-3 weeks.
The client variants of the logging interceptors were added in #33. Following this change, the logging interceptors do not behave correctly in their default configurations, i.e. without specifying Options as overrides:
Unless WithLevels(...) is specified as an option, calls to the logging interceptor panic, as o.levelFunc is nil.
Unless WithLevels(...) is specified as an option, calls to the logging interceptor are logged at the incorrect levels, as the default is the DefaultCodeToLevel for both server and client interceptors.
I am trying to add a JSON object as part of my grpc_ctxtags (which shows up as jsonPayload in Stackdriver). I am setting the proto object as-is, and it is being converted to a string instead of a JSON object. For more detail, see this:
jsonPayload: {
caller: "zap/server_interceptors.go:66"
conf: "encoding:LINEAR16 sample_rate_hertz:44100 header:"RIFF\277\377\377\377WAVEfmt \020\000\000\000\001\000\001\000D\254\000\000\210X\001\000\002\000\020\000data\233\377\377\377" language_code:"en-US" session_id:"102k0KxT9EISz-G1IVynLmIUg" session_owner_id:"24f80fa2-2a82-4e7b-a1e1-ab3c2b2dcfd9" stream_start_time:<seconds:1513280382 nanos:649000000 > context:<view:SCHEDULE > "
....
I want the conf object to be a JSON object instead of a raw string. What is the advice on this?
I will post some code there to show examples.
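One workaround sketch in plain Go, assuming the goal is a nested JSON object in the structured log output: round-trip the message through encoding/json into a map, then attach the map (e.g. via zap.Any), which a JSON encoder emits as a nested object rather than a quoted string. The conf struct below stands in for the real proto message; a real proto would be marshaled with the jsonpb/protojson marshaler instead:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// conf is a stand-in for the proto config message in the log sample above.
type conf struct {
	Encoding        string `json:"encoding"`
	SampleRateHertz int    `json:"sample_rate_hertz"`
	LanguageCode    string `json:"language_code"`
}

// toJSONObject converts a value into a generic map so a structured logger
// can encode it as a nested object instead of a flat string.
func toJSONObject(v interface{}) (map[string]interface{}, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	return m, nil
}

func main() {
	c := conf{Encoding: "LINEAR16", SampleRateHertz: 44100, LanguageCode: "en-US"}
	m, err := toJSONObject(c)
	if err != nil {
		panic(err)
	}
	// A zap call site could now do e.g. zap.Any("conf", m).
	fmt.Println(m["encoding"], m["sample_rate_hertz"])
}
```

The double marshal costs a little CPU per request, so it is best gated behind a decider for the calls that actually need structured payloads.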
Google has a bunch of error detail protobuf messages here: https://godoc.org/google.golang.org/genproto/googleapis/rpc/errdetails
I am currently performing validations by hand like so:
s, _ := status.Newf(codes.InvalidArgument, "invalid input").WithDetails(&errdetails.BadRequest{
FieldViolations: []*errdetails.BadRequest_FieldViolation{
{
Field: "SomeRequest.email_address",
Description: "INVALID_EMAIL_ADDRESS",
},
{
Field: "SomeRequest.username",
Description: "INVALID_USER_NAME",
},
},
})
return s.Err()
I am wondering if you guys can consider the following: using errdetails.BadRequest to report which field is invalid.

I'm confused about what SystemField is intended to represent, as it seems that it's being used as both a key and a value. My assumption would be that it represents the value of the "system" field in the log message. Could I get some clarification here? I'll happily submit a pull request to correct it, if this is indeed an oversight. Thanks!
Hello there,
Recently, I have been working on migrating our project from glide to dep. Our project depends on this repository, but when dep tries to solve dependencies it iterates over the many branches this project has. After checking, a lot of them are already merged into master, which means they are unused and can be removed. I am writing this to notify you and to ask whether you could remove the unused branches.
Thanks in advance.
I like being able to define tags in proto, and have the ctx_tags middleware extract them. This then can be used from request logging interceptors later down the middleware chain.
Can I do this for response logging too? By default, the logrus middleware used with ctx_tags just logs the request. If I add a payload interceptor whose decider always returns true, then it logs the response in grpc.response.content as a full JSON struct.
I was hoping it would log like the request does: use the opentracing-style tag format and populate only the tagged fields.
Any ideas?
These are the sorts of log lines I'm getting:
{"app":"ticket_svc","grpc.code":"OK","grpc.method":"GetTicket","grpc.request.id":"e5179cd4-4c03-41f8-bc07-52d9fcf7bc85","grpc.service":"actourex.core.service.ticket.Command","grpc.time_ms":42,"level":"info","msg":"finished unary call","peer.address":{"IP":"::1","Port":52958,"Zone":""},"severity":"INFO","span.kind":"server","system":"grpc","time":"2017-07-05T18:14:10Z"}
# it would be nice if I could figure out how to have this print with keys like: grpc.response.id = blah, grpc.response.some_other_tagged_field = blah2
{"app":"ticket_svc","grpc.response.content":{ ... the full json payload... },"level":"info","msg":"server response payload logged as grpc.request.content field","severity":"INFO","time":"2017-07-05T18:14:10Z"}
I was checking out https://github.com/mwitkow/grpc-proxy because I want to pass incoming gRPC messages on to NATS; I find it makes microservices so much easier.
However, per the issues there, the problem was that the gRPC team would not accept the customisation needed to get access to the raw []byte data in the stream.
So, would the new interceptors allow this proxying to be achieved?
It looks like google.golang.org/grpc/metadata is now being vendored by this project. This breaks the public API of this package, which might not be your intention.
I don't believe google.golang.org/grpc/metadata needs to be vendored by this package; doing so breaks type compatibility between this package and other packages using metadata:
somefile.go:74:46: cannot use stream (type "google.golang.org/grpc".ServerStream) as type "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream in argument to grpc_middleware.WrapServerStream:
"google.golang.org/grpc".ServerStream does not implement "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream (wrong type for SendHeader method)
have SendHeader("google.golang.org/grpc/metadata".MD) error
want SendHeader("github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc/metadata".MD) error
Any interceptors chained after the retry interceptor are not re-executed in subsequent retry attempts.
For example if we have:
grpc_middleware.ChainUnaryClient(
	grpc_retry.UnaryClientInterceptor(),
	grpc_prometheus.UnaryClientInterceptor)
and want the grpc_prometheus interceptor to see and time each retry independently, then currently it will intercept only the first attempt.
(This was possibly a regression in 5d4723c)
I have a pull request, with tests, in:
#100
Prior to looking into the grpc_ctxtags middleware, I was using the context myself to propagate tags. I have grpc_ctxtags wired up for my gRPC server interceptors, but I also have a separate worker that is not a gRPC server. I'm not seeing an easy way to share a common chunk of code for tags and logging, because grpc_ctxtags returns a no-op tag when it was not initialized by the interceptor, and I'm not seeing any other way to initialize it.
The Sirupsen import should probably be changed to the lowercased version. (Unfortunately.)
Currently, if the default duration function is used, we lose precision on the duration whenever the call takes under a millisecond.
Link to Playground
https://play.golang.org/p/P_oPECXyCU
ohai!
Just wanted to know why the backoff strategy used here is linear rather than exponential. Was it an omission, a lack of opinion, or a deliberate choice?
As in the title: I have custom code in a stream interceptor right after calling handler(srv, wrapped). When the client cancels the context, the code after the handler is not executed.
Is this a gRPC bug or a middleware bug?
Examples
Package (HandlerUsageUnaryPing)
Package (Initialization)
Package (InitializationWithDurationFieldOverride)
These links are not working.
In this example, we see the following code:
func Example_deadlinecall() error {
client := pb_testproto.NewTestServiceClient(cc)
pong, err := client.Ping(
newCtx(5*time.Second),
&pb_testproto.PingRequest{},
grpc_retry.WithMax(3),
grpc_retry.WithPerRetryTimeout(1*time.Second))
if err != nil {
return err
}
fmt.Printf("got pong: %v", pong)
return nil
}
But when I use one of these "option modifiers", such as grpc_retry.WithMax(), in a client call, it fails with a nil pointer dereference, because the callOption wasn't set.
It seems to work well when I pass it as a modifier to the constructor of grpc_retry.UnaryClientInterceptor, as in the test.
I do not yet fully understand the code architecture, and I don't know how a grpc.CallOption is meant to be used. My question is: am I overlooking something? Is the example just wrong? Or is the implementation wrong?
Currently, to use ServiceAuthFuncOverride one needs to introduce a dummy interceptor:
func dummyInterceptor(ctx context.Context) (context.Context, error) {
return ctx, nil
}
...
s := grpc.NewServer(
grpc.StreamInterceptor(grpc_auth.StreamServerInterceptor(dummyInterceptor)),
grpc.UnaryInterceptor(grpc_auth.UnaryServerInterceptor(dummyInterceptor)),
)
Is there better way to do this?
Hi,
What do you guys think of a new server interceptor that would stall API calls for a given duration in order to mitigate DoS / brute-force attacks?
Use case: delay every API call by 200 ms to prevent API DoS.
I know this can be done easily using an auth interceptor with a custom sleep-based function, but that doesn't feel right because it has nothing to do with auth.
metadata.NewContext and metadata.FromContext have been removed from the grpc-go repository, and now metautils fails to build. See utils/metautils/single_key.go#25.
metadata: Remove NewContext and FromContext for gRFC L7:
grpc/grpc-go@596a6ac
grpc/grpc-go#1392
As the title asks: I have a server written in Go, but the client may be written in C#. Do these middlewares work with C# gRPC clients?
I was wondering why serverStreamingRetryingStream does not empty its buffer bufferedSends after those messages have been successfully re-sent when the stream is reestablished?
We would like to be able to use the logrus logging interceptor to configure the level of payload log messages.
I was able to configure the log level for the server messages (e.g. grpc_logrus.UnaryServerInterceptor) with grpc_logrus.WithLevels. However, for payload messages the level looks to be hardcoded (code link).
Is there interest in adding something similar for the payload levels? I'm open to submitting a pull request, but I'm not sure of the best approach for where to set the payload level. Some ideas: add an option such as grpc_logrus.WithPayloadLevels? Pass it directly into the Payload*Interceptor function? Have it returned from the decider function? Other?
go get is giving errors: does anybody else get the same error?
% go get github.com/grpc-ecosystem/go-grpc-middleware/retry
# github.com/grpc-ecosystem/go-grpc-middleware/util/metautils
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:21: undefined: metadata.FromIncomingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:33: undefined: metadata.FromOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:69: undefined: metadata.NewOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:76: undefined: metadata.NewIncomingContext
% go get github.com/grpc-ecosystem/go-grpc-middleware
# github.com/grpc-ecosystem/go-grpc-middleware
../../grpc-ecosystem/go-grpc-middleware/chain.go:77: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:81: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:87: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:106: undefined: grpc.StreamClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:110: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:116: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: too many errors
Would be helpful
When creating a client interceptor, despite passing an option with WithDecider set to a function that always returns false, I have noticed I am still getting DEBUG entries in zap; for example, unary client calls get a "finished client unary call" entry for every call.
Should there be something like
err := invoker(ctx, method, req, reply, cc, opts...)
+ if !o.shouldLog(method, err) {
+ return err
+ }
logFinalClientLine(o, logger.With(fields...), startTime, err, "finished client unary call")
in client_interceptors.go's Unary/StreamClientInterceptor functions, to prevent the call to logFinalClientLine, similar to how this appears to be handled on the server side?