importcjj / mobc
A generic connection pool for Rust with async/await support
Home Page: https://docs.rs/mobc
License: Apache License 2.0
I mean, when calling Pool::get().
Hey!
I ran into an issue when testing if my service works without redis
This is the config of the pool
const CACHE_POOL_MAX_OPEN: u64 = 32;
const CACHE_POOL_MAX_IDLE: u64 = 8;
const CACHE_POOL_TIMEOUT_SECONDS: u64 = 5;
const CACHE_POOL_EXPIRE_SECONDS: u64 = 30;
pub async fn connect() -> Result<RedisPool> {
let client = redis::Client::open(CONFIG.cache.host.as_str()).map_err(RedisClientError)?;
let manager = RedisConnectionManager::new(client);
Ok(Pool::builder()
.get_timeout(Some(Duration::from_secs(CACHE_POOL_TIMEOUT_SECONDS)))
.max_open(CACHE_POOL_MAX_OPEN)
.max_idle(CACHE_POOL_MAX_IDLE)
.max_lifetime(Some(Duration::from_secs(CACHE_POOL_EXPIRE_SECONDS)))
.build(manager))
}
async fn get_con(pool: &RedisPool) -> Result<RedisCon> {
pool.get().await.map_err(|e| {
error!("error connecting to redis: {}", e);
RedisPoolError
})
}
So the get_timeout is set to five seconds, and when redis is completely gone, it returns an error immediately. However, with a timeout (DNS still available, but the server doesn't respond), it took forever to fail. With the above config it took 90 seconds (30 seconds default + 2 retries?), and if I lowered the timeout (e.g. to 500 ms), or removed it, it took 30 seconds, which I don't really understand.
I now added a select! around my get_con code, which works, but do you have any idea why this might be happening? I looked through the code for get_timeout and it seems fine 🤔
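One way to rule out the pool's internal retry behaviour is to bound the wait externally, which is essentially what the select! workaround above does: in async code, tokio::time::timeout (or select!) around pool.get() enforces the caller's deadline regardless of what happens inside the pool. A minimal synchronous sketch of the same idea, where get_con_with_deadline and the 30-second sleep are hypothetical stand-ins for the hanging pool.get() call:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Bounds the wait for a connection with an external deadline,
// independent of whatever retry logic the pool runs internally.
fn get_con_with_deadline(deadline: Duration) -> Result<&'static str, &'static str> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Stand-in for a `pool.get()` that hangs (DNS resolves,
        // but the server never answers).
        thread::sleep(Duration::from_secs(30));
        let _ = tx.send("connection");
    });
    // The caller gives up after `deadline`, no matter what.
    rx.recv_timeout(deadline)
        .map_err(|_| "timed out waiting for a connection")
}

fn main() {
    let res = get_con_with_deadline(Duration::from_millis(100));
    // The external deadline fires long before the hung connect returns.
    assert!(res.is_err());
}
```

In async code the analogous guard would be `tokio::time::timeout(deadline, pool.get()).await`, which is what the select! wrapper achieves.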
Hey hey.
I'm trying to use mobc to pool TcpStream, and I notice that the wrapper Connection doesn't implement AsyncRead or AsyncWrite, which means the connection can't be passed to contexts where those traits are required. Are there any plans to support a connection (or connection variant?) that implements these?
See the issue in ahash and the fix in metrics that has not been picked up here.
The async API is coming.
See redis-rs/redis-rs#232
At first, the pool spawns multiple workers to make new connections. The pool then acquires the internals lock frequently to check whether initialization has finished; the lock is also needed by the workers, so some workers can't submit their connections in time.
Version 0.17 of the redis crate contains TLS/SSL support. I'm currently using a forked version of this repo, and I have opened a PR for this: #42.
Currently the idle lifetime is only checked when the connection is requested.
It should also be checked by the cleaner.
Imagine the following scenario:
I will push a fix soon
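The cleaner-side check could be as simple as one pass over the idle list that drops everything past its idle lifetime. A minimal sketch, where IdleConn and sweep_idle are illustrative names, not mobc's internals:

```rust
use std::time::{Duration, Instant};

struct IdleConn {
    returned_at: Instant, // when the connection went idle
}

// One cleaner pass: keep only connections idle for less than
// `max_idle_lifetime`; return how many the cleaner closed.
fn sweep_idle(free: &mut Vec<IdleConn>, max_idle_lifetime: Duration, now: Instant) -> usize {
    let before = free.len();
    free.retain(|c| now.duration_since(c.returned_at) < max_idle_lifetime);
    before - free.len()
}

fn main() {
    let start = Instant::now();
    let mut free = vec![
        IdleConn { returned_at: start },                            // will be stale
        IdleConn { returned_at: start + Duration::from_secs(100) }, // still fresh
    ];
    // Pretend the cleaner wakes up 120 seconds after `start`.
    let closed = sweep_idle(&mut free, Duration::from_secs(60), start + Duration::from_secs(120));
    assert_eq!(closed, 1);
    assert_eq!(free.len(), 1);
}
```

Running such a sweep periodically would close stale idle connections even when no one requests a connection, which is the gap described above.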
The crates.io page for mobc_postgres links to https://github.com/importcjj/mobc-postgres, which has been archived (presumably in favor of this repository).
Inspired by r2d2-cluster-redis (sync) and redis-cluster-async (without pool), I created mobc-redis-cluster based on mobc-redis: https://github.com/rogeriob2br/mobc-redis-cluster.
Can we add it to the supported list?
Hi,
After upgrading to mobc version 0.6.1, I'm getting the error below.
Here is my list of dependencies; I'm unsure if that makes a difference.
[dependencies]
actix-web = "3.3.2"
actix-rt = "1.1.1"
listenfd = "0.3.3"
serde = "1.0.117"
jsonwebtoken = "7.2.0"
serde_derive = "1.0.117"
chrono = "0.4.19"
rdkafka = "0.24.0"
env_logger = "0.8.2"
log = "0.4.11"
mobc = { version = "0.6.1", features = ["tokio"] }
thiserror = "1.0.22"
uuid = { version = "0.8.1", features = ["v4"] }
mobc-redis = "0.5.4"
And here is the error.
Compiling mobc v0.6.1
error[E0599]: no function or associated item named `new` found for struct `tokio::runtime::Runtime` in the current scope
--> /Users/mike/.cargo/registry/src/github.com-1ecc6299db9ec823/mobc-0.6.1/src/runtime.rs:41:46
|
41 | rt: tokio::runtime::Runtime::new().unwrap(),
| ^^^ function or associated item not found in `tokio::runtime::Runtime`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0599`.
error: could not compile `mobc`
Thanks
Hey,
we use mobc as part of Prisma, and we are getting into a situation where mobc completely stops serving any connections.
I create an HTTP server using hyper and create a mobc pool via Quaint.
A repo with the reproduction can be found here https://github.com/garrensmith/mobc-error-example
I then use apache benchmark with a request like this:
ab -v 4 -c 200 -t 120 http://127.0.0.1:4000/
Once apache benchmark has stopped, the number of connections in postgres drops either to much lower than the number I originally configured to open, or completely to zero. If I log State from mobc, it reports 10 active connections, which is incorrect.
However, if I start apache benchmark and run it again, it will either run a lot slower and with fewer connections, or not run at all because it cannot acquire a new connection from mobc.
I've tried a few things in the code but I cannot see why this is happening. I even tried #60 but that didn't fix it.
Any help would be really appreciated.
I am very green on Rust, so please be indulgent, but I am investigating a pool depletion bug (see prisma/prisma#5977) in Prisma (which uses mobc under a lot of layers).
Now, if I understand correctly, if the pool is above the limit, the request is added to conn_requests (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L546).
This is a HashMap, which I think should be replaced by a FIFO queue, since the way it is queried (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L661) means that a "random" request can be served (i.e. not necessarily the oldest).
Imagine a scenario with a pool of 1 and a get timeout of 5s: request A comes in at 0s and takes 5s to complete. Request B comes in at 2s and waits. Request C comes in at 3s and waits. A finishes; by random luck C gets the connection and takes 3s to complete. Request B will time out when it should not have. In a real system with a legitimate backlog, this means a request could wait for a connection for a long time.
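The ordering argument can be made concrete with a VecDeque, which serves waiters in arrival order, unlike HashMap iteration. A small sketch (the waiter names match the scenario above; serve_next is an illustrative name, not mobc's API):

```rust
use std::collections::VecDeque;

// A FIFO waiter queue: pop_front always yields the oldest request,
// whereas HashMap iteration order gives an effectively random one.
fn serve_next(waiters: &mut VecDeque<&'static str>) -> Option<&'static str> {
    waiters.pop_front()
}

fn main() {
    let mut waiters = VecDeque::new();
    waiters.push_back("B"); // arrived at 2s
    waiters.push_back("C"); // arrived at 3s
    // When A's connection frees up, B (the oldest waiter) gets it,
    // not whichever entry a HashMap iteration happens to yield first.
    assert_eq!(serve_next(&mut waiters), Some("B"));
    assert_eq!(serve_next(&mut waiters), Some("C"));
    assert_eq!(serve_next(&mut waiters), None);
}
```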
So once the connection goes out of scope, the Drop trait makes sure that the connection is returned to the pool. This is handled by the put_conn function (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L637), which then tries to give the connection back to a request that is waiting for one. Now I believe a loop is missing here: https://github.com/importcjj/mobc/blob/master/src/lib.rs#L660-L673
Again, imagine a scenario with a pool of 1 and a get timeout of 5s: request A comes in at 0s and takes 10s to complete. Request B comes in at 2s and waits. Request C comes in at 8s and waits. A finishes, B is selected to get the connection, but the request has been cancelled, so the connection is just returned to the pool directly (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L665). C is never served and times out after 5s.
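The missing loop amounts to: keep trying waiters until one actually accepts the connection, rather than giving up after the first cancelled one. A minimal sketch under assumed names (Waiter and hand_off are illustrative, not mobc's types):

```rust
use std::collections::VecDeque;

struct Waiter {
    id: &'static str,
    cancelled: bool, // true if the request already timed out
}

// Loop over waiters until one accepts the connection; a cancelled
// waiter is skipped instead of ending the hand-off.
fn hand_off(waiters: &mut VecDeque<Waiter>) -> Option<&'static str> {
    while let Some(w) = waiters.pop_front() {
        if !w.cancelled {
            return Some(w.id); // connection delivered to a live waiter
        }
        // cancelled waiter: skip it and try the next one
    }
    None // nobody left waiting; connection returns to the idle list
}

fn main() {
    let mut waiters = VecDeque::from([
        Waiter { id: "B", cancelled: true }, // timed out while waiting
        Waiter { id: "C", cancelled: false },
    ]);
    // Without the loop, the hand-off would stop at B and C would starve.
    assert_eq!(hand_off(&mut waiters), Some("C"));
}
```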
Hi all,
I am currently working on a project using this crate, which is awesome and has helped me loads. I am having trouble fitting it into my webserver, as I need the Pool struct to implement Debug. Would it be possible to add that to this crate?
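A manual Debug impl would avoid requiring the connection manager itself to be Debug, printing only the meaningful fields. A sketch with hypothetical stand-in types (this is not mobc's actual Pool):

```rust
use std::fmt;

struct Manager; // stand-in for a connection manager that is not Debug

struct Pool {
    _manager: Manager,
    max_open: u64,
}

// A hand-written Debug impl sidesteps `#[derive(Debug)]`'s requirement
// that every field be Debug; non-printable fields are simply elided.
impl fmt::Debug for Pool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Pool")
            .field("max_open", &self.max_open)
            .finish_non_exhaustive() // renders as `Pool { max_open: .., .. }`
    }
}

fn main() {
    let pool = Pool { _manager: Manager, max_open: 32 };
    let s = format!("{pool:?}");
    assert!(s.contains("max_open: 32"));
    println!("{s}");
}
```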
Thanks for any help!
Currently there are a bunch of FIXMEs for dropped results.
I think we should find a way to report those back to the user.
The minimum would be to log them, but maybe a callback system could be good? What do you think?
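One shape the callback idea could take: the builder accepts an optional error sink, and internal errors go to it, falling back to logging when none is set. Everything below is a hypothetical sketch, not mobc's API (Builder, error_sink, and report are invented names):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// User-supplied hook for errors the pool would otherwise drop.
type ErrorSink = Box<dyn Fn(&str) + Send + Sync>;

struct Builder {
    on_error: Option<ErrorSink>,
}

impl Builder {
    fn new() -> Self {
        Builder { on_error: None }
    }
    // Register a callback for internal errors.
    fn error_sink(mut self, f: impl Fn(&str) + Send + Sync + 'static) -> Self {
        self.on_error = Some(Box::new(f));
        self
    }
    // Called at each current FIXME site instead of discarding the error.
    fn report(&self, err: &str) {
        match &self.on_error {
            Some(f) => f(err),
            None => eprintln!("mobc internal error: {err}"), // the logging minimum
        }
    }
}

fn main() {
    let seen = Arc::new(AtomicUsize::new(0));
    let seen2 = Arc::clone(&seen);
    let b = Builder::new().error_sink(move |_e| {
        seen2.fetch_add(1, Ordering::SeqCst);
    });
    b.report("connection closed unexpectedly");
    assert_eq!(seen.load(Ordering::SeqCst), 1);
}
```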
Version 0.7.0 that was just released is missing an entry in the changelog file. Would appreciate if you could document the changes!
Hey!
I've been using mobc-redis in a project for a while and I found a strange issue, which so far only happened in production (so with a lot of load) and which I haven't been able to reproduce locally.
It seems that in some cases, a connection becomes somehow "corrupted" and only returns PONG on con.get() calls, although the actual value in redis is a valid string. Also, deleting that value from redis didn't change anything, as the problem seemed to be at the connection level.
However, the connection doesn't appear to be broken (the pings when recycling it succeed), so the misbehaviour persists through several client requests, as the connection is successfully recycled.
I had the same issue using the old CMD/GET API.
At first I thought this might happen in very long-running connections, so I reduced the max_lifetime to 30 seconds and that helped, but it still happens (although, as I said, very rarely). And if it happens, it's limited to this one connection and stops once the connection is dropped.
I'm using:
mobc-redis = "0.5.1"
mobc = "0.5.7"
This is the config for the pool:
const CACHE_POOL_MAX_OPEN: u64 = 32;
const CACHE_POOL_MAX_IDLE: u64 = 8;
const CACHE_POOL_TIMEOUT_SECONDS: u64 = 5;
const CACHE_POOL_EXPIRE_SECONDS: u64 = 30;
pub async fn connect() -> Result<RedisPool> {
let client = redis::Client::open(CONFIG.cache.host.as_str()).map_err(RedisClientError)?;
let manager = RedisConnectionManager::new(client);
Ok(Pool::builder()
.max_open(CACHE_POOL_MAX_OPEN)
.max_idle(CACHE_POOL_MAX_IDLE)
.get_timeout(Some(Duration::from_secs(CACHE_POOL_TIMEOUT_SECONDS)))
.max_lifetime(Some(Duration::from_secs(CACHE_POOL_EXPIRE_SECONDS)))
.build(manager))
}
and this is how e.g. a get works:
async fn get_con(pool: &RedisPool) -> Result<RedisCon> {
pool.get().await.map_err(RedisPoolError)
}
pub async fn get_str(pool: &RedisPool, key: &str) -> Result<String> {
let mut con = get_con(&pool).await?;
let value = con.get(key).await.map_err(RedisCMDError)?;
FromRedisValue::from_redis_value(&value).map_err(RedisTypeError)
}
So as you can see, it's a pretty basic setup.
I'm not sure if this is an issue with the underlying redis library or with recycling connections - it seems the PONG response of the ping on the connection gets stuck somehow?
I tried to reproduce it locally with redis-rs and with mobc-redis by doing billions of gets, but have never seen it. Maybe you have an idea what the issue could be?
Anyway, thanks for this fantastic project. Besides this issue (which I fixed by validating the return value from the get), it's been working great, same for mobc-postgres. 👍
Any help you could provide would be greatly appreciated.
mobc_pool_connections_idle is defined as "Number of currently unused Pool Connections (waiting for the next pool query to run)".
But it is instantiated with max_open:
gauge!(IDLE_CONNECTIONS, max_open as f64);
https://github.com/importcjj/mobc/blob/f8c9779141ab3b8079d5dff0317273a7ee029998/src/lib.rs#L383C34-L383C43
max_open is defined as "Sets the maximum number of connections managed by the pool."
That means that on start, the gauge just reports the "connection limit", and then later the number of "connections left until the connection limit".
Even though this might be an interesting metric, it is not what the description of the actual metric implies.
Instead of being initialized with max_open, this gauge should probably start at 0 and then reflect the difference between mobc_pool_connections_open and mobc_pool_connections_busy.
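The proposed relationship is simple arithmetic, which a tiny sketch makes explicit (idle_connections is an illustrative helper, not mobc code):

```rust
// Proposed invariant for the idle gauge: it starts at zero and always
// equals open minus busy, never the configured connection limit.
fn idle_connections(open: u64, busy: u64) -> u64 {
    open.saturating_sub(busy)
}

fn main() {
    // On startup nothing is open yet, so idle must be 0 (not max_open).
    assert_eq!(idle_connections(0, 0), 0);
    // With 8 connections open and 5 handed out, 3 are idle.
    assert_eq!(idle_connections(8, 5), 3);
}
```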
Hey,
I just published this crate, which supports RabbitMQ connection pooling for mobc, if you want to add it to the Readme.
Cheers
If you connect to postgres by creating a new Pool, and the connection configuration has a non-existing database, mobc will wait a while and then throw Error::Timeout. It would be nicer if we got the exact reason why the connection failed, and immediately, instead of waiting for the timeout.
Hi
just added support for ArangoDB; if you want to add it in your Readme.
https://github.com/inzanez/mobc-arangors
- can run in the async-std-1.0 runtime
- can not run in the async-std-1.0 runtime
- can run in the Tokio-0.2 runtime

Hi there,
I use the newest mobc-redis version 0.7.0, but the redis::cmd.query_async function does not work with the mobc pool. Would you like to give me a hand? Thank you.
[Cargo.toml]
#redis
redis = { version = "0.19.0", features = ["tokio-comp"] }
mobc = "0.7.0"
mobc-redis = "0.7.0"
[src/main.rs]
use mobc::Pool;
use mobc_redis::{redis, redis::AsyncCommands, redis::Connection};
use mobc_redis::RedisConnectionManager;
let mut conn = pool.get().await.unwrap();
let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
println!("{}", s);
[cargo build]
let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `mobc_redis::redis::aio::ConnectionLike` is not implemented for `mobc_redis::redis::Connection`
error[E0605]: non-primitive cast: `&mut mobc::Connection<RedisConnectionManager>` as `&mut mobc_redis::redis::Connection`
--> src/main.rs:982:52
|
982 | let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ an `as` expression can only be used to convert between primitive types or to coerce to a specific trait object
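The `as` cast fails because mobc's Connection is a distinct wrapper struct; it dereferences to the manager's raw connection, so re-borrowing through DerefMut (i.e. passing `&mut *conn` to query_async) is the usual fix. A self-contained sketch of why the re-borrow compiles and the cast does not, using hypothetical stand-in types rather than the real redis/mobc ones:

```rust
use std::ops::{Deref, DerefMut};

struct RawConn; // stand-in for redis::aio::Connection

impl RawConn {
    fn ping(&mut self) -> &'static str {
        "PONG"
    }
}

// Stand-in for mobc::Connection<M>, which derefs to the raw connection.
struct PooledConn {
    inner: RawConn,
}

impl Deref for PooledConn {
    type Target = RawConn;
    fn deref(&self) -> &RawConn {
        &self.inner
    }
}

impl DerefMut for PooledConn {
    fn deref_mut(&mut self) -> &mut RawConn {
        &mut self.inner
    }
}

// Stand-in for an API that wants the raw connection, like query_async.
fn query(conn: &mut RawConn) -> &'static str {
    conn.ping()
}

fn main() {
    let mut conn = PooledConn { inner: RawConn };
    // `&mut conn as &mut RawConn` would not compile: `as` cannot convert
    // between distinct structs. Re-borrowing through DerefMut does:
    let s = query(&mut *conn);
    assert_eq!(s, "PONG");
}
```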
Now that mobc v0.4.0 is out, I tried to find a new version of mobc-postgres, but... it looks like its code was removed 😂.
In addition: tokio-postgres v0.5.1 is out.
With Tokio 0.3 out, it would be nice to update, especially since this is supposed to be the last version before 1.0
There is already an updated version of tokio-postgres
For redis, there is an issue to upgrade as well.
The package xxx depends on mobc-postgres with features: tokio, but mobc-postgres does not have these features.
mobc-postgres version: 0.3.0
Switch to tokio 0.2
async-std versions 1.6.0 and later are based on stjepang/smol.
Theoretically it is possible to support both the async-std and Tokio runtimes.
It would be really useful for us to have a setting in mobc that does not just reconnect after a certain time, but reconnects after a timeout IF the connection had no activity during that time. This would prevent stale connections from causing trouble in the application.
I can work on this on Monday if needed.
The mobc_pool_connections_opened_total and mobc_pool_connections_open metrics are not handled according to their descriptions. As described in the source code, mobc_pool_connections_opened_total (a counter) refers to the total number of pool connections opened, and mobc_pool_connections_open (a gauge) refers to the current number of opened connections. However, mobc_pool_connections_opened_total values are emitted as a gauge, and mobc_pool_connections_open values are emitted both as a gauge and a counter. Also, the mobc_pool_connections_open gauge value is equal to the opposite of mobc_pool_connections_closed_total, and the mobc_pool_connections_open counter value is equal to mobc_pool_connections_opened_total.
Build a new pool, set a metrics recorder, and observe how the desired metrics are registered, incremented, and decremented.
It basically extends the tide.rs example provided in this repo by adding a LogRecorder responsible for logging every metric change that is emitted.
Observe how mobc_pool_connections_opened_total is incremented as a gauge instead of a counter, and how all mobc_pool_connections_open increments are made as a counter, but all decrements are made as a gauge.
mod foodb;
use foodb::FooManager;
use metrics::{
Counter, CounterFn, Gauge, GaugeFn, Histogram, HistogramFn, Key, KeyName, Recorder,
SetRecorderError, Unit,
};
use std::{sync::Arc, time::Duration};
use tide::Request;
// Defines log handlers for each metric operation
pub(crate) struct MetricHandle(Key);
impl CounterFn for MetricHandle {
fn increment(&self, value: u64) {
log::debug!("counter increment {} -> {}", self.0.name(), value);
}
fn absolute(&self, value: u64) {
log::debug!("counter absolute {} -> {}", self.0.name(), value);
}
}
impl GaugeFn for MetricHandle {
fn increment(&self, value: f64) {
log::debug!("gauge increment {} -> {}", self.0.name(), value);
}
fn decrement(&self, value: f64) {
log::debug!("gauge decrement {} -> {}", self.0.name(), value);
}
fn set(&self, value: f64) {
log::debug!("gauge set {} -> {}", self.0.name(), value);
}
}
impl HistogramFn for MetricHandle {
fn record(&self, value: f64) {
log::debug!("histogram record {} -> {}", self.0.name(), value);
}
}
// Defines a simple Metrics Recorder that logs all metrics changes
struct LogRecorder;
impl Recorder for LogRecorder {
fn describe_counter(&self, key_name: KeyName, _unit: Option<Unit>, description: &'static str) {
log::debug!("counter '{}' -> {}", key_name.as_str(), description);
}
fn describe_gauge(&self, key_name: KeyName, _unit: Option<Unit>, description: &'static str) {
log::debug!("gauge '{}' -> {}", key_name.as_str(), description);
}
fn describe_histogram(
&self,
key_name: KeyName,
_unit: Option<Unit>,
description: &'static str,
) {
log::debug!("histogram '{}' -> {}", key_name.as_str(), description);
}
fn register_counter(&self, key: &Key) -> Counter {
Counter::from_arc(Arc::new(MetricHandle(key.clone())))
}
fn register_gauge(&self, key: &Key) -> Gauge {
Gauge::from_arc(Arc::new(MetricHandle(key.clone())))
}
fn register_histogram(&self, key: &Key) -> Histogram {
Histogram::from_arc(Arc::new(MetricHandle(key.clone())))
}
}
static RECORDER: LogRecorder = LogRecorder;
pub fn set_recorder() -> Result<(), SetRecorderError> {
metrics::set_recorder(&RECORDER)
}
// Example setup
type Pool = mobc::Pool<FooManager>;
async fn ping(req: Request<Pool>) -> tide::Result {
let pool = req.state();
let conn = pool.get().await.unwrap();
Ok(conn.query().await.into())
}
#[async_std::main]
async fn main() {
env_logger::init();
set_recorder().unwrap();
let manager = FooManager;
let pool = Pool::builder()
.max_open(12)
.max_idle_lifetime(Some(Duration::from_secs(3)))
.build(manager);
let mut app = tide::with_state(pool);
app.at("/").get(ping);
app.listen("127.0.0.1:7777").await.unwrap();
}
Expected behavior: mobc_pool_connections_opened_total to be incremented as a counter, and mobc_pool_connections_open to be incremented and decremented as a gauge.

If you build a Pool configured for unlimited open connections (by using .max_open(0) on the Builder, as documented), connections will not be reused.
The problem seems to be that while there is logic to treat max_open == 0 as unlimited, there is no such logic to treat max_idle == 0 as unlimited. This causes connections to be closed when returned to the pool. The current logic in put_idle_conn() seems to be where that side of the logic is missing:
fn put_idle_conn<M: Manager>(
shared: &Arc<SharedPool<M>>,
mut internals: MutexGuard<'_, PoolInternals<M::Connection, M::Error>>,
conn: Conn<M::Connection, M::Error>,
) {
if internals.config.max_idle > internals.free_conns.len() as u64 {
internals.free_conns.push(conn);
drop(internals);
} else {
conn.close(&shared.state);
}
}
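The fix would be to extend the keep-or-close decision so that max_idle == 0 mirrors the max_open == 0 convention and means "unlimited idle". A sketch of that decision logic extracted into a plain function (this is a proposal, not mobc's actual code):

```rust
// Keep-or-close decision from `put_idle_conn`, extended so that
// max_idle == 0 means "unlimited idle", matching max_open == 0.
fn keep_idle(max_idle: u64, free_conns: usize) -> bool {
    max_idle == 0 || max_idle > free_conns as u64
}

fn main() {
    // Bounded pool: keep until the idle list is full.
    assert!(keep_idle(8, 3));
    assert!(!keep_idle(8, 8));
    // Unlimited pool (max_idle == 0): never close on return,
    // so connections actually get reused.
    assert!(keep_idle(0, 100));
}
```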
When max_open is set to 0, max_idle gets set to 0. See this excerpt from Builder::build():
let max_idle = self
    .max_idle
    .unwrap_or_else(|| cmp::min(self.max_open, DEFAULT_MAX_IDLE_CONNS));
But if you set max_open to 0 in the Builder, you cannot set max_idle to greater than 0. This is also from Builder::build() (immediately after the above excerpt):
assert!(
    self.max_open >= max_idle,
    "max_idle must be no larger than max_open"
);
However, once you have a Pool, you can use its .set_max_idle_conns() method to set max_idle to something greater than 0, and have it properly reuse connections with max_open set to 0.
Here is a test that demonstrates the problem:
use std::time::Duration;
use async_trait::async_trait;
use mobc::{self, Manager, Pool};
pub struct FooConnection;
pub struct FooManager;
#[async_trait]
impl Manager for FooManager {
type Connection = FooConnection;
type Error = std::convert::Infallible;
async fn connect(&self) -> Result<Self::Connection, Self::Error> {
println!("New connection!");
Ok(FooConnection)
}
async fn check(&self, conn: Self::Connection) -> Result<Self::Connection, Self::Error> {
println!("Checked existing connection!");
Ok(conn)
}
}
#[tokio::test]
async fn reuse() {
let pool = mobc::Pool::builder().max_open(2).build(FooManager);
println!("Pool configured for 2 open connections");
run(&pool).await;
let state = pool.state().await;
println!("Pool state: {state:#?}");
assert_eq!(1, state.connections);
assert_eq!(0, state.in_use);
assert_eq!(1, state.idle);
assert_eq!(0, state.max_idle_closed);
// Compare to a pool configured for unlimited open connections
let pool = mobc::Pool::builder().max_open(0).build(FooManager);
println!("Pool configured for unlimited open connections");
run(&pool).await;
let state = pool.state().await;
println!("Pool state: {state:#?}");
assert_eq!(1, state.connections); // Fails; state.connections == 0
assert_eq!(0, state.in_use);
assert_eq!(1, state.idle); // Fails; state.idle == 0
assert_eq!(0, state.max_idle_closed); // Fails; state.max_idle_closed == 2
}
/// Gets a connection from the pool, returns it, then does it again.
async fn run(pool: &Pool<FooManager>) {
pool.get().await.unwrap();
// Give the task spawned by `drop` a moment to work.
tokio::time::sleep(Duration::from_millis(100)).await;
pool.get().await.unwrap();
// Give the task spawned by `drop` a moment to work.
tokio::time::sleep(Duration::from_millis(100)).await;
}
Here's the output of the println!() statements:
Pool configured for 2 open connections
New connection!
Checked existing connection!
Pool state: Stats {
max_open: 2,
connections: 1,
in_use: 0,
idle: 1,
wait_count: 0,
wait_duration: 28.766µs,
max_idle_closed: 0,
max_lifetime_closed: 0,
}
Pool configured for unlimited open connections
New connection!
New connection!
Pool state: Stats {
max_open: 0,
connections: 0,
in_use: 0,
idle: 0,
wait_count: 0,
wait_duration: 9.113µs,
max_idle_closed: 2,
max_lifetime_closed: 0,
}
You can see that for the Pool configured for max_open == 2, it reused the existing connection for the second .get(). But for the second Pool, configured for unlimited open connections (max_open == 0), it made a new connection each time.
https://github.com/importcjj/mobc/blob/master/examples/actix-web.rs
This example can be built but cannot be run with latest stable actix-web.
The latest published version of mobc-redis is 0.8.0, but this repo has 0.8.3, which includes an updated version of the redis crate.
It would be helpful if that were released.
Hi @importcjj, could you give me access to the mobc repo and the ability to publish mobc to crates.io?
I've made fixes to mobc (https://github.com/prisma/mobc) that I would like to merge into the official mobc package and publish.
I would be happy to take over maintaining the project.
It would be nice if mobc-redis imported redis as redis = { version = "0.23" } rather than redis = { version = "0.23.0" }. This would allow consumers of this crate to use the current patch release of the redis crate (0.23.3) without having to wait for an update here.
This library has a health check of the connection on check-out from the pool. This solves the problem of connections being closed by the server due to inactivity, but it is suboptimal for handling connections which are broken on check-in to the pool.
In Rust, Drop of mobc::Connection can happen at any time. So, for example, if you have a timeout on queries, the following may happen:
- redis.get sends GET\n to the server
- the query times out and the mobc::Connection is dropped, returning it to the pool (the response for GET can appear in the input queue shortly after)
Usually this is fixed by setting a flag in the connection structure and dropping the connection on check-in.
Surely, we could drop the connection in Connection::check if the health check on check-out is enabled, but as far as I understand, that check is specifically suited for sending a ping to the server rather than a simple check of a flag on the connection. So there is an incentive for the user to disable these health checks in some cases.
But with the scenario above, disabling the health check might be a security issue.
Even more, the current approach of sending PING and receiving PONG (for redis) is not enough by itself, as technically there can be a pipeline which sends PING, GET (for whatever reason) and then times out. In this case, sending PING will receive PONG anyway, with the response_for_get, PONG left in the input queue. So the next query returns a wrong result, resulting in a security issue. Presumably this is the issue in #28, and the current health check only makes it rarer; it does not solve it.
So the connection has to raise a flag self.waiting_for_response, and the connection pool should drop all such connections on check-in (waiting for responses either on check-in or check-out would be a performance issue, so it should not be done).
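The proposed flag can be sketched in a few lines: the connection marks itself dirty while a response is outstanding, and check-in drops dirty connections instead of recycling them. Names here (Conn, check_in, waiting_for_response) are illustrative, not mobc's API:

```rust
// The connection raises a flag while a response is outstanding; a
// connection dropped mid-query still has the flag set.
struct Conn {
    waiting_for_response: bool,
}

impl Conn {
    fn send_query(&mut self) {
        self.waiting_for_response = true; // raised before writing GET\n
    }
    fn receive_response(&mut self) {
        self.waiting_for_response = false; // cleared once the reply is read
    }
}

// On check-in: only clean connections go back to the idle list.
fn check_in(conn: Conn, free: &mut Vec<Conn>) {
    if conn.waiting_for_response {
        // dropped here: a stale reply may still sit in the input queue
    } else {
        free.push(conn);
    }
}

fn main() {
    let mut free = Vec::new();

    // Query completed normally: safe to recycle.
    let mut ok = Conn { waiting_for_response: false };
    ok.send_query();
    ok.receive_response();
    check_in(ok, &mut free);

    // Query timed out mid-flight: the connection must be discarded.
    let mut timed_out = Conn { waiting_for_response: false };
    timed_out.send_query();
    check_in(timed_out, &mut free);

    assert_eq!(free.len(), 1);
}
```

Because the flag is a plain bool, the check-in test is free, unlike a PING round-trip, which addresses the performance concern above.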
Hopefully not so much catching up versions after this.
Can you add a take method to mobc::Connection? @importcjj
Then I could give up the current connection when I find that it is the source of an error.
The code would probably look like this:
impl<M: Manager> Connection<M> {
    /// Takes this connection out of the pool permanently.
    pub fn take(self) -> M::Connection { /* ... */ }
}
This method would also help convert the redis connection into a PubSub connection: https://docs.rs/redis/0.16.0/redis/aio/struct.Connection.html#method.into_pubsub
Originally posted by @biluohc in #28 (comment)
We need to close the TCP connection when the connection is released.
Currently we use Drop and call tokio::spawn to start the task.
But will this cause more overhead? Could a close callback be executed on the current task instead?
Thanks.
redis's get_async_connection is deprecated.