
mobc's Introduction

Mobc

A generic connection pool with async/await support.

Inspired by Deadpool, Sqlx, r2d2 and the Golang SQL package.

Changelog

Note: mobc requires at least Rust 1.60.

Usage

[dependencies]
mobc = "0.8"

# For async-std runtime
# mobc = { version = "0.8", features = ["async-std"] }

# For actix-rt 1.0
# mobc = { version = "0.8", features = ["actix-rt"] }

Features

  • Support async/.await syntax
  • Support both tokio and async-std
  • Tokio metric support
  • Production battle tested
  • High performance
  • Easy to customize
  • Dynamic configuration

Adaptors

Backend          Adaptor Crate
bolt-client      mobc-bolt
tokio-postgres   mobc-postgres
redis            mobc-redis
arangodb         mobc-arangors
lapin            mobc-lapin
reql             mobc-reql
redis-cluster    mobc-redis-cluster

More DB adaptors are welcome.

Examples

More examples

Using an imaginary "foodb" database.

use mobc::{async_trait, Manager};

#[derive(Debug)]
pub struct FooError;

pub struct FooConnection;

impl FooConnection {
    pub async fn query(&self) -> String {
        "PONG".to_string()
    }
}

pub struct FooManager;

#[async_trait]
impl Manager for FooManager {
    type Connection = FooConnection;
    type Error = FooError;

    async fn connect(&self) -> Result<Self::Connection, Self::Error> {
        Ok(FooConnection)
    }

    async fn check(&self, conn: Self::Connection) -> Result<Self::Connection, Self::Error> {
        Ok(conn)
    }
}
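
A minimal sketch of driving this manager through a pool (assuming the default tokio runtime; the builder options are described under Configuration below):

use mobc::Pool;

#[tokio::main]
async fn main() {
    // Build a pool that keeps at most 15 connections open.
    let pool = Pool::builder().max_open(15).build(FooManager);

    // Check a connection out; it returns to the pool when dropped.
    let conn = pool.get().await.unwrap();
    assert_eq!("PONG", conn.query().await);
}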

Configuration

max_open

Sets the maximum number of connections managed by the pool.

0 means unlimited, defaults to 10.

max_idle

Sets the maximum idle connection count maintained by the pool. The pool will maintain at most this many idle connections at all times, while respecting the value of max_open.

max_lifetime

Sets the maximum lifetime of connections in the pool. Expired connections may be closed lazily before reuse.

None means reuse forever; defaults to None.

get_timeout

Sets the get timeout used by the pool. Calls to Pool::get will wait this long for a connection to become available before returning an error.

None means never time out; defaults to 30 seconds.
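
Put together, a sketch of setting these options on the Builder (reusing FooManager from the example above):

use std::time::Duration;
use mobc::Pool;

let pool = Pool::builder()
    .max_open(20)                                  // at most 20 open connections (0 = unlimited)
    .max_idle(5)                                   // keep at most 5 idle connections
    .max_lifetime(Some(Duration::from_secs(300)))  // recycle connections after 5 minutes
    .get_timeout(Some(Duration::from_secs(10)))    // Pool::get waits at most 10 seconds
    .build(FooManager);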

Dynamic configuration

Some of the connection pool configurations can be adjusted dynamically. Each connection pool instance has the following methods:

  • set_max_open_conns
  • set_max_idle_conns
  • set_conn_max_lifetime
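
For example, a sketch of adjusting a running pool (these setters are async in recent mobc versions, so they are awaited here; check the docs for the version you use):

use std::time::Duration;

// Raise the limits and shorten the lifetime without rebuilding the pool.
pool.set_max_open_conns(50).await;
pool.set_max_idle_conns(10).await;
pool.set_conn_max_lifetime(Some(Duration::from_secs(60))).await;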

Stats

  • max_open - Maximum number of open connections to the database.
  • connections - The number of established connections both in use and idle.
  • in_use - The number of connections currently in use.
  • idle - The number of idle connections.
  • wait_count - The total number of connections waited for.
  • wait_duration - The total time blocked waiting for a new connection.
  • max_idle_closed - The total number of connections closed due to max_idle.
  • max_lifetime_closed - The total number of connections closed due to max_lifetime.
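
These statistics are returned as a snapshot by Pool::state, for example:

// Take a snapshot of the statistics listed above.
let stats = pool.state().await;
println!(
    "open={} in_use={} idle={} wait_count={} wait_duration={:?}",
    stats.connections, stats.in_use, stats.idle, stats.wait_count, stats.wait_duration
);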

Metrics

  • Counters
    • mobc_pool_connections_opened_total - Total number of Pool Connections opened
    • mobc_pool_connections_closed_total - Total number of Pool Connections closed
  • Gauges
    • mobc_pool_connections_open - Number of currently open Pool Connections
    • mobc_pool_connections_busy - Number of currently busy Pool Connections (executing a database query)
    • mobc_pool_connections_idle - Number of currently unused Pool Connections (waiting for the next pool query to run)
    • mobc_client_queries_wait - Number of queries currently waiting for a connection
  • Histograms
    • mobc_client_queries_wait_histogram_ms - Histogram of the wait time of all queries in ms

Compatibility

Tokio is not compatible with other runtimes such as async-std, so a database driver written with tokio cannot run in the async-std runtime. For example, you can't use redis-rs in tide because redis-rs uses tokio, so a connection pool based on redis-rs can't be used in tide either.

mobc's People

Contributors

0xsio, alencardc, bergmark, cking, dcormier, digartner, fitzthum, garrensmith, importcjj, jaboatman, janpio, jasonpkovalski, jrb0001, mnahkies, pjb157, rushmorem, sytten, voidxnull, zacaria, zupzup


mobc's Issues

Health-check of connection on check-in

This library health-checks connections on check-out from the pool. This solves the problem of connections being closed by the server due to inactivity, but it is suboptimal for handling connections that are broken when checked back in to the pool.

In Rust, Drop of mobc::Connection can happen at any time. So, for example, if you have query timeouts, the following may happen:

  1. redis.get sends GET\n to the server
  2. Then timeout happens before receiving response, which essentially drops the future holding the mobc::Connection (the response for GET can appear in the input queue shortly)

Usually this is fixed by setting a flag in the connection structure and dropping it on checking-in.

Surely, we can drop the connection in Connection::check if health check on checkout is enabled, but as far as I understand that check is specifically suited for sending ping to server rather than simple check of the connection flag on connection. So there is an incentive by the user to disable these health checks in some cases.

But given the scenario above, disabling the health check could be a security issue.

What's more, even the current approach of sending PING and receiving PONG (for redis) is not enough: technically there can be a pipeline which sends PING, GET (for whatever reason) and then times out. In this case, the health check's PING will still receive a PONG, with response_for_get, PONG left behind in the input queue, so the next query returns the wrong result, which amounts to a security issue. Presumably this is the issue in #28, and the current health check only makes it rarer rather than solving it.

So the connection has to raise a flag, self.waiting_for_response, and the connection pool should drop all such connections on check-in (actually waiting for the response on either check-in or check-out would be a performance issue, so that should not be done).
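
A minimal sketch of the flag idea, reusing FooManager/FooError from the Examples section (whether the flag is consulted in Manager::check or in a dedicated check-in hook is exactly what this issue is asking for):

use mobc::{async_trait, Manager};

pub struct FooConnection {
    // Set after a request is written, cleared once the matching
    // response has been fully read off the socket.
    waiting_for_response: bool,
}

#[async_trait]
impl Manager for FooManager {
    type Connection = FooConnection;
    type Error = FooError;

    async fn connect(&self) -> Result<Self::Connection, Self::Error> {
        Ok(FooConnection { waiting_for_response: false })
    }

    async fn check(&self, conn: Self::Connection) -> Result<Self::Connection, Self::Error> {
        // Instead of pinging, refuse to recycle a connection that was
        // dropped while still waiting for a response.
        if conn.waiting_for_response {
            return Err(FooError);
        }
        Ok(conn)
    }
}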

Access to mobc

Hi @importcjj could you give me access to the mobc repo and ability to publish to crates.io for mobc.

I've made fixes to mobc https://github.com/prisma/mobc I would like to merge them into the official mobc package and publish them.

I would be happy to take over maintaining the project.

Sometimes initializing the pool blocks until it times out

At first, the pool spawns multiple workers to make new connections. Then the pool acquires the internals lock frequently to check whether initialization has finished. The lock is also needed by the workers, so some workers can't submit their connections in time.

Add a new method to unwrap the raw database connection

Can you add a take method for mobc::Connection? @importcjj
That way I can give up the current connection when I find that it is the source of the error.

The code would probably look like the following:

impl<M: Manager> Connection<M> {
    /// Take this Connection from the pool permanently, returning the raw connection.
    pub fn take(self) -> M::Connection {
        // ...
    }
}

And this method would help convert the redis connection into a PubSub connection: https://docs.rs/redis/0.16.0/redis/aio/struct.Connection.html#method.into_pubsub

Originally posted by @biluohc in #28 (comment)

`mobc_pool_connections_idle` reports "connection limit" on start, and "connections left to connection limit" later

mobc_pool_connections_idle is defined as Number of currently unused Pool Connections (waiting for the next pool query to run).

But it is instantiated with max_open:

gauge!(IDLE_CONNECTIONS, max_open as f64);

https://github.com/importcjj/mobc/blob/f8c9779141ab3b8079d5dff0317273a7ee029998/src/lib.rs#L383C34-L383C43
max_open is defined as Sets the maximum number of connections managed by the pool.

That means that on start, it just reports the "connection limit", and then later the number of "connections left to connection limit".
Even though this might be an interesting metric, it is not what the description of the actual metric implies.

Instead of being initialized with max_open, this gauge should probably start at 0 and then reflect the difference between mobc_pool_connections_open and mobc_pool_connections_busy.
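
A sketch of the suggested behavior, using the metrics macro style of the linked code (open_connections and busy_connections are hypothetical counters, not mobc variables):

// Start the idle gauge at zero instead of max_open ...
gauge!(IDLE_CONNECTIONS, 0.0);

// ... and whenever a connection is opened, closed, checked out, or checked
// in, set it to the difference between open and busy connections.
gauge!(IDLE_CONNECTIONS, (open_connections - busy_connections) as f64);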

Mobc should report errors

Currently there are a bunch of FIXME for dropped result.
I think we should find a way to report those back to the user.
Minimum would be to log them, but maybe a callback system could be good? What do you think?

[mobc-redis] get_timeout always 30 seconds with redis

Hey!

I ran into an issue when testing if my service works without redis

This is the config of the pool

const CACHE_POOL_MAX_OPEN: u64 = 32;
const CACHE_POOL_MAX_IDLE: u64 = 8;
const CACHE_POOL_TIMEOUT_SECONDS: u64 = 5;
const CACHE_POOL_EXPIRE_SECONDS: u64 = 30;

pub async fn connect() -> Result<RedisPool> {
    let client = redis::Client::open(CONFIG.cache.host.as_str()).map_err(RedisClientError)?;
    let manager = RedisConnectionManager::new(client);
    Ok(Pool::builder()
        .get_timeout(Some(Duration::from_secs(CACHE_POOL_TIMEOUT_SECONDS)))
        .max_open(CACHE_POOL_MAX_OPEN)
        .max_idle(CACHE_POOL_MAX_IDLE)
        .max_lifetime(Some(Duration::from_secs(CACHE_POOL_EXPIRE_SECONDS)))
        .build(manager))
}

async fn get_con(pool: &RedisPool) -> Result<RedisCon> {
    pool.get().await.map_err(|e| {
       error!("error connecting to redis: {}", e);
       RedisPoolError
   })
}

So the get_timeout is set to five seconds and when redis is completely gone, it returns an error immediately. However, with a timeout (dns still available, but doesn't respond), it took forever to fail. With the above config it took 90 (30 seconds default + 2 retries?) seconds and if I lowered the timeout (e.g. to 500 ms), or removed it, it took 30 seconds, which I don't really understand.

I now added a select! around my get_con code, which works, but do you have any idea why this might be happening? I looked through the code for get_timeout and it seems fine 🤔
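
For reference, a sketch of that external-bound workaround using tokio::time::timeout instead of select!, reusing the names from the snippet above (RedisPool, RedisCon, RedisPoolError and the error! macro are assumed to be defined as in the issue's project):

use std::time::Duration;
use tokio::time::timeout;

async fn get_con(pool: &RedisPool) -> Result<RedisCon> {
    match timeout(Duration::from_secs(CACHE_POOL_TIMEOUT_SECONDS), pool.get()).await {
        // The outer timer fired before the pool returned anything.
        Err(_elapsed) => Err(RedisPoolError),
        // The pool answered in time; map its error as before.
        Ok(res) => res.map_err(|e| {
            error!("error connecting to redis: {}", e);
            RedisPoolError
        }),
    }
}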

Changelog entry for v0.7.0

Version 0.7.0 that was just released is missing an entry in the changelog file. Would appreciate if you could document the changes!

[mobc-redis] PONG as result for con.get() in broken connection

Hey!

I've been using mobc-redis in a project for a while and I found a strange issue, which so far only happened in production (so with a lot of load) and which I haven't been able to reproduce locally.

It seems that in some cases, a connection becomes somehow "corrupted" and only returns PONG on con.get() calls, although the actual value in redis is a valid string. Also deleting that value from redis didn't change anything, as the problem seemed to be at the connection-level.

However, the connection doesn't appear to be broken (the pings when recycling it succeed), so the misbehaviour persists through several client requests, as the connection is successfully recycled.

I had the same issue using the old CMD/GET API.

At first I thought this might happen in very long-running connections, so I reduced the max_lifetime to 30 seconds and that helped, but it still happens (although, as I said, very rarely). And if it happens it's limited to this one connection and stops once the connection is dropped.

I'm using:

mobc-redis = "0.5.1"
mobc = "0.5.7"

This is the config for the pool:

const CACHE_POOL_MAX_OPEN: u64 = 32;
const CACHE_POOL_MAX_IDLE: u64 = 8;
const CACHE_POOL_TIMEOUT_SECONDS: u64 = 5;
const CACHE_POOL_EXPIRE_SECONDS: u64 = 30;

pub async fn connect() -> Result<RedisPool> {
    let client = redis::Client::open(CONFIG.cache.host.as_str()).map_err(RedisClientError)?;
    let manager = RedisConnectionManager::new(client);
    Ok(Pool::builder()
        .max_open(CACHE_POOL_MAX_OPEN)
        .max_idle(CACHE_POOL_MAX_IDLE)
        .get_timeout(Some(Duration::from_secs(CACHE_POOL_TIMEOUT_SECONDS)))
        .max_lifetime(Some(Duration::from_secs(CACHE_POOL_EXPIRE_SECONDS)))
        .build(manager))
}

and this is how e.g. a get works:

async fn get_con(pool: &RedisPool) -> Result<RedisCon> {
    pool.get().await.map_err(RedisPoolError)
}

pub async fn get_str(pool: &RedisPool, key: &str) -> Result<String> {
    let mut con = get_con(&pool).await?;
    let value = con.get(key).await.map_err(RedisCMDError)?;
    FromRedisValue::from_redis_value(&value).map_err(RedisTypeError)
}

So as you can see, it's a pretty basic setup.

I'm not sure if this is an issue with the underlying redis-library, or with recycling connections - it seems the PONG response of the ping on the connection gets stuck somehow?

I tried to locally reproduce it with redis-rs and with mobc-redis by doing billions of get's, but have never seen it. Maybe you have an idea what could be the issue?

Anyway, thanks for this fantastic project; besides this issue (which I worked around by validating the return value from the get), it's been working great, same for mobc-postgres. 👍

Any help you could provide would be greatly appreciated.

mobc completely stops serving connections.

Hey,

We use mobc as part of Prisma, and we are getting into a situation where mobc completely stops serving any connections. I create an HTTP server using hyper and a mobc pool via Quaint.

A repo with the reproduction can be found here https://github.com/garrensmith/mobc-error-example

I then use apache benchmark with a request like this:

ab -v 4 -c 200  -t 120 http://127.0.0.1:4000/

Once apache benchmark has stopped, the number of connections in postgres drops either to much lower than the number I've configured to open, or completely to zero. If I log the State from mobc, it reports 10 active connections, which is incorrect. However, if I start apache benchmark again, it will either run a lot slower with fewer connections, or not run at all because it cannot acquire a new connection from mobc.

I've tried a few things in the code but I cannot see why this is happening. I even tried #60 but that didn't fix it.

Any help would be really appreciated.

Configuring a `Pool` for unlimited open connections will result in no connection reuse

If you build a Pool configured for unlimited open connections (by using .max_open(0) on the Builder, as documented), connections will not be reused.

The problem seems to be that while there is logic to treat max_open == 0 as unlimited, there is no such logic to treat max_idle == 0 as unlimited. It causes connections to be closed when returned to the pool. The current logic from put_idle_conn() seems to be where that side of the logic may be missing:

fn put_idle_conn<M: Manager>(
    shared: &Arc<SharedPool<M>>,
    mut internals: MutexGuard<'_, PoolInternals<M::Connection, M::Error>>,
    conn: Conn<M::Connection, M::Error>,
) {
    if internals.config.max_idle > internals.free_conns.len() as u64 {
        internals.free_conns.push(conn);
        drop(internals);
    } else {
        conn.close(&shared.state);
    }
}

When max_open is set to 0, max_idle gets set to 0. See this excerpt from Builder::build():

        let max_idle = self
            .max_idle
            .unwrap_or_else(|| cmp::min(self.max_open, DEFAULT_MAX_IDLE_CONNS));

But if you set max_open to 0 in the Builder, you cannot set max_idle to greater than 0. This is also from Builder::build() (immediately after the above excerpt):

        assert!(
            self.max_open >= max_idle,
            "max_idle must be no larger than max_open"
        );

However, once you have a Pool, you can use its .set_max_idle_conns() method to set max_idle to something greater than 0, and have it properly reuse connections with max_open set to 0.
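
A sketch of that workaround (assuming set_max_idle_conns is async, as in recent mobc versions):

// Build with unlimited open connections, then raise max_idle afterwards
// so returned connections are actually kept around.
let pool = mobc::Pool::builder().max_open(0).build(FooManager);
pool.set_max_idle_conns(64).await;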

Here is a test that demonstrates the problem:

    use std::time::Duration;

    use async_trait::async_trait;
    use mobc::{self, Manager, Pool};

    pub struct FooConnection;
    pub struct FooManager;

    #[async_trait]
    impl Manager for FooManager {
        type Connection = FooConnection;
        type Error = std::convert::Infallible;

        async fn connect(&self) -> Result<Self::Connection, Self::Error> {
            println!("New connection!");
            Ok(FooConnection)
        }

        async fn check(&self, conn: Self::Connection) -> Result<Self::Connection, Self::Error> {
            println!("Checked existing connection!");
            Ok(conn)
        }
    }

    #[tokio::test]
    async fn reuse() {
        let pool = mobc::Pool::builder().max_open(2).build(FooManager);

        println!("Pool configured for 2 open connections");
        run(&pool).await;

        let state = pool.state().await;
        println!("Pool state: {state:#?}");
        assert_eq!(1, state.connections);
        assert_eq!(0, state.in_use);
        assert_eq!(1, state.idle);
        assert_eq!(0, state.max_idle_closed);

        // Compare to a pool configured for unlimited open connections
        let pool = mobc::Pool::builder().max_open(0).build(FooManager);

        println!("Pool configured for unlimited open connections");
        run(&pool).await;

        let state = pool.state().await;
        println!("Pool state: {state:#?}");
        assert_eq!(1, state.connections); // Fails; state.connections == 0
        assert_eq!(0, state.in_use);
        assert_eq!(1, state.idle); // Fails; state.idle == 0
        assert_eq!(0, state.max_idle_closed); // Fails; state.max_idle_closed == 2
    }

    /// Gets a connection from the pool, returns it, then does it again.
    async fn run(pool: &Pool<FooManager>) {
        pool.get().await.unwrap();

        // Give the task spawned by `drop` a moment to work.
        tokio::time::sleep(Duration::from_millis(100)).await;

        pool.get().await.unwrap();

        // Give the task spawned by `drop` a moment to work.
        tokio::time::sleep(Duration::from_millis(100)).await;
    }

Here's the output of the println!() statements:

Pool configured for 2 open connections
New connection!
Checked existing connection!
Pool state: Stats {
    max_open: 2,
    connections: 1,
    in_use: 0,
    idle: 1,
    wait_count: 0,
    wait_duration: 28.766µs,
    max_idle_closed: 0,
    max_lifetime_closed: 0,
}
Pool configured for unlimited open connections
New connection!
New connection!
Pool state: Stats {
    max_open: 0,
    connections: 0,
    in_use: 0,
    idle: 0,
    wait_count: 0,
    wait_duration: 9.113µs,
    max_idle_closed: 2,
    max_lifetime_closed: 0,
}

You can see that for the Pool configured for max_open == 2, it reused the existing connection for the second .get(). But for the second Pool, configured for unlimited open connections (max_open == 0), it made a new connection each time.

redis::cmd.query_async does not work with mobc pool

Hi there,
I use the newest mobc-redis version 0.7.0, but the redis::cmd(...).query_async function does not work with the mobc pool. Would you give me a hand? Thank you.

[Cargo.toml]
#redis
redis = { version = "0.19.0", features = ["tokio-comp"] }
mobc = "0.7.0"
mobc-redis = "0.7.0"


[src/main.rs]
    use mobc::Pool;
    use mobc_redis::{redis, redis::AsyncCommands, redis::Connection};
    use mobc_redis::RedisConnectionManager;

    let mut conn = pool.get().await.unwrap();
    let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
    println!("{}", s);


[cargo build]
    let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
    |                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `mobc_redis::redis::aio::ConnectionLike` is not implemented for `mobc_redis::redis::Connection`       

error[E0605]: non-primitive cast: `&mut mobc::Connection<RedisConnectionManager>` as `&mut mobc_redis::redis::Connection`
   --> src/main.rs:982:52
    |
982 |     let s: String = redis::cmd("PING").query_async(&mut conn as &mut Connection).await.unwrap();
    |                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ an `as` expression can only be used to convert between primitive types or to coerce to a specific trait object  
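
For what it's worth, a sketch of the usual fix: mobc::Connection dereferences to the underlying redis connection (which is how the AsyncCommands calls elsewhere on this page work), so re-borrowing through it avoids the invalid cast. This reuses the pool and imports from the snippet above:

let mut conn = pool.get().await.unwrap();
let s: String = redis::cmd("PING")
    .query_async(&mut *conn) // &mut *conn re-borrows the inner redis connection
    .await
    .unwrap();
println!("{}", s);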

Upgrade to Tokio 0.3

With Tokio 0.3 out, it would be nice to update, especially since this is supposed to be the last version before 1.0

There is already an updated version of tokio-postgres

For redis, there is an issue to upgrade as well.

Setting to reconnect if idling too long

It would be really useful for us to have a setting in mobc that doesn't just reconnect after a certain time, but reconnects after a timeout IF the connection had no activity during that time. This would prevent stale connections from creating trouble in the application.

I can work on this on Monday if needed.

Where is mobc-postgres?

Now that mobc v0.4.0 is out, I tried to find a new version of mobc-postgres, but.. it looks like its code was removed 😂.
In addition: tokio-postgres v0.5.1 is out.

Wrong database configuration returns Timeout

If you connect to postgres by creating a new Pool and the connection configuration has a non-existent database, mobc will wait a while and then throw Error::Timeout. It would be nicer if we got the exact reason why the connection failed, and if it happened immediately instead of waiting until the timeout.

Allow use of latest `redis` patch release

It would be nice if mobc-redis imported redis as redis = { version = "0.23" } rather than redis = { version = "0.23.0" }. This would allow consumers of this crate to use the current patch release of the redis crate (0.23.3) without having to wait for an update here.

Please release new version of `mobc-redis`

The latest published version of mobc-redis is 0.8.0, but this repo has 0.8.2 and 0.8.3, which include an updated version of the redis crate.

It would be helpful if that were released.

TLS/SSL support for Redis

The 0.17 version of the redis crate contains TLS/SSL support. I'm currently using a forked version of this repo, and I have opened a PR for this: #42.

Debug for Pool struct

Hi all,

I am currently working on a project using this crate, which is awesome and has helped me loads. I am having trouble fitting it into my webserver, as I need the Pool struct to implement Debug. Would it be possible to add that to this crate?

Thanks for any help!

error[E0599]: no function or associated item named `new` found for struct `tokio::runtime::Runtime` in the current scope

Hi,

After upgrading to mobc version 0.6.1, I'm getting the error below.

Here is my list of dependencies; I'm unsure if that makes a difference.

[dependencies]
actix-web = "3.3.2"
actix-rt = "1.1.1"
listenfd = "0.3.3"
serde = "1.0.117"
jsonwebtoken = "7.2.0"
serde_derive = "1.0.117"
chrono = "0.4.19"
rdkafka = "0.24.0"
env_logger = "0.8.2"
log = "0.4.11"
mobc = { version = "0.6.1", features = ["tokio"] }
thiserror = "1.0.22"
uuid = { version = "0.8.1", features = ["v4"] }
mobc-redis = "0.5.4"

And here is the error.

Compiling mobc v0.6.1
error[E0599]: no function or associated item named `new` found for struct `tokio::runtime::Runtime` in the current scope
  --> /Users/mike/.cargo/registry/src/github.com-1ecc6299db9ec823/mobc-0.6.1/src/runtime.rs:41:46
   |
41 |                 rt: tokio::runtime::Runtime::new().unwrap(),
   |                                              ^^^ function or associated item not found in `tokio::runtime::Runtime`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0599`.
error: could not compile `mobc`

Thanks

Metrics related to opened connections are registered incorrectly

Description

The mobc_pool_connections_opened_total and mobc_pool_connections_open metrics are not handled according to their descriptions. As described in the source code, mobc_pool_connections_opened_total (a counter) refers to the total number of pool connections opened, and mobc_pool_connections_open (a gauge) refers to the current number of opened connections. However, mobc_pool_connections_opened_total values are emitted as a gauge, and mobc_pool_connections_open values are emitted both as gauge and counter. Also, the mobc_pool_connections_open gauge value is equal to the opposite of mobc_pool_connections_closed_total, and the mobc_pool_connections_open counter value is equal to mobc_pool_connections_opened_total.

How to reproduce

Build a new pool, set a metrics recorder, and observe how the desired metrics are registered, incremented, and decremented.

Simple example to reproduce

It basically extends the tide.rs example provided in this repo by adding a LogRecorder responsible for logging every metric change that is emitted.

Observe how the mobc_pool_connections_opened_total is incremented as a gauge instead of a counter, and all mobc_pool_connections_open increments are made as a counter, but all decrements are made as a gauge.

mod foodb;
use foodb::FooManager;
use metrics::{
    Counter, CounterFn, Gauge, GaugeFn, Histogram, HistogramFn, Key, KeyName, Recorder,
    SetRecorderError, Unit,
};

use std::{sync::Arc, time::Duration};
use tide::Request;

// Defines log handlers for each metric operation
pub(crate) struct MetricHandle(Key);

impl CounterFn for MetricHandle {
    fn increment(&self, value: u64) {
        log::debug!("counter increment {} -> {}", self.0.name(), value);
    }

    fn absolute(&self, value: u64) {
        log::debug!("counter absolute {} -> {}", self.0.name(), value);
    }
}

impl GaugeFn for MetricHandle {
    fn increment(&self, value: f64) {
        log::debug!("gauge increment {} -> {}", self.0.name(), value);
    }

    fn decrement(&self, value: f64) {
        log::debug!("gauge decrement {} -> {}", self.0.name(), value);
    }

    fn set(&self, value: f64) {
        log::debug!("gauge set {} -> {}", self.0.name(), value);
    }
}

impl HistogramFn for MetricHandle {
    fn record(&self, value: f64) {
        log::debug!("histogram record {} -> {}", self.0.name(), value);
    }
}

// Defines a simple Metrics Recorder that logs all metrics changes
struct LogRecorder;

impl Recorder for LogRecorder {
    fn describe_counter(&self, key_name: KeyName, _unit: Option<Unit>, description: &'static str) {
        log::debug!("counter '{}' -> {}", key_name.as_str(), description);
    }

    fn describe_gauge(&self, key_name: KeyName, _unit: Option<Unit>, description: &'static str) {
        log::debug!("gauge '{}' -> {}", key_name.as_str(), description);
    }

    fn describe_histogram(
        &self,
        key_name: KeyName,
        _unit: Option<Unit>,
        description: &'static str,
    ) {
        log::debug!("histogram '{}' -> {}", key_name.as_str(), description);
    }

    fn register_counter(&self, key: &Key) -> Counter {
        Counter::from_arc(Arc::new(MetricHandle(key.clone())))
    }

    fn register_gauge(&self, key: &Key) -> Gauge {
        Gauge::from_arc(Arc::new(MetricHandle(key.clone())))
    }

    fn register_histogram(&self, key: &Key) -> Histogram {
        Histogram::from_arc(Arc::new(MetricHandle(key.clone())))
    }
}

static RECORDER: LogRecorder = LogRecorder;
pub fn set_recorder() -> Result<(), SetRecorderError> {
    metrics::set_recorder(&RECORDER)
}


// Example setup
type Pool = mobc::Pool<FooManager>;

async fn ping(req: Request<Pool>) -> tide::Result {
    let pool = req.state();
    let conn = pool.get().await.unwrap();
    Ok(conn.query().await.into())
}

#[async_std::main]
async fn main() {
    env_logger::init();
    set_recorder().unwrap();

    let manager = FooManager;
    let pool = Pool::builder()
        .max_open(12)
        .max_idle_lifetime(Some(Duration::from_secs(3)))
        .build(manager);

    let mut app = tide::with_state(pool);
    app.at("/").get(ping);
    app.listen("127.0.0.1:7777").await.unwrap();
}

Expected behavior

  • mobc_pool_connections_opened_total to be incremented as a counter.
  • mobc_pool_connections_open to be incremented and decremented as a gauge.

Notes

  • PR #69 added metrics to mobc.
  • This behavior was reported by Prisma ORM users on issues #18760 and #18761.
  • Prisma uses the mobc emitted metrics to expose part of its metrics. The mapping applied to rename mobc metrics can be found here, and the related doc here.

RabbitMQ Support

Hey,

I just published this crate, which provides RabbitMQ connection pooling for mobc, in case you want to add it to the README.

Cheers

More examples

  • Redis
  • Postgres basic
  • Postgres statement
  • Diesel
  • Actix
  • Tide

Compatibility problems with async_std and Tokio

We are in a hard situation:

  • Connectors implemented with Tokio-0.1 can run in the async-std-1.0 runtime.
  • Connectors implemented with Tokio-0.2 can not run in the async-std-1.0 runtime.
  • Connectors implemented with async-std-1.0 can run in the Tokio-0.2 runtime.

Support AsyncRead/AsyncWrite?

Hey hey.

I'm trying to use mobc to pool TcpStream, and I notice that the wrapper Connection doesn't implement AsyncRead or AsyncWrite, which means the connection can't be passed around to some contexts where those traits are required. Are there any plans to support a connection (or connection variant?) that supports these?

Compatibility problem

async-std versions 1.6.0 and later are based on stjepang/smol. But smol supports a "tokio02" feature, so theoretically it is possible to support both the async-std and Tokio runtimes.

Pool should check for max idle lifetime in cleaner

Currently the idle lifetime is only checked when the connection is requested.
It should also be checked by the cleaner.
Imagine the following scenario:

  • You have a low idle lifetime but no max lifetime
  • You have a rush of load on your system, the pool lends a bunch of connections
  • Your system returns to normal load
  • The connections are never freed because nobody requests them
    This is the situation faced by prisma/prisma#7644

I will push a fix soon

Possible pool depletion and elevated number of timeouts

I am very green on Rust, so please be indulgent, but I am investigating a pool depletion bug (see prisma/prisma#5977) in Prisma (which uses mobc under a lot of layers).

Investigation 1: conn_requests storage

Now, if I understand correctly, if the pool is above the limit the request is added to conn_requests (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L546).
This is a HashMap, which I think should be replaced by a FIFO queue, since the way it is queried (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L661) means that a "random" request can be served (i.e. not necessarily the oldest).

Imagine a scenario with a pool of 1 and a get timeout of 5s: request A comes in at 0s and takes 5s to complete. Request B comes in at 2s and waits. Request C comes in at 3s and waits. A finishes; by random luck C gets the connection and takes 3s to complete. Request B will time out when it should not have. In a real system with a legitimate backlog, this means a request could wait for a connection for a long time.

Investigation 2: put_conn

So once the connection goes out of scope, the Drop impl makes sure that the connection is returned to the pool. This is handled by the put_conn (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L637) function, which then tries to give the connection back to a request that is waiting for one. Now I believe a loop is missing here: https://github.com/importcjj/mobc/blob/master/src/lib.rs#L660-L673

Again, imagine a scenario with a pool of 1 and a get timeout of 5s: request A comes in at 0s and takes 10s to complete. Request B comes in at 2s and waits. Request C comes in at 8s and waits. A finishes, B is selected to get the connection, but its request has been cancelled, so the connection is just returned to the pool directly (https://github.com/importcjj/mobc/blob/master/src/lib.rs#L665). C is never served and times out after 5s.
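
A sketch of the two suggestions combined (a hypothetical hand_back helper, not mobc's actual internals): keep waiters in FIFO order and keep offering a freed connection to the next waiter until one accepts it.

use std::collections::VecDeque;
use tokio::sync::oneshot;

fn hand_back<C>(waiters: &mut VecDeque<oneshot::Sender<C>>, mut conn: C) -> Option<C> {
    while let Some(waiter) = waiters.pop_front() {
        match waiter.send(conn) {
            // The oldest still-alive waiter got the connection.
            Ok(()) => return None,
            // That waiter was cancelled (its receiver was dropped); try the next one.
            Err(returned) => conn = returned,
        }
    }
    // Nobody is waiting; the caller puts the connection back on the idle list.
    Some(conn)
}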
