connection_pool's Issues

1.9.1 compatibility

Is connection_pool supported on Ruby 1.9.1? I am running it with ruby-1.9.1-p376 and it fails with this error:

ArgumentError wrong number of arguments (2 for 1) /mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:43:in `block (2 levels) in pop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:34:in `loop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:34:in `block in pop'
<internal:prelude>:8:in `synchronize'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:33:in `pop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool.rb:69:in `checkout'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool.rb:56:in `with'

The reason is that in 1.9.1, ConditionVariable#wait does not accept a timeout argument: http://ruby-doc.org/stdlib-1.9.1/libdoc/thread/rdoc/ConditionVariable.html#method-i-wait
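The failing line boils down to the two-argument form of ConditionVariable#wait, which exists from 1.9.2 onward; a minimal reproduction (runs fine on modern Rubies, raises ArgumentError on 1.9.1):

```ruby
mutex = Mutex.new
cond  = ConditionVariable.new
waited = false

mutex.synchronize do
  # The second argument is a timeout in seconds (Ruby >= 1.9.2).
  # On 1.9.1 ConditionVariable#wait takes only the mutex, so this
  # raises ArgumentError: wrong number of arguments (2 for 1).
  cond.wait(mutex, 0.01)
  waited = true
end
puts "wait returned: #{waited}"
```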

v2.2.2 Rubygems push

I was wondering if you would mind pushing v2.2.2 to RubyGems. I've been using this gem lately for some scripting because of its blocking implementation.

Dynamic Resizing at Runtime

Is it possible to dynamically resize the connection pool (up and/or down) at runtime without re-deploying the application with a new size for the connection pool?

undefined method `getset' for ConnectionPool

Hello,

We are using the gem Redis::Semaphore because of some multithreading issues, but this poses a problem when using ConnectionPool to connect to our Redis backend.

This is the error we get: NoMethodError: undefined method 'getset' for #<ConnectionPool:0x007ff213e203a8>

This method is defined in the Redis gem we are using, but apparently not supported by ConnectionPool.

ConnectionPool in a gem + rails 3 + sidekiq

Hello,

First, thanks for this library, which is apparently very useful, especially with Sidekiq. I say apparently because I haven't tested it in a production scenario yet and would like your feedback before doing so.

I'm using it inside an external gem called responsys_api, which is an API client for the Oracle Responsys API. The ConnectionPool is wrapped in a SessionPool here. It is used by the client here.
The branch is not merged yet but most of the logic is pushed. I'm happy to make changes to the code if you have any concerns.

In the infrastructure we have at thredUP, we make async calls using the client gem inside sidekiq jobs. Because we sometimes create hundreds of thousands of jobs but have only 100 connections, we want to make sure we don't exceed that limit and get our account blocked, and this is where your gem helps.

So two questions:

  • The connection_pool is not instantiated from inside a worker, unlike the sidekiq wiki example, but when the client gem is configured in a rails initializer.
    The pool is then used by the client gem when the job makes a call.
    Question is: do you have any idea whether the gem will work the same way and not lose its benefit of managing a single pool for all the jobs? I'm thinking of threading issues, and of ending up with different instances of the ConnectionPool because of the declaration scope. To me the variable should be global since it is set in an initializer, but...
  • Any chance to set an unlimited timeout? (timeout = 0.) I'm thinking of a scenario with sidekiq where the number of processes would be far greater than the maximum number of API connections we're allowed to make at any time, and so than the size of the pool.
    The delta between the two can leave a number of jobs waiting for the pool, and since the timeout is hard to estimate in seconds, I thought of disabling it.
    What do you think about this? Any other workaround that would be "cleaner"?

In case it helps, an example of the code:

# config/initializers/responsys.rb in the rails app
Responsys.configure do |config|
  config.settings = {
    username: "user",
    password: "password",
    wsdl: "http://oracle.soap.wsdl",
    sessions: { timeout: 1200 } # params passed to the connection_pool in the internal SessionPool object
  }
end

# async job enqueued when the user updates his emailing options
class UserResponsysStatusJob
  include Sidekiq::Worker
  sidekiq_options :queue => :responsys

  def perform(email)
    Responsys::Member.new(email).subscribe("mailing_list") # Uses the gem to make the call after picking up a client in the pool.
  end
end

Also, have you thought about any integration with the Sidekiq UI? It would be cool to monitor a registered pool in real time and see how many clients in the pool are active.

Thanks for your help!

Can I "warm" a ConnectionPool?

Hi, thanks for the great gem! We use it in react-rails for server rendering.

Let's say my pool is:

ConnectionPool.new { really_long_setup }

And later I'm going to need 10 of those. Is there a way I can do the really_long_setup ahead of time?

I thought maybe

pool_size.times do 
  pool.with { |member| member }
end

but I would expect that to use the same member each time. (Also, I'm not sure how I could tell if the pool was initialized or not.)

Nested checkouts support

Hi,

In a scenario where I need to access Redis from within a subscribe callback, I need a nested call to pool.with, which doesn't check out another connection.

Example:

require 'redis'
require 'connection_pool'

def redis_pool
  @redis_pool ||= ConnectionPool.new { Redis.new }
end

def listen
  redis_pool.with do |redis|
    redis.subscribe('pool', 'non-pool') do |on|
      on.message do |channel, _message|
        redis_pool.with do |redis2|
          puts redis2.object_id == redis.object_id
          puts redis2.info # raises error because it's trying to use the subscribe connection
        end
      end
    end
  end
rescue => e
  puts e.message
end

def publish
  redis_pool.with do |redis|
    redis.publish('pool', 'hello')
  end
end

Thread.new { listen }
sleep 1
publish
sleep 1

# Output:
# true
# Error: Redis::CommandError ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context

The workaround I've thought of is to always use a bare Redis.new for the subscribing connection.

Are there plans to have a nicer usage for this scenario?

Connection pool of pre-existing objects.

I feel like this is closely related to #85, but I'd still like to get this one thing straight.

I have a use case for connection pool where I need to orchestrate access to a third-party API, for which I have 5 sets of credentials (login/password pairs), each representing a separate request queue on the API provider's side. So what I'd like to be able to do is create a connection pool out of an existing collection of objects, rather than generating them on demand.

I've created a (probably very naive) implementation of said functionality like this:

accounts = CSV.read("config/accounts.csv").to_a
accounts_que = Queue.new
accounts.each { |acc| accounts_que << acc }
$login_handles = ConnectionPool.new(size: accounts.count, timeout: 5) { accounts_que.pop }

This has been working flawlessly in our staging environment, though I'm sure there are numerous pitfalls, and I would rather do something along the lines of:

accounts = CSV.read("config/accounts.csv").to_a
$login_handles = ConnectionPool.new(collection: accounts, timeout: 5)

Would a PR along these lines be considered for merging?

Question: lazy/eager creation of connections

The current implementation eagerly creates connections up to :size upon initialization of the pool.

Is this just because that was all that was necessary for your case, or are there reasons that lazy creation is infeasible or a bad idea?

Would you be amenable to a pull request that allowed lazy creation?

The only issue I can think of is that it would be the caller's responsibility to make sure the connection creation code is thread-safe. But there may well be other issues I'm not thinking of. Concurrency is tricky.

Remove magic?

I love the idea of having a generic object pool that can be reused across different libraries.

I'm considering using this for redis-rb.

Would you consider a patch to remove the magic of method_missing (and not inheriting from BasicObject)? I think it only adds complexity when debugging potential issues without much gain.

Thank you for your time!

Can't call 'eval' on a proxied Redis connection

We use connection_pool in front of Redis, but we can't call #eval on the redis connection.

Redis supports #eval as the Redis command of the same name.

ConnectionPool::Wrapper should proxy this to the connection like all other methods via method_missing, except that eval is defined by Kernel, so method_missing never gets invoked.

We fixed it by monkey patching as follows:

# Patch to properly propagate calls to eval()
class ConnectionPool
  class Wrapper
    def eval(*args)
      method_missing(:eval, *args)
    end
  end
end

v2.2.0 tag missing?

Hi.

Today I was trying to debug an error in our app with connection handling.
And while diffing all the changed dependencies, I noticed that there is no v2.2.0 tag pushed to GitHub.

I'm not sure if that's intentional, or accidental.

Looking at the commit history, I guess that the 2.2.0 version of the gem is at 6007d32

But, nevertheless, a git tag would be helpful.

Thanks.

Access to pool from Wrapper

When using ConnectionPool::Wrapper, we can't access the underlying @pool, which prevents us from checking the pool's status.

Let's give that access!

Provide a list of different connections

Hi,

Is it possible to provide a list of hosts, so that the pool connects to one of them at random (unless that connection is dead, of course)?

Some pseudo code:

ConnectionPool::Wrapper.new(:hosts => ["localhost:1234", "localhost:1235", "localhost:1236"]) {|host| Redis.new host}

Thanks.

Recommended way to force reconnect on all connections of a pool

Hello,

Is there a standard way to force reconnection of all connections in a pool?

My first idea would be to just create new ConnectionPool objects and let the garbage collector free those objects, causing the connections to drop.

I am asking because I would need to include something like this in Unicorn's pre_fork and Puma's on_restart hooks so that the connections are established again.

Permanently lost connections

I believe this is an issue but don't quite have a test case to confirm it right now.

Any time we remove a connection from the thread stack and transition it into the global stack (or discard it), if execution is interrupted we permanently lose the connection, but the global stack still counts it as if it exists.

We can fix this in MRI by using handle_interrupt (which is what @tamird did originally).

In JRuby, that's not available. I don't think there's a good solution there, besides rewriting connection_pool to follow the same model ActiveRecord does, where we never pop a connection off the stack, instead flagging it as in use and having a reaper that culls connections after a set period of inactivity.

Maybe on JRuby you could add a rescue for Timeout::Error in the ensure. That at least covers the common case, but if you have nested timeouts and they all fire you still have the problem.

1.9.3 Race condition?

Running tests with 1.9.2

connection_pool git:(master) rvm use 1.9.2
Using /Users/baguirre/.rvm/gems/ruby-1.9.2-p290
connection_pool git:(master) rake
ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.2.0]
Loaded suite /Users/baguirre/.rvm/gems/ruby-1.9.2-p290/gems/rake-0.9.2.2/lib/rake/rake_test_loader
Started
.....
Finished in 0.936252 seconds.

5 tests, 7 assertions, 0 failures, 0 errors, 0 skips

Test run options: --seed 26754

Running with 1.9.3

connection_pool git:(master) rvm use 1.9.3
Using /Users/baguirre/.rvm/gems/ruby-1.9.3-p0
connection_pool git:(master) rake
ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-darwin11.2.0]
Run options: --seed 50855

# Running tests:

...F.

Finished tests in 0.941313s, 5.3117 tests/s, 7.4364 assertions/s.

  1) Failure:
test_basic_multithreaded_usage(TestConnectionPool) [/Users/baguirre/code/clocktower/ruby/gems/connection_pool/test/test_connection_pool.rb:35]:
--- expected
+++ actual
@@ -1 +1 @@
-[1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
+[1, 1, 1, 1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 3]


5 tests, 7 assertions, 1 failures, 0 errors, 0 skips
rake aborted!
Command failed with status (1): [/Users/baguirre/.rvm/rubies/ruby-1.9.3-p0/...]

Tasks: TOP => default => test
(See full trace by running task with --trace)

Remove ConnectionPool::Wrapper#with

If people are changing their code to use #with, they should be using ConnectionPool directly in that case, so can the method be removed from Wrapper?

Make ConnectionPool::Wrapper a separate gem

Suggestion

I like how the codebase is clean & small and the README is short & easy to understand.

But, I don't want to use ConnectionPool::Wrapper, especially because "it's not high-performance."

Let's make this gem even tighter by removing ConnectionPool::Wrapper and moving the relevant code & documentation from the README into a separate gem.

For example:

gem name: connection_pool-wrapper
depends on: connection_pool gem

Thoughts?

Nested checkout in same Thread yields same connection

I'm using sidekiq and connection_pool. I was trying to get two Redis connections from one pool in a single thread: one pipelined, one not. I expected to get two different connections with this code:

Sidekiq.redis do |conn|
  conn.pipelined do
    Sidekiq.redis do |conn2|
      # expect conn != conn2
    end
  end
end

However, it's the same connection, so everything I do with conn2 is naturally pipelined as well.
Is there a way to get two different connections from one pool in a single thread? If not, would that be something you'd consider or accept a pull request for?

Can this work with LDAP?

Can this gem be used to connect to a different LDAP server in case one is not responding/working? Or is this just for getting a connection pool, with nothing really handled if something goes wrong?

Support `#shutdown` on `ConnectionPool::Wrapper` instances

I have a use case for the #shutdown method (to deterministically close connections after forking). However, in my case I'm using ConnectionPool::Wrapper, since this is old code that will later be updated to use connection pools directly.

The problem is that currently, there's no official/API way to get at the wrapped pool object in order to call #shutdown on it.

So, this is a feature request for either a #shutdown method (or some other name) or an accessor on wrappers to get at the wrapped pool. I know that any name chosen could conflict with the various connections/objects people are wrapping, so if that's an issue, perhaps it could even be a configuration option:

  # in class ConnectionPool::Wrapper
  def initialize(options = {}, &block)
    @accessor = options.delete(:accessor) || "pool"
    @pool = ::ConnectionPool.new(options, &block)
  end

  def method_missing(name, *args, &block)
    return @pool if name.to_s == @accessor
    # ... existing code ...

...or something like that. Anything so I can @cache.pool.shutdown { |conn| conn.my_shutdown_stuff } or even @cache.shutdown { ... } on wrappers.

Support for recycling connections / setting a maximum age on connections

It would be nice to be able to set a maximum lifetime for a connection and have it automatically destroyed on checkin once it exceeds that lifetime, so that we can forcibly recycle connections. The most pressing use-case for this is when the resource being connected to is behind a layer 4 (tcp) load balancer and you want to ensure that connections will pick up new hosts periodically. Most other connection pooling libraries that I've used have this functionality, and it seems self-evidently valuable to me.

This would require some fairly invasive changes (e.g., maintaining metadata about each connection at creation time so that we know how old it is), so I figured I'd run the concept by you before writing up a patch.

Getting a Timeout::Error but there's no reason I can see for the connections to still be in use?

I created this gist to show an example of my code...

https://gist.github.com/Altonymous/5e7accc7c64cbbe4c922

I am using sidekiq and connection_pool.

If I set my worker count higher than my connection pool size, these errors start happening. I've looked at the database and no queries are running exceptionally long; none are anywhere near the 120-second timeout I set.

Also, I've checked the jobs themselves and none run that long. I'm not sure why the connections aren't being released back to the pool, or how to give an example that reproduces this issue.

I'm happy to pull any additional information I can, or if you see something in the code that doesn't look right I'm happy to make modifications as well.

How to get resource stats?

The only way I've found:

{
  max: pool.instance_variable_get(:@available).instance_variable_get(:@max),
  created: pool.instance_variable_get(:@available).instance_variable_get(:@created),
  busy: pool.instance_variable_get(:@available).instance_variable_get(:@que)
}

Is there a more proper way to do it?

Pool is not safe for nested checkouts

require 'rubygems'
require 'connection_pool'

pool = ConnectionPool.new(:size => 1) { Object.new }

t = nil

pool.with_connection do |outer|
  p outer.object_id
  t = Thread.new do
    pool.with_connection do |con|
      p "Thread got connection #{con.object_id}"
    end
  end
  pool.with_connection do |inner|
    p inner.object_id
  end
  sleep 1
end

p 'main is done'

t.join
# Output:
# 73820090
# 73820090
# "Thread got connection 73820090"
# "main is done"

This easily happens if you have a callstack where a connection has already been checked out somewhere further up. I think we should either support this (by keeping a stack of connections for the current thread) or raise an exception when trying to check out an already checked-out connection, and document the limitation. @mperham: which way should we go?

read_multi / multi.set with dalli

Hey,

I've been using dalli's read_multi and multi (combined with set) to try to optimize our memcache usage. Both work when talking to dalli directly.

We're now moving to multithreading and trying to use connection_pool as you recommend in the readme for https://github.com/mperham/dalli, but we keep getting errors.

read_multi raises:
NoMethodError (undefined method 'split' for nil:NilClass)

Multi with set

Rails.cache.dalli.multi do
  array_of_items_to_write_back.each {|key, value|
    Rails.cache.dalli.set(key, value)
  }
end

raises
NoMethodError (undefined method 'multi' for #<ConnectionPool:0x007f8088350d08>)

Any tips on how to use connection_pool with our batch functionality?

Thanks

Question: fibers

The readme says:

Create a pool of objects to share amongst the fibers or threads in your Ruby application:

But looking at the code, there seems to be nothing dealing with fibers, and the current checkout is kept in thread-local storage -- I think two fibers created in the same thread would end up with the same connection. (Which depending on the application logic and nature of the connection may or may not be a problem)

Am I understanding right? Is anything special meant by the reference to 'fibers' in the readme?

Thanks! This is cool code.

Add back inheriting of BasicObject for Wrapper

As the wrapper is a delegator that should ideally behave as much like the original object as possible, I think it should inherit from BasicObject instead of Object. I can understand the concerns raised in issue #8 by @djanowski when there was no other way to do things. However, now that we have a separate wrapper, I think the benefits of behaving more closely to the original object outweigh any potential debugging cost. Any objections?

Consider to add max_idle_time for the connections

We use this library to build a connection pool for Snowflake JDBC connections. The connections expire by default every 4 hours, and you need to re-auth after a connection expires. So some inactive connections in the pool become invalid after a while.

I think it would be worthwhile to add a max_idle_time config option, so we can actively/passively remove connections that have been idle beyond max_idle_time. What do you think?

Use with Redis and Sinatra

I have set my pool size to 10 but I am still allocating 20 connections; 20 connections is the limit placed on me by the redis server. Here's my code. I would expect the pool to only ever open 10 connections. But then what would it do if it was asked for an 11th? Or am I totally misunderstanding it?

def self.initialize
  return unless @pool.nil?
  @pool = ConnectionPool.new(size: 10, timeout: 5) do
    Redis.new(
      reconnect_attempts: 10,
      reconnect_delay: 1.5,
      reconnect_delay_max: 10.0
    )
  end
end

def self.get(key)
  @pool.with do |redis|
    redis.get(key)
  end
end


Shared connection thread safety

Hello, I've been noticing that connection_pool coupled with mperham's shared_connection strategy is leaving data in the DB.

I was able to add some logging in active_record/connection_adapters/abstract/database_statements.rb to produce the following output for one of my Selenium/Rails4/Rspec tests:

    # RSpec Thread: 70184739145520
    # Selenium Server: 70184702836520
    # ARTransaction: BEGIN - Thread: 70184739145520 <<--- RSpec transaction to be rolled back after test
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
      # ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: ROLLBACK - Thread: 70184739145520
    # ARTransaction: Commit - Thread: 70184702836520 <<-- Supposed to be RSpec rolling back everything

The issue:

  1. Near the bottom you'll see a ROLLBACK command. At this point, the Selenium thread issues a transaction BEGIN, but then seemingly checks in the connection before issuing a COMMIT.
  2. RSpec checks out the shared_connection and promptly issues a ROLLBACK (which adversely affects the previous BEGIN)
  3. RSpec then checks the shared_connection back in.
  4. Finally, the Selenium thread checks out the shared_connection and issues a COMMIT, saving instead of rolling back the test data.

Monkey patched Rails code to show the logging:

 # active_record/connection_adapters/abstract/database_statements.rb
      def begin_transaction(options = {}) #:nodoc:
        ::Rails::logger::info("# ARTransaction: Begin - Thread: #{Thread::current::object_id}")
        @transaction = @transaction.begin(options)
      end

      def commit_transaction #:nodoc:
        ::Rails::logger::info("# ARTransaction: Commit - Thread: #{Thread::current::object_id}")
        @transaction = @transaction.commit
      end

      def rollback_transaction #:nodoc:
        ::Rails::logger::info("# ARTransaction: Rollback - Thread: #{Thread::current::object_id}")
        @transaction = @transaction.rollback
      end

      def reset_transaction #:nodoc:
        ::Rails::logger::info("# ARTransaction: Reset - Thread: #{Thread::current::object_id}")
        @transaction = ClosedTransaction.new(self)
      end

More than one connection pool

For each database entry I want to create a connection pool. How can I achieve that using this gem? Thanks

class User < ActiveRecord::Base
  has_many :databases
end

class Database < ActiveRecord::Base
  belongs_to :user
  # attributes: host, name, username, password
end

# usage
database = Database.first
database.with do |conn|
  conn.exec("select 1")
end

Connection-pool throws strange error

After I upgraded mechanize to the version that uses net-http-persistent 3.0.0, which in turn uses connection_pool, the latter throws a strange error:

        8: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize/page/link.rb:30:in `click'
        7: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize.rb:348:in `click'
        6: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize.rb:464:in `get'
        5: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize/http/agent.rb:280:in `fetch'
        4: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:927:in `request'
        3: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:642:in `connection_for'
        2: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:642:in `ensure in connection_for'
        1: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent/pool.rb:16:in `checkin'
ConnectionPool::Error (no connections are checked out)

Ok to use #checkout / #checkin in a Rails controller?

Hi!

TL;DR, I'm wondering if this is fair game:

class ThingsController < ApplicationController 
  before_action :checkout_connection 
  after_action :checkin_connection 

  def index 
     # do stuff with @connection 
  end 

  private 

  def checkout_connection 
    @connection = CONNECTION_POOL.checkout 
  end 
  def checkin_connection 
    CONNECTION_POOL.checkin 
  end 
end 

I see that #checkout and #checkin are public methods, but I don't see them in the documentation, so I thought I would ask whether that is proper usage. Does it seem OK?

Thanks!

Poor behavior when the maximum number of connections have been opened

The @resource condition variable is waited on in a block depending on the same mutex that is used to notify the condition variable.

Unless I'm mistaken, if that condition variable ever gets waited on, this means it'll block every thread for the full timeout, including the threads returning connections to the pool, which could have notified the waiting thread instead.

https://github.com/mperham/connection_pool/blob/master/lib/connection_pool/timed_stack.rb#L87

Instrumenting checkout and query times?

I'm trying to set up a scheduler that measures the average time it takes to check out a connection from the pool, and the query time spent while using it, and logs that to $stdout in a format Librato can understand. Can this be done? Or, even better, is there ActiveSupport::Notifications support, or some sort of generic LogSubscriber?

Stack VS Queue Flexibility

Hey @mperham,

I am trying to leverage connection_pool to maintain a collection of open HTTPS connections.

The motivation is that I am making repeated requests to an API server, and re-negotiating SSL over the web on every request is quite slow.

Right now, I have the following implementation:
config/puma.rb

...

preload_app!

on_worker_boot do
  RequestPool.initialize_connections(thread_count)

  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end

lib/request_pool.rb

require 'http'
require 'connection_pool'

# @note Each key maps to a single connection.
# @note To add a new URL to the connection pool, you must first add it to config/connections.yml
# @example Stores connections to multiple servers in a single hash.
#   RequestPool.new('https://www.google.ca').connection do |conn, ssl_context|
#       conn.get('/route', ssl_context: ssl_context)
#   end
class RequestPool
    @@conn = {}

    SSL_VERSION = :TLSv1_2
    SSL_CIPHERS = "ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:-LOW".freeze

    def self.initialize_connections(size)
        self.config.each do |url|
            @@conn[url] = ConnectionPool.new(size: size) { HTTP.persistent(url).timeout(:per_operation, connect: 10, read: 15) }
        end
    end

    def initialize(base_url)
        raise ArgumentError unless @@conn.has_key? base_url
        @base_url = base_url
    end

    # @note We call .to_s to ensure the response body gets flushed.
    #   See https://github.com/httprb/http/wiki/Persistent-Connections-(keep-alive) for more details
    def connection
        @@conn[@base_url].with do |conn|
            response = yield conn, ssl_config
            response.to_s
            response
        end
    end

    private

    # NOTE: `private` does not apply to `def self.` methods, so the class
    # method is hidden explicitly with private_class_method below.
    def self.config
        YAML.load_file("#{Rails.root}/config/connections.yml")[Rails.env]
            .values
    end
    private_class_method :config

    def ssl_config
        OpenSSL::SSL::SSLContext.new.tap do |ctx|
            ctx.ssl_version = SSL_VERSION
            ctx.ciphers     = SSL_CIPHERS
        end
    end
end

If connection_pool had the option of using a stack instead of a queue, the HTTPS connections would be reused more frequently before going stale.
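To make the difference concrete, here is a toy illustration (plain arrays standing in for pooled connections, no actual pooling): a FIFO queue rotates through every connection, while a LIFO stack keeps handing back the most recently returned one, leaving the rest idle:

```ruby
# Toy illustration only: which "connection" three consecutive
# checkout/return cycles touch under FIFO vs LIFO ordering.
queue = [:a, :b, :c]
stack = [:a, :b, :c]
used_fifo = []
used_lifo = []

3.times do
  conn = queue.shift    # FIFO: oldest idle connection first
  used_fifo << conn
  queue.push(conn)      # return it to the back of the queue

  conn = stack.pop      # LIFO: most recently returned connection first
  used_lifo << conn
  stack.push(conn)      # return it to the top of the stack
end

used_fifo # => [:a, :b, :c]  every connection cycled once
used_lifo # => [:c, :c, :c]  one hot connection; :a and :b sit idle
```

Under light load the LIFO version keeps one TLS session warm, which is exactly the reuse behavior described above.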

Would this be something that the gem would be willing to incorporate?

Does ConnectionPool::Wrapper need to be slower?

After reading part of the readme saying that ConnectionPool::Wrapper is not high-performance I started thinking whether it has to be that way.

Why for example not use a Delegator, or dynamically define methods in method_missing? I can run some benchmarks, maybe even prepare a PR, but first wanted to ask whether I'm not missing something obvious :)
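As a rough sketch of the "define methods dynamically" idea (hedged: this ignores keyword-argument forwarding and other edge cases a real PR would have to handle; `pool` is any object responding to `#with`):

```ruby
# On the first call to an unknown method, define a real delegating method
# on the wrapper's singleton class so subsequent calls bypass
# method_missing entirely.
class FastWrapper
  def initialize(pool)
    @pool = pool
  end

  def method_missing(name, *args, &block)
    define_singleton_method(name) do |*a, &b|
      @pool.with { |conn| conn.public_send(name, *a, &b) }
    end
    __send__(name, *args, &block) # dispatch to the freshly defined method
  end

  def respond_to_missing?(name, include_private = false)
    @pool.with { |conn| conn.respond_to?(name, include_private) } || super
  end
end
```

Benchmarking this against `ConnectionPool::Wrapper` (and against a `SimpleDelegator`-based variant) would show whether the method_missing hit on the first call is the dominant cost or not.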

`close` and `disconnect!` baked into `TimedStack#discard!`

I'm using connection_pool for a socket client that uses a disconnect method to actually close the socket. I noticed that my sockets were not being closed properly after being discarded from the pool, which puzzled me. I found that TimedStack tries to call either close or disconnect! on the connection before discarding it. There are some disadvantages to this strategy:

  1. Not all cases can be covered for every possible client out there.
  2. The disconnect method on my socket has the potential to hang, so I need a way to safely timeout if necessary.

After some brainstorming I came up with a few potential solutions. What do you think about adding an interface for closing connections, instead of trying to cover all the common bases in TimedStack?

pool = ConnectionPool.new { new_socket }
pool.close_with do |connection|
  connection.disconnect
end

However, this means that the interface is now a little inconsistent with itself, as the creation block is passed into the constructor while the close block has its own method. Here are some ideas that offer a more consistent interface:

# both use methods
pool = ConnectionPool.new
pool.open_with do
  new_socket
end
pool.close_with do |socket|
  socket.disconnect
end

# lifecycle strategy
pool = ConnectionPool.new do |lifecycle|
  lifecycle.open_with do
    new_socket
  end

  lifecycle.close_with do |socket|
    socket.disconnect
  end
end

# lifecycle object
class Lifecycle
  def open
    new_socket
  end

  def close(socket)
    socket.disconnect
  end
end

pool = ConnectionPool.new(Lifecycle.new)

I would be happy to submit a PR if you like any of the ideas. What do you think?
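To show the "lifecycle object" shape end to end, here is a deliberately tiny pool sketch (not connection_pool's implementation; it ignores timeouts and thread-safety on purpose). The only point is where the pool would call `#open` and `#close`:

```ruby
# Minimal, single-threaded sketch of the lifecycle-object proposal.
# The lifecycle owns both ends of a connection's life; the pool only
# decides when to invoke them.
class TinyPool
  def initialize(lifecycle, size: 2)
    @lifecycle = lifecycle
    @idle = Array.new(size) { @lifecycle.open }
  end

  def with
    conn = @idle.pop || @lifecycle.open
    begin
      yield conn
    ensure
      @idle.push(conn)
    end
  end

  def shutdown
    @lifecycle.close(@idle.pop) until @idle.empty?
  end
end
```

This also makes the hang concern above addressable in one place: the lifecycle's `close` could wrap its `socket.disconnect` in its own timeout without TimedStack needing to know.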

Redis error when using redis#pipelined

This issue might be related to this one posted a few months ago.

The error occurs when using Redis#pipelined with the redis object instantiated with connection pooling.

Sample code:

require 'redis'                                                                                                                                                                                                                
require 'connection_pool'

$r = ConnectionPool.new(size: 3){ Redis.new(path: '/tmp/redis.sock') }

$r.set 'key', 'value'
puts $r.get 'key' # prints "value"

# Throws an error:

$r.pipelined {
  $r.get 'key'
}

The pipelined block will throw the following error:

`block in pipelined': no block given (yield) (LocalJumpError)

Here's the full error stack:

/Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:1045:in `block in pipelined': no block given (yield) (LocalJumpError)
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `block in synchronize'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `call'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `block (2 levels) in initialize'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `block in initialize'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `call'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `synchronize'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:1042:in `pipelined'
  from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/gems/connection_pool-0.1.0/lib/connection_pool.rb:48:in `method_missing'
  from pooling.rb:9:in `<main>'
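A hedged workaround sketch: the LocalJumpError comes from calling `#pipelined` through the pool's method_missing proxy, which in connection_pool 0.1.0 did not forward the block. Checking a client out explicitly with `#with` and calling `#pipelined` on it directly sidesteps the proxy. `FakeRedis` and `StubPool` below are stand-ins for `Redis.new` and `ConnectionPool.new` so the sketch runs without a server; with the real gems the shape is the same:

```ruby
# Stand-in client: stores values in a Hash; a real client would buffer
# pipelined commands and flush them in one round trip.
class FakeRedis
  def initialize
    @store = {}
  end

  def set(key, value)
    @store[key] = value
  end

  def get(key)
    @store[key]
  end

  def pipelined
    yield
  end
end

# Stand-in pool: one connection, yielded to the block like ConnectionPool#with.
class StubPool
  def initialize(&block)
    @conn = block.call
  end

  def with
    yield @conn
  end
end

pool = StubPool.new { FakeRedis.new }

pool.with do |redis|
  redis.set "key", "value"
  redis.pipelined do
    redis.get "key" # issued on the checked-out client, not the pool proxy
  end
end
```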
