mperham / connection_pool
Generic connection pooling for Ruby
License: MIT License
I posted this question on the redis driver project: redis/redis-rb#280. I would like to get some thoughts here as well, since we use the excellent connection_pool gem. Thanks in advance.
Hello,
First, thanks for this library, which is apparently very useful, especially with Sidekiq. I say "apparently" because I haven't tested it in a production scenario yet and would like your feedback before doing so.
I'm using it inside an external gem called responsys_api, which is an API client for the Oracle Responsys API. The ConnectionPool is wrapped in a SessionPool here. It is used by the client here.
The branch is not merged yet, but most of the logic is pushed. I can make changes to the code if you have any concerns.
In our infrastructure at thredUP, we make async calls using the client gem inside Sidekiq jobs. Because we sometimes create hundreds of thousands of jobs but have only 100 connections, we want to make sure we don't exceed that limit and get our account blocked, and this is where your gem helps.
So, two questions:
First: is there anything to worry about with the ConnectionPool because of the declaration scope? As I understand it, the variable will be global since it is declared in an initializer, but I'd like to confirm. If it helps, here is an example of the code:
# config/initializers/responsys.rb in the Rails app
Responsys.configure do |config|
  config.settings = {
    username: "user",
    password: "password",
    wsdl: "http://oracle.soap.wsdl",
    sessions: { timeout: 1200 } # params passed to the connection_pool in the internal SessionPool object
  }
end

# async job enqueued when the user updates his emailing options
class UserResponsysStatusJob
  include Sidekiq::Worker
  sidekiq_options queue: :responsys

  def perform(email)
    Responsys::Member.new(email).subscribe("mailing_list") # uses the gem to make the call after checking a client out of the pool
  end
end
Also, have you thought about any integration with the Sidekiq UI? It would be cool to monitor a registered pool in real time and see how many clients in the pool are active.
Thanks for your help!
The readme says:
Create a pool of objects to share amongst the fibers or threads in your Ruby application:
But looking at the code, there seems to be nothing dealing with fibers, and the current checkout is kept in thread-local storage -- I think two fibers created in the same thread would end up with the same connection. (Which depending on the application logic and nature of the connection may or may not be a problem)
Am I understanding right? Anything special meant by reference to 'fibers' in the readme?
Thanks! This is cool code.
Is connection_pool supported on Ruby 1.9.1? I am running it with ruby-1.9.1-p376 and it fails with this error:
ArgumentError wrong number of arguments (2 for 1) /mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:43:in `block (2 levels) in pop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:34:in `loop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:34:in `block in pop'
<internal:prelude>:8:in `synchronize'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool/timed_stack.rb:33:in `pop'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool.rb:69:in `checkout'
/mts/git/nimbus-gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool.rb:56:in `with'
The reason is that in 1.9.1, ConditionVariable#wait does not take a timeout argument: http://ruby-doc.org/stdlib-1.9.1/libdoc/thread/rdoc/ConditionVariable.html#method-i-wait
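For what it's worth, here is a hedged sketch of how a caller could paper over the missing timeout argument on 1.9.1; `wait_with_timeout` is a hypothetical helper name, not part of the gem:

```ruby
require 'thread'

# Hypothetical compatibility shim: on 1.9.1, ConditionVariable#wait
# accepts only the mutex (arity 1), while 1.9.2+ accepts an optional
# timeout. On 1.9.1 we fall back to an untimed wait, which then
# relies on an explicit signal/broadcast to wake up.
def wait_with_timeout(cv, mutex, timeout)
  if cv.method(:wait).arity == 1
    cv.wait(mutex)          # 1.9.1: no native timed wait
  else
    cv.wait(mutex, timeout) # 1.9.2+: wakes up after at most `timeout` seconds
  end
end
```

On 1.9.1 this degrades to a potentially unbounded wait, which is why the gem's timed pop cannot really work there without a backported ConditionVariable.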
I have a use case for the #shutdown method (to deterministically close connections after forking). However, in my case I'm using ConnectionPool::Wrapper, since this is old code that will later be updated to use connection pools directly.
The problem is that currently there's no official API to get at the wrapped pool object in order to call #shutdown on it.
So this is a feature request for either a #shutdown method (or some other name) on wrappers, or an accessor to get at the wrapped pool. I know that any name chosen could conflict with the various connections/objects people are wrapping, so if that's an issue, perhaps it could even be a configuration option:
# in class ConnectionPool::Wrapper
def initialize(options = {}, &block)
  @accessor = options.delete(:accessor) || "pool"
  @pool = ::ConnectionPool.new(options, &block)
end

def method_missing(name, *args, &block)
  return @pool if name.to_s == @accessor
  # ... existing code ...
end
...or something like that. Anything that lets me call @cache.pool.shutdown { |conn| conn.my_shutdown_stuff }, or even @cache.shutdown { ... }, on wrappers.
I was wondering if you would mind pushing v2.2.2 to RubyGems. I've been using this gem lately for some scripting, due to its blocking implementation.
Hey,
I've been using dalli's read_multi and multi (combined with set) to try to optimize our memcached usage. Both work with dalli directly.
We're now moving to multithreading and trying to use connection_pool as you recommend in the readme for https://github.com/mperham/dalli, but we keep getting errors.
read_multi raises:
NoMethodError (undefined method 'split' for nil:NilClass)
Multi with set:

Rails.cache.dalli.multi do
  array_of_items_to_write_back.each do |key, value|
    Rails.cache.dalli.set(key, value)
  end
end

raises:

NoMethodError (undefined method 'multi' for #<ConnectionPool:0x007f8088350d08>)
Any tips on how to use connection_pool with our batch functionality?
Thanks
I can contribute this myself if I get direction.
How can I access the shutdown block? And in what cases is it called? I'm referring to using a connection pool inside Sidekiq.
Please tag releases and push them to the repository to make it easier to look at the history using git.
Hi, thanks for the great gem! We use it in react-rails for server rendering.
Let's say my pool is:
ConnectionPool.new { really_long_setup }
And later I'm going to need 10 of those. Is there a way I can do the really_long_setup ahead of time?
I thought maybe
pool_size.times do
  pool.with { |member| member }
end
but I would expect that to use the same member each time. (Also, I'm not sure how I could tell if the pool was initialized or not.)
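That expectation is right: repeated checkouts from a single thread reuse the same member. One workaround (a sketch, not an official API) is to hold pool-size checkouts open simultaneously from separate threads, so each slot runs its setup block. `prewarm` is a hypothetical helper and only assumes the pool responds to `with`:

```ruby
# Hypothetical pre-warming helper: force a pool to build all `size`
# members by holding `size` checkouts open at the same time, one per
# thread. Two queues act as a barrier: `ready` counts live checkouts,
# `release` tells the threads to check their members back in.
def prewarm(pool, size)
  ready   = Queue.new
  release = Queue.new
  threads = Array.new(size) do
    Thread.new do
      pool.with do |member|
        ready << member   # announce this slot has been set up
        release.pop       # hold the checkout until every slot exists
      end
    end
  end
  size.times { ready.pop }       # wait until all `size` members exist
  size.times { release << :go }  # let the threads check back in
  threads.each(&:join)
end
```

Because every thread holds its checkout until all of them report ready, the pool cannot satisfy any of them by reuse and must run the setup block `size` times.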
Hi,
In the scenario where I need to access Redis within a subscribe callback, I need a nested call to pool.with, which doesn't check out another connection.
Example:
require 'redis'
require 'connection_pool'

def redis_pool
  @redis_pool ||= ConnectionPool.new { Redis.new }
end

def listen
  redis_pool.with do |redis|
    redis.subscribe('pool', 'non-pool') do |on|
      on.message do |channel, _message|
        redis_pool.with do |redis2|
          puts redis2.object_id == redis.object_id
          puts redis2.info # raises an error because it's trying to use the subscribe connection
        end
      end
    end
  end
rescue => e
  puts e.message
end

def publish
  redis_pool.with do |redis|
    redis.publish('pool', 'hello')
  end
end

Thread.new { listen }
sleep 1
publish
sleep 1
# Output:
# true
# Error: Redis::CommandError ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context
The workaround I've thought of is to always use a dedicated Redis.new connection when subscribing.
Are there plans to have a nicer usage for this scenario?
https://github.com/mperham/connection_pool/blob/master/lib/connection_pool/timed_stack.rb#L23
I'm just wondering why you chose to call broadcast instead of signal. Wouldn't this cause unnecessary wakeups, with threads looping and failing to obtain a connection?
Is it possible to dynamically resize the connection pool (up and/or down) at runtime without re-deploying the application with a new size for the connection pool?
As Wrapper is a delegator that should ideally behave as much like the original object as possible, I think it should inherit from BasicObject instead of Object. I can understand the concerns raised in issue #8 by @djanowski when there was no other way to do things. However, now that we have a separate wrapper, I think the benefits of behaving more closely to the original object outweigh any potential debugging cost. Any objections?
Hi!
TL;DR, I'm wondering if this is fair game:
class ThingsController < ApplicationController
  before_action :checkout_connection
  after_action :checkin_connection

  def index
    # do stuff with @connection
  end

  private

  def checkout_connection
    @connection = CONNECTION_POOL.checkout
  end

  def checkin_connection
    CONNECTION_POOL.checkin
  end
end
I see that #checkout and #checkin are public methods, but I don't see them in the documentation, so I thought I would ask whether that is proper usage. Does it seem OK?
Thanks!
require 'rubygems'
require 'connection_pool'

pool = ConnectionPool.new(:size => 1) { Object.new }
t = nil

pool.with_connection do |outer|
  p outer.object_id

  t = Thread.new do
    pool.with_connection do |con|
      p "Thread got connection #{con.object_id}"
    end
  end

  pool.with_connection do |inner|
    p inner.object_id
  end

  sleep 1
end

p 'main is done'
t.join
73820090
73820090
"Thread got connection 73820090"
"main is done"
This easily happens whenever a connection has already been checked out somewhere up the call stack. I think we should either support this (by keeping a stack of checkouts for the current thread) or raise an exception when trying to check out an already-checked-out connection, and document the limitation. @mperham: which way should we go?
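To illustrate the first option, here is a toy sketch (not connection_pool's implementation) of what "a stack for the current thread" would mean: each nested `with` pushes a fresh member instead of reusing the outer one. It deliberately skips sizing, timeouts, and creation thread-safety:

```ruby
# Toy pool with a per-thread *stack* of checkouts: an inner `with`
# gets its own member rather than the outer thread-local one.
class StackedPool
  def initialize(&make)
    @make = make
    @idle = Queue.new
  end

  def with
    stack = (Thread.current[:stacked_pool] ||= [])
    member = @idle.empty? ? @make.call : @idle.pop
    stack.push(member)
    begin
      yield member
    ensure
      @idle << stack.pop  # check the innermost checkout back in
    end
  end
end

pool = StackedPool.new { Object.new }
pool.with do |outer|
  pool.with do |inner|
    puts outer.object_id == inner.object_id # false: nested checkout is distinct
  end
end
```

The trade-off is the one in the output above: supporting nesting this way means a pool of size 1 can no longer guarantee a single live connection per thread.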
The current implementation eagerly creates connections up to :size when the pool is initialized.
Is this just because it was all that was necessary for your case, or are there reasons that lazy creation is infeasible or a bad idea?
Would you be amenable to a pull request that allowed lazy creation?
The only issue I can think of is that it would be the caller's responsibility to make sure the connection-creation code is thread-safe. But there may well be other issues I'm not thinking of. Concurrency is tricky.
The only way I found:

{
  max: pool.instance_variable_get(:@available).instance_variable_get(:@max),
  created: pool.instance_variable_get(:@available).instance_variable_get(:@created),
  busy: pool.instance_variable_get(:@available).instance_variable_get(:@que)
}

Is there a more proper way to do it?
@djanowski Is your public github email the one you use for Rubygems? I'd like to give you gem push privileges.
I believe this is an issue, but I don't quite have a test case to confirm it right now.
Any time we remove a connection from the thread stack and transition it to the global stack (or discard it), if execution is interrupted we will permanently lose the connection, but the global stack still counts it as if it exists.
We can fix this in MRI by using handle_interrupt (which is what @tamird did originally).
In JRuby, that's not available. I don't think there's a good solution there, short of rewriting connection_pool to follow the same model that ActiveRecord uses, where we never pop a connection off the stack and instead flag it as in use, with a reaper that culls connections after a set period of inactivity.
Maybe on JRuby you could add a rescue for Timeout::Error in the ensure. That at least covers the common case, but if you have nested timeouts and they all fire, you still have the problem.
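The MRI mitigation mentioned above can be sketched like this; the method name is hypothetical, and the point is only that Thread.handle_interrupt defers asynchronous exceptions such as Timeout::Error while the connection changes hands:

```ruby
# Sketch: make the thread-stack -> global-stack transfer atomic with
# respect to asynchronous interrupts (Timeout::Error and friends).
# MRI-specific: JRuby has no equivalent of handle_interrupt.
def interrupt_safe_checkin(global_stack, conn)
  Thread.handle_interrupt(Exception => :never) do
    global_stack.push(conn)  # an async exception can no longer land mid-push
  end
end
```

Any interrupt raised while the block runs is deferred until the block exits, so the connection is either still on the thread stack or fully on the global stack, never lost in between.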
Hello,
We are using the Redis::Semaphore gem because of some multithreading issues, but this poses a problem when using ConnectionPool to connect to our Redis backend.
This is the error we get: NoMethodError: undefined method 'getset' for #<ConnectionPool:0x007ff213e203a8>
This method is defined in the Redis gem we are using, but apparently it is not proxied by ConnectionPool.
Making Wrapper a transparent drop-in requires that method_missing be paired with respond_to?. russCloak has already implemented a version: https://github.com/russCloak/connection_pool/commit/ea871f86b979bb3de228819dd03a66daa9537dc7
I feel like this is closely related to #85, but I'd still like to get this one thing straight.
I have a use case for connection_pool where I need to orchestrate access to a third-party API for which I have 5 sets of credentials (login/password pairs), each of which represents a separate request queue on the API provider's side. So what I'd like to be able to do is create the connection pool from an existing collection of objects, rather than generating them on demand.
I've created a (probably very naive) implementation of said functionality like this:
accounts = CSV.read("config/accounts.csv").to_a
accounts_que = Queue.new
accounts.each { |acc| accounts_que << acc }
$login_handles = ConnectionPool.new(size: accounts.count, timeout: 5) { accounts_que.pop }
This has been working flawlessly in our staging environment, though I'm sure there are numerous pitfalls, and I would rather do something along the lines of:
accounts = CSV.read("config/accounts.csv").to_a
$login_handles = ConnectionPool.new(collection: accounts, timeout: 5)
Would a PR of these sorts be considered for merging?
Hello,
Is there a standard way to force the reconnection of all connections in a pool?
My first idea would be to just create new ConnectionPool objects and let the garbage collector free the old ones, causing the connections to drop.
I am asking because I would need to include something like this in Unicorn pre_fork and Puma on_restart hooks so that the connections are established again.
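One hedged sketch, assuming ConnectionPool#shutdown (which yields each connection) is available in the version in use: close everything in the fork/restart hook, then build a replacement pool. `reset_pool!` is a hypothetical helper name:

```ruby
# Hypothetical reset helper for a fork/restart hook: drain and close
# every pooled connection via #shutdown, then let the caller install
# a freshly built pool. Assumes the connections respond to #close.
def reset_pool!(pool)
  pool.shutdown do |conn|
    conn.close rescue nil # best effort; the socket may already be dead
  end
end

# e.g. in a Puma on_restart hook (sketch):
#   on_restart do
#     reset_pool!($redis_pool)
#     $redis_pool = ConnectionPool.new { Redis.new }
#   end
```

Explicitly shutting down is more deterministic than waiting for the garbage collector, since GC gives no guarantee about when (or whether) the old sockets get closed.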
After I upgraded mechanize to the version that uses net-http-persistent (3.0.0), which started using connection_pool, the latter throws a strange error:
8: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize/page/link.rb:30:in `click'
7: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize.rb:348:in `click'
6: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize.rb:464:in `get'
5: from /[...]/.bundle/gems/ruby/2.5.0/bundler/gems/mechanize-b499a8380511/lib/mechanize/http/agent.rb:280:in `fetch'
4: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:927:in `request'
3: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:642:in `connection_for'
2: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:642:in `ensure in connection_for'
1: from /[...]/.bundle/gems/ruby/2.5.0/gems/net-http-persistent-3.0.0/lib/net/http/persistent/pool.rb:16:in `checkin'
ConnectionPool::Error (no connections are checked out)
I love the idea of having a generic object pool that can be reused across different libraries. I'm considering using this for redis-rb.
Would you consider a patch to remove the magic of method_missing (and not inherit from BasicObject)? I think it only adds complexity when debugging potential issues, without much gain.
Thank you for your time!
connection_pool git:(master) rvm use 1.9.2
Using /Users/baguirre/.rvm/gems/ruby-1.9.2-p290
connection_pool git:(master) rake
ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.2.0]
Loaded suite /Users/baguirre/.rvm/gems/ruby-1.9.2-p290/gems/rake-0.9.2.2/lib/rake/rake_test_loader
Started
.....
Finished in 0.936252 seconds.
5 tests, 7 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 26754
connection_pool git:(master) rvm use 1.9.3
Using /Users/baguirre/.rvm/gems/ruby-1.9.3-p0
connection_pool git:(master) rake
ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-darwin11.2.0]
Run options: --seed 50855
# Running tests:
...F.
Finished tests in 0.941313s, 5.3117 tests/s, 7.4364 assertions/s.
1) Failure:
test_basic_multithreaded_usage(TestConnectionPool) [/Users/baguirre/code/clocktower/ruby/gems/connection_pool/test/test_connection_pool.rb:35]:
--- expected
+++ actual
@@ -1 +1 @@
-[1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
+[1, 1, 1, 1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 3]
5 tests, 7 assertions, 1 failures, 0 errors, 0 skips
rake aborted!
Command failed with status (1): [/Users/baguirre/.rvm/rubies/ruby-1.9.3-p0/...]
Tasks: TOP => default => test
(See full trace by running task with --trace)
This issue might be related to this one posted a few months ago.
The error occurs when using Redis#pipelined on a redis object instantiated through connection pooling.
Sample code:
require 'redis'
require 'connection_pool'

$r = ConnectionPool.new(size: 3) { Redis.new(path: '/tmp/redis.sock') }

$r.set 'key', 'value'
puts $r.get 'key' # prints "value"

# Throws an error:
$r.pipelined {
  $r.get 'key'
}
The pipelined block will throw the following error:
`block in pipelined': no block given (yield) (LocalJumpError)
Here's the full error stack:
/Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:1045:in `block in pipelined': no block given (yield) (LocalJumpError)
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `block in synchronize'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `call'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `block (2 levels) in initialize'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:62:in `block in initialize'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `call'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:68:in `synchronize'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/bundler/gems/redis-rb-e9e17d65b9c5/lib/redis.rb:1042:in `pipelined'
from /Users/shilov/.rbenv/versions/1.9.3-p0/lib/ruby/gems/1.9.1/gems/connection_pool-0.1.0/lib/connection_pool.rb:48:in `method_missing'
from pooling.rb:9:in `<main>'
I have set my pool size to 10 but it is still allocating 20 connections. 20 connections is the limit placed on me by the Redis server. Here's my code. I would expect that the pool would only ever open 10 connections. But then what would it do if asked for an 11th? Or am I totally misunderstanding it?
def self.initialize
  return unless @pool.nil?
  @pool = ConnectionPool.new(size: 10, timeout: 5) do
    Redis.new(
      reconnect_attempts: 10,
      reconnect_delay: 1.5,
      reconnect_delay_max: 10.0
    )
  end
end

def self.get(key)
  @pool.with do |redis|
    redis.get(key)
  end
end
We use this library to build a connection pool for Snowflake JDBC connections. By default the connections expire every 4 hours, and you need to re-authenticate after a connection expires. So some inactive connections in the pool become invalid after a while.
I think it would be worthwhile to add max_idle_time to the config, so we can actively or passively remove connections that have been idle beyond max_idle_time. What do you think?
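A hedged sketch of the passive variant, outside the gem (`IdleEntry` and `checkout_fresh` are hypothetical names): record when each connection was checked in, and at checkout skip and close entries idle past the threshold.

```ruby
# Hypothetical idle-expiry at checkout: pair each pooled connection
# with the time it was checked in, and discard entries that have sat
# idle longer than max_idle_time instead of handing them out.
IdleEntry = Struct.new(:conn, :returned_at)

def checkout_fresh(idle_entries, max_idle_time, now: Time.now)
  while (entry = idle_entries.pop)
    return entry.conn if now - entry.returned_at <= max_idle_time
    entry.conn.close rescue nil # stale (e.g. expired auth); drop it
  end
  nil # no fresh entry left; the caller would create a new connection
end
```

The active variant would be the same check run by a background reaper thread instead of on the checkout path.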
For each database entry, I want to create a connection pool. How can I achieve that using this gem? Thanks.
class User < ActiveRecord::Base
  has_many :databases
end

class Database < ActiveRecord::Base
  belongs_to :user
  # attributes: host, name, username, password
end

# usage
database = Database.first
database.with do |conn|
  conn.exec("select 1")
end
It would be nice to be able to set a maximum lifetime for a connection and have it automatically destroyed on checkin once it exceeds that lifetime, so that we can forcibly recycle connections. The most pressing use-case for this is when the resource being connected to is behind a layer 4 (tcp) load balancer and you want to ensure that connections will pick up new hosts periodically. Most other connection pooling libraries that I've used have this functionality, and it seems self-evidently valuable to me.
This would require some fairly invasive changes (e.g., maintaining metadata about each connection at creation time so that we know how old it is), so I figured I'd run the concept by you before writing up a patch.
When using ConnectionPool::Wrapper, we can't access the @pool object, which prevents us from checking the pool status.
Let's give that access!
I'm using connection_pool for a socket client that uses the method disconnect to actually close the socket. I noticed that my sockets were not being closed properly after being discarded from the pool, and I was puzzled. I found that TimedStack tries to call either close or disconnect! on the connection before discarding it. There are some disadvantages to this strategy: my client's closing method is disconnect (not close or disconnect!), so it is never called, and the disconnect method on my socket has the potential to hang, so I need a way to safely time out if necessary.
After some brainstorming I came up with a few potential solutions. What do you think about adding an interface for closing connections, instead of trying to cover all the common bases in TimedStack?
pool = ConnectionPool.new { new_socket }

pool.close_with do |connection|
  connection.disconnect
end
However, this means the interface is now a little inconsistent with itself, as the creation block is passed to the constructor while the close block has its own method. Here are some ideas that offer a more consistent interface:
# both use methods
pool = ConnectionPool.new

pool.open_with do
  new_socket
end

pool.close_with do |socket|
  socket.disconnect
end

# lifecycle strategy
pool = ConnectionPool.new do |lifecycle|
  lifecycle.open_with do
    new_socket
  end

  lifecycle.close_with do |socket|
    socket.disconnect
  end
end

# lifecycle object
class Lifecycle
  def open
    new_socket
  end

  def close(socket)
    socket.disconnect
  end
end

pool = ConnectionPool.new(Lifecycle.new)
I would be happy to submit a PR if you like any of the ideas. What do you think?
If people are changing their code to use #with, they should be using ConnectionPool directly in that case, so can the method be removed from Wrapper?
Hi,
Is it possible to provide a list of hosts to which the pool will connect randomly (unless a connection is dead, of course)?
Some pseudo code:

ConnectionPool::Wrapper.new(:hosts => ["localhost:1234", "localhost:1235", "localhost:1236"]) { |host| Redis.new host }
Thanks.
Can this gem be used to connect to a different LDAP server in case one is not responding or working? Or is this just for getting a connection pool, with nothing really handled if something goes wrong?
Hi.
Today I was trying to debug an error in our app related to connection handling.
While trying to diff all the changed dependencies, I noticed that there is no v2.2.0 tag pushed to GitHub.
I'm not sure if that's intentional or accidental.
Looking at the commit history, I guess the 2.2.0 version of the gem is at 6007d32, but nevertheless a git tag would be helpful.
Thanks.
The @resource condition variable is waited on in a block that holds the same mutex used to notify the condition variable.
Unless I'm mistaken, if that condition variable ever gets waited on, this means it'll block every thread for the full timeout, including the threads returning connections to the pool, which could have notified the waiting thread instead.
https://github.com/mperham/connection_pool/blob/master/lib/connection_pool/timed_stack.rb#L87
I'm using sidekiq and connection_pool. I was trying to get two Redis connections from one pool in a single thread, one pipelined, one not. I expected to get two different connections using this code example:
Sidekiq.redis do |conn|
  conn.pipelined do
    Sidekiq.redis do |conn2|
      # expect conn != conn2
    end
  end
end
However, it's the same connection, so everything I do with conn2 is naturally pipelined as well.
Is there a way to get two different connections from one pool in a single thread? If not, would that be something you'd consider, or accept a pull request for?
After reading the part of the readme saying that ConnectionPool::Wrapper is "not high-performance", I started wondering whether it has to be that way.
Why not, for example, use a Delegator, or dynamically define methods in method_missing? I can run some benchmarks, maybe even prepare a PR, but first I wanted to ask whether I'm missing something obvious :)
Hello, I've been noticing that connection_pool, coupled with mperham's shared_connection strategy, is leaving data in the DB.
I was able to add some logging in active_record/connection_adapters/abstract/database_statements.rb to produce the following output for one of my Selenium/Rails 4/RSpec tests:
# RSpec Thread: 70184739145520
# Selenium Server: 70184702836520
# ARTransaction: BEGIN - Thread: 70184739145520 <<--- RSpec transaction to be rolled back after test
# ARTransaction: BEGIN - Thread: 70184739145520 => ARTransaction: COMMIT - Thread: 70184739145520
# (the BEGIN/COMMIT pair above repeats 29 times in total on the RSpec thread)
# ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: COMMIT - Thread: 70184702836520
# (the BEGIN/COMMIT pair above repeats 9 times in total on the Selenium thread)
# ARTransaction: BEGIN - Thread: 70184702836520 => ARTransaction: ROLLBACK - Thread: 70184739145520
# ARTransaction: Commit - Thread: 70184702836520 <<-- Supposed to be RSpec rolling back everything
# active_record/connection_adapters/abstract/database_statements.rb
def begin_transaction(options = {}) #:nodoc:
  Rails.logger.info("# ARTransaction: Begin - Thread: #{Thread.current.object_id}")
  @transaction = @transaction.begin(options)
end

def commit_transaction #:nodoc:
  Rails.logger.info("# ARTransaction: Commit - Thread: #{Thread.current.object_id}")
  @transaction = @transaction.commit
end

def rollback_transaction #:nodoc:
  Rails.logger.info("# ARTransaction: Rollback - Thread: #{Thread.current.object_id}")
  @transaction = @transaction.rollback
end

def reset_transaction #:nodoc:
  Rails.logger.info("# ARTransaction: Reset - Thread: #{Thread.current.object_id}")
  @transaction = ClosedTransaction.new(self)
end
I'm trying to set up a scheduler that measures the average time it takes to check out a connection from the pool, and the query time spent while using it, and logs that to $stdout in a format Librato can understand. Can this be done? Or, even better, ActiveSupport::Notifications support, or some sort of generic LogSubscriber.
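Since connection_pool exposes no instrumentation hooks (as of this discussion), a hedged workaround is to wrap the pool yourself; `timed_with` and the metric names below are made up for illustration, loosely following Librato's `measure#` log convention:

```ruby
# Hypothetical instrumentation wrapper: measure time-to-checkout and
# time-in-use separately by timestamping around the pool's #with.
def timed_with(pool)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  pool.with do |conn|
    t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield conn
    t2 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    $stdout.puts format("measure#pool.checkout=%.6fs measure#pool.work=%.6fs", t1 - t0, t2 - t1)
    result
  end
end
```

The t0-to-t1 gap is the time spent waiting for a free connection; t1 to t2 is the work done while holding it.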
We use connection_pool in front of Redis, but we can't call #eval on the redis connection.
Redis supports #eval as the Redis command of the same name.
ConnectionPool::Wrapper should proxy this to the connection like all other methods via method_missing, except that eval is defined by Kernel, so method_missing never gets invoked.
We fixed it by monkey-patching as follows:

# Patch to properly propagate calls to eval()
class ConnectionPool
  class Wrapper
    def eval(*args)
      method_missing(:eval, *args)
    end
  end
end
I created this gist to show an example of my code...
https://gist.github.com/Altonymous/5e7accc7c64cbbe4c922
I am using sidekiq and connection_pool.
If I set my worker count higher than my connection pool size, these errors start happening. I've looked at the database, and no queries are running exceptionally long; none of them are anywhere near the 120-second timeout I set.
I've also checked the jobs themselves, and none are running that long. I'm not sure why the connections aren't being released back to the pool, or how to provide an example that reproduces this issue.
I'm happy to pull any additional information I can, or if you see something in the code that doesn't look right, I'm happy to make modifications as well.
I like how the codebase is clean & small and the README is short & easy to understand.
But I don't want to use ConnectionPool::Wrapper, especially because "it's not high-performance."
Let's make this gem even tighter by removing ConnectionPool::Wrapper and moving the relevant code & documentation from the README into a separate gem. For example: a gem named connection_pool-wrapper that depends on the connection_pool gem.
Thoughts?
Hey @mperham,
I am trying to leverage connection_pool to maintain a collection of open HTTPS connections.
The motivation is that I am making repeated requests to an API server, and re-negotiating SSL on every request is quite slow.
Right now, I have the following implementation:
config/puma.rb

...
preload_app!

on_worker_boot do
  RequestPool.initialize_connections(thread_count)

  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
lib/request_pool.rb

require 'http'
require 'connection_pool'

# @note Each key maps to a single connection.
# @note To add a new URL to the connection pool, you must first add it to config/connections.yml
# @example Stores connections to multiple servers in a single hash.
#   RequestPool.new('https://www.google.ca').connection do |conn, ssl_context|
#     conn.get('/route', ssl_context: ssl_context)
#   end
class RequestPool
  @@conn = {}

  SSL_VERSION = :TLSv1_2
  SSL_CIPHERS = "ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:-LOW".freeze

  def self.initialize_connections(size)
    self.config.each do |url|
      @@conn[url] = ConnectionPool.new(size: size) { HTTP.persistent(url).timeout(:per_operation, connect: 10, read: 15) }
    end
  end

  def initialize(base_url)
    raise ArgumentError unless @@conn.has_key? base_url
    @base_url = base_url
  end

  # @note We call .to_s to ensure the response gets flushed.
  #   See https://github.com/httprb/http/wiki/Persistent-Connections-(keep-alive) for more details.
  def connection
    @@conn[@base_url].with do |conn|
      response = yield conn, ssl_config
      response.to_s
      response
    end
  end

  private

  def self.config
    YAML.load_file("#{Rails.root}/config/connections.yml")[Rails.env].values
  end

  def ssl_config
    OpenSSL::SSL::SSLContext.new.tap do |ctx|
      ctx.ssl_version = SSL_VERSION
      ctx.ciphers = SSL_CIPHERS
    end
  end
end
If connection_pool had the option of using a stack instead of a queue, the HTTPS connections would be reused more often before going stale.
Would this be something the gem would be willing to incorporate?
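The behavioral difference can be sketched with plain arrays (a toy model, not the gem's TimedStack): FIFO rotation spreads use across all idle connections, while LIFO keeps reusing the most recently returned one, leaving the rest idle long enough to go stale:

```ruby
# Toy model of checkout order. Each iteration checks one connection
# out and immediately back in; we count how often each one is used.
def simulate(order, checkouts)
  conns = [:a, :b, :c]
  used  = Hash.new(0)
  checkouts.times do
    conn = (order == :fifo ? conns.shift : conns.pop)
    used[conn] += 1
    conns.push(conn) # check back in at the tail
  end
  used
end

p simulate(:fifo, 6) # every connection used twice
p simulate(:lifo, 6) # only :c is ever used
```

With a stack, the rarely-used connections at the bottom are the ones that time out, which is exactly the keep-alive behavior wanted here.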