
heroku-buildpack-pgbouncer's Introduction

Heroku buildpack: pgbouncer

This is a Heroku buildpack that allows one to run pgbouncer in a dyno alongside application code. It is meant to be used in conjunction with other buildpacks.

The primary use of this buildpack is to allow for transaction pooling of PostgreSQL database connections among multiple workers in a dyno. For example, 10 unicorn workers would be able to share a single database connection, avoiding connection limits and Out Of Memory errors on the Postgres server.

FAQ

  • Q: Why should I use transaction pooling?

  • A: You have many workers per dyno that hold open idle Postgres connections, and you want to reduce the number of unused connections. See this slightly more complete answer on Stack Overflow.

  • Q: Why shouldn't I use transaction pooling?

  • A: If you need named prepared statements, advisory locks, listen/notify, or other features that operate at the session level. Refer to PgBouncer's feature matrix for all transaction pooling caveats.

Disable Prepared Statements

With Rails 4.1, you can disable prepared statements by appending ?prepared_statements=false to the database's URI. Set the PGBOUNCER_PREPARED_STATEMENTS config var to false for the buildpack to do that for you.
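In effect, the buildpack's rewrite amounts to appending a query parameter to the database URL. A minimal illustrative sketch (not buildpack code; the helper name and URL are invented for this example):

```ruby
require "uri"

# Illustrative sketch: append prepared_statements=false to a database URL,
# which is what PGBOUNCER_PREPARED_STATEMENTS=false asks the buildpack to do.
# The credentials below are made up.
def disable_prepared_statements(database_url)
  uri = URI.parse(database_url)
  # Preserve any existing query string, then add the new parameter.
  uri.query = [uri.query, "prepared_statements=false"].compact.join("&")
  uri.to_s
end

disable_prepared_statements("postgres://user:pass@localhost:6000/db1")
# => "postgres://user:pass@localhost:6000/db1?prepared_statements=false"
```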

Rails versions 4.0.0 through 4.0.3 reportedly can't disable prepared statements at all. Make sure your framework is up to date before troubleshooting prepared-statement failures.

Rails 3.2 through 4.0 also requires an initializer to properly cast the prepared_statements configuration string to a boolean. This initializer is adapted from this commit. Insert the following into config/initializers/database_connection.rb:

require "active_record/connection_adapters/postgresql_adapter"

class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  # Wrap initialize so the string 'false' from the database URL is
  # coerced into a real boolean before the adapter uses the config.
  alias initialize_without_config_boolean_coercion initialize
  def initialize(connection, logger, connection_parameters, config)
    if config[:prepared_statements] == 'false'
      config = config.merge(prepared_statements: false)
    end
    initialize_without_config_boolean_coercion(connection, logger, connection_parameters, config)
  end
end

Usage

Example usage:

$ ls -a
Gemfile  Gemfile.lock  Procfile  config/  config.ru

$ heroku buildpacks:add heroku/pgbouncer
Buildpack added. Next release on pgbouncer-test-app will use heroku/pgbouncer.
Run `git push heroku main` to create a new release using this buildpack.

$ heroku buildpacks:add heroku/ruby
Buildpack added. Next release on pgbouncer-test-app will use:
  1. https://github.com/heroku/heroku-buildpack-pgbouncer
  2. https://github.com/heroku/heroku-buildpack-ruby
Run `git push heroku main` to create a new release using these buildpacks.

$ cat Procfile
web:    bin/start-pgbouncer bundle exec unicorn -p $PORT -c ./config/unicorn.rb -E $RACK_ENV
worker: bundle exec rake worker

$ git push heroku main
...
-----> Multipack app detected
-----> Fetching custom git buildpack... done
-----> pgbouncer app detected
       Using pgbouncer version: 1.7-heroku
-----> Fetching and vendoring pgbouncer into slug
-----> Moving the configuration generation script into app/bin
-----> Moving the start-pgbouncer script into app/bin
-----> pgbouncer done
-----> Fetching custom git buildpack... done
...

The buildpack will install and configure pgbouncer to connect to DATABASE_URL over a TLS connection, where available. Prepend bin/start-pgbouncer to any process in the Procfile to run pgbouncer alongside that process.

PgBouncer Version

  • Heroku-20: v1.17.0
  • Heroku-22: v1.17.0
  • Heroku-24: v1.17.0

Multiple Databases

It is possible to connect to multiple databases through pgbouncer by setting PGBOUNCER_URLS to a list of config vars. Example:

$ heroku config:add PGBOUNCER_URLS="DATABASE_URL HEROKU_POSTGRESQL_ROSE_URL"
$ heroku run bash

~ $ env | grep 'HEROKU_POSTGRESQL_ROSE_URL\|DATABASE_URL'
HEROKU_POSTGRESQL_ROSE_URL=postgres://u9dih9htu2t3ll:[email protected]:5482/db6h3bkfuk5430
DATABASE_URL=postgres://uf2782hv7b3uqe:[email protected]:5622/deamhhcj6q0d31

~ $ bin/start-pgbouncer env # filtered for brevity
HEROKU_POSTGRESQL_ROSE_URL=postgres://u9dih9htu2t3ll:[email protected]:6000/db2
DATABASE_URL=postgres://uf2782hv7b3uqe:[email protected]:6000/db1

โš ๏ธ A referenced configuration variable in PGBOUNCER_URLS must not be empty, and must be a valid PostgreSQL connection string.

Follower Replica Databases

As of v0.3.2 of this buildpack, it is possible to use pgbouncer to connect to multiple databases that share a database name, such as a leader and follower. To use, add the follower's config var to PGBOUNCER_URLS as detailed in the Multiple Databases section.

If you are using Octopus Replication to send reads to a replica, make sure to include the color URL of your leader in the SLAVE_DISABLED_FOLLOWERS blacklist. Otherwise, Octopus will attempt to use your leader as a read-only replica, potentially doubling your connection count.

Tweak settings

Some settings are configurable through app config vars at runtime. Refer to the appropriate documentation for pgbouncer configurations to see what settings are right for you.

  • PGBOUNCER_AUTH_TYPE Default is scram-sha-256. Can be changed to md5 or plain depending on server support.
  • PGBOUNCER_SERVER_TLS_SSLMODE Default is require.
  • PGBOUNCER_POOL_MODE Default is transaction.
  • PGBOUNCER_MAX_CLIENT_CONN Default is 100.
  • PGBOUNCER_DEFAULT_POOL_SIZE Default is 1.
  • PGBOUNCER_MIN_POOL_SIZE Default is 0.
  • PGBOUNCER_RESERVE_POOL_SIZE Default is 1.
  • PGBOUNCER_RESERVE_POOL_TIMEOUT Default is 5.0 seconds.
  • PGBOUNCER_SERVER_LIFETIME Default is 3600.0 seconds.
  • PGBOUNCER_SERVER_IDLE_TIMEOUT Default is 600.0 seconds.
  • PGBOUNCER_URLS should contain all config variables that will be overridden to connect to pgbouncer. For example, set this to AMAZON_RDS_URL to send RDS connections through pgbouncer. The default is DATABASE_URL.
  • PGBOUNCER_LOG_CONNECTIONS Default is 1. If your app does not use persistent database connections, this may be noisy and should be set to 0.
  • PGBOUNCER_LOG_DISCONNECTIONS Default is 1. If your app does not use persistent database connections, this may be noisy and should be set to 0.
  • PGBOUNCER_LOG_POOLER_ERRORS Default is 1.
  • PGBOUNCER_STATS_PERIOD Default is 60.
  • PGBOUNCER_SERVER_RESET_QUERY Default is empty when pool mode is transaction, and "DISCARD ALL;" when session.
  • PGBOUNCER_IGNORE_STARTUP_PARAMETERS Adds parameters to ignore when pgbouncer is starting. Some Postgres libraries, like Go's pq, send startup parameters that pgbouncer rejects by default, making it impossible to use this buildpack without ignoring them. Default is empty, and the most commonly ignored parameter is extra_float_digits. Multiple parameters can be separated with commas. Example: PGBOUNCER_IGNORE_STARTUP_PARAMETERS="extra_float_digits, some_other_param"
  • PGBOUNCER_QUERY_WAIT_TIMEOUT Default is 120 seconds, helps when the server is down or the database rejects connections for any reason. If this is disabled, clients will be queued infinitely.
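The pool settings above apply per dyno, since every dyno runs its own pgbouncer. A back-of-the-envelope sketch of the resulting server-side connection count (illustrative Ruby, not part of the buildpack; assumes a single database and the sizing rule (default pool + reserve pool) × dynos):

```ruby
# Illustrative sketch: rough upper bound on server-side Postgres connections
# for one database, given that each dyno runs its own pgbouncer pools.
def max_server_connections(dynos:, default_pool_size: 1, reserve_pool_size: 1)
  dynos * (default_pool_size + reserve_pool_size)
end

# 50 dynos with PGBOUNCER_DEFAULT_POOL_SIZE=7 and the default reserve pool of 1
# can open up to 400 server connections -- sized for a 400-connection plan.
max_server_connections(dynos: 50, default_pool_size: 7)
# => 400
```

Keep this bound at or below your Postgres plan's connection limit when tuning PGBOUNCER_DEFAULT_POOL_SIZE and PGBOUNCER_RESERVE_POOL_SIZE.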

For more info, see CONTRIBUTING.md

Using the edge version of the buildpack

The heroku/pgbouncer buildpack points to the latest stable version of the buildpack published in the Buildpack Registry. To use the latest version of the buildpack (the code in this repository), run the following command:

$ heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer

Notes

Currently, the connection string parsing requires the connection string to be in a specific format:

postgres://<user>:<pass>@<host>:<port>/<database>

This corresponds to the regular expression ^postgres(?:ql)?:\/\/([^:]*):([^@]*)@(.*?):(.*?)\/(.*?)$. All components must be present in order for the buildpack to correctly parse the connection string.
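As a sketch of what that parsing accepts and rejects, the documented regular expression can be applied in Ruby (credentials below are invented for illustration):

```ruby
# The regular expression documented above; all five capture groups must match.
PATTERN = %r{^postgres(?:ql)?://([^:]*):([^@]*)@(.*?):(.*?)/(.*?)$}

def parse_database_url(url)
  m = PATTERN.match(url) or raise ArgumentError, "unparsable: #{url}"
  { user: m[1], pass: m[2], host: m[3], port: m[4], database: m[5] }
end

parts = parse_database_url("postgres://alice:s3cret@db.example.com:5432/mydb")
parts[:host]  # => "db.example.com"
parts[:port]  # => "5432"

# A URL missing a component (here, the port) does not match at all:
PATTERN.match("postgres://alice:s3cret@db.example.com/mydb")  # => nil
```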

heroku-buildpack-pgbouncer's People

Contributors

agriffis, beanieboi, benrobster, benthompson, binarycleric, brahyt-sf, chadbailey59, cyberdelia, ddollar, edmorley, edwardotis, gregburek, hone, jhorman, jkutner, mble, mble-sfdc, mehtaphysical, nateberkopec, nickcatal, ryanbrainard, schneems, svc-scm, timshadel, tomlea, tt, valeriecodes, zacronos, zeke, zunda


heroku-buildpack-pgbouncer's Issues

Are pool settings applied across all databases or per database

I have a Postgres cluster setup with a single leader and multiple followers. We use a replica gem to distribute reads to the followers.

Are the pool settings (PGBOUNCER_MAX_CLIENT_CONN, PGBOUNCER_DEFAULT_POOL_SIZE, etc.) applied to each database?

As an example, say PGBOUNCER_MAX_CLIENT_CONN=200: is that the total number of client connections to PgBouncer, or the total number of client connections per db, so that the leader and multiple followers allow 200 each?

Empty passwords in DATABASE_URL will cause errors

Heroku's docker-compose.yml generator will produce DATABASE_URLs such as postgres://postgres:@herokuPostgreSQL:5432/herokudb. This buildpack will not be able to process these because there is no password.

The regex in gen-pgbouncer-conf.sh used to parse out the individual details needs to be updated to reflect the possibility of there not being a password.

buildpack running insecure versions of stunnel and OpenSSL

Version v0.3.2 of the buildpack reports:

2014-12-17T22:06:21.049405+00:00 app[web.1]: 2014.12.17 22:06:21 LOG5[43]: stunnel 5.02 on x86_64-unknown-linux-gnu platform
2014-12-17T22:06:21.049417+00:00 app[web.1]: 2014.12.17 22:06:21 LOG5[43]: Compiled/running with OpenSSL 0.9.8k 25 Mar 2009

Please review the stunnel changelog, which reports 2 HIGH urgency updates since stunnel 5.02 was released, including updates to OpenSSL:
https://www.stunnel.org/sdf_ChangeLog.html

SIGTERM handling is incorrect

While running sidekiq with this build pack, we observed that sidekiq would sometimes still be running by the time https://github.com/heroku/heroku-buildpack-pgbouncer/blob/master/bin/start-pgbouncer-stunnel#L133 was reached. This often led to situations where the dyno's main process would terminate before sidekiq finished its graceful shutdown and caused corruption within the sidekiq data structures.

After some experimenting I discovered that the bash script incorrectly assumes that the signal handlers in aux-start and app-start will not interrupt the subshell's wait calls. In my tests, the signal handlers cause the script to immediately proceed to the next line and write to the $psmgr pipe.

I'm preparing a pull request that modifies the signal handling to avoid this case. It should allow the wrapped applications to cleanly exit if 1) stunnel/pgbouncer crash or 2) the dyno shuts down. It still may cause hard app shutdowns if the dyno broadcasts a SIGHUP, but since this is a relatively rare occurrence outside of one-off dynos I'm less concerned with this case.

question - pgbouncer and heroku HA/failover

Preface: not a criticism of the buildpack, just seeking some clarity.

We just experienced a heroku HA/failover event last night.
I think we discovered (but hadn't previously considered) that the pgbouncer buildpack will not handle DATABASE_URL being updated. A restart of the dyno will not cause the buildpack to (re)generate pgbouncer.ini. The only way to resolve is to redeploy (and rebuild the buildpack).

Is this a reasonable understanding of the situation?

Deploying with master isn't generating config files

I tried to deploy to heroku using heroku-buildpack-pgbouncer master, and I end up with no config files in /app/vendor/pgbouncer or /app/vendor/stunnel/. Guessing one of the last two/three commits caused this? I don't see any errors in the logs at slug compile time.

Update stunnel?

Does the stunnel that this buildpack uses need to be updated to avoid heart bleed?

Or is it using a really old version of stunnel that doesn't use the buggy OpenSSL?

Worth setting query timeouts?

Heroku returns a 503 Server Error if a request fails to return after 30 seconds. Should query_timeout and other timeouts get set to a value a little higher than 30 seconds?

Push rejected, failed to compile Multipack app

Getting the following error from Heroku while trying to deploy

-----> Fetching custom git buildpack... done
-----> Multipack app detected
=====> Downloading Buildpack: https://github.com/gregburek/heroku-buildpack-pgbouncer.git
=====> Detected Framework: pgbouncer-stunnel
       Using pgbouncer version: 1.5.4
       Using stunnel version: 4.56
-----> Fetching and vendoring pgbouncer into slug

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Exiting with failure status due to previous errors

 !     Push rejected, failed to compile Multipack app

Enable stats interface?

I'm trying to debug unusually slow performance. Simple database reads for a single record that hit an index are taking 100-1500ms with any kind of concurrency; normally they take 13ms in the application.

I have a Heroku Postgres database set up with 200 available connections. I've configured the node-postgres poolSize to be 50, and set PGBOUNCER_DEFAULT_POOL_SIZE to 50. I'm testing the application by running ab with a concurrency of 25. I can observe in app-level metrics that the majority of time is being spent waiting on the database (at least, according to the application).

Despite having ~50 connections configured, in Postgres, SELECT * FROM pg_stat_activity WHERE state='active' never goes above about 7.

I am trying to figure out whether there is some kind of bottleneck I am ignoring. I am wondering if I have pgbouncer set up correctly; I'd like to log in to the admin console and run SHOW POOLS; to view the active and waiting connections.

Unfortunately this is difficult because I can't get access to the running application on Heroku.

Is there a way to configure a stats_user that would let me access the admin dashboard? Alternatively, a way to get the "LOG stats" output to print information about the pool usage?

Unclear about how to connect Node app to pgbouncer

Let me first apologize for the vagueness of this question because I am slightly lost.

I am actually running a fork of this repo from a user who upgraded to 1.7.2 and removed the need for stunnel. That said, the logs when I run start-pgbouncer and start-pgbouncer-stunnel have the same output/error.

Essentially the crux of the error is my code can't connect to the DATABASE_URL any more.

I don't understand what my Node postgres connection code should be pointing at e.g. should I be using DATABASE_URL_PGBOUNCER?

Error: The server does not support SSL connections
App 138 stderr:     at Socket.<anonymous> (/app/node_modules/pg/lib/connection.js:88:33)

Default settings == ideal settings

Seen this a few times; I think customers feel that they should tweak the default settings if they are moving to different database sizes. We should add a blurb in the README explaining that the default settings are the ideal settings for all Heroku database sizes.

connection to RDS crashes after going idle

Hi,

I'm attempting to get pgbouncer to talk to an Amazon RDS postgres instance. I'm running unicorn + rails; here's the definition of my web worker in my Procfile:

web: bin/start-pgbouncer-stunnel bundle exec unicorn -p $PORT -c ./config/unicorn.rb

We've got the standard before_fork and after_fork connection closing/opening behavior implemented in unicorn.rb. I can get Rails to connect initially just fine, but if I leave a web dyno sitting for a while, the connection seems to get closed and then pgbouncer does not seem to be able to reconnect, even with PGBOUNCER_CONNECTION_RETRY set to yes. I see these messages in the logs:

2014-07-03T21:23:21.068170+00:00 app[web.1]: 2014-07-03 21:23:21.068 17 LOG S-0x1fe4370: DBNAME/DBUSER@unix:6101 new connection to server
2014-07-03T21:23:21.070594+00:00 app[web.1]: 2014.07.03 21:23:21 LOG5[18:140576198854400]: connect_blocking: connected 10.51.154.183:5432
2014-07-03T21:23:21.070600+00:00 app[web.1]: 2014.07.03 21:23:21 LOG5[18:140576198854400]: Service [DATABASE_URL] connected remote server from 172.16.132.138:42039
2014-07-03T21:23:21.077294+00:00 app[web.1]: 2014.07.03 21:23:21 LOG3[18:140576198854400]: SSL_connect: Peer suddenly disconnected
2014-07-03T21:23:21.077400+00:00 app[web.1]: 2014.07.03 21:23:21 LOG5[18:140576198854400]: Connection reset: 0 byte(s) sent to SSL, 0 byte(s) sent to socket
2014-07-03T21:23:21.077841+00:00 app[web.1]: 2014-07-03 21:23:21.077 17 LOG S-0x1fe4370: DBNAME/DBUSER@unix:6101 closing because: server conn crashed? (age=0)

I noticed that the tcp_keepalive settings on RDS were very different than the ones on Heroku Postgres, but I created a parameter group and updated them to match Heroku Postgres:

tcp_keepalives_count    | 9    | Maximum number of TCP keepalive retransmits.
tcp_keepalives_idle     | 7200 | Time between issuing TCP keepalives.
tcp_keepalives_interval | 75   | Time between TCP keepalive retransmits.

Are there any other server-side configuration changes I should look at that could be affecting this behavior?

Process dying silently

I've had a go at using this build pack to reduce pg connections today in a new staging set up. Unfortunately it's not smooth sailing. When I'm watching the worker process jobs that I'm queueing up, it sometimes stops processing. Nothing is logged except for the pgbouncer "Log stats" line. Nothing indicates that anything has gone wrong, yet a heroku restart will get things in motion again and jobs start streaming past. There's no rhyme or reason - sometimes it will process only a few dozen jobs, other times it will process a couple of thousand then stop. I'm guessing that the child process pgbouncer starts is dying and pgbouncer is unaware somehow. I'm not really sure where to even start debugging this. Any ideas?

My procfile declares our worker process as so

bin/start-pgbouncer-stunnel bundle exec sidekiq -C ./config/sidekiq.yml

The sidekiq.yml file sets us up with 25 concurrency / threads. With one worker and one web dyno, heroku pg:info shows between 2 and 3 connections, so it generally looks like pgbouncer is doing its thing.

PgBouncer should not run as root

While trying to deploy under Deis, the following error prevents pgbouncer from starting:

106 FATAL @src/main.c:736 in function main(): PgBouncer should not run as root

The fix is to add "-u nobody" to the end of the pgbouncer command-line. A better approach is to introduce a PGBOUNCER_OPTIONS variable and append that to the end of the pgbouncer line to allow Deis folks to config:set "-u nobody" if need be.

ENABLE_STUNNEL_AMAZON_RDS_FIX instructions are incorrect

@gregburek

Hi Greg,

In the following commit, you changed the code that processed the ENABLE_STUNNEL_AMAZON_RDS_FIX env variable, and you made the instructions on the README.md incorrect for this feature.

a8ac772

I would recommend reverting your changes to lines 15 and 16 in order to make that part of the generation script correct again. It's the safer option. More detail in my comments on that commit.

Thanks,

Ed

Accessing pgBouncer Admin Console

I have been trying to access the admin console for pgBouncer because I need some statistics from pgBouncer to help in optimizing the pooling, specifically in setting max_client_conn. Is there a way to access the admin console so I can use the SHOW commands?

Can't connect when missing port number in DATABASE_URL

I added an RDS-hosted postgres instance, and in the environment variable I didn't include the port number (i.e. postgres://user:[email protected]/), and I would get the following errors:

Apr 20 11:38:49 foobar-staging app/web.1: OperationalError: ERROR: No such user: u38142

Shortly after, I updated the environment variable to include the port number and then everything worked. The behavior is that the application never connects to postgres; it can be duplicated by just removing the port number from the URL you're connecting to.

It seems as though this should be documented for non-Heroku hosted instances, since Heroku kindly auto-populates DATABASE_URL with the port number for us.

Exit status doesn't seem to be propagating

On Heroku support ticket 532736, I noticed that the bin/start-pgbouncer-stunnel script exits with status zero even if its child process exited with an error status, and the platform marked the process "complete" even though the app failed to start the command. It would be best if the process exited with the status code received from the wait command.

Set prepared statements to false in after_fork for Unicorn

Sanity check here: the recommended setup in the README uses an initializer; however, the typical Unicorn setup will re-establish the connection in the after_fork block. Unless this line:

alias initialize_without_config_boolean_coercion initialize

does something I don't understand, I think Unicorn users will still be using prepared statements even with that initializer.

Loses settings with async workers

I've been trying this buildpack with the various servers we run, and although it runs fine with most synchronous webservers, it doesn't work well with forked processes or async workers.
The main reason for this is that forking will copy the original environment without the overwritten database URL.
Most of my async servers have a hook to call a function just before forking, though.
So my question is: is there part of the startup script I can call that will just initialize the environment?

Proper setup for rails app

I've followed your instructions, but I still got tons of errors like this:

ActiveRecord::ConnectionTimeoutError
could not obtain a database connection within 5.000 seconds (waited 5.041 seconds)

It seems like Rails is still using the database pool setting (which defaults to 5), so no matter how large I set PGBOUNCER_DEFAULT_POOL_SIZE to be, it's still going to error out after 5 connections are taken... or am I wrong?

Can you provide some guidelines on how to configure a Rails app to have this working properly? Should I just increase the Active Record pool size to some huge number and ignore it?

PGBOUNCER_LOG_CONNECTIONS and PGBOUNCER_LOG_DISCONNECTIONS should not log by default

If you use this buildpack alongside the nginx buildpack for an app that does not use persistent connections, the default PGBOUNCER_LOG_CONNECTIONS and PGBOUNCER_LOG_DISCONNECTIONS settings, plus the nginx and Heroku router entries, result in 4 log messages per request by default.

For an even moderately popular Heroku app, you end up storing twice the amount of data you otherwise would have.

While logging connections to a remotely hosted pgbouncer makes sense, doing it in this case is a great way to quickly and unintentionally overwhelm whatever logging infrastructure you have configured for your Heroku app.

Is pgbouncer really working?

We switched to pgbouncer a week ago and our setup is:

50 dynos using puma - each dyno has 2 workers and each worker has 5 threads - which means we can simultaneously process 10 requests per dyno. (aka at least 10 connections per dyno are necessary)

We are using database tier which gives us 400 connections.

I configured pgbouncer with following values:
PGBOUNCER_DEFAULT_POOL_SIZE = 7
PGBOUNCER_MAX_CLIENT_CONN = 200
PGBOUNCER_RESERVE_POOL_SIZE = 1 (by default)

to follow (PGBOUNCER_RESERVE_POOL_SIZE + PGBOUNCER_DEFAULT_POOL_SIZE) * number of dynos <= postgres max_connections

So in my case it would be (1 + 7)x50 <= 400 which fits

However when I check my postgres dashboard on heroku I can see we are still only using around 140 connections out of 400. All 50 dynos are running so I guess pgbouncer should have at least 350 connections opened?

In New Relic I can see that our database times doubled since we started to use pgbouncer (which makes sense since we are only using around 140 connections). Any idea why? Why is pgbouncer not using all available connections if the pool is set to 7? I tried setting the pool to 20 as a test and still nothing changed; the database still had only around 140 connections.

How does PGBouncer work with rails?

Hey, basic question... at a high level, how does adding bin/start-pgbouncer-stunnel to my Procfile before bundle exec puma -C config/puma.rb actually work, in terms of the database connections?

It's my understanding that Rails / Puma will use the ENV variables for database host/username/password, and just access it directly. Does PGBouncer somehow override those variables or something?

Amazon RDS with failover

Amazon RDS provides the "Multi-AZ" feature for high-availability/automatic failover. Unlike Heroku Postgres, their "DATABASE_URL" doesn't change when this event occurs (as far as I know) and they use internal routing to route new connections to the new master.

Does pgbouncer handle this situation, or do the dynos need to be restarted after failover occurs on RDS?

disable stunnel?

Hi,

Is there a way to disable stunnel? We would like to be able to disable SSL when needed.

Thanks

crashing b/c of missing paths after update

2014-04-25T04:57:15.566184+00:00 app[web.1]: Setting DATABASE_URL_PGBOUNCER config var
2014-04-25T04:57:15.608657+00:00 app[web.1]: buildpack=pgbouncer at=config-gen-end
2014-04-25T04:57:15.608657+00:00 app[web.1]: buildpack=pgbouncer at=config-gen-override DATABASE_URL
2014-04-25T04:57:15.636126+00:00 app[web.1]: buildpack=pgbouncer at=stunnel-launched pid=21 signal=SIGINT
2014-04-25T04:57:15.639193+00:00 app[web.1]: buildpack=pgbouncer at=stunnel-start
2014-04-25T04:57:15.639421+00:00 app[web.1]: buildpack=pgbouncer at=pgbouncer-launched pid=22 signal=SIGINT
2014-04-25T04:57:15.640155+00:00 app[web.1]: buildpack=pgbouncer at=pgbouncer-start
2014-04-25T04:57:15.641503+00:00 app[web.1]: buildpack=pgbouncer at=app-launched pid=24
2014-04-25T04:57:15.643681+00:00 app[web.1]: buildpack=pgbouncer at=app-start
2014-04-25T04:57:15.644798+00:00 app[web.1]: bin/start-pgbouncer-stunnel: line 91: vendor/stunnel/bin/stunnel: No such file or directory
2014-04-25T04:57:15.647217+00:00 app[web.1]: buildpack=pgbouncer at=stunnel-end
2014-04-25T04:57:15.647340+00:00 app[web.1]: buildpack=pgbouncer at=exit process=stunnel
2014-04-25T04:57:15.648665+00:00 app[web.1]: bin/start-pgbouncer-stunnel: line 91: vendor/pgbouncer/bin/pgbouncer: No such file or directory
2014-04-25T04:57:15.666017+00:00 app[web.1]: buildpack=pgbouncer at=kill-app pid=24
2014-04-25T04:57:15.666614+00:00 app[web.1]: buildpack=pgbouncer at=app-end
2014-04-25T04:57:15.667979+00:00 app[web.1]: buildpack=pgbouncer at=wait-app pid=24
2014-04-25T04:57:15.668524+00:00 app[web.1]: buildpack=pgbouncer at=kill-aux name=pgbouncer pid=22 signal=SIGINT
2014-04-25T04:57:15.668838+00:00 app[web.1]: buildpack=pgbouncer at=kill-aux name=stunnel pid=21 signal=SIGINT
2014-04-25T04:57:15.669175+00:00 app[web.1]: bin/start-pgbouncer-stunnel: line 61: kill: (21) - No such process
2014-04-25T04:57:15.670344+00:00 app[web.1]: bin/start-pgbouncer-stunnel: line 102: /tmp/pgbouncer-buildpack-wait: Interrupted system call
2014-04-25T04:57:15.670987+00:00 app[web.1]: bin/start-pgbouncer-stunnel: line 1: kill: (26) - No such process
2014-04-25T04:57:15.671185+00:00 app[web.1]: buildpack=pgbouncer at=pgbouncer-end

possibly just some weirdness on my end, but since this is gonna affect more people if it's true, just putting it out there ASAP and looking into it now.
-T

Allow postgresql:// db urls

Right now only postgres:// works and if you use postgresql:// you get the following error:
Error resolving '': Neither nodename nor servname known (EAI_NONAME)

This is especially confusing since SQLAlchemy recommends postgresql:// for db connections.

Inability to override settings on a per dyno basis

Greg,

We had assumed that we could do something like the following in our Procfile:

worker: env PGBOUNCER_DEFAULT_POOL_SIZE=2 bin/start-pgbouncer-stunnel env WORKER_CONCURRENCY=10 bundle exec resque-pool

Turns out, this doesn't work because the pgbouncer.ini is compiled at login via a .profile.d script. Any reason it's done there, and not as part of the start-pgbouncer-stunnel script?

We've currently got a Procfile line that looks like this, and it seems to be doing the right thing, but this is way too hackish:

worker: rm vendor/pgbouncer/pgbouncer.ini vendor/pgbouncer/users.txt vendor/stunnel/stunnel-pgbouncer.conf && env PGBOUNCER_DEFAULT_POOL_SIZE=2 .profile.d/gen-pgbouncer-conf.sh && bin/start-pgbouncer-stunnel env WORKER_CONCURRENCY=10 bundle exec resque-pool

Thoughts on how to do per-dyno settings in a cleaner way?

Can't start python shell with heroku run

I've been using the pgbouncer buildpack for a while and everything works as expected with my worker and web dynos. I was using the following command to access our python shell before I changed the buildpack:

heroku run python manage.py shell

I can't use that anymore since it doesn't start pgbouncer and hence the connection to the DB. I tried using the following command:

heroku run bin/start-pgbouncer-stunnel python manage.py shell

When I do that, the shell shows up and exits instantly. Here are the log lines:

Running `bin/start-pgbouncer-stunnel python manage.py shell` attached to terminal... up, run.8918
buildpack=pgbouncer at=config-gen-start
Setting DATABASE_URL_PGBOUNCER config var
buildpack=pgbouncer at=config-gen-end
Overriding DATABASE_URL config var
buildpack=pgbouncer at=stunnel-start
buildpack=pgbouncer at=pgbouncer-start
buildpack=pgbouncer at=app-start cmd=python manage.py shell
2014.04.08 17:11:51 LOG5[23]: stunnel 5.00 on x86_64-unknown-linux-gnu platform
2014.04.08 17:11:51 LOG5[23]: Compiled/running with OpenSSL 0.9.8k 25 Mar 2009
2014.04.08 17:11:51 LOG5[23]: Threading:PTHREAD Sockets:POLL,IPv6 SSL:ENGINE,OCSP
2014.04.08 17:11:51 LOG5[23]: Reading configuration from file vendor/stunnel/stunnel-pgbouncer.conf
2014.04.08 17:11:51 LOG5[23]: Configuration successful
2014-04-08 17:11:51.615 24 LOG File descriptor limit: 10000 (H:10000), max_client_conn: 1000, max fds possible: 1018
2014-04-08 17:11:51.619 24 LOG listening on 127.0.0.1:6000
2014-04-08 17:11:51.624 24 LOG listening on unix:/tmp/.s.PGSQL.6000
2014-04-08 17:11:51.624 24 LOG process up: pgbouncer 1.5.4, libevent 1.4.13-stable (epoll), adns: libc-2.11
Python 2.7.5 (default, May 17 2013, 06:45:09)
Type "copyright", "credits" or "license" for more information.

IPython 1.2.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]:
Do you really want to exit ([y]/n)?
buildpack=pgbouncer at=exit process=app
Sending SIGTERM to application
bin/start-pgbouncer-stunnel: line 64: kill: (22) - No such process
Sending SIGINT to pgbouncer
2014-04-08 17:11:54.072 24 LOG Got SIGINT, shutting down
Sending SIGINT to stunnel

Any idea how I can get back to using the python shell as well?
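One hedged workaround: a one-off interactive shell rarely needs connection pooling at all, so it may be simpler to bypass the wrapper and point the shell at the database directly. This assumes the original attachment URL is still available under a color-named config var (`HEROKU_POSTGRESQL_RED_URL` is just a placeholder for whatever `heroku pg:info` reports for your app).

```shell
heroku run 'env DATABASE_URL=$HEROKU_POSTGRESQL_RED_URL python manage.py shell'
```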

Heroku postgres followers

I'm currently using the Makara gem for read and write replication using Heroku Postgres followers. It would be great to add support for multiple databases.

Best practices for testing this buildpack locally?

I am struggling to find information on how other users of this buildpack are testing the setup locally.

I have pgbouncer set up on my dev machine (OS X, so I used brew to install it; pgbouncer is listening on port 6432). I am currently testing by pointing my DB connection at pgbouncer (just changing the port I connect on), but that doesn't exercise the stunnel piece.

Is there some way to better simulate the stunnel wrapper on a local dev machine? Or is trying to simulate the whole buildpack a step too far?

Figured it was worth creating an issue for this question in case others have it in the future...
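For what it's worth, a rough local stand-in for the dyno setup can be had with Homebrew's pgbouncer alone, pooling in transaction mode the way the buildpack does (the stunnel leg matters less locally, since a local Postgres connection isn't TLS-wrapped). The database name, paths, and pool size below are assumptions:

```shell
# Write a minimal transaction-pooling config and start pgbouncer against it.
cat > /tmp/pgbouncer.ini <<'EOF'
[databases]
myapp_dev = host=127.0.0.1 port=5432 dbname=myapp_dev

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
pool_mode = transaction
default_pool_size = 1
EOF
pgbouncer -d /tmp/pgbouncer.ini

# Point the app (or psql) at the pooled port instead of 5432.
psql "postgres://127.0.0.1:6432/myapp_dev" -c 'select 1'
```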

Readme issues

I see there is a version v0.2.2 in tags, but the readme still points to v0.2.1 under

$ cat .buildpacks
https://github.com/gregburek/heroku-buildpack-pgbouncer.git#v0.2.1

Possibly on purpose?

Under "Multiple Databases" the instructions say

bin/start-pgbench-stunnel env

but I think that is meant to be start-pgbouncer-stunnel.

Meaning of PGBOUNCER_DEFAULT_POOL_SIZE

I am confused about the meaning of this variable.

Does it set the overall pool of connections available to the database, or the pool of connections available per dyno?

Is the database.yml pool size used?

I understand how the PGBOUNCER_ env variables work, but what I'm missing is how they relate to the pool setting in database.yml (aka the DB_POOL env variable). I understand that other processes, like Sidekiq, could use this setting... but will Puma use it for anything?

Does not work with cedar-14 stack

When deploying with the cedar-14 stack, the following occurs:

Starting process with command `bin/start-pgbouncer-stunnel bundle exec puma -C config/puma.rb`
buildpack=pgbouncer at=start-app cmd=bundle exec puma -C config/puma.rb
buildpack=pgbouncer at=pgbouncer-start
buildpack=pgbouncer at=stunnel-start
vendor/stunnel/bin/stunnel: error while loading shared libraries:
libssl.so.0.9.8: cannot open shared object file: No such file or directory
buildpack=pgbouncer at=exit process=stunnel
State changed from starting to crashed

According to Heroku support, libssl 0.9.8 isn't included on cedar-14 and the stunnel bundled in the pgbouncer buildpack needs to be recompiled with that support.
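A quick way to confirm this from a one-off cedar-14 dyno is to ask the dynamic linker which libraries the bundled binary wants; the 0.9.8 line should show up as "not found" (path as in the crash log above):

```shell
heroku run 'ldd vendor/stunnel/bin/stunnel | grep libssl'
```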

undefined method `fields' for nil:NilClass

After installing the buildpack in production, one of my controllers throws this error when I try to save an entity to the DB with ActiveRecord. I have no idea why; it happens inside the save method.

Here is what I'm doing:

poll = Poll.first
@question = Question.first
@answer = @question.answers.new(answer_params)
@answer.poll = poll
@answer.save

The funny thing is that this problem never occurs in our test environments, and it sits in the most used part of our website.
We check the integrity of the user's data afterward, and everything was created.
Any help? It raised around 15k errors in 3 hours...
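Not a diagnosis, but `undefined method 'fields' for nil:NilClass` under transaction pooling is a classic symptom of named prepared statements being routed to a backend connection that never prepared them. If prepared statements haven't been disabled yet, the config var this buildpack documents is the first thing to try:

```shell
heroku config:set PGBOUNCER_PREPARED_STATEMENTS=false
```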

A bit confused on config

Hey,

I have a very basic question: I'm a bit confused about what values to use. Here is my current setup in production, and I'm trying to add PGBouncer (with as few problems as possible, hence my posting here :)

I'm using Rails 5.0.1. My app deals with finance and I don't trust that it is thread-safe, so unfortunately everything is 1 thread, as you will see below.

Heroku Postgres instance:

  • 500 max connections

My Procfile:

  • worker (PGB Disabled) - 20x dynos, 1 worker, 1 thread, 1 connection per dyno
  • clock (PGB Disabled) - 1x dyno, 1 worker, 1 thread, 1 connection total
  • web - 8x PL dynos, 50 rails instances per dyno, 1 thread, 1 connection

So in total, without any PGBouncer configuration, it's 20 + 1 + (50 * 8) = 421 connections (not taking any buffer into account).

How should I use these numbers to configure PGBouncer? Thanks!

Here is my database.yml

default: &default
  adapter: postgresql
  encoding: unicode
  pool: 1
  prepared_statements: false

puma.rb

workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = 1
threads threads_count, threads_count
...

initializers/que.rb

Que.disable_prepared_statements = true # PGBouncer
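A hedged back-of-envelope, assuming PGBouncer pools server connections per dyno (total server-side connections ≈ web dynos × PGBOUNCER_DEFAULT_POOL_SIZE), with the worker and clock dynos connecting directly. The shell variable names below are illustrative, not buildpack settings:

```shell
WEB_DYNOS=8        # PL dynos running the web process
POOL_SIZE=20       # candidate PGBOUNCER_DEFAULT_POOL_SIZE per web dyno
WORKER_CONNS=20    # 20 worker dynos, 1 direct connection each
CLOCK_CONNS=1      # clock dyno, 1 direct connection

TOTAL=$(( WEB_DYNOS * POOL_SIZE + WORKER_CONNS + CLOCK_CONNS ))
echo "$TOTAL"      # 181 server connections, well under the 500 cap
```

Each web dyno would still accept its 50 local clients; only 20 at a time would hold a real server connection, and the pool size would be set via `heroku config:set PGBOUNCER_DEFAULT_POOL_SIZE=20`.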
