voxpupuli / puppet-redis
Puppet Module to manage Redis
Home Page: https://forge.puppet.com/puppet/redis
License: Apache License 2.0
It is common practice to run more than one Redis instance per host. We should move config.pp into a defined type so we can create multiple instances listening on different ports.
Thoughts?
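A hypothetical sketch of what such a defined type could look like; the name redis::instance and its parameters are illustrative only, not part of the module today:

```puppet
# Hypothetical sketch: one config file and one service per instance,
# keyed on the resource title. Paths and template name are assumptions.
define redis::instance (
  $port,
  $config_dir = '/etc/redis',
  $data_dir   = "/var/lib/redis/${title}",
) {
  file { "${config_dir}/redis-${title}.conf":
    ensure  => file,
    content => template('redis/redis.conf.erb'),
  }

  service { "redis-${title}":
    ensure => running,
    enable => true,
  }
}

# Usage: two instances on one host, on different ports.
redis::instance { 'cache':    port => 6380 }
redis::instance { 'sessions': port => 6381 }
```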
We're in the process of adding your puppet-redis module to openstack-puppet-modules and have decided it would be handy to have sentinel support in this module.
I'm happy to do this work myself and submit a pull-request but I wanted to check first on the following:
Thanks.
Config from version 1.2.3 doesn't start Redis service successfully anymore
The following section of a correct configuration file requires 64mb as the default, not 64min:
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
I found this problem in two areas:
https://github.com/arioch/puppet-redis/search?utf8=%E2%9C%93&q=64min
Hey,
is there a reason why the redis conf for RedHat systems is placed under:
/etc/redis.conf
instead of
/etc/redis/redis.conf?
To me it seems a little messy having it sit directly in that folder.
Thank you and kind regards,
Tobias
// ,
err: Could not retrieve catalog from remote server: Error 400 on SERVER: undefined method `<' for :undef:Symbol at /etc/puppet/modules/redis/manifests/preinstall.pp:10 on node redis1
Perhaps it's missing a fact, or something, but this seems like a bug.
This may be related to CentOS 6.5 using an older version of Facter, as here:
http://serverfault.com/questions/528878/puppet-facter-how-to-determine-if-running-cent-6-or-cent-5
Hi Arioch,
As soon as puppet runs, it overwrites redis.conf and triggers a restart.
Am I missing something? I'm not sure how to tell puppet not to overwrite redis.conf every time.
-dir "/var/lib/redis"
+dir /var/lib/redis/
################################# REPLICATION #################################
@@ -388,7 +390,7 @@
auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 0
+auto-aof-rewrite-min-size 64min
################################ LUA SCRIPTING ###############################
@@ -526,6 +528,3 @@
include /etc/redis/redis-custom.conf
-# Generated by CONFIG REWRITE
-slaveof x.x.x.x 6379
-min-slaves-to-write 1
Info: /Stage[main]/Redis::Config/File[/etc/redis/redis.conf]: Filebucketed /etc/redis/redis.conf to main with sum 24086ae26aeb2008676489bf49a48260
Debian Wheezy installs redis-server 2.4.14, whose service never starts.
If you try to start the service you get the following error:
$ sudo service redis-server start
Starting redis-server:
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 110
'stop-writes-on-bgsave-error yes'
Bad directive or wrong number of arguments
failed
$ dpkg -l |grep redis
ii redis-server 2:2.4.14-1 amd64 Persistent key-value database with network interface
Most software nowadays defaults to binding to the local IP (127.0.0.1). Don't you think this is a good/safe road to take here as well?
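A minimal sketch of the proposed safer default, using the module's existing bind parameter; the value shown is the proposal, not the current default:

```puppet
# Bind only to loopback unless the operator opts in to something wider.
class { 'redis':
  bind => '127.0.0.1',
}
```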
Cheers
It'd be grand if the ulimit parameter could be extended to set the nofile limits in:
The former is for pure systemd OSes that do not make use of /etc/default/ and the latter is for when redis is invoked via a cluster manager (e.g. pacemaker, rgmanager)
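An illustrative sketch (all paths and the limit value are assumptions) of how a systemd drop-in raising the open-file limit could be managed with plain core resources, for the pure-systemd case:

```puppet
# Drop-in directory and override file for the redis unit; the 65536
# value is only an example.
file { '/etc/systemd/system/redis.service.d':
  ensure => directory,
}

file { '/etc/systemd/system/redis.service.d/limits.conf':
  ensure  => file,
  content => "[Service]\nLimitNOFILE=65536\n",
  notify  => Exec['redis-systemd-reload'],
}

# systemd must re-read unit files before the override takes effect.
exec { 'redis-systemd-reload':
  command     => '/bin/systemctl daemon-reload',
  refreshonly => true,
}
```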
Right now, if you are using RHEL/CentOS and you set the manage_repo
option, you are forced to use two hardcoded repos (depending on your major version). There should be a way, like with Ubuntu, to specify a custom repo.
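A sketch of one possible escape hatch, assuming the operator disables the module's repo management and supplies their own yumrepo; the repo name and URL are placeholders:

```puppet
# Let the module skip repo handling entirely...
class { 'redis':
  manage_repo => false,
}

# ...and point yum at an internal mirror instead (URL is illustrative).
yumrepo { 'internal-redis':
  baseurl  => 'http://mirror.example.com/redis/el$releasever/$basearch/',
  enabled  => 1,
  gpgcheck => 0,
  before   => Class['redis'],
}
```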
On CentOS 6, EPEL ships Redis 2.4, not 2.6. The configs are not backwards compatible.
You need to check out release 0.0.2 of this module in order to get it working on CentOS 6 with Redis 2.4.
At least it does not work out of the box...
It says:
Starting redis-server:
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 108
'stop-writes-on-bgsave-error yes'
or
Reading the configuration file, at line 122
'rdbchecksum yes'
2.4.14-1 is the latest package for Debian.
Perhaps you should drop 2.4.x support?
EPEL on CentOS 6 provides Redis 2.4.10 whereas it appears that the tcp-keepalive feature was added in Redis 2.8.
So I get this when I start Redis:
[root@centos-66-x64 ~]# /etc/init.d/redis start
Starting redis-server:
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 55
>>> 'tcp-keepalive 0'
Bad directive or wrong number of arguments
[FAILED]
When trying to install sentinel (class { 'redis::sentinel': }) on Ubuntu 14.04, the following error appears:
Error: /Stage[main]/Redis::Sentinel/Service[redis-sentinel]: Could not find init script or upstart conf file for 'redis-sentinel'
I would expect that there would be an init or upstart script in place that would start the sentinel service.
Building a template for the sentinel service should probably do the trick.
Thank you for an awesome module!
Hi again,
Maybe it would be better if the default config_owner for Redis were 'redis'. redis.conf must be writable by Redis or Sentinel when a failover happens (I'm not sure which of the two handles this action) and config rewrite runs. In the module, config_owner is currently 'root' for all OS types, which causes a "CONFIG REWRITE failed: Permission denied" error at failover time. It also seems that the Redis RPMs do this by default.
No variable substitution on:
https://github.com/arioch/puppet-redis/blob/master/manifests/sentinel.pp#L174
Because the variable was enclosed within single quotes.
I will make a PR to solve this.
// , I want to do a trial adoption of this EXCELLENT Puppet module on some of our OpenStack machines.
Also, I may, later, want to run this on some legacy systems (I'm looking at you, CentOS 6...).
InfoSec may raise an eyebrow at powerstack.org.
If they do, I might want to switch out the http://download.powerstack.org/6/ for an internal repo.
Should I just edit the https://github.com/arioch/puppet-redis/blob/master/manifests/preinstall.pp file, or is there a better way to switch this?
I can draft a pull request to make this more flexible, if desired.
Can we get a forge release?
The current Travis config takes 20 minutes; let's slim down the matrix!
Currently the installation of the server does not work properly on the AWS Linux AMI due to the preinstall.pp's usage of $::operatingsystemrelease.
Using the 2013.09 AMI, running
facter operatingsystem operatingsystemrelease
returns
operatingsystem => Amazon
operatingsystemrelease => 3.4.73-64.112.amzn1.x86_64
This results in the URL and GPG keys defaulting to the "Fail" option. After this, yum install falls back to the default EPEL repo, which has redis-server 2.4.10, and ends up failing on the "service redis start" command due to a configuration issue.
Hello,
I came across a problem when I used this module to manage both redis and sentinel on the same node. On RHEL, both need the redis package, so $::redis::package_name in install.pp and $package_name in sentinel.pp are the same. This results in a duplicate declaration:
install.pp:
ensure_resource('package', $::redis::package_name, {
'ensure' => $::redis::package_ensure
})
sentinel.pp:
ensure_resource('package', $package_name, {
'ensure' => $package_ensure
})
I worked around it by wrapping them in unless defined, like:
install.pp:
unless defined(Package["$::redis::package_name"]) {
ensure_resource('package', $::redis::package_name, {
'ensure' => $::redis::package_ensure
})
}
sentinel.pp:
unless defined(Package["$package_name"]) {
ensure_resource('package', $package_name, {
'ensure' => $package_ensure
})
}
I think applying redis and sentinel to the same node is common behavior, so this should be fixed.
Thanks for the nice module, by the way :)
powerstack.org appears to have been taken down. Are there other providers of the RPMs for RHEL-based distros?
Any chance of this module supporting Ubuntu 16.04?
Puppet 3.X is EOL
So, we want to upgrade this module to have Puppet 4 support.
I worked around this, but it really should be part of the setup.
Ubuntu 12.04
package{'python-software-properties':
ensure => installed
} ->
package{'software-properties-common':
ensure => installed
} ->
class { 'redis':
manage_repo => true,
...
}
I've tried adding this to my Modulefile, also tried calling it in my Puppetfile but librarian-puppet always returns "Cannot resolve the dependencies". Is there a problem with this module on the forge?
Hi, please consider adding an option to configure redis access by unix socket. Currently the config templates disable the standard /var/run/redis/redis.sock and don't give an option to enable it. In a common use case of webserver->redis on the same server, socket access is faster.
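A hedged sketch of what enabling this could look like, assuming the module grew unixsocket/unixsocketperm parameters mapping to the redis.conf directives of the same names (both parameter names are assumptions here):

```puppet
# Expose Redis over a unix socket for same-host clients such as a webserver.
class { 'redis':
  unixsocket     => '/var/run/redis/redis.sock',
  unixsocketperm => '0770',
}
```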
On new minimal install of CentOS 7 I'm getting the following error during puppet runs:
Error: Failed to apply catalog: Parameter baseurl failed on Yumrepo[powerstack]: Validate method failed for class baseurl: bad URI(is not URI?): Fail[Operating at /etc/puppet/environments/development/modules/common/redis/manifests/preinstall.pp:28
puppet-3.8.1-1.el7.noarch
Thanks
If you install sentinel, you are also running a redis instance, and you cannot parameterize it.
I think that you should be able to run only a redis (cluster, standalone, master-slaves) and/or a redis sentinel on the same/different instances. I'm going to do a PR to solve that. The target is that you will be able to choose:
Bests,
After commit c450499, a bunch of parameters in redis.conf are left without any value, for example:
syslog-facility
masterauth
...
If parameter is not passed:
I'm using following puppet manifest:
class { '::redis':
bind => '0.0.0.0',
port => 6379,
appendonly => true,
daemonize => false,
unixsocket => false,
}
With puppet-redis before commit c450499, it worked fine. After this commit, Redis doesn't start with the generated redis.conf.
Any plan for cluster support in 3.0.0?
Also, when using a cluster, might it be useful to run a master/slave on each node?
I'm not sure how you'd handle running a slave of a master (not the master instance on that node), but that could be useful for creating a fully redundant cluster with no data loss.
I'm not really sure how to classify this. I'm using librarian puppet to install the module from forge. Afterwards I see:
$ git status modules/redis
On branch redis
Untracked files:
(use "git add <file>..." to include in what will be committed)
modules/redis/
nothing added to commit but untracked files present (use "git add" to track)
$ git status modules/redis/
On branch redis
Untracked files:
(use "git add <file>..." to include in what will be committed)
modules/redis/Gemfile
modules/redis/LICENSE
modules/redis/README.md
modules/redis/Rakefile
modules/redis/checksums.json
modules/redis/manifests/
modules/redis/metadata.json
modules/redis/spec/
modules/redis/templates/
nothing added to commit but untracked files present (use "git add" to track)
$ git add modules/redis/
$ git status modules/redis
On branch redis
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: modules/redis/Gemfile
new file: modules/redis/LICENSE
new file: modules/redis/README.md
new file: modules/redis/Rakefile
new file: modules/redis/checksums.json
new file: modules/redis/manifests/config.pp
new file: modules/redis/manifests/init.pp
new file: modules/redis/manifests/install.pp
new file: modules/redis/manifests/params.pp
new file: modules/redis/manifests/preinstall.pp
new file: modules/redis/manifests/sentinel.pp
new file: modules/redis/manifests/service.pp
new file: modules/redis/metadata.json
new file: modules/redis/spec/acceptance/nodesets/default.yml
new file: modules/redis/spec/acceptance/redis_spec.rb
new file: modules/redis/spec/classes/redis_sentinel_spec.rb
new file: modules/redis/spec/classes/redis_spec.rb
new file: modules/redis/spec/fixtures/manifests/site.pp
new file: modules/redis/spec/fixtures/modules/apt
new file: modules/redis/spec/fixtures/modules/epel
new file: modules/redis/spec/fixtures/modules/stdlib
new file: modules/redis/spec/spec.opts
new file: modules/redis/spec/spec_helper.rb
new file: modules/redis/spec/spec_helper_acceptance.rb
new file: modules/redis/templates/redis-sentinel.conf.erb
new file: modules/redis/templates/redis-sentinel.init.erb
new file: modules/redis/templates/redis.conf.erb
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
(commit or discard the untracked or modified content in submodules)
modified: modules/redis/spec/fixtures/modules/stdlib (modified content)
$ cd modules/redis/spec/fixtures/modules/stdlib/
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
deleted: spec/fixtures/modules/stdlib/lib
deleted: spec/fixtures/modules/stdlib/manifests
no changes added to commit (use "git add" and/or "git commit -a")
Don't know how those two were deleted. Any ideas?
This might be a result of my misunderstanding, but using the service_user keyword, I'm only able to modify the owner of the redis-sentinel process, not the redis-server process. I can see this from the compiled /etc/init.d files. If I try to run redis-server, I get an error because the logfile is owned by the user I specified in service_user, but the server process is run as the service_user default (redis:redis). Is there a plan to add support for this in redis-server?
My Vagrantfile calls upon configure_cache.pp, which calls a custom module redis.pp:
class package::redis {
class { 'redis::install':
redis_version => '3.2.0',
}
}
Note: I attempted to install redis via an older suggestion of yours.
However, the following traceback occurs when vagrant runs configure_cache.pp:
...
==> default: Running provisioner: puppet...
==> default: Running Puppet with environment development...
==> default: Error: Evaluation Error: Error while evaluating a Resource Statement, Could not find declared class redis at /tmp/vagrant-puppet/modules-f71316dd467cca918424590c4186206a/package/manifests/redis.pp:6:5 on node drupal-demonstration.com
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
At the moment it's not possible to disable the bind parameter in the configuration file, and it's also not possible to specify multiple bind addresses using the module. So if you want your server to listen on every available network interface, you're out of luck...
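An illustrative sketch of the feature being requested here; array support for bind is the proposal, not something the module currently accepts:

```puppet
# Proposed: accept an array and emit one bind line per address,
# or accept undef to omit the directive entirely (listen everywhere).
class { 'redis':
  bind => ['127.0.0.1', '10.0.1.2'],
}
```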
I propose bringing the configuration options up to date with https://github.com/antirez/redis/blob/3.0/redis.conf.
Basically, this is not up to date enough with the latest configuration options to support aof-load-truncated yes and aof-rewrite-incremental-fsync yes, plus some additional cluster settings.
I noticed a lack of anything about aof-load-truncated in the repo.
In most cases, I think people will set this to 'yes'.
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# Cluster Slave Validity Factor
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
#1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
#2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
cluster-slave-validity-factor 0
# cluster-require-full-coverage
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
#2015.10.19 n8 We don't want a partially working cluster. This will make
# diagnostics confusing, i.e. "it works for my test user"
cluster-require-full-coverage yes
May I make a pull request to add this feature, or is it unnecessary for reasons of which I'm not aware?
I ran into this, recently:
[beta1redis213] out: 17662:M 28 Aug 12:39:14.595 # Can't save in background: fork: Cannot allocate memory
Turns out it's a common enough problem that someone put it in the Redis FAQ, and somebody stuck a solution for it in their Puppet module for Redis:
Redis FAQ on overcommit: http://redis.io/topics/faq#background-saving-is-failing-with-a-fork-error-under-linux-even-if-i39ve-a-lot-of-free-ram
https://github.com/giosakti/puppet-redis/blob/master/manifests/overcommit.pp
Is this over-engineering, or is it worth looking in to?
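A minimal sketch of the linked workaround using only core resource types (no sysctl module assumed); file path and resource titles are illustrative:

```puppet
# Persist vm.overcommit_memory=1 so BGSAVE forks can succeed even when
# free memory is low...
file { '/etc/sysctl.d/90-redis-overcommit.conf':
  ensure  => file,
  content => "vm.overcommit_memory = 1\n",
  notify  => Exec['apply-redis-overcommit'],
}

# ...and apply it immediately on change, rather than waiting for a reboot.
exec { 'apply-redis-overcommit':
  command     => '/sbin/sysctl -w vm.overcommit_memory=1',
  refreshonly => true,
}
```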
Would it be possible to get a bit more explanation or examples on how to setup a 3 node redis 3.0 cluster? I'm trying to read through various tutorials online and they all show manual commands that must be ran to get the cluster started.
What's there is kind of confusing:
class { 'redis':
bind => '10.0.1.2',
appendonly => true,
cluster_enabled => true,
cluster_config_file => 'nodes.conf',
cluster_node_timeout => 5000,
}
I understand that bind is the ip of the node but do we still need to specify "slaveof" so that the two "slave" nodes connect to the master?
If we add "class { 'redis::sentinel':}" do we still need to add the redis class?
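A hedged sketch of one answer, under the assumption that in Redis Cluster mode master/slave roles are assigned by redis-trib rather than by slaveof, so every node can carry essentially the same manifest (parameter names taken from the example above; $ipaddress is the standard facter fact):

```puppet
# Same class on all three nodes; no slaveof needed in cluster mode.
class { 'redis':
  bind                 => $ipaddress,
  appendonly           => true,
  cluster_enabled      => true,
  cluster_config_file  => 'nodes.conf',
  cluster_node_timeout => 5000,
}

# The cluster itself is then formed once, by hand (hypothetical addresses):
#   redis-trib.rb create --replicas 1 \
#     10.0.1.2:6379 10.0.1.3:6379 10.0.1.4:6379
```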
The init script for RedHat says
REDIS_CONFIG="/etc/redis.conf"
while you specify /etc/redis/redis.conf here: https://github.com/arioch/puppet-redis/blob/master/manifests/params.pp#L73
The $daemonize param defaults to false on RedHat-based OSes.
This causes service redis start to hang indefinitely.
PR incoming to fix the default value on RedHat OSes.
On CentOS, ::operatingsystemrelease is 7.1.1503, so it doesn't match the predicate defined in preinstall.pp, resulting in the EPEL repository not being configured.
Hello. I'm trying to use create_resources() or ensure_resource() with this module and I'm running into "resource type 'redis' not found" errors. I'm able to instantiate the module using class { 'redis': } but not with those two functions. Is this a known limitation? Thanks
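A sketch of the likely cause: 'redis' is a class, not a defined type, so those functions have to be told to declare a class resource; the parameter values below are illustrative:

```puppet
# Declare the class through create_resources by naming the 'class' type.
create_resources('class', {
  'redis' => {
    'bind' => '127.0.0.1',
    'port' => 6379,
  },
})

# Or equivalently with ensure_resource:
ensure_resource('class', 'redis', { 'bind' => '127.0.0.1' })
```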
I'm new to Puppet, so I apologize in advance if I am not doing something correctly. I'm attempting to use the module through Puppet Enterprise. I'd like to deploy a Redis cluster to several nodes and am attempting to make the bind variable map to $ipaddress in the Node Management console in Enterprise.
When I do this, the module installs as bind 127.0.0.1 in redis.conf. If I perform an "rpm -e redis" on the node, then re-run puppet-agent -t on the node, it will correct the bind variable in redis.conf to the real IP address. The problem is that after I do this, I can no longer start redis. Just looking for guidance.
Thanks!
I can't seem to find this anywhere, but how should I handle things if I want to install a specific version of Redis that's crucial to my server setup?
// ,
Our deployment team is getting a lot of these:
30287:M 04 Dec 07:11:37.785 # Can't open /etc/redis.d/nodes-eth3.conf in order to acquire a lock: Permission denied
30332:M 04 Dec 07:11:56.693 # Can't open /etc/redis-conf/nodes-eth3.conf in order to acquire a lock: Permission denied
We then have to create a nodes-eth3.conf file manually.
Does puppet-redis handle the creation and permission of the file specified by cluster_config, or do we have to create that separately?
After commit edd7cb5, I seem to be getting undef params in the redis.conf file, like:
syslog-facility undef
slaveof undef
masterauth undef
requirepass undef
maxmemory undef
maxmemory-policy undef
maxmemory-samples undef
include undef
Hi.
While reading the module's code, I was wondering why you don't use puppetlabs-inifile instead of relying on exec { "cp -p" }?
You could call ini_setting (or create_ini_settings) with section => undef and key_val_separator => ' '.
What do you think?
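A sketch of the suggestion, assuming puppetlabs-inifile is available; since redis.conf has no [section] headers, an empty section with a space separator approximates its key-value format (setting name and value here are only examples):

```puppet
# Manage a single redis.conf directive idempotently via inifile,
# instead of templating or copying the whole file.
ini_setting { 'redis maxmemory':
  ensure            => present,
  path              => '/etc/redis/redis.conf',
  section           => '',
  setting           => 'maxmemory',
  value             => '1gb',
  key_val_separator => ' ',
}
```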
With redis-cluster it looks like it's a common use pattern to invoke multiple instances of redis on the same server, do this on multiple servers, and let redis-trib distribute the masters and slaves across these instances.
But this puppet module is a singleton class so can't be invoked more than once (unless I've misunderstood, which is entirely possible), perhaps it can be altered to work as a parameterised defined type instead so multiple instances can be invoked - and you get the benefits of hiera etc baked in as well.
This is a bit of a showstopper with cluster.
On Ubuntu 14.04, redis-server has an init script that does chown redis:redis $RUNDIR $PIDFILE (where RUNDIR=/var/run/redis and PIDFILE=$RUNDIR/redis-server.pid) at startup. This causes puppet to "fix" the ownership of /var/run/redis (to redis:root unless config_owner and config_group have been set differently), which triggers a restart of redis-server, which changes the ownership of /var/run/redis, and so on forever.
Is it possible to tell puppet not to care about the owner or group of /var/run/redis beyond the initial creation?