markt-de / puppet-galera
Puppet Module to setup Galera/XtraDB clusters on MySQL/MariaDB
License: BSD 2-Clause "Simplified" License
...because the installation of "xtrabackup" requires the "percona" repo to be enabled, which is not the case if the vendor is "galera" or "mariadb".
The keyserver keys.gnupg.net is not available anymore and key has changed for percona ubuntu
https://www.percona.com/blog/2016/10/13/new-signing-key-for-percona-debian-and-ubuntu-packages/
There's no point forcing really old application versions on users that are evaluating this module.
Existing users should specify the required/expected version. Everyone should do this in a production environment.
Some refactoring :)
Hi,
I get some errors using this class in combination with puppetlabs/mysql:
the root user gets created with an empty password and does not allow login with the configured password, only with an empty one. There is also an error on clustercheck, and the /var/log/mariadb folder doesn't get created.
Distributor ID: CentOS
Description: CentOS Linux release 7.7.1908 (Core)
Release: 7.7.1908
Codename: Core
puppet --version 5.5.16
mysql Ver 14.14 Distrib 5.7.27, for Linux (x86_64) using EditLine wrapper
Always did a clean setup via Cobbler.
My node config:
class { 'galera':
galera_servers => ['xxx.xxx.xxx.xxx', 'xxx.xxx.xxx.xxx', 'xxx.xxx.xxx.xxx'],
galera_master => 'poc-db01',
create_root_user => 'true',
root_password => 'pa$$w0rd',
status_password => 'pa$$w0rd',
vendor_type => 'codership',
local_ip => $facts['networking']['ip'],
configure_firewall => true,
mysql_port => 3306,
wsrep_state_transfer_port => 4444,
wsrep_inc_state_transfer_port => 4568,
wsrep_group_comm_port => 4567,
override_options => {
'mysqld' => {
'bind_address' => '0.0.0.0',
},
},
}
If vendor_type is "codership" or "mariadb", the "/var/log/mariadb" folder is not created.
I added this code to init.pp to prevent the puppet run from failing:
.........
service_name => $params['mysql_service_name'],
restart => $mysql_restart,
}
if (($vendor_type == 'codership') or ($vendor_type == 'mariadb')) {
  file { '/var/log/mariadb':
    ensure  => 'directory',
    owner   => 'mysql',
    group   => 'mysql',
    require => Class['mysql::server::install'],
    before  => Class['mysql::server::installdb'],
  }
}
file { $rundir:
ensure => directory,
...........
}
Next error message:
Error: Could not prefetch mysql_user provider 'mysql':
Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
I can connect to mysql via mysql -uroot -p: it asks for a password, I just press enter and I'm in.
(root@localhost) [mysql]> select * from user where user='root'\G;
*************************** 1. row ***************************
Host: localhost
User: root
plugin: mysql_native_password
...........
authentication_string:    <-- is empty
password_expired: N
password_last_changed: 2019-10-06 19:58:59
password_lifetime: NULL
account_locked: N
1 row in set (0.00 sec)
ERROR:
No query specified
(root@localhost) [mysql]>
If I set the password manually:
(root@localhost) [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('pa$$w0rd') WHERE User = 'root' AND Host = 'localhost';
Query OK, 1 row affected, 1 warning (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 1
(root@localhost) [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
(root@localhost) [(none)]> quit
Bye
It then runs until clustercheck and ends with a different error; manual login as user "clustercheck" works too:
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]/returns: mysql: [Warning] Using a password on the command line interface can be insecure.
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]/returns: ERROR 1045 (28000): Access denied for user 'clustercheck'@'localhost' (using password: YES)
Error: /Stage[main]/Galera::Validate/Exec[validate_connection]: Failed to call refresh: 'mysql --host=localhost --user=clustercheck --password=pa$$w0rd -e 'select count(1);'' returned 1 instead of one of [0]
Error: /Stage[main]/Galera::Validate/Exec[validate_connection]: 'mysql --host=localhost --user=clustercheck --password=pa$$w0rd -e 'select count(1);'' returned 1 instead of one of [0]
Notice: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Dependency Exec[validate_connection] has failures: true
Warning: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Skipping because of failed dependencies
Info: Stage[main]: Unscheduling all events on Stage[main]
The password hash for "clustercheck" is the same as for "root":
(root@localhost) [(none)]> select user,host,authentication_string from mysql.user;
+---------------+-----------+-------------------------------------------+
| user | host | authentication_string |
+---------------+-----------+-------------------------------------------+
| root | localhost | *A32931B9B0D4C7CF9173CAA42B07BD2EA453EF94 |
| mysql.session | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
| mysql.sys | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
| clustercheck | % | *A32931B9B0D4C7CF9173CAA42B07BD2EA453EF94 |
| clustercheck | localhost | *A32931B9B0D4C7CF9173CAA42B07BD2EA453EF94 |
+---------------+-----------+-------------------------------------------+
5 rows in set (0.00 sec)
(root@localhost) [(none)]>
I also debugged down to mysql::server::root_password (root_password.pp):
mysql::server::create_root_user == true
mysql::server::root_password is set
Did I miss something, or is this a bug?
Kind regards
I'm guessing this broke in #106, moving from params.pp to Hiera. It looks like the override hash is used directly from the Hiera structure, and $bind_address and $local_ip are never merged into the options hash. The lookup() function will only look up other values in Hiera, not values passed to the class (which I'm guessing is what happened here).
I believe what needs to be done is something similar to $_wsrep_cluster_address: add the values to the mysql_deepmerge() before $override_options in init.pp, so that values passed into the class take priority over the defaults (they can probably be removed from the defaults hash in the Hiera data at that point), but not over anything in the override options hash.
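A sketch of the intended merge order (the $_param_options hash and its contents are assumptions for illustration; mysql_deepmerge() and $_wsrep_cluster_address are names from the module):

```puppet
# Sketch only: later arguments win, so Hiera defaults are overridden by
# values passed to the class, and the user's override hash wins overall.
$_param_options = {
  'mysqld' => {
    'bind_address'       => $bind_address,
    'wsrep_node_address' => $local_ip,
  },
}
$options = mysql_deepmerge($_default_options, $_wsrep_cluster_address, $_param_options, $override_options)
```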
Hi Frank.
There is no concept of a "master" that can be pre-assigned and always used to bootstrap the cluster.
There is one node in the cluster which can be considered safe to bootstrap. This is the last node that was alive in the cluster, and it can be determined by grepping as follows:
grep safe_to_bootstrap ~mysql/grastate.dat
safe_to_bootstrap: 0
If it's 1, you can bootstrap from that node.
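That check can be scripted, for example (a sketch; the grastate.dat path is an assumption, it normally lives in the MySQL datadir):

```shell
# Returns 0 (success) if the given grastate.dat marks this node as
# safe to bootstrap the cluster from.
safe_to_bootstrap() {
  grep -q '^safe_to_bootstrap: 1' "$1" 2>/dev/null
}

# Typical use on a node (path is an assumption):
# safe_to_bootstrap /var/lib/mysql/grastate.dat && echo "OK to bootstrap"
```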
Of course you can bootstrap from any node, but the risk of losing data is considerably high, depending on how long the last node was alive while the other nodes were down!
For your reference: Galera Safe To Bootstrap
Hope it helps!
@michaeltchapman I'd love to relicense the module under the 2-Clause BSD license. It's the license I'm using in most other projects. No warranty/liability/trademark issues. Please let me know if you agree.
The way /usr/local/bin/clustercheck is called by xinetd, the password of the MySQL user is visible on the command line (via 'ps -ef') while the script runs. But passwords should never be visible on the command line.
In my opinion, a better way would be to generate a dedicated 'my.somehow.cnf' somewhere in the filesystem, include the MySQL host, user and password as key-value pairs in the [client] section of that file, restrict access so that only the clustercheck user can read it, and call mysql in the clustercheck script with this file as the '--defaults-extra-file' parameter instead of passing host, user and password separately.
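Such a file could look like this (a sketch; the path and the placeholder password are assumptions):

```ini
# /etc/mysql/clustercheck.cnf (hypothetical path), mode 0400,
# owned by the user that runs clustercheck.
[client]
host     = localhost
user     = clustercheck
password = CHANGEME
```

The script would then call mysql --defaults-extra-file=/etc/mysql/clustercheck.cnf -e 'SELECT 1;' with no password on the command line.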
In the status manifest, every line that uses mysql_password() fails with an unknown function error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Unknown function: 'mysql_password'. (file: /etc/puppetlabs/code/environments/production/modules/galera/manifests/status.pp, line: 42, column: 24)
The same happens on line 27.
This is on CentOS, by the way.
We have a cluster that was set up using an earlier version of this module. When upgrading to 1.0.3 we noticed that xtrabackup-v2 was no longer an allowed value for the wsrep_sst_method parameter.
I believe the change makes sense when running Percona 5.7, but on 5.6, using just xtrabackup breaks our state transfers.
Could xtrabackup-v2 still be allowed when using vendor_version: 5.6?
validate.pp
status.pp
is working as expected (hints: clustercheck, xinetd)
If I run puppet agent, it fails with:
Error: Failed to apply catalog: Found 1 dependency cycle:
(Exec[clean_up_ubuntu] => Class[Mysql::Server::Config] => File[/etc/mysql/conf.d] => Class[Mysql::Server::Config] => Class[Mysql::Server::Install] => Package[mysql-server] => Exec[clean_up_ubuntu])
Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz
My configuration is
class { 'galera':
status_password => $db_cluster['status_password'],
root_password => $root_password,
galera_servers => $db_cluster['nodes'],
galera_master => $db_cluster['master'],
local_ip => $servers[$::fqdn]['ip'],
vendor_type => 'percona',
vendor_version => '5.6',
wsrep_sst_method => 'xtrabackup-v2',
configure_firewall => false,
override_options => {
mysqld => {
wsrep_cluster_name => $db_cluster['group_name'],
wsrep_provider_options => '"gcache.size=8192M;socket.checksum=1;"',
skip_name_resolve => true,
innodb_flush_method => 'O_DIRECT',
innodb_log_files_in_group => 2,
innodb_log_file_size => '64M',
innodb_flush_log_at_trx_commit => 1,
innodb_file_per_table => 1,
innodb_buffer_pool_size => '592M',
tmp_table_size => '32M',
max_heap_table_size => '32M',
max_connections => 500,
thread_cache_size => 50,
open_files_limit => 65535,
table_definition_cache => 4096,
table_open_cache => 4096,
max_allowed_packet => '16M',
max_connect_errors => 1000000,
key_buffer_size => '32M',
log_bin => '/var/lib/mysql/mysql-bin',
expire_logs_days => 14,
sync_binlog => 1,
bind_address => '0.0.0.0'
}
}
}
I'm using the master branch.
Optional[] for optional parameters (verify!)
Hi,
Starting with Debian 9 (Stretch), and Debian Unstable (Sid), and Ubuntu 16.04 LTS (Xenial), the id of our GPG public key is 0xF1656F24C74CD1D8. The full key fingerprint is: 177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
(see Installing MariaDB)
On the master node, mysql is initially started with:
service mysql start --wsrep_cluster_address=gcomm://
Unfortunately the init script on Debian/Ubuntu does not allow extra parameters to be passed to it, so it fails to start MySQL. I have to start it by hand now to get the rest of the cluster running.
I'm running into
because other modules already installed nmap. It should be possible to skip nmap installation.
The correct URL is e.g. http://yum.mariadb.org/10.3/rhel7-amd64, not http://yum.mariadb.org/10.3/redhat7-amd64, i.e. using the os_name_lc fact is not correct for this OS.
Hi,
At the end of init.pp, the bootstrap_galera_cluster exec initializes the cluster with the command defined in params.pp. The issue is that under Debian Stretch we get:
$bootstrap_command = 'service mysql start --wsrep_cluster_address=gcomm://'
when really, it should be:
$bootstrap_command = '/usr/bin/galera_new_cluster'
After fixing params.pp, it worked for me with the default mariadb-server and galera-3 packages. FYI, here's the way I called the galera class:
class { 'galera':
galera_servers => $all_masters_ip,
galera_master => $first_master,
mysql_package_name => 'mariadb-server',
vendor_type => 'mariadb',
root_password => 'myPaSs',
status_password => 'myPaSs2',
configure_repo => false,
configure_firewall => false,
galera_package_name => 'galera-3',
}
which worked after fixing $bootstrap_command, but not by default.
Not completely urgent, but something small that should be looked at.
Basically, the question here is: should the current configuration be replicated, or should galera only open the ports it needs? If I'm not mistaken, only the destination port needs to be opened specifically.
Once that is decided, it's a small change to fix.
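If only the destination ports are opened, a minimal sketch could look like this (assuming the puppetlabs-firewall module; the rule titles are arbitrary and parameter names vary between firewall module versions):

```puppet
# Open only the ports Galera actually listens on: MySQL, SST,
# incremental state transfer, and group communication.
[3306, 4444, 4568, 4567].each |$port| {
  firewall { "100 allow galera tcp ${port}":
    proto  => 'tcp',
    dport  => $port,
    action => 'accept',
  }
}
```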
Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
The error occurs while using the following node definition:
class{'galera':
galera_servers => ['192.168.1.33','192.168.1.34','192.168.1.35','192.168.1.36'],
galera_master => '192.168.1.33',
vendor_type => 'mariadb',
status_password => 'mariadb',
bind_address => $::ipaddress_enp1s0,
}
If I uncomment the root_password section, the entire thing fails as well.
Hi,
We have a little request regarding the development process of your excellent puppet module, which we are currently using: could you push release tags to git when you publish a new version on GitHub?
Without this information, we get all the modifications since our last install, and sometimes that breaks things :D like today.
If you use a versioning scheme like semver.org, we can rely on a precise version and migrate to a new one with more confidence.
Thanks a lot for the time you give to the community.
With release 8.0.0 of puppetlabs/mysql the function mysql::deepmerge was dropped among others since Puppet 5 (and 6) can do this without Ruby.
puppetlabs/puppetlabs-mysql#1145
https://github.com/puppetlabs/puppetlabs-mysql/blob/master/CHANGELOG.md#800-2019-01-18
It should be something like:
$options = $_default_options.deep_merge($_wsrep_cluster_address.deep_merge($override_options))
I'm not sure whether the data type function can merge multiple hashes like the old one, so I cascaded it instead.
AFAIK they are working on a wrapper function to still support calls to mysql::deepmerge. This should be changed either way; at the moment this module is incompatible with puppetlabs/mysql 8.0.0.
Are there any plans to upgrade to Percona 5.6? I need a FULLTEXT index for InnoDB, which is available in 5.6!
I forked a new branch where I introduced the vendor_type "percona56" and edited the files. I made it only for Debian systems. How would you do it: introduce a new parameter like "version", or a new "vendor_type" as I did?
Can you change the status.pp file so that augeas checks for both TCP and the port, as on CentOS 6 both a TCP and a UDP service already exist and therefore nothing gets added.
The following works for me, but I haven't checked to see what happens if there is TCP only, UDP only or no service on port 9200.
augeas { 'mysqlchk':
context => '/files/etc/services',
changes => [
"set /files/etc/services/service-name[port='9200' and protocol='tcp']/port 9200",
"set /files/etc/services/service-name[port='9200' and protocol='tcp'] mysqlchk",
"set /files/etc/services/service-name[port='9200' and protocol='tcp']/protocol tcp",
],
}
The variable deb_sysmaint_password in /etc/mysql/debian.cnf is empty, as its scope in the template file seems to be wrong:
password = <%= $deb_sysmaint_password %>
I guess it should be (at least this works):
password = <%= $galera::deb_sysmaint_password %>
The scope handling in manifests/debian.pp is no longer needed afterwards.
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install mariadb-galera-server-5.5' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
Package mariadb-galera-server-5.5 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
mariadb-server-10.0:i386 mariadb-server-10.0
E: Package 'mariadb-galera-server-5.5' has no installation candidate
Error: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install mariadb-galera-server-5.5' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
Package mariadb-galera-server-5.5 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
mariadb-server-10.0:i386 mariadb-server-10.0
E: Package 'mariadb-galera-server-5.5' has no installation candidate
OS: CentOS 7.7.1908
Puppet version: 5.5.10
Puppet-galera version: 1.0.6
init.pp
class { 'galera':
galera_servers => ['xxx.xxx.xxx.xxx','xxx.xxx.xxx.xxx', 'xxx.xxx.xxx.xxx'],
galera_master => 'pxc1.test.domain',
root_password => 'passw0rd',
status_password => 'passw0rd',
configure_firewall => false,
vendor_type => 'percona',
}
puppet agent output:
Info: Applying configuration version '1574841653'
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install Percona-XtraDB-Cluster-server-57' returned 1: Transaction check error:
file /usr/lib64/galera3/libgalera_smm.so from install of Percona-XtraDB-Cluster-server-57-5.7.27-31.39.1.el7.x86_64 conflicts with file from package Percona-XtraDB-Cluster-galera-3-3.36-1.el7.x86_64
Error Summary
-------------
Error: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: change from 'purged' to 'present' failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install Percona-XtraDB-Cluster-server-57' returned 1: Transaction check error:
file /usr/lib64/galera3/libgalera_smm.so from install of Percona-XtraDB-Cluster-server-57-5.7.27-31.39.1.el7.x86_64 conflicts with file from package Percona-XtraDB-Cluster-galera-3-3.36-1.el7.x86_64
Error Summary
-------------
Notice: /Stage[main]/Galera/File[/var/run/mysqld]: Dependency Package[mysql-server] has failures: true
Warning: /Stage[main]/Galera/File[/var/run/mysqld]: Skipping because of failed dependencies
Warning: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]: Skipping because of failed dependencies
Warning: /Stage[main]/Galera::Status/Xinetd::Service[mysqlchk]/File[/etc/xinetd.d/mysqlchk]: Skipping because of failed dependencies
Warning: /Stage[main]/Xinetd/Service[xinetd]: Skipping because of failed dependencies
Notice: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]: Dependency Mysql_datadir[/var/lib/mysql] has failures: true
Warning: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]: Skipping because of failed dependencies
Warning: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Skipping because of failed dependencies
Warning: /Stage[main]/Galera/Exec[create /root/.my.cnf]: Skipping because of failed dependencies
Warning: /Stage[main]/Mysql::Server::Service/Exec[wait_for_mysql_socket_to_open]: Skipping because of failed dependencies
Warning: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Skipping because of failed dependencies
Notice: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]: Dependency Mysql_user[root@localhost] has failures: true
Warning: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]: Skipping because of failed dependencies
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]: Dependency Mysql_user[clustercheck@%] has failures: true
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]: Dependency Mysql_grant[clustercheck@%/*.*] has failures: true
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]: Dependency Mysql_user[clustercheck@localhost] has failures: true
Notice: /Stage[main]/Galera::Validate/Exec[validate_connection]: Dependency Mysql_grant[clustercheck@localhost/*.*] has failures: true
Warning: /Stage[main]/Galera::Validate/Exec[validate_connection]: Skipping because of failed dependencies
Warning: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Skipping because of failed dependencies
Error: Could not find a suitable provider for mysql_datadir
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for mysql_grant
Notice: Applied catalog in 5.98 seconds
Starting with module version 1.0, the default value of $galera_servers is specified in module data. This makes it difficult to replace the default value, possibly breaking some configurations.
For example, when migrating such setups from module version 0.7.x to 1.0 the following undesired changes may be introduced:
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content:
--- /etc/my.cnf 2018-08-10 23:08:52.166942987 +0200
+++ /tmp/puppet-file20181101-19421-1a7wci9 2018-11-01 22:42:25.060436453 +0100
@@ -75,7 +75,7 @@
tmpdir = /tmp
transaction-isolation = READ-COMMITTED
user = mysql
-wsrep_cluster_address = gcomm://node1.example.com,node2.example.com,node3.example.com/
+wsrep_cluster_address = gcomm://10.10.10.1,node1.example.com,node2.example.com,node3.example.com/
In this case the IP address of "node1" was added to the list, because the default value is $facts['networking']['ip'].
Previous module versions assigned the default value directly in the class parameter and thus did not have this issue, because values were not merged, but simply replaced.
The module seems to get itself into a dependency cycle when run against centos7:
==> control1: Error: Could not apply complete catalog: Found 1 dependency cycle:
==> control1: (Anchor[mysql::server::start] => Class[Mysql::Server::Install] => Package[mysql-server] => Class[Mysql::Server::Install] => File[/var/run/mariadb] => Class[Galera::Repo] => Class[Mysql::Server] => Anchor[mysql::server::start])
I can't get my head around the various anchors in the init script to figure out why this occurs. What's weird is that this doesn't happen when I test against centos6.
It would be nice to have support for Ubuntu 14.04 and MariaDB 10.x (including the Vagrant file).
Hi,
There seems to be an issue with init.pp on CentOS 7: nc does not support the -z option on that platform. My workaround is to replace the "onlyif" check for bootstrap_galera_cluster with:
"ret=1; for i in ${server_list}; do nc \$i ${wsrep_group_comm_port} < /dev/null; if [ \"\$?\" = \"0\" ]; then ret=0; fi; done; /bin/echo \$ret | /bin/grep 1 -q"
which seems to work on my system.
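Unescaped, that workaround amounts to the following (a sketch; host list and port are placeholders):

```shell
# Exit 0 (so the bootstrap exec runs) only if none of the peers
# answer on the group communication port.
no_peer_reachable() {
  port="$1"; shift
  for host in "$@"; do
    # Plain nc without -z: connect, then send EOF immediately.
    if nc -w 2 "$host" "$port" < /dev/null 2>/dev/null; then
      return 1  # a peer is up, do not bootstrap
    fi
  done
  return 0
}
```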
On CentOS 7 puppetlabs-mysql will default to MariaDB, which results in some MariaDB-specific configurations/paths, even when using Percona XtraDB or Codership Galera Cluster.
This is mostly a cosmetic issue, the MySQL server is still fully functional.
We may overwrite some puppetlabs-mysql parameters to fix this.
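One way to do that could look like this (a sketch only; the paths are assumptions, not the module's actual defaults, and they are passed through the galera class's override_options):

```puppet
# Sketch: point MariaDB-style defaults at MySQL-style paths.
class { 'galera':
  # ... other parameters as in the examples above ...
  override_options => {
    'mysqld' => {
      'log-error' => '/var/log/mysqld.log',
      'pid-file'  => '/var/run/mysqld/mysqld.pid',
    },
  },
}
```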
I changed my code similarly to PR 61 (#61) and all seems well.
Until, as part of galera::init, it tries to run exec { "create ${::root_home}/.my.cnf" } from what I can tell, which then calls mysql::root_password and its "remove install pass" resource. Later, mysql::server::root_password tries to set the root password (as I passed it to the galera module in my manifest), which seems to fail.
Here's the debug output from the puppet run below. The way I fixed it was to do a manual password recovery, additionally passing the --wsrep-new-cluster option (per http://dev.mysql.com/doc/refman/5.6/en/resetting-permissions.html), and setting the password that I told the galera manifest I wanted to use. Puppet's next run then does all the other DB creation I configured and everything seems happy. This only breaks on the initial puppet run of a node.
So it seems that the root password is not being set when it should be.
A snippet from my mysql manifest:
class { 'galera':
galera_servers => ['10.0.0.2,10.0.0.3,10.0.0.4'],
galera_master => 'firstnode.example.com',
local_ip => $::ipaddress_eth0,
root_password => 'MyPassword',
configure_repo => false,
configure_firewall => true,
wsrep_sst_method => 'xtrabackup-v2',
validate_connection => false,
status_check => false,
}
First run output:
Debug: Executing: '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' percona-xtrabackup'
Debug: Executing: '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install percona-xtrabackup'
Notice: /Stage[main]/Galera/Package[percona-xtrabackup]/ensure: created
Debug: /Package[percona-xtrabackup]: The container Class[Galera] will propagate my refresh event
Debug: Executing: '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' percona-xtradb-cluster-galera-3.x'
Debug: Executing: '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install percona-xtradb-cluster-galera-3.x'
Notice: /Stage[main]/Galera/Package[percona-xtradb-cluster-galera-3.x]/ensure: created
Debug: /Package[percona-xtradb-cluster-galera-3.x]: The container Class[Galera] will propagate my refresh event
Debug: Executing: '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install percona-xtradb-cluster-server-5.6'
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: created
Info: /Package[mysql-server]: Scheduling refresh of Exec[clean_up_ubuntu]
Debug: /Package[mysql-server]: The container Class[Mysql::Server::Install] will propagate my refresh event
Debug: Class[Mysql::Server::Install]: The container Stage[main] will propagate my refresh event
Debug: Execclean_up_ubuntu: Executing 'service mysql stop'
Debug: Executing: 'service mysql stop'
Notice: /Stage[main]/Galera::Debian/Exec[clean_up_ubuntu]: Triggered 'refresh' from 1 events
Debug: /Stage[main]/Galera::Debian/Exec[clean_up_ubuntu]: The container Class[Galera::Debian] will propagate my refresh event
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/ensure: defined content as '{md5}6bd09508c53bb37b90ad3c08f3969a92'
Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event
Debug: Class[Mysql::Server::Config]: The container Stage[main] will propagate my refresh event
Debug: Execbootstrap_galera_cluster: Executing check '["/bin/sh", "-c", "nmap -p 4567 10.0.0.1 10.0.0.2 10.0.0.3 | grep -q '4567/tcp open'"]'
Debug: Executing: '/bin/sh -c nmap -p 4567 10.0.0.1 10.0.0.2 10.0.0.3 | grep -q '4567/tcp open''
Debug: Execbootstrap_galera_cluster: Executing '["/bin/sh", "-c", "/etc/init.d/mysql bootstrap-pxc"]'
Debug: Executing: '/bin/sh -c /etc/init.d/mysql bootstrap-pxc'
Notice: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]/returns: executed successfully
Debug: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]: The container Class[Galera] will propagate my refresh event
Notice: /Stage[main]/Mysql::Server::Service/File[/var/log/mysql/error.log]/group: group changed 'adm' to 'mysql'
Debug: /Stage[main]/Mysql::Server::Service/File[/var/log/mysql/error.log]: The container Class[Mysql::Server::Service] will propagate my refresh event
Debug: Servicemysqld: Could not find mysql.conf in /etc/init
Debug: Servicemysqld: Could not find mysql.conf in /etc/init.d
Debug: Servicemysqld: Could not find mysql in /etc/init
Debug: Executing: '/etc/init.d/mysql status'
Debug: Execcreate /root/.my.cnf: Executing check '/usr/bin/mysql --user=root --password=pa$$word -e 'select count(1);''
Debug: Executing: '/usr/bin/mysql --user=root --password=pa$$word -e 'select count(1);''
Debug: /Stage[main]/Galera/Exec[create /root/.my.cnf]/onlyif: Warning: Using a password on the command line interface can be insecure.
Debug: /Stage[main]/Galera/Exec[create /root/.my.cnf]/onlyif: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Debug: Class[Galera]: The container Stage[main] will propagate my refresh event
Debug: Class[Mysql::Server::Service]: The container Stage[main] will propagate my refresh event
Debug: Execremove install pass: Executing check 'test -f /.mysql_secret'
Debug: Executing: 'test -f /.mysql_secret'
Debug: Prefetching mysql resources for mysql_user
Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'
Debug: Storing state
Debug: Stored state in 0.04 seconds
Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Using release 0.7.1 of the module and with mysql module version 5.4.0, with these settings in hiera:
galera::galera_package_name: 'galera-3'
galera::mysql_package_name: 'mariadb-server'
galera::vendor_type: 'mariadb'
galera::vendor_version: '10.3'
galera::repo::apt_mariadb_repo_key_server: 'keyserver.ubuntu.com'
galera::repo::apt_mariadb_repo_key: '177F4010FE56CA3336300305F1656F24C74CD1D8'
galera::repo::apt_mariadb_repo_location: "http://mirror.aarnet.edu.au/pub/MariaDB/repo/%{lookup('galera::vendor_version')}/ubuntu"
I get a dependency cycle:
(File[/etc/mysql/conf.d] => Class[Mysql::Server::Config] => Package[rsync] => Class[Mysql::Server::Install] => Package[mysql-server] => Class[Mysql::Server::Install] => Class[Mysql::Server::Config] => File[/etc/mysql/conf.d])
The core of this to me looks like:
mysql::server::config => package installs => mysql::server::install
and
mysql::server::install => mysql::server::config
I solved this by changing manifests/init.pp to remove require => Class['mysql::server::config'] from both the additional_packages and galera_package_name package installations.
Does this change make sense as a solution to the problem? I could submit a PR to do so.
It looks like a recent commit (a9c6bce) has introduced a dependency cycle.
Error: Failed to apply catalog: Found 1 dependency cycle:
(File[/etc/mysql/conf.d] => Class[Mysql::Server::Config] => Package[rsync] => Class[Mysql::Server::Install] => Package[mysql-server] => Class[Mysql::Server::Install] => Class[Mysql::Server::Config] => File[/etc/mysql/conf.d])
Hi, I am trying to install galera on Ubuntu 14.04:
class { 'galera':
status_password => 'hs4jk2t',
galera_servers => ['10.0.0.11', '10.0.0.12', '10.0.0.13'],
galera_master => 'controller1.openstacklocal',
local_ip => '10.0.0.11',
bind_address => '10.0.0.11',
mysql_port => '3306',
wsrep_group_comm_port => '4567',
wsrep_state_transfer_port => '4444',
wsrep_inc_state_transfer_port => '4568',
wsrep_sst_method => 'rsync',
root_password => 'hs4jk2t',
override_options => {},
vendor_type => 'mariadb',
configure_repo => true,
configure_firewall => false,
deb_sysmaint_password => 'abcdefghijkl',
mysql_restart => true,
package_ensure => 'installed',
}
Applying the manifest, I get an empty password in /etc/mysql/debian.cnf:
ubuntu@controller1:~$ sudo cat /etc/mysql/debian.cnf
[client]
host = localhost
user = debian-sys-maint
password =
socket = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host = localhost
user = debian-sys-maint
password =
socket = /var/run/mysqld/mysqld.sock
basedir = /usr
ubuntu@controller1:~$
As a result, I keep getting this trying to start the service:
ubuntu@controller1:~$ sudo /etc/init.d/mysql start
* Starting MariaDB database server mysqld
Nov 15 14:35:18 controller1 mysqld: 151115 14:35:18 [Warning] Access denied for user 'debian-sys-maint'@'localhost' (using password: NO)
Nov 15 14:35:21 controller1 mysqld: 151115 14:35:21 [Warning] Access denied for user 'debian-sys-maint'@'localhost' (using password: NO)
...done.
ubuntu@controller1:~$
Any ideas?
I have seen in some other modules that the GPG keys get updated; as I don't see that here and have the following issue on CentOS, I would like to ask if we need this as well:
Debug: Executing: '/usr/bin/yum -d 0 -e 0 -y install Percona-XtraDB-Cluster-shared-compat-57'
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install Percona-XtraDB-Cluster-shared-compat-57' returned 1: warning: /var/cache/yum/x86_64/7/galera_percona/packages/Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 8507efa5: NOKEY
Public key for Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm is not installed
Importing GPG key 0xCD2EFD2A:
Userid : "Percona MySQL Development Team <[email protected]>"
Fingerprint: 430b df5c 56e7 c94e 848e e60c 1c4c bdcd cd2e fd2a
From : http://www.percona.com/downloads/percona-release/RPM-GPG-KEY-percona
Public key for Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm is not installed
Failing package is: Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64
GPG Keys are configured as: http://www.percona.com/downloads/percona-release/RPM-GPG-KEY-percona
Error: /Stage[main]/Galera::Repo/Package[Percona-XtraDB-Cluster-shared-compat-57]/ensure: change from purged to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install Percona-XtraDB-Cluster-shared-compat-57' returned 1: warning: /var/cache/yum/x86_64/7/galera_percona/packages/Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 8507efa5: NOKEY
Public key for Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm is not installed
Importing GPG key 0xCD2EFD2A:
Userid : "Percona MySQL Development Team <[email protected]>"
Fingerprint: 430b df5c 56e7 c94e 848e e60c 1c4c bdcd cd2e fd2a
From : http://www.percona.com/downloads/percona-release/RPM-GPG-KEY-percona
Public key for Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64.rpm is not installed
Failing package is: Percona-XtraDB-Cluster-shared-compat-57-5.7.25-31.35.1.el7.x86_64
GPG Keys are configured as: http://www.percona.com/downloads/percona-release/RPM-GPG-KEY-percona
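The key referenced in the log (0xCD2EFD2A, fetched from percona.com/downloads) is the retired signing key; Percona has since published a new packaging key (see the blog post linked above). A hedged sketch of pointing the repo at the new key via a yumrepo resource follows; the repo name matches the resource in the log, but the gpgkey URL is an assumption you should verify against Percona's documentation:

```puppet
# Sketch only: override the GPG key of the existing 'galera_percona' repo.
# Verify the key URL against Percona's current docs before applying.
yumrepo { 'galera_percona':
  ensure   => present,
  gpgcheck => 1,
  gpgkey   => 'https://repo.percona.com/yum/PERCONA-PACKAGING-KEY',
}
```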
As the title says, it appears that $status_password is a required parameter for this class, but the README examples omit it.
Hi,
I am trying to install MySQL + Galera with Puppet on
192.168.207.138 db1.test
192.168.207.139 db2.test
192.168.207.140 db3.test
a) In /etc/puppetlabs/code/environments/production/modules, I used git clone https://your.git galera to create the module.
b) puppet module list shows the entry 'michaeltchapman-galera (v0.5.0)'. There is actually a warning for 'puppetlabs-mysql (v3.6.2)', since your requirement is >3.8.0.
c) I changed site.pp to:
node default {
  class { 'galera':
    galera_servers  => ['192.168.207.139', '192.168.207.140'],
    galera_master   => 'db1.test',
    vendor_type     => 'mariadb',
    status_password => 'mariadb',
  }
}
d) Then I ran 'puppet agent --test' on db1.test, but it failed with:
(File[mysql-config-file] => Class[Mysql::Server::Config] => Package[rsync] => Class[Mysql::Server::Install] => Package[mysql-server] => Class[Mysql::Server::Install] => Class[Mysql::Server::Config] => File[mysql-config-file])
Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz
It should be noted that this class depends on puppetlabs-xinetd by default, namely when status_check and validate_connection are set to true. The README does not reflect this. Setting both parameters to false removes the xinetd dependency:
status_check => false,
validate_connection => false,
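For reference, a minimal class declaration with both checks disabled might look like the following (server addresses, master name, and password are placeholders, not values from this module's README):

```puppet
# With both checks disabled, no xinetd resources should be declared,
# so puppetlabs-xinetd is not pulled in. All values below are placeholders.
class { 'galera':
  galera_servers      => ['192.0.2.11', '192.0.2.12', '192.0.2.13'],
  galera_master       => 'db1.example.com',
  status_password     => 'changeme',
  status_check        => false,
  validate_connection => false,
}
```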
Hello,
I've got a problem with bootstrapping the cluster.
The dependency on Class['mysql::server::installdb'] is not found.
Where should it be declared?
And secondly, on Debian jessie the bootstrap command
service mysql start --wsrep_cluster_address=gcomm://
doesn't work; I have to use
/etc/init.d/mysql start --wsrep_cluster_address=gcomm://
instead. It would be helpful to make this command configurable via a parameter.
Thanks
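If the module exposed the bootstrap command as a parameter, a SysV-style override on jessie could be sketched as an exec resource like the one below. This is purely illustrative: the resource title and the 'unless' guard are assumptions, not the module's actual implementation.

```puppet
# Hypothetical sketch: bootstrap via the SysV init script on Debian jessie.
# The module hardcodes its own bootstrap exec; this only illustrates the idea.
exec { 'bootstrap_galera_cluster_sysv':
  command => '/etc/init.d/mysql start --wsrep_cluster_address=gcomm://',
  path    => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
  unless  => 'mysqladmin --defaults-extra-file=/etc/mysql/debian.cnf status',
}
```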
The module is unable to bootstrap a new cluster if the data directory does not exist:
Notice: /Stage[main]/Galera::Repo/Yumrepo[galera_percona]/ensure: created
Notice: /Stage[main]/Galera::Repo/Yumrepo[galera_epel]/ensure: created
Notice: /Stage[main]/Galera::Repo/Package[Percona-XtraDB-Cluster-shared-compat-57]/ensure: created
Notice: /Stage[main]/Galera/Package[percona-xtrabackup-24]/ensure: created
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: created
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/ensure: defined content as '{md5}63be06247231f5789304856324bcf3b4'
Notice: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]/ensure: created
Notice: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]/returns: executed successfully
Info: /Stage[main]/Galera/Exec[bootstrap_galera_cluster]: Scheduling refresh of Exec[validate_connection]
Error: Systemd start for mysql failed!
Error: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: change from 'stopped' to 'running' failed: Systemd start for mysql failed!
Notice: /Stage[main]/Galera/Exec[create /root/.my.cnf]: Dependency Service[mysqld] has failures: true
In this case, mysql::server::installdb notifies Class['mysql::server::service'], so as soon as Exec[bootstrap_galera_cluster] does its thing, it breaks.
When trying to install this module I get the following:
puppet module install fraenki-galera
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Error: Could not install module 'fraenki-galera' (???)
No version of 'fraenki-galera' can satisfy all dependencies
Use 'puppet module install --ignore-dependencies' to install only this module.
Any idea how to fix this ?