
chef-gluster's Introduction

gluster Cookbook


This cookbook installs and configures Gluster on both servers and clients. It makes several assumptions when configuring Gluster servers:

  1. This cookbook is being run on at least two nodes; the exact number depends on the Gluster volume type
  2. A second physical disk has been added, unformatted, to the server. This cookbook will install LVM and configure the disks automatically.
  3. All peers for a volume will be configured with the same number of bricks

Platforms

This cookbook has been tested on:

  • Ubuntu 14.04
  • Ubuntu 16.04
  • CentOS 6.8
  • CentOS 7.2

As this cookbook uses Semantic Versioning, major version bumps are not backwards compatible. In particular, the change from v4 to v5 requires a rebuild of the Gluster nodes.

Attributes

gluster::default

  • node['gluster']['version'] - version to install, defaults to 3.8
  • node['gluster']['repo'] - repo to install from; can be public or private, defaults to public. The private setting requires a so-called "private" repository to be configured elsewhere, for example in a wrapper cookbook.
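
For example, a hypothetical wrapper cookbook could pin these in its attributes/default.rb (a sketch; the values shown are placeholders):

# attributes/default.rb of a hypothetical wrapper cookbook
default['gluster']['version'] = '3.8'
# 'private' skips the public repository setup; you are then expected to
# configure the repository yourself, e.g. with an apt_repository or
# yum_repository resource elsewhere in the wrapper cookbook
default['gluster']['repo'] = 'private'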

gluster::client

Node attributes to specify volumes to mount. This has been deprecated in favor of the gluster_mount LWRP; the deprecated form is sketched after the list below for reference.

  • node['gluster']['client']['volumes'][VOLUME_NAME]['server'] - server to connect to
  • node['gluster']['client']['volumes'][VOLUME_NAME]['backup_server'] - name of the backup volfile server to mount the client. When the first volfile server fails, then the server specified here is used as volfile server and is mounted by the client. This can be a String or Array of Strings.
  • node['gluster']['client']['volumes'][VOLUME_NAME]['mount_point'] - mount point to use for the Gluster volume
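
For reference, the deprecated attribute-driven mounts looked roughly like this (a sketch; the volume name, hostnames, and mount point are placeholders):

# Deprecated: prefer the gluster_mount resource shown later in this README
default['gluster']['client']['volumes']['gv0']['server'] = 'gluster1.example.com'
default['gluster']['client']['volumes']['gv0']['backup_server'] = 'gluster2.example.com'
default['gluster']['client']['volumes']['gv0']['mount_point'] = '/mnt/gluster/gv0'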

gluster::server

Node attributes to specify server volumes to create

The absolute minimum configuration is:

  • node['gluster']['server']['disks'] - an array of disks to create partitions on and format for use with Gluster, (for example, ['/dev/sdb', '/dev/sdc'])
  • node['gluster']['server']['volumes'][VOLUME_NAME]['peers'] - an array of FQDNs for peers used in the volume
  • node['gluster']['server']['volumes'][VOLUME_NAME]['volume_type'] - the volume type to use; this value can be 'replicated', 'distributed-replicated', 'distributed', 'striped' or 'distributed-striped'
  • node['gluster']['server']['volumes'][VOLUME_NAME]['size'] - The size of the gluster volume you would like to create, for example, 100M or 5G. This is passed through to the lvm cookbook and uses the syntax defined here: https://github.com/chef-cookbooks/lvm.
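
Putting those together, a minimal server configuration might look like this (a sketch; the disk path, volume name, hostnames, and size are placeholders):

default['gluster']['server']['disks'] = ['/dev/sdb']
default['gluster']['server']['volumes']['gv0']['peers'] = ['gluster1.example.com', 'gluster2.example.com']
default['gluster']['server']['volumes']['gv0']['volume_type'] = 'replicated'
default['gluster']['server']['volumes']['gv0']['size'] = '5G'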

gluster::geo_replication

Node attributes to specify mountbroker details.

  • node['gluster']['mountbroker']['path'] - The mountbroker path. Defaults to /var/mountbroker-root. This does not need to exist beforehand.
  • node['gluster']['mountbroker']['group'] - The mountbroker group. Defaults to geogroup. This will be created as a system group if it does not already exist.
  • node['gluster']['mountbroker']['users'] - A hash of users to volumes for allowing access. Empty by default. Multiple volumes can be given as an array. Neither the user nor the volume needs to exist beforehand. Removing entries does not drop access rights; this must be done manually or via the custom resource.
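
A sketch of these attributes together (the user and volume names are placeholders; per the users attribute above, multiple volumes may be given as an array):

default['gluster']['mountbroker']['path'] = '/var/mountbroker-root'
default['gluster']['mountbroker']['group'] = 'geogroup'
# user => volume(s) to allow access to
default['gluster']['mountbroker']['users'] = {
  'geoaccount' => ['gv0', 'gv1']
}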

Other attributes include:

  • node['gluster']['server']['enable'] - enable or disable server service (default enabled)
  • node['gluster']['server']['server_extend_enabled'] - enable or disable server extending support (default enabled)
  • node['gluster']['server']['brick_mount_path'] - default path to use for mounting bricks
  • node['gluster']['server']['disks'] - an array of disks to create partitions on and format for use with Gluster, (for example, ['/dev/sdb', '/dev/sdc'])
  • node['gluster']['server']['peer_retries'] - attempt to connect to peers up to N times
  • node['gluster']['server']['peer_retry_delays'] - number of seconds to wait between attempts when initially connecting to peers
  • node['gluster']['server']['volumes'][VOLUME_NAME]['allowed_hosts'] - an optional array of IP addresses to allow access to the volume
  • node['gluster']['server']['volumes'][VOLUME_NAME]['peer_names'] - an optional array of Chef node names for peers used in the volume
  • node['gluster']['server']['volumes'][VOLUME_NAME]['peers'] - an array of FQDNs for peers used in the volume
  • node['gluster']['server']['volumes'][VOLUME_NAME]['quota'] - an optional disk quota to set for the volume, such as '10GB'
  • node['gluster']['server']['volumes'][VOLUME_NAME]['replica_count'] - the number of replicas to create
  • node['gluster']['server']['volumes'][VOLUME_NAME]['volume_type'] - the volume type to use; this value can be 'replicated', 'distributed-replicated', 'distributed', 'striped' or 'distributed-striped'
  • node['gluster']['server']['volumes'][VOLUME_NAME]['size'] - The size of the gluster volume you would like to create, for example, 100M or 5G. This is passed through to the lvm cookbook.
  • node['gluster']['server']['volumes'][VOLUME_NAME]['filesystem'] - The filesystem to use. This defaults to xfs.
  • node['gluster']['server']['volumes'][VOLUME_NAME]['options'] - optional volume options to set on the volume
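
As an illustration, some of the optional per-volume attributes combine like this (a sketch; the volume name, addresses, and values are placeholders):

default['gluster']['server']['volumes']['gv0']['allowed_hosts'] = ['10.0.0.10', '10.0.0.11']
default['gluster']['server']['volumes']['gv0']['quota'] = '10GB'
default['gluster']['server']['volumes']['gv0']['filesystem'] = 'xfs'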

Custom Resources

gluster_volume

Use this resource to start, stop, or delete volumes:

gluster_volume 'volume_name' do
  action :start
end

gluster_volume 'volume_name' do
  action :stop
end

gluster_volume 'volume_name' do
  action :delete
end

It is also useful for checking existence in only_if blocks:

volume = gluster_volume 'volume_name' do
  action :nothing
end

some_resource 'foo' do
  only_if { volume.current_value }
end

gluster_volume 'volume_name' do
  action :start
  only_if { current_value }
end

Parameters

  • volume_name - The volume name. Defaults to the resource name.

gluster_mount

Use this resource to mount volumes on clients:

gluster_mount 'volume_name' do
  server 'gluster1.example.com'
  backup_server 'gluster2.example.com'
  mount_point '/mnt/gluster/volume_name'
  action [:mount, :enable]
end

gluster_mount 'volume_name' do
  server 'gluster1.example.com'
  backup_server ['gluster2.example.com', 'gluster3.example.com']
  mount_point '/mnt/gluster/volume_name'
  action [:mount, :enable]
end

Parameters

  • server - The primary server to fetch the volfile from. Required.

  • backup_server - Backup servers to obtain the volfile from. Optional.

  • mount_point - The mount point on the local server to mount the glusterfs volume on. Created if non-existing. Required.

  • mount_options - Additional mount options appended to the default option set defaults,_netdev. Optional.

  • owner - Owner of the underlying mount point directory. Defaults to nil. Optional.

  • group - Group of the underlying mount point directory. Defaults to nil. Optional.

  • mode - File mode of the underlying mount point directory. Defaults to nil. Optional.
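
Combining the optional parameters, a fuller sketch might look like this (the extra option string and ownership values are placeholders):

gluster_mount 'volume_name' do
  server 'gluster1.example.com'
  backup_server 'gluster2.example.com'
  mount_point '/mnt/gluster/volume_name'
  mount_options 'ro'     # appended to defaults,_netdev
  owner 'www-data'       # ownership of the mount point directory
  group 'www-data'
  mode '0775'
  action [:mount, :enable]
end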

gluster_volume_option

Use this resource to set or reset volume options:

gluster_volume_option 'volume_name/changelog.rollover-time' do
  value 5
  action :set
end

gluster_volume_option 'volume_name/changelog.rollover-time' do
  action :reset
end

Parameters

  • key - Volume option to change. Required. Derived from the part after the / of the resource name if not given.
  • value - The value to set for the given option. Required for the set action. Booleans are mapped to on or off.
  • volume - Volume to change. Required. Derived from the part before the / of the resource name if not given.
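
A sketch with both properties set explicitly, so nothing is derived from the resource name (the option shown is a placeholder choice; note the boolean mapping):

gluster_volume_option 'enable changelog on volume_name' do
  volume 'volume_name'
  key 'changelog.changelog'
  value true    # booleans are rendered as 'on'/'off'
  action :set
end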

gluster_mountbroker_user

Use this resource to allow or disallow the given user access to the given volume:

gluster_mountbroker_user 'user/volume_name' do
  action :add
end

gluster_mountbroker_user 'user/volume_name' do
  action :remove
end

Parameters

  • user - The user to grant permission to. Required. Derived from the part before the / of the resource name if not given.
  • volume - The volume to grant the permission for. Required. Derived from the part after the / of the resource name if not given.
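
As with gluster_volume_option, both properties can be set explicitly instead of being derived from the resource name (a sketch; the user and volume names are placeholders):

gluster_mountbroker_user 'grant geoaccount access' do
  user 'geoaccount'
  volume 'volume_name'
  action :add
end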

Usage

On two or more identical systems, attach the desired number of dedicated disks to use for Gluster storage. Add the gluster::server recipe to the node's run list and add any appropriate attributes, such as volumes to the ['gluster']['server']['volumes'] attribute. If the cookbook will be used to manage disks, add the disks to the ['gluster']['server']['disks'] attribute; otherwise format the disks appropriately and add them to the ['gluster']['server']['volumes'][VOLUME_NAME]['disks'] attribute. Once all peers for a volume have configured their bricks, the 'master' peer (the first in the array) will create and start the volume.

For example, to create a replicated gluster volume named gv0 with 2 bricks on two nodes, add the following to your attributes/default.rb and include the gluster::server recipe:

default['gluster']['server']['brick_mount_path'] = '/data'
default['gluster']['server']['volumes'] = {
  'gv0' => {
    'peers' => ['gluster1.example.com', 'gluster2.example.com'],
    'replica_count' => 2,
    'volume_type' => 'replicated'
  }
}

To create a distributed-replicated volume with 4 bricks and a replica count of two:

default['gluster']['server']['brick_mount_path'] = '/data'
default['gluster']['server']['volumes'] = {
  'gv0' => {
    'peers' => ['gluster1.example.com', 'gluster2.example.com', 'gluster3.example.com', 'gluster4.example.com'],
    'replica_count' => 2,
    'volume_type' => 'distributed-replicated'
  }
}

To create a replicated volume with 4 bricks:

default['gluster']['server']['brick_mount_path'] = '/data'
default['gluster']['server']['volumes'] = {
  'gv0' => {
    'peers' => ['gluster1.example.com', 'gluster2.example.com', 'gluster3.example.com', 'gluster4.example.com'],
    'replica_count' => 4,
    'volume_type' => 'replicated'
  }
}

For clients, add the gluster::default or gluster::client recipe to the node's run list, and mount volumes using the gluster_mount LWRP. The Gluster volume will be mounted on the next chef-client run (provided the volume exists and is available) and added to /etc/fstab.
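
A minimal client recipe sketch, assuming a volume named gv0 served from gluster1.example.com (both placeholders):

include_recipe 'gluster::client'

# Mount the volume now and add it to /etc/fstab for boot
gluster_mount 'gv0' do
  server 'gluster1.example.com'
  mount_point '/mnt/gluster/gv0'
  action [:mount, :enable]
end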

This cookbook cannot currently perform all the steps required for geo-replication, but it can configure the mountbroker. The gluster::mountbroker recipe calls upon the gluster::geo_replication_install recipe to install the necessary package before configuring the mountbroker according to the ['gluster']['mountbroker'] attributes. User access can be defined via the attributes or you can use the gluster_mountbroker_user custom resource directly. Both the recipe and resource require Gluster 3.9 or later.
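
A sketch of tying those together in a recipe (the user and volume names are placeholders):

# Configures the mountbroker from the ['gluster']['mountbroker'] attributes
include_recipe 'gluster::mountbroker'

# Grant a geo-replication account access to a volume (user/volume form)
gluster_mountbroker_user 'geoaccount/gv0' do
  action :add
end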

Testing

There is a kitchen file provided to allow testing of the various versions. Depending on your shell, you may need to escape the brackets in the instance patterns below. Example tests:

To test a replicated volume on Ubuntu 16.04:

kitchen converge replicated[12]-ubuntu-1604
kitchen verify replicated2-ubuntu-1604

To test a distributed-replicated volume on CentOS 7.2:

kitchen converge distributed-repl[1234]-centos-72
kitchen verify distributed-repl4-centos-72

To test a striped volume on CentOS 6.8:

kitchen converge striped[12]-centos-68
kitchen verify striped2-centos-68

To test a fuse client on Ubuntu 14.04:

kitchen converge client[12]-ubuntu-1404
kitchen verify client2-ubuntu-1404

Please note that at present the kitchen setup only supports VirtualBox.


chef-gluster's Issues

server_setup creates bogus nodes

server_setup includes a call to Chef::Node.find_or_create. This results in the creation of a bogus node if the FQDN of a node does not match the node name. I have a potential fix, but have not submitted a pull request yet as I have not properly tested this branch.

https://github.com/jdblack/chef-gluster/tree/dont_create_nodes

Support to install specific patch version

Is there any interest in adding an optional attribute to "pin" the client / server to a specific version? Something like this for client_install.rb

include_recipe 'gluster::repository' unless node['gluster']['repo'] == 'private'

# Install the client package
package node['gluster']['client']['package'] do
  version node['gluster']['client']['version']
end

Peer brick listing always uses fqdn of peers

My chef node names don't include the domain name (e.g. node.name is "server1"), so in the node attributes I specify the gluster peer names to match, so that the chef node lookup of peers works correctly. However, these chef nodes also have an fqdn node attribute set (e.g. node.fqdn == "server1.example.net"). When compiling the list of bricks, the cookbook currently relies on the chef_fqdn variable, which is defined as:

chef_fqdn = chef_node['fqdn'] || chef_node['hostname']

In my case this sets the brick names to "server1.example.net:/gluster/xvdf1/myvolume", and when the gluster volume create command later executes, it fails with:

volume create: dataroot: failed: Host aze-tept-dat02.tresearch.net is not in 'Peer in Cluster' state

Provisioning multiple disks

I was able to run the cookbook successfully the first time, with volumes created from /dev/sdb. But if I later re-use the cookbook to add another disk, such as /dev/sdc, and create volumes, it seems that the new disk is not picked up.

Current version of chef-gluster needs peers specified by hostname

I found that the current version of chef-gluster needs peers specified by hostname. When hostnames and FQDNs aren't globally resolvable, it is sometimes more useful to specify them by IP. Using gluster by IPs is also needed, for example, by Kubernetes.

I made a small PR (#107) to fix it - please include it :)
It works, and should not break anything.
I also replaced the multiple || checks with a single [].include?(element) check - the way recommended by RuboCop.

Doesn't Create Volumes

Hi,

I am creating a Gluster cluster of two servers. I bootstrap the peer first and then the master. Both servers build OK and the peer probe works, displaying the servers in the cluster, but the volumes aren't created.

I have a default of

default['gluster']['server']['brick_mount_path'] = "/"

and the following set in the environment file

`"gluster": {
      "server": {
        "volumes": {
          "amq": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "api": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "hybris_dataimport": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "hybris_media": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "map": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "mids": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          },
          "nav": {
            "peers": ["stg-inf-gfs-01.matchesremote.com", "stg-inf-gfs-02.matchesremote.com"],
            "replica_count": "2",
            "volume_type": "replicated",
            "lvm_volumes": ["share"]
          }
        }
      }
    }

Any ideas where I'm going wrong?

Thanks

Incorrect case statement in providers/mount.rb

The mount provider will never use backupvolfile-server during mount because the case statement is not correct:

def mount_options_for_backup_server
  case new_resource.backup_server.class  # will not === match either when statement
  when String
    ',backupvolfile-server=' + new_resource.backup_server
  when Array
    ',backupvolfile-server=' + new_resource.backup_server.join(',backupvolfile-server=')
  end
end

Code: https://github.com/shortdudey123/chef-gluster/blob/master/providers/mount.rb#L84
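
A minimal fix would be to case on the object itself rather than its class, since case comparisons use Module#=== (a sketch, assuming it is dropped into the same provider):

def mount_options_for_backup_server
  case new_resource.backup_server  # Module#=== now matches the instance
  when String
    ',backupvolfile-server=' + new_resource.backup_server
  when Array
    ',backupvolfile-server=' + new_resource.backup_server.join(',backupvolfile-server=')
  end
end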

Unable to create a replicated gluster volume on non-root FS

It looks like the gluster volume create command needs an echo y.

Cookbook Version: 6.2.0

Configuration of chef-gluster (removed names specific to our system):

default['gluster']['repo'] = 'https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.11/'
default['gluster']['server']['brick_mount_path'] = '/var/lib/glusterfs'
default['gluster']['server']['enable'] = true
default['gluster']['version'] = '3.11'
default['gluster']['server']['disks'] = ['/dev/xvdf']
default['gluster']['server']['volumes']['myvolume']['peers'] = ['node01', 'node02']
default['gluster']['server']['volumes']['myvolume']['peer_names'] = ['node01.chef.name', 'node02.chef.name']
default['gluster']['server']['volumes']['myvolume']['size'] = '100%FREE'
default['gluster']['server']['volumes']['myvolume']['volume_type'] = 'replicated'

Log of chef-client (removed names specific to our system):

================================================================================
Error executing action `run` on resource 'execute[gluster volume create]'
================================================================================

Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of gluster volume create myvolume replica 2 node01:/var/lib/glusterfs/myvolume/brick node02:/var/lib/glusterfs/myvolume/brick ----
STDOUT: Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See:  https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) gluster cli read error
Invalid input, please enter y/n
STDERR: Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
---- End output of gluster volume create myvolume replica 2 node01:/var/lib/glusterfs/myvolume/brick node02:/var/lib/glusterfs/myvolume/brick ----
Ran gluster volume create myvolume replica 2 node01:/var/lib/glusterfs/myvolume/brick node02:/var/lib/glusterfs/myvolume/brick returned 1

Cookbook Trace:
---------------
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:78:in `run_action'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `block (2 levels) in converge'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `each'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `block in converge'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:105:in `converge'

Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/gluster/recipes/server_setup.rb

156:       execute 'gluster volume create' do
157:         command lazy { # rubocop:disable Lint/AmbiguousBlockAssociation
158:           if force
159:             "echo y | #{volume_create_cmd} force"
160:           elsif system("df #{node['gluster']['server']['brick_mount_path']}/#{volume_name}/ --output=target |grep -q '^/$'") && node['gluster']['server']['disks'].empty?
161:             Chef::Log.warn("Directory #{node['gluster']['server']['brick_mount_path']}/ on root filesystem, force creating volume #{volume_name}")
162:             "echo y | #{volume_create_cmd} force"
163:           else
164:             volume_create_cmd
165:           end
166:         }

Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/gluster/recipes/server_setup.rb:156:in `block in from_file'

execute("gluster volume create") do
  action [:run]
  retries 0
  retry_delay 2
  default_guard_interpreter :execute
  command #<Chef::DelayedEvaluator:0x000000099adf90@/var/chef/cache/cookbooks/gluster/recipes/server_setup.rb:157>
  backup 5
  returns 0
  declared_type :execute
  cookbook_name "gluster"
  recipe_name "server_setup"
end

GPT disklabel and partition alignment

In the server_setup recipe, fdisk could be replaced with parted to allow proper use of disks larger than 2 TB by using GPT disklabels. The partitions that are created should also be properly aligned for best performance.

I'm no longer using glusterfs, but thought I'd leave this here for you to think about, should you wish to implement it in the future.

This blog post contains lots of good information and links to other resources:
http://rainbow.chard.org/2013/01/30/how-to-align-partitions-for-best-performance-using-parted/

Allow creating gluster volumes without managing lvm volumes

I would like to create glusterfs on previously created volumes. I added an empty gluster.server.disks array so that no partitions are created there, then set brick_mount_path to the correct directory and defined the volumes... but it doesn't work. I received this error:

 NoMethodError
  -------------
  undefined method `[]' for nil:NilClass

on line:
24>> peer_bricks = chef_node['gluster']['server']['volumes'][volume_name]['bricks'].select { |brick| brick.include? volume_name }

bricks_waiting_to_join - undefined method `empty?' for nil:NilClass

Hi,

after the first chef run I get this error message from my chef-client (12.7.2):

  ================================================================================
  Recipe Compile Error in /var/chef/cache/cookbooks/gluster/recipes/server.rb
  ================================================================================

  NoMethodError
  -------------
  undefined method `empty?' for nil:NilClass

  Cookbook Trace:
  ---------------
    /var/chef/cache/cookbooks/gluster/recipes/server_extend.rb:40:in `block in from_file'
    /var/chef/cache/cookbooks/gluster/recipes/server_extend.rb:1:in `each'
    /var/chef/cache/cookbooks/gluster/recipes/server_extend.rb:1:in `from_file'
    /var/chef/cache/cookbooks/gluster/recipes/server.rb:24:in `from_file'

  Relevant File Content:
  ----------------------
  /var/chef/cache/cookbooks/gluster/recipes/server_extend.rb:

   33:        unless brick_in_volume?(peer_name, brick, volume_name)
   34:          node.default['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join'] << " #{peer_name}:#{brick}"
   35:        end
   36:      end
   37:    end
   38:  
   39:    replica_count = volume_values['replica_count']
   40>>   next if node['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join'].empty?
   41:    # The number of bricks in bricks_waiting_to_join has to be a modulus of the replica_count we are using for our gluster volume
   42:    if (brick_count % replica_count) == 0
   43:      Chef::Log.info("Attempting to add new bricks into volume #{volume_name}")
   44:      execute "gluster volume add-brick #{volume_name} #{node['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join']}" do
   45:        action :run
   46:      end
   47:      node.set['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join'] = ''
   48:    elsif volume_values['volume_type'] == 'striped'
   49:      Chef::Log.warn("#{volume_name} is a striped volume, adjusting replica count to match new number of bricks")


  Running handlers:
[2016-05-25T15:43:08+00:00] ERROR: Running exception handlers
  Running handlers complete
[2016-05-25T15:43:08+00:00] ERROR: Exception handlers complete
  Chef Client failed. 1 resources updated in 07 seconds
[2016-05-25T15:43:08+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2016-05-25T15:43:08+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-05-25T15:43:08+00:00] ERROR: undefined method `empty?' for nil:NilClass
[2016-05-25T15:43:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

I fixed this error by setting bricks_waiting_to_join for each volume to '', but it looks like a bug 🐛

'peers' attribute is case-sensitive; shouldn't it be case-insensitive?

Hello

I've got some issues with the 'peers' attribute that broke my usage of this good cookbook.
In /recipes/server_setup.rb:

...
31:  if volume_values['peers'].include?(node['fqdn']) || volume_values['peers'].include?(node['hostname'])
...
58: if volume_values['peers'].first == node['fqdn'] || volume_values['peers'].first == node['hostname']
...

We see a strict comparison between the domain names in the peers list and the node's domain name. But check RFC 1034:

3. DOMAIN NAME SPACE and RESOURCE RECORDS
3.1. Name space specifications and terminology
...
By convention, domain names can be stored with arbitrary case, but
domain name comparisons for all present domain functions are done in a
case-insensitive manner, assuming an ASCII character set, and a high
order zero bit.  This means that you are free to create a node with
label "A" or a node with label "a", but not both as brothers; you could
refer to either using "a" or "A".  When you receive a domain name or
label, you should preserve its case.  The rationale for this choice is
that we may someday need to add full binary domain names for new
services; existing services would not be changed.
...

Could you check it and fix it please?

Brick count matching Replica count

From: https://github.com/biola/chef-gluster/blob/master/recipes/server_setup.rb#L111

if brick_count != volume_values['replica_count']

We have a scenario where we have more bricks than we want replicas. For example, we may have a 10 brick cluster. But we may only care about replicating the data 3 times.

With the above code setup, they always have to be equal. Is this intended behavior?

I am thinking something like this should be good...

if brick_count < volume_values['replica_count']

Replacing a peer causes server_extend.rb to bomb

If you remove a peer server along with the chef node and add a new host, the recipe will run gluster volume add-brick instead of gluster volume replace-brick. This causes the chef-run to bomb out since the exit code from gluster is 1.

require 'lvm' error

I am getting the following error when trying to build a new gluster replica. The cookbook was installed using berks. Any recommendations for fixing this?

Relevant File Content:

/var/chef/cache/cookbooks/gluster/recipes/volume_extend.rb:

18: # limitations under the License.
19: #
20: node['gluster']['server']['volumes'].each do |volume_name, volume_values|
21:   # 1. Get the current size of the logical volume
22:   # 2. Compare to the size set for the gluster volume
23:   # 3. If different, run a resize action against that volume
24:   # ToDO: change hardcoded VG name gluster into an attribute
25>>  require 'lvm'
26:
27:   LVM::LVM.new do |lvm|
28:     lvm.logical_volumes.each do |lv|
29:       # I'm ignoring these as I think this layout helps readability
30:       # rubocop:disable Style/Next, Style/CaseIndentation, Lint/EndAlignment
31:       if lv.full_name.to_s == "gluster/#{volume_name}"
32:         lv_size_cur = lv.size.to_i
33:         # Borrowed from the lvm cookbook
34:         volume_lv_size_req = case volume_values['size']

Backwards compatibility issue between v2 and v3 upgrade in regards to PPA

We upgraded from v2 to v3 of this cookbook and we are now experiencing the following error during our server's chef-client runs:

remote: Fetched 4899 kB in 11s (409 kB/s)
remote: STDERR: W: Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.4/ubuntu/dists/precise/main/source/Sources  404  Not Found
remote: 
remote: W: Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.4/ubuntu/dists/precise/main/binary-amd64/Packages  404  Not Found
remote: 
remote: W: Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.4/ubuntu/dists/precise/main/binary-i386/Packages  404  Not Found
remote: 
remote: E: Some index files failed to download. They have been ignored, or old ones used instead.
remote: ---- End output of apt-get update ----
remote: Ran apt-get update returned 100
remote: 
remote: Resource Declaration:
remote: ---------------------
remote: # In /var/chef/cache/cookbooks/apt/recipes/default.rb
remote: 
remote:  75: execute 'apt-get-update-periodic' do
remote:  76:   command 'apt-get update'
remote:  77:   ignore_failure true
remote:  78:   only_if do
remote:  79:     apt_installed? &&
remote:  80:     ::File.exists?('/var/lib/apt/periodic/update-success-stamp') &&
remote:  81:     ::File.mtime('/var/lib/apt/periodic/update-success-stamp') < Time.now - node['apt']['periodic_update_min_delay']
remote:  82:   end
remote:  83: end
remote:  84: 
remote: 
remote: Compiled Resource:
remote: ------------------
remote: # Declared in /var/chef/cache/cookbooks/apt/recipes/default.rb:75:in 'from_file'
remote: 
remote: execute("apt-get-update-periodic") do
remote:   action "run"
remote:   ignore_failure true
remote:   retries 0
remote:   retry_delay 2
remote:   guard_interpreter :default
remote:   command "apt-get update"
remote:   backup 5
remote:   returns 0
remote:   cookbook_name "apt"
remote:   recipe_name "default"
remote:   only_if { #code block }
remote: end
remote: 

The changelog specifies that the repo changed, but it is unclear about what needs to be corrected to use the new version.

After successful initial run, subsequent runs blow up at server_extend

Initial run of gluster::server is successful. Volume created and started.
When gluster::server runs again, the following gets vomited out:

NoMethodError
-------------
private method `select' called for nil:NilClass

...

Relevant File Content:
----------------------
/var/chef/cache/cookbooks/gluster/recipes/server_extend.rb:

   17:        next
   18:      end
   19:
   20:      unless node.default['gluster']['server']['volumes'][volume_name].attribute?('bricks_waiting_to_join')
   21:        node.default['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join'] = ''
   22:      end
   23:
   24>>     peer_bricks = chef_node['gluster']['server']['volumes'][volume_name]['bricks'].select { |brick| brick.include? volume_name }
   25:      brick_count += (peer_bricks.count || 0)
   26:      peer_bricks.each do |brick|
   27:        Chef::Log.info("Checking #{peer}:#{brick}")
   28:        unless brick_in_volume?(peer, brick, volume_name)
   29:          node.default['gluster']['server']['volumes'][volume_name]['bricks_waiting_to_join'] << " #{peer}:#{brick}"
   30:        end
   31:      end
   32:    end
   33:

Fork this repo

Hi guys,

I took a look at Grant Ridder on GitHub, LinkedIn, Twitter, SOF, Facebook...

There's been no activity on any of his accounts since 23 May 2017: I hope he is fine and that nothing bad happened (maybe he is working in a protected area without any Internet access or the right to communicate ;-))

However, can someone fork and maintain this repo? On my side, I'm a Java engineer doing IaC on his weekends: I'm not serious enough, nor enough of a Rubyist, to maintain this thing.

Best regards,

Charlie

backupvolfile-servers invalid option

Hi,

Problem:

mount -t glusterfs -o defaults,_netdev,backupvolfile-servers=[HOST2] [HOST1]:/[VOLUME] [MOUNT_POINT]

Invalid option: backupvolfile-servers

I think the problem is in gluster/providers/mount.rb:88:

backupvolfile-servers is used instead of backupvolfile-server

client.rb with centos 6.6 and glusterfs 3.7.4

I was able to reproduce my problem with only the glusterfs package installed.
Please see below and advise.

############ system
> uname -a
Linux eggxws032 2.6.32-504.23.4.el6.x86_64 #1 SMP Tue Jun 9 20:57:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

> cat /etc/issue
CentOS release 6.6 (Final)
Kernel \r on an \m

############ glusterfs package
> yum info glusterfs
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
Available Packages
Name        : glusterfs
Arch        : x86_64
Version     : 3.7.4
Release     : 2.el6
Size        : 416 k
Repo        : gluster
Summary     : Cluster File System
URL         : http://www.gluster.org/docs/index.php/GlusterFS
License     : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to several
            : petabytes. It aggregates various storage bricks over Infiniband RDMA
            : or TCP/IP interconnect into one large parallel network file
            : system. GlusterFS is one of the most sophisticated file systems in
            : terms of features and extensibility.  It borrows a powerful concept
            : called Translators from GNU Hurd kernel. Much of the code in GlusterFS
            : is in user space and easily manageable.
            : 
            : This package includes the glusterfs binary, the glusterfsd daemon and the
            : libglusterfs and glusterfs translator modules common to both GlusterFS server
            : and client framework


############ current cookbook
> sudo yum install -y glusterfs 1>/dev/null

> sudo rpm -qa | grep gluster
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-3.7.4-2.el6.x86_64

> sudo mount -t glusterfs -o defaults,_netdev,backupvolfile-server=eggxnas001 eggxnas002:/data /sites/shared
mount: unknown filesystem type 'glusterfs'

############ working mount
> sudo yum install -y glusterfs-fuse 1>/dev/null

> sudo rpm -qa | grep gluster
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64

> sudo mount -t glusterfs -o defaults,_netdev,backupvolfile-server=eggxnas001 eggxnas002:/data /sites/shared

> mount | grep gluster
eggxnas002:/data on /sites/shared type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
