gstatus

Overview

gstatus is a command-line utility that reports the health and other statistics of a GlusterFS cluster. gstatus consolidates the volume, brick, and peer information of a GlusterFS cluster.

At the volume level, gstatus reports detailed information on quota usage, snapshots, self-heal, and rebalance status.

Motivation

A Gluster trusted storage pool (also known as a cluster) consists of several key components: nodes, volumes, and bricks. GlusterFS has no single command that provides an overview of the cluster's health, so administrators currently assess cluster health by running several commands and piecing together a picture of the cluster's state.

This isn't ideal, so gstatus is an attempt to provide an easy-to-use, reliable, high-level view of a cluster's health through a single command. The tool gathers information by calling the glustercli library (https://github.com/gluster/glustercli-python) and displays it on screen.
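
The same cluster state gstatus consolidates can also be queried directly from Python. A minimal sketch, assuming glustercli-python is installed and glusterd is running locally; info() is the same entry point gstatus uses internally (it appears in the tracebacks quoted later on this page), and calling it without a volume name is assumed to return data for all volumes:

from glustercli.cli import volume

# info() wraps "gluster volume info --xml" and returns one dict per volume.
# .get() is used because the exact keys are not documented here.
for vol in volume.info():
    print(vol.get('name'), vol.get('status'))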

Dependencies

  • python 3.0 or above
  • gluster version 3.12 or above
  • gluster CLI

Install

Download and install the latest release with the command:

$ curl -fsSL https://github.com/gluster/gstatus/releases/latest/download/install.sh | sudo bash -x
$ gstatus --version

Installing from source

  • Installing glustercli-python
git clone https://github.com/gluster/glustercli-python.git
cd glustercli-python
python3 setup.py install

Installing the gstatus tool:

  • Using python-setuptools
git clone https://github.com/gluster/gstatus.git
cd gstatus
VERSION=1.0.6 make gen-version
python3 setup.py install

Running the tool

NOTE: The tool has to be run as root or with sudo. This requirement is imposed by gluster rather than gstatus: since gstatus internally calls the gluster command, running as superuser is a necessity.

root@master-node:~# gstatus -h
Usage: gstatus [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -v, --volume          volume info (default is ALL, or supply a volume name)
  -a, --all             Print all available details on volumes
  -b, --bricks          Print the list of bricks
  -q, --quota           Print the quota information
  -s, --snapshots       Print the snapshot information
  -u UNITS, --units=UNITS
                        display storage size in given units
  -o OUTPUT_MODE, --output-mode=OUTPUT_MODE
                        Output mode, only json is supported currently. Default
                        is to print to console.
root@master-node:~#

Listing Volumes

By default gstatus prints an overview of all the volumes in the cluster. However, users can filter the volumes by specifying -v; more than one volume can be selected by repeating -v, or a regular expression can be supplied as the -v argument. For example:

gstatus -v '.*perf'

Be sure to use single quotes around the pattern, otherwise shell globbing will expand it unexpectedly. The above pattern fetches all the volumes whose names end with perf. Any standard regular expression can be provided.
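
Since gstatus is written in Python, the pattern presumably follows Python's re syntax. A small illustration of the matching semantics (an assumption about the implementation, shown with hypothetical volume names):

import re

volumes = ['db-perf', 'web-data', 'cache-perf']   # hypothetical volume names
pattern = re.compile('.*perf')

# With full-string matching, '.*perf' selects names that end in "perf".
print([v for v in volumes if pattern.fullmatch(v)])  # ['db-perf', 'cache-perf']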

Users can request more detailed volume information with the -a option. Other volume options include -b, -q, and -s, which print brick, quota, and snapshot details respectively.

Quota

By default gstatus reports Quota: on if quota is set. With the -a or -q options, the list of all quota entries, their sizes, and usage is reported.

Understanding the output

gstatus output is made up of two parts:

a. Cluster information.
b. Volume information.

(There will be more as we add self-heal, rebalance, geo-replication ...)

a. Cluster information

Cluster information reports the health of the cluster, the number of reachable nodes, the total number of volumes in the cluster, and the number of volumes that are up.

b. Volume Information

There are three columns in the volume section: volume name, volume type (Replicate, Distribute, Distributed-Disperse, Disperse), and additional volume-related information. The third column provides a wealth of volume information, which includes:

  1. Status - (Started/Stopped)
  2. Health (displayed only when the volume is started):
     i) UP - all bricks are up and the volume is healthy.
     ii) DOWN - all bricks are down (needs immediate attention).
     iii) PARTIAL - only some of the bricks are up (the volume is functional).
     iv) DEGRADED - some of the sub-volumes are down (in the case of a distribute volume, data might not be accessible).
  3. Capacity - Volume capacity. The displayed units can be controlled with the -u switch.
  4. Snapshots - By default only the snapshot count is shown. Detailed information can be viewed by passing the -s or -a switch to gstatus.
  5. Bricks - The brick list is not shown by default; it can be viewed with the -b or -a switch.
  6. Quota - The list of directories on which quota is set can be viewed by running gstatus with the -q or -a switch. By default only the quota status (on/off) is shown.

Output formats

By default the output is pretty-printed to the screen. Alternatively, users can generate JSON output by passing -o json to the gstatus command.
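
The JSON mode makes gstatus easy to drive from scripts. A minimal sketch (run as root; the JSON schema is not documented here, so the parsed result is simply pretty-printed):

import json
import subprocess

# Capture gstatus JSON output and parse it.
result = subprocess.run(['gstatus', '-a', '-o', 'json'],
                        check=True, capture_output=True, text=True)
state = json.loads(result.stdout)
print(json.dumps(state, indent=2))  # whatever schema gstatus emits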

gstatus's Issues

AttributeError: 'NoneType' object has no attribute 'text'

Hi, no matter what I do, with either the source or the binary version, I always get this error:

sudo gstatus -v
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 218, in parse_volume_info
    volumes.append(_parse_a_vol(volume_el))
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 137, in _parse_a_vol
    'stripe': int(volume_el.find('stripeCount').text),
AttributeError: 'NoneType' object has no attribute 'text'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/gstatus", line 11, in <module>
    load_entry_point('gstatus==1.0.6', 'console_scripts', 'gstatus')()
  File "/usr/local/lib/python3.8/dist-packages/gstatus-1.0.6-py3.8.egg/gstatus/main.py", line 73, in main
  File "/usr/local/lib/python3.8/dist-packages/gstatus-1.0.6-py3.8.egg/gstatus/glusterlib/cluster.py", line 37, in gather_data
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/volume.py", line 188, in status_detail
    info(volname),
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/volume.py", line 152, in info
    return parse_volume_info(volume_execute_xml(cmd),
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 220, in parse_volume_info
    raise GlusterCmdOutputParseError(err)
glustercli.cli.parsers.GlusterCmdOutputParseError: 'NoneType' object has no attribute 'text'

Split Brain - out of sync files

Hi,

This tool is amazing.

Easy to use, and it gives the most important information without massive overhead. Simple.

Here is what I noticed (or maybe I'm doing something wrong):

/usr/bin/gstatus -ab -o keyvalue

brick_count=3,bricks_active=3,glfs_version='3.12.15',node_count=3,nodes_active=3,over_commit='No',product_name='Community',raw_capacity=5736648228864,sh_active=3,sh_enabled=3,snapshot_count=0,status='healthy',usable_capacity=1912216076288,used_capacity=1266406785024,volume_count=1,volume_summary_down=0,volume_summary_degraded=0,volume_summary_partial=0,volume_summary_up=1

It doesn't show whether all files are in sync or not.

It seems that split-brain or unhealed files are not recognized:

UP - 3/3 bricks up - Replicate
Capacity: (22% used) 393.00 GiB/1.70 TiB (used/total)
Snapshots: 0
Self Heal: 3/ 3 Heal backlog of 28 files
Tasks Active: None
Protocols: glusterfs:on NFS:on SMB:on
Gluster Connectivty: 3 hosts, 24 tcp connections

If Gluster is in sync/heal mode, then it's not really a healthy state.

Normally it shows:

Self Heal: 3/ 3 All files in sync

Could this be implemented so that it is easy to pull from the keyvalue output? I use Icinga to get the status of it; I'm not sure whether it recognizes split-brain as well.

thx

Max

[CENTOS 8] gstatus not working with gluster v8

I have some rpms built from 8.0 source and it seems that gstatus fails to work:

RPMS:

[root@glustera gstatus]# rpm -qa | grep -E "gluster|python" | sort
glusterfs-8.0-0.0.el8.x86_64
glusterfs-cli-8.0-0.0.el8.x86_64
glusterfs-client-xlators-8.0-0.0.el8.x86_64
glusterfs-fuse-8.0-0.0.el8.x86_64
glusterfs-gnfs-8.0-0.0.el8.x86_64
glusterfs-server-8.0-0.0.el8.x86_64
libglusterd0-8.0-0.0.el8.x86_64
libglusterfs0-8.0-0.0.el8.x86_64
platform-python-3.6.8-23.el8.x86_64
platform-python-pip-9.0.3-16.el8.noarch
platform-python-setuptools-39.2.0-5.el8.noarch
policycoreutils-python-utils-2.9-9.el8.noarch
python2-2.7.17-1.module_el8.2.0+381+9a5b3c3b.x86_64
python2-libs-2.7.17-1.module_el8.2.0+381+9a5b3c3b.x86_64
python2-pip-9.0.3-16.module_el8.2.0+381+9a5b3c3b.noarch
python2-pip-wheel-9.0.3-16.module_el8.2.0+381+9a5b3c3b.noarch
python2-setuptools-39.0.1-11.module_el8.2.0+381+9a5b3c3b.noarch
python2-setuptools-wheel-39.0.1-11.module_el8.2.0+381+9a5b3c3b.noarch
python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64
python3-audit-3.0-0.17.20191104git1c2f876.el8.x86_64
python3-cairo-1.16.3-6.el8.x86_64
python3-configobj-5.0.6-11.el8.noarch
python3-dateutil-2.6.1-6.el8.noarch
python3-dbus-1.2.4-15.el8.x86_64
python3-decorator-4.2.1-2.el8.noarch
python3-dmidecode-3.12.2-15.el8.x86_64
python3-dnf-4.2.17-6.el8.noarch
python3-dnf-plugins-core-4.0.12-3.el8.noarch
python3-firewall-0.8.0-4.el8.noarch
python3-gobject-3.28.3-1.el8.x86_64
python3-gobject-base-3.28.3-1.el8.x86_64
python3-gpg-1.10.0-6.el8.0.1.x86_64
python3-hawkey-0.39.1-5.el8.x86_64
python3-libcomps-0.1.11-4.el8.x86_64
python3-libdnf-0.39.1-5.el8.x86_64
python3-libs-3.6.8-23.el8.x86_64
python3-libselinux-2.9-3.el8.x86_64
python3-libsemanage-2.9-2.el8.x86_64
python3-libxml2-2.9.7-7.el8.x86_64
python3-linux-procfs-0.6-7.el8.noarch
python3-nftables-0.9.3-12.el8.x86_64
python3-perf-4.18.0-193.14.2.el8_2.x86_64
python3-pip-9.0.3-16.el8.noarch
python3-pip-wheel-9.0.3-16.el8.noarch
python3-policycoreutils-2.9-9.el8.noarch
python3-pyudev-0.21.0-7.el8.noarch
python3-pyxattr-0.5.3-18.el8.x86_64
python3-rpm-4.14.2-37.el8.x86_64
python3-schedutils-0.6-6.el8.x86_64
python3-setools-4.2.2-2.el8.x86_64
python3-setuptools-39.2.0-5.el8.noarch
python3-setuptools-wheel-39.2.0-5.el8.noarch
python3-six-1.11.0-8.el8.noarch
python3-slip-0.6.4-11.el8.noarch
python3-slip-dbus-0.6.4-11.el8.noarch
python3-syspurpose-1.26.17-1.el8_2.x86_64
python3-systemd-234-8.el8.x86_64
samba-vfs-glusterfs-4.11.11-1.el8.x86_64

Python2 fails with:

[root@glustera gstatus]# /usr/bin/python2.7 ./gstatus.py 
 
Traceback (most recent call last):  
  File "./gstatus.py", line 244, in <module>
    main()
  File "./gstatus.py", line 132, in main
    cluster.initialise()
  File "/root/gstatus/gstatus/libgluster/cluster.py", line 97, in initialise
    self.define_nodes()
  File "/root/gstatus/gstatus/libgluster/cluster.py", line 170, in define_nodes
    local_ip_list = get_ipv4_addr()  # Grab all IP's
  File "/root/gstatus/gstatus/libutils/network.py", line 130, in get_ipv4_addr
    namestr = names.tobytes()
AttributeError: 'array.array' object has no attribute 'tobytes'

Python3 fails with:

Traceback (most recent call last):
  File "./gstatus.py", line 142, in main
    cluster.update_state(self_heal_backlog, client_status)
  File "/root/gstatus/gstatus/libgluster/cluster.py", line 564, in update_state
    task.status_str = task.find('./statusStr').text
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'status_str'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./gstatus.py", line 244, in <module>
    main()
  File "./gstatus.py", line 143, in main
    except [GlusterFailedVolume, GlusterNotPeerNode] as e:
TypeError: catching classes that do not inherit from BaseException is not allowed

Gluster is operational:

[root@glustera gstatus]# gluster pool list
UUID                                    Hostname        State
9ff2f29c-78a0-4490-9c60-32a752ffd80a    glusterb        Connected 
45854556-04d2-4c62-937f-326ca2cb8677    glusterc        Connected 
25e3c9b9-17e0-43c2-bd31-276381a4450f    glusterd        Connected 
dd6d1704-7049-4748-9916-b66b748656bb    glustere        Connected 
1f699dfd-15f8-4f14-83c0-62b695fad7e0    localhost       Connected 
[root@glustera gstatus]# gluster volume list
ctdbmeta
custdata

single node error: Unable to associate brick with a peer in the cluster

Hi, I have a single node gluster server, as set up by oVirt in single node hyperconverged mode.
When running gstatus 0.66, I get:

~]# gstatus --all
 
Unable to associate brick server.fqdn:/gluster_bricks/engine/engine with a peer in the cluster, possibly due to name lookup failures. If the nodes are not registered (fwd & rev)to dns, add local entries for your cluster nodes in the the /etc/hosts file

No such error on a 3-node cluster.

gstatus source install missing version

Hi,

I'm trying to install gstatus from source via pip install .

I get the following error:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-req-build-iu5z4zm1/setup.py", line 7, in <module>
        from gstatus import version
    ImportError: cannot import name 'version' from 'gstatus' (/tmp/pip-req-build-iu5z4zm1/gstatus/__init__.py)

Can this version file be included in the package?

Fails with cryptic XML-related error

Hi.

I've just cloned the repo onto Ubuntu 14.04-based and Fedora 22-based gluster nodes.
I try to run the script without installation, and on both systems I get:
$ ./gstatus.py -s

Traceback (most recent call last):
  File "./gstatus.py", line 221, in <module>
    main()
  File "./gstatus.py", line 132, in main
    cluster.initialise()
  File "/home/afunix/opt/gstatus/gstatus/libgluster/cluster.py", line 91, in initialise
    self.define_nodes()
  File "/home/afunix/opt/gstatus/gstatus/libgluster/cluster.py", line 140, in define_nodes
    cmd.run()
  File "/home/afunix/opt/gstatus/gstatus/libcommand/glustercmd.py", line 100, in run
    xmldoc = ETree.fromstring(''.join(self.stdout))
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1301, in XML
    return parser.close()
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
    self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: no element found: line 1, column 0

The commit is:
commit 858d2f2
Author: Paul Cuzner [email protected]
Date: Mon May 25 17:12:07 2015 +1200

Added licence file

Ubuntu:
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"

$ gluster --version
glusterfs 3.6.3 built on Jun 12 2015 17:41:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

Fedora:
$ cat /etc/fedora-release
Fedora release 22 (Twenty Two)

$ gluster --version
glusterfs 3.6.3 built on May 5 2015 14:18:23
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

gstatus fails when unable to get self-heal status

I'm running gstatus in the latest Gluster CentOS container image.

It seems that the gluster volume heal <VOL> info command is only available for replicate/disperse volumes:

# gluster vol heal gv0 info
Volume gv0 is not of type replicate/disperse
Volume heal failed.

Is it possible to allow gstatus to skip displaying self-heal info when it is not present? For example, I added a replicate volume (gv1), but gstatus still throws an error because gv0 is a distribute volume type. This currently blocks us from using gstatus at all, since the -a, -b and -v flags all throw the same error from /gstatus/glusterlib/display_status.py.

When attempting to view the gstatus -a of my cluster, I get the following traceback:

Note: Unable to get self-heal status for one or more volumes
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/bin/gstatus/__main__.py", line 74, in <module>
  File "/usr/local/bin/gstatus/__main__.py", line 71, in main
  File "/usr/local/bin/gstatus/glusterlib/display_status.py", line 11, in display_status
  File "/usr/local/bin/gstatus/glusterlib/display_status.py", line 58, in _build_status
KeyError: 'healinfo'

My volume info:

Volume Name: gv0
Type: Distribute
Volume ID: 1360cf07-5a64-4452-aedc-9d0d8aba1280
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster-node-1:/export
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet

Gstatus version:

#  gstatus --version
gstatus 1.0.4

Gluster version:

# gluster --version
glusterfs 7.9
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
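
A defensive lookup along these lines would avoid the KeyError (a hypothetical sketch of the idea, not the project's actual fix; volume_data stands in for the per-volume dict used by display_status.py):

def heal_summary(volume_data):
    # Distribute volumes have no self-heal daemon, so 'healinfo' may be absent.
    healinfo = volume_data.get('healinfo')
    if healinfo is None:
        return 'Self Heal: N/A (volume type has no self-heal)'
    return 'Self Heal: %d brick daemons reporting' % len(healinfo)

print(heal_summary({'name': 'gv0'}))                  # distribute volume
print(heal_summary({'name': 'gv1', 'healinfo': []}))  # replicate volume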

AttributeError: 'NoneType' object has no attribute 'get'

Hi everyone,

Just as info: with the latest gstatus and gluster 10.1, once one brick inside a replica set is down, the other Gluster nodes get an error, even though everything else is fine (apart from the one brick being down). As long as the volume is up, it is not an emergency.

Maybe an easy fix?

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/bin/gstatus/main.py", line 77, in <module>
  File "/usr/bin/gstatus/main.py", line 73, in main
  File "/usr/bin/gstatus/glusterlib/cluster.py", line 37, in gather_data
  File "/usr/bin/gstatus/glustercli/cli/volume.py", line 189, in status_detail
  File "/usr/bin/gstatus/glustercli/cli/parsers.py", line 307, in parse_volume_status
AttributeError: 'NoneType' object has no attribute 'get'

thx

Output not included

Hello, I'm using gstatus v1 and there is no option for output... Is that a bug, or did -o become deprecated?

[root@aaaa~]# gstatus --version
gstatus 1.0.0

[root@aaaa~]# gstatus -o json
Usage: gstatus [options]

gstatus: error: no such option: -o
[root@aaaa~]# gstatus -h
Usage: gstatus [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -v, --volume          Supply a volume or a list of volumes by repeated
                        invocation of -v. A regular expression can be provided
                        in place of a volume name (ensure to use single quotes
                        around the expression)
  -a, --all             Print all available details on volumes
  -b, --bricks          Print the list of bricks
  -q, --quota           Print the quota information
  -s, --snapshots       Print the snapshot information
  -u UNITS, --units=UNITS
                        display storage size in given units

[Issue] gstatus returning before specified timeout and giving ParseError exception

The gstatus command fails if the gluster command takes more time to return XML data:

gstatus -a -t 180

...
...
Gluster_Command. Response from glusterd has exceeded the 240 secs timeout, terminating the request
Traceback (most recent call last):
  File "/usr/bin/gstatus", line 221, in <module>
    main()
  File "/usr/bin/gstatus", line 135, in main
    cluster.update_state(self_heal_backlog)
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 638, in update_state
    self.calc_connections()
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 730, in calc_connections
    xml_root = ETree.fromstring(xml_string)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1301, in XML
    return parser.close()
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
    self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: no element found: line 1, column 0

It seems gstatus does not honor the specified timeout and returns before the timeout (within 5 seconds).
Sometimes the gluster command takes longer to finish, which results in gstatus failing with the above ParseError exception.
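
For reference, Python's subprocess module can enforce such a timeout directly; a sketch of the general mechanism, not gstatus's actual command runner:

import subprocess

try:
    # Ask gluster for XML status, giving up only after the caller's timeout.
    result = subprocess.run(['gluster', 'volume', 'status', 'all', '--xml'],
                            capture_output=True, text=True, timeout=180)
    xml_string = result.stdout
except subprocess.TimeoutExpired:
    raise SystemExit('glusterd did not respond within 180 seconds')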

Reported capacity values seem nonsensical

Hello!

Capacity values seem nonsensical to me. See below:

# gstatus --all

Cluster:
Status: Healthy GlusterFS: 10.3
Nodes: 5/5 Volumes: 5/5

Volumes:

backups
Replicate Started (UP) - 5/5 Bricks Up
Capacity: (86.62% used) 3.00 TiB/4.00 TiB (used/total)
Bricks:
Distribute Group 1:
10.100.13.71:/data/brick2/backups (Online)
10.100.13.73:/data/brick2/backups (Online)
10.100.13.72:/data/brick2/backups (Online)
10.100.13.75:/data/brick1/backups (Online)
10.100.13.74:/data/brick2/backups (Online)

engine
Replicate Started (UP) - 5/5 Bricks Up
Capacity: (557.29% used) 3.00 TiB/634.00 GiB (used/total)
Bricks:
Distribute Group 1:
10.100.13.72:/data/brick1/engine (Online)
10.100.13.71:/data/brick1/engine (Online)
10.100.13.73:/data/brick1/engine (Online)
10.100.13.75:/data/brick1/engine (Online)
10.100.13.74:/data/brick1/engine (Online)

guests
Replicate Started (UP) - 5/5 Bricks Up
Capacity: (557.29% used) 3.00 TiB/634.00 GiB (used/total)
Bricks:
Distribute Group 1:
10.100.13.71:/data/brick1/guests (Online)
10.100.13.73:/data/brick1/guests (Online)
10.100.13.72:/data/brick1/guests (Online)
10.100.13.75:/data/brick1/guests (Online)
10.100.13.74:/data/brick1/guests (Online)

kotidata
Replicate Started (UP) - 5/5 Bricks Up
Capacity: (86.62% used) 3.00 TiB/4.00 TiB (used/total)
Bricks:
Distribute Group 1:
10.100.13.71:/data/brick2/kotidata (Online)
10.100.13.73:/data/brick2/kotidata (Online)
10.100.13.72:/data/brick2/kotidata (Online)
10.100.13.75:/data/brick1/kotidata (Online)
10.100.13.74:/data/brick2/kotidata (Online)

nvr
Replicate Started (UP) - 2/2 Bricks Up
Capacity: (544.19% used) 3.00 TiB/650.00 GiB (used/total)
Bricks:
Distribute Group 1:
10.100.13.75:/data/brick1/nvr (Online)
10.100.13.74:/data/brick1/nvr (Online)

distutils package is deprecated and slated for removal in Python 3.12.

Running the gstatus script throws a DeprecationWarning:

/usr/bin/gstatus/main.py:7: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives

gstatus --version : gstatus v1.0.8
Dist : Ubuntu 22.04.1 LTS
python3 version : Python 3.10.4
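
If the import in question is distutils' version handling (an assumption; the warning does not show the exact usage), the packaging library is the usual replacement suggested by PEP 632:

# Hypothetical before/after for a version comparison.
# Deprecated:
#   from distutils.version import LooseVersion
#   LooseVersion(glusterfs_version) >= LooseVersion('3.12')
from packaging.version import Version  # pip install packaging

print(Version('10.1') >= Version('3.12'))  # True: 10.1 is newer than 3.12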

Feature Req: Quota Info

Hi - I like the utility, and it's being used in a Zabbix monitoring project. I'm wondering if it's possible to also pull in the Gluster quota limits on a volume? Thanks.

Make a release

Please, make a new release of the code, with new version, new tag and new tarball for that new tag.

Capacity calculated incorrectly for arbitrated volumes

in gstatus/libgluster/volume.py:

self.raw_capacity is calculated for all bricks, data or arbiter. The gluster vol info --xml output has an <isArbiter> flag for each brick, which can easily be used to identify data bricks vs. arbiter bricks.

self.raw_used is calculated similarly, so the same adjustment should be made.

self.replicaCount is taken directly from the gluster vol info --xml output, but for arbiter volumes that count includes the arbiter brick. The XML also provides an <arbiterCount> value, which should be subtracted from the <replicaCount> value to give the correct data replica count.

With the above changes made, the self.usable_capacity and self.used_capacity calculations should then provide correct values.
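
In code, the suggested correction would look roughly like this (a sketch only; the brick fields mirror the <isArbiter>, <replicaCount> and <arbiterCount> elements named above, with hypothetical sizes):

def usable_capacity(bricks, replica_count, arbiter_count):
    # Count only data bricks; arbiter bricks store metadata, not data.
    data_bricks = [b for b in bricks if not b['is_arbiter']]
    raw_capacity = sum(b['size'] for b in data_bricks)
    raw_used = sum(b['used'] for b in data_bricks)
    # Subtract the arbiter from the replica count to get data copies.
    data_copies = replica_count - arbiter_count
    return raw_capacity // data_copies, raw_used // data_copies

bricks = [
    {'size': 4 * 2**40, 'used': 1 * 2**40, 'is_arbiter': False},
    {'size': 4 * 2**40, 'used': 1 * 2**40, 'is_arbiter': False},
    {'size': 20 * 2**30, 'used': 2**20, 'is_arbiter': True},
]
print(usable_capacity(bricks, replica_count=3, arbiter_count=1))
# (4 TiB usable, 1 TiB used) rather than figures inflated by the arbiter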

Unable to associate brick gluster1.mydomain.com:/bricks/brick1/data with a peer in the cluster

I'm getting an error when trying to run gstatus on nodes[2:3] of the cluster. It only works on node1.

[root@gluster2 ~]# gstatus -a -o json
Unable to associate brick gluster1.mydomain.com:/bricks/brick1/data with a peer in the cluster, possibly due
to name lookup failures. If the nodes are not registered (fwd & rev)
to dns, add local entries for your cluster nodes in the the /etc/hosts file

Here is my peer status:

[root@gluster2 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.10.10.5
Uuid: bbb19d4b-c9fc-4f14-8a9c-9cedf1fac367
State: Peer in Cluster (Connected)
Other names:
gluster1.mydomain.com

Hostname: gluster3.mydomain.com
Uuid: da74fc61-eb2a-4f0d-906b-9fc2c287908f
State: Peer in Cluster (Connected)

As you may notice, my node1 hostname is an IP address. According to the documentation:

Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.

I did that, and the hostname appeared in Other names

And here is my gluster volume config:

[root@gluster2 ~]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: 8f8285bc-c8b2-4112-b88e-49df45bf7e87
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1.mydomain.com:/bricks/brick1/data
Brick2: gluster2.mydomain.com:/bricks/brick1/data
Brick3: gluster3.mydomain.com:/bricks/brick1/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Re-Add output mode keyvalue ...

Hey, in former versions there was an output mode keyvalue which was easy to use with Nagios and such... Could that be re-added, please?
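
Until it returns, the JSON output can be flattened into key=value pairs for Nagios-style checks; a workaround sketch, not a gstatus feature:

import json
import subprocess

def flatten(obj, prefix=''):
    # Recursively turn nested JSON into dotted key=value strings.
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from flatten(value, prefix + key + '.')
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            yield from flatten(value, prefix + str(index) + '.')
    else:
        yield '%s=%r' % (prefix.rstrip('.'), obj)

out = subprocess.run(['gstatus', '-a', '-o', 'json'],
                     check=True, capture_output=True, text=True).stdout
print(','.join(flatten(json.loads(out))))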

gstatus output values as strings

When running gstatus -a -o json to get the cluster state as JSON, it returns my objects' numeric field values as strings.

Example object:

"healinfo": [
  {
    "name": "device:/rhgs/node1_brick1/disk_vol2",
    "status": "Connected",
    "host_uuid": "7c99b0ca-d912-4fc7-b4c0-535686744dc3",
    "nr_entries": "21"

Here the value "21" is returned as a string instead of an integer; we would like to get it as an integer value. However, some fields return the "-" sign instead of actual values.
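
On the consumer side, the string values can be coerced defensively until the types are fixed; a small sketch that also maps "-" to None, since no number is available in that case:

def as_int(value):
    # gstatus emits numbers as strings, and "-" when no value is available.
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

print(as_int('21'))  # 21
print(as_int('-'))   # None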

run gstatus in a crontab

Hi, I would like to use gstatus to generate a JSON file that is delivered via a local webserver. When I start gstatus manually with the -o option, the JSON output is generated.

If I start gstatus in a one-line script via crontab, the JSON cannot be generated (file size = 0B).

My Proxmox host then sends me an error by email. The cron entry runs:

/usr/local/bin/gstatus -a -o json > /tmp/test.json

Any Ideas?

Thanks and regards,
Thorsten

Any plan to support glusterfs 7?

Hello,

Today I upgraded glusterfs to 7.0 (yay).
However, after the upgrade I noticed that gstatus is not working any more :(
Is there any plan to make a gstatus release that supports gluster 7?

root@demo2:~# gluster volume status
Status of volume: gfs
Gluster process TCP Port RDMA Port Online Pid

Brick demo1:/gluster/bricks/1/brick 49152 0 Y 1846
Brick demo2:/gluster/bricks/2/brick 49153 0 Y 24646
Brick arbiter:/gluster/bricks/3/brick 49152 0 Y 16762
Self-heal Daemon on localhost N/A N/A Y 26784
Self-heal Daemon on demo1 N/A N/A Y 1931
Self-heal Daemon on arbiter N/A N/A Y 7603

Task Status of Volume gfs

There are no active volume tasks

root@demo2:~# gstatus -a
gstatus is not compatible with this version of glusterfs 7.0

Thanks!

KeyError Exception on Missing Brick

I'm running the gstatus package for CentOS 8 from COPR with glusterfs 7.6, and with bricks missing it hits an exception:

# gstatus -a
 
Traceback (most recent call last):             
  File "/usr/bin/gstatus", line 245, in <module>
    main()
  File "/usr/bin/gstatus", line 143, in main
    cluster.update_state(self_heal_backlog, client_status)
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 531, in update_state
    self.volume[volume_name].update(xml_obj)
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/volume.py", line 128, in update
    self.brick_update(node)
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/volume.py", line 99, in brick_update
    node_info['fsName'],
KeyError: 'fsName'

The drive was in an i/o error status when this exception was occurring, if that's relevant:

# ll /data/glusterfs/volname1/
ls: cannot access '/data/glusterfs/volname1/brick2': Input/output error
total 0
drwxr-xr-x. 5 root root 70 Feb 17 11:59 brick1
d?????????? ? ?    ?     ?            ? brick2
